Unit Test Generator
Run ID: 69cb488a61b1021a29a8791f · 2026-03-31 · Development
PantheraHive BOS

Unit Test Generator - Code Generation Deliverable

This document outlines the code generation step for the "Unit Test Generator" workflow. As part of our commitment to delivering high-quality, maintainable software, this step focuses on automatically generating comprehensive unit tests for given source code. This significantly accelerates the development process, enhances code quality, and ensures robust application behavior.


1. Introduction to the Unit Test Generator

The Unit Test Generator is designed to automate the creation of foundational unit tests for Python functions. By analyzing the provided function's signature and (optionally) its docstrings or inferred behavior, it constructs a boilerplate unittest test suite, including common test cases and assertion patterns. This tool aims to reduce the manual effort of writing repetitive test structures, allowing developers to focus on more complex, business-logic-specific test scenarios.

2. Core Concepts and Benefits

2.1 What is a Unit Test Generator?

A Unit Test Generator is an automated system that takes a piece of source code (e.g., a function, method, or class) as input and produces corresponding unit test code. It leverages patterns, heuristics, or advanced analysis (like LLMs in preceding steps) to infer potential test cases, edge cases, and expected behaviors.

2.2 Why is it Important?

Automated test generation reduces the manual effort of writing repetitive test scaffolding, accelerates the development process, and improves coverage of common and edge cases. By handling the boilerplate, it frees developers to concentrate on complex, business-logic-specific test scenarios.

2.3 Key Features of this Generator

  • Parses the target Python function's source to extract its name and parameters.
  • Infers common test scenarios via simple heuristics, optionally enriched by an upstream LLM step (e.g., the gemini step).
  • Emits a complete, runnable unittest.TestCase subclass, with assertions for both return values and expected exceptions.

3. Code Generation Strategy

The strategy for generating unit tests involves the following steps:

  1. Input Acquisition: Receive the target function's source code as a string.
  2. Function Identification: Parse the input string to identify the function's name and its parameters.
  3. Test Case Inference (Heuristic-Based): Based on the function name and number/type of arguments (simplified for this example), suggest common test scenarios. This step can be significantly enhanced by an upstream LLM (e.g., the gemini step) that understands the function's intent and provides specific test case suggestions. For this deliverable, we use simple heuristics.
  4. Test Class Structure Generation: Create a unittest.TestCase subclass.
  5. Test Method Generation: For each inferred test case, generate a corresponding test method (test_scenario_name) within the test class.
  6. Assertion Placement: Populate test methods with appropriate self.assertEqual, self.assertRaises, or other self.assert* calls.
  7. Output Assembly: Combine all generated components into a complete, runnable Python test file string.

4. Example Implementation: Python Unit Test Generator

Below is a Python implementation of a UnitTestGenerator class. It takes a Python function's source code as input and generates a corresponding unittest file.

4.1 Target Function Example

Let's consider a simple calculate_discount function for which we want to generate unit tests:
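The function, reconstructed here from the example usage embedded later in this deliverable (docstring trimmed for brevity), validates its inputs and then applies a percentage discount:

```python
def calculate_discount(price: float, discount_percentage: float) -> float:
    """Calculate the final price after applying a discount.

    Raises ValueError if price is negative or if discount_percentage
    is outside the range 0-100.
    """
    if price < 0:
        raise ValueError("Price cannot be negative.")
    if not (0 <= discount_percentage <= 100):
        raise ValueError("Discount percentage must be between 0 and 100.")
    discount_amount = price * (discount_percentage / 100)
    return price - discount_amount
```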

4.2 Unit Test Generator Code (unit_test_generator.py)

This code defines the UnitTestGenerator class that will produce the tests.


This document outlines a comprehensive and detailed study plan for developing a "Unit Test Generator." This deliverable represents Step 1 of 3: gemini → plan_architecture for the "Unit Test Generator" workflow, focusing on establishing the foundational learning path and project structure.


1. Introduction & Workflow Context

This plan details a structured approach to acquiring the necessary knowledge and skills to design, develop, and implement a robust "Unit Test Generator." It is tailored for individuals seeking to automate test creation and to deepen their understanding of unit testing principles, code analysis, and test automation.

As part of the PantheraHive workflow, this deliverable ensures a clear roadmap for the subsequent development phases of the "Unit Test Generator" project.

2. Overall Goal

To empower the learner with the expertise to conceptualize, build, and deploy a functional "Unit Test Generator" tool. This includes understanding the theoretical underpinnings of unit testing, mastering code introspection techniques, and applying automated code generation principles to produce high-quality, maintainable unit tests for a chosen programming language and framework.

3. Target Audience & Prerequisites

  • Target Audience: Software developers, quality assurance (QA) engineers, or technical enthusiasts with an intermediate understanding of software development.
  • Prerequisites:

* Proficiency in at least one object-oriented programming language (e.g., Python, Java, C#, JavaScript/TypeScript).

* Familiarity with fundamental software engineering concepts (e.g., functions, classes, data structures, algorithms).

* Basic understanding of unit testing concepts (e.g., assertions, test fixtures, mocks, stubs).

* Working knowledge of a version control system (e.g., Git).

4. Study Plan Duration

This intensive study plan is designed for 6 weeks, assuming a commitment of approximately 10-15 hours of study and practical work per week.

5. Learning Objectives

Upon successful completion of this study plan, the learner will be able to:

  1. Fundamental Understanding of Unit Testing: Articulate the core principles, benefits, and best practices of unit testing, including TDD (Test-Driven Development) and BDD (Behavior-Driven Development) concepts.
  2. Code Structure Analysis: Proficiently identify and extract key structural elements from source code (e.g., classes, methods, functions, parameters, return types, dependencies) relevant for test generation.
  3. Abstract Syntax Tree (AST) & Reflection Mastery: Apply AST parsing or language-specific reflection mechanisms to programmatically inspect and manipulate code structures.
  4. Effective Test Case Design: Formulate comprehensive and effective test cases, including positive, negative, edge, and error scenarios, for various code constructs.
  5. Mocking and Stubbing Implementation: Understand the purpose and implement advanced mocking and stubbing techniques to isolate units under test and manage external dependencies.
  6. Automated Test Code Generation: Develop robust algorithms and logic to automatically generate boilerplate unit test code in a chosen programming language and testing framework.
  7. Framework Integration: Understand how automatically generated tests integrate seamlessly with popular unit testing frameworks (e.g., pytest, JUnit, NUnit, Jest).
  8. Generator Quality Assessment: Critically evaluate the quality, coverage, and maintainability of the generated unit tests.
  9. Functional Prototype Development: Design and build a working prototype of a "Unit Test Generator" for a specified programming language, complete with a command-line interface.

6. Weekly Schedule & Activities

Week 1: Foundations of Unit Testing & Code Analysis

  • Topics:

* Introduction to Unit Testing: Purpose, benefits, types of tests (unit, integration, E2E), TDD vs. BDD.

* Key concepts: Assertions, test fixtures, test runners.

* Selection of target language and testing framework for the generator (e.g., Python/pytest, Java/JUnit, C#/NUnit, TypeScript/Jest).

* Introduction to code parsing: Abstract Syntax Trees (AST) and reflection mechanisms.

* Basic AST traversal and node inspection (e.g., identifying function definitions).

  • Activities:

* Extensive readings on unit testing best practices and framework documentation.

* Hands-on exploration of the AST module/reflection API in the chosen language.

* Write small scripts to parse a simple source file and extract basic function/method names and their direct parameters.

  • Deliverable: A script that can parse a single function/method definition and print its name and argument list.
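In Python, this Week 1 deliverable fits in a few lines using the standard ast module (a sketch; parse_signature is an illustrative name, not part of any workflow output):

```python
import ast


def parse_signature(source: str):
    """Parse a source snippet; return (name, argument names) of the first function."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            return node.name, [a.arg for a in node.args.args]
    raise ValueError("No function definition found.")


name, args = parse_signature("def add(a: int, b: int) -> int:\n    return a + b")
print(name, args)  # add ['a', 'b']
```

Unlike the regex approach used elsewhere in this workflow, ast handles multi-line signatures, decorators, and nested parentheses for free.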

Week 2: Deeper Code Inspection & Test Case Design

  • Topics:

* Advanced AST traversal: Handling classes, methods within classes, return types, decorators, and import statements.

* Identifying internal and external dependencies within code.

* Principles of effective test case design: Equivalence partitioning, boundary value analysis, error guessing, state-based testing.

* Introduction to mocking and stubbing frameworks: Why and how they are used.

  • Activities:

* Enhance parsing scripts to extract information from more complex code structures (e.g., classes with multiple methods, method return types).

* Manually design comprehensive test cases (inputs, expected outputs, edge cases) for several moderately complex functions/methods.

* Research and experiment with a mocking library in the chosen language (e.g., unittest.mock, Mockito, Sinon.js).

  • Deliverable: A detailed document outlining manually designed test cases for 2-3 given functions/methods, covering various scenarios.
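As a taste of what such a document might contain, here are cases for a calculate_discount(price, discount_percentage) function expressed as Python data, combining equivalence partitions with boundary values (the table layout is our own suggestion):

```python
# Each case: (price, discount_percentage, expected result or exception type)
DISCOUNT_TEST_CASES = [
    # Equivalence partition: typical valid inputs
    (100.0, 20.0, 80.0),
    # Boundary values for the discount percentage
    (150.0, 0.0, 150.0),      # lower bound: no discount
    (50.0, 100.0, 0.0),       # upper bound: full discount
    # Error partitions: inputs the function must reject
    (-100.0, 10.0, ValueError),
    (100.0, 101.0, ValueError),
]
```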

Week 3: Generating Test Boilerplate & Assertions

  • Topics:

* Structuring generated test files and classes according to framework conventions.

* Automating the generation of basic test method signatures (e.g., test_function_name).

* Logic for generating common assertions based on function return types and simple expected outcomes.

* Strategies for generating simple input parameters (e.g., default values, primitive types) for test method calls.

  • Activities:

* Start developing the core logic of the generator to produce test file/class structures.

* Implement code to generate empty test method stubs for identified functions/methods.

* Add functionality to generate basic assertion statements (e.g., assertEqual, assertTrue) within these stubs.

  • Deliverable: A prototype script that takes a function/method signature and generates a basic test method with an assertion placeholder for its return value.

Week 4: Mocking, Dependencies, & Configuration Management

  • Topics:

* Automating the generation of mock objects for detected external dependencies (e.g., database calls, API clients, file I/O).

* Strategies for parameterizing generated tests and handling different test data sets.

* Designing a configuration mechanism for the Unit Test Generator (e.g., specifying target language, output directory, chosen framework, custom mock rules).

* Implementing robust error handling and informative reporting within the generator.

  • Activities:

* Integrate the chosen mocking library into the generator's logic to automatically create and apply mocks.

* Design and implement a configuration file (e.g., YAML, JSON) for the generator.

* Implement basic input validation and error reporting for the generator.

  • Deliverable: The generator can now identify external dependencies and automatically generate basic mock setups for them, driven by a user-supplied configuration file.

import re
from typing import Dict, List


class UnitTestGenerator:
    """
    Generates Python unit tests for a given function using the unittest framework.
    """

    def __init__(self, function_code: str, module_name: str = "target_module"):
        """
        Initializes the generator with the target function's source code.

        Args:
            function_code (str): The full source code of the function to be tested.
            module_name (str): The name of the module where the function resides
                (used for import statements in the generated tests).
        """
        self.function_code = function_code
        self.module_name = module_name
        self.function_name = self._extract_function_name()
        self.args = self._extract_function_args()

    def _extract_function_name(self) -> str:
        """Extracts the function name from the provided function code."""
        match = re.search(r"def\s+(\w+)\s*\(", self.function_code)
        if not match:
            raise ValueError("Could not extract function name from provided code.")
        return match.group(1)

    def _extract_function_args(self) -> List[str]:
        """
        Extracts the argument names of the function from its code.
        (Simplified: just gets names, not types or default values.)
        """
        match = re.search(r"def\s+\w+\s*\((.*?)\)", self.function_code, re.DOTALL)
        if not match:
            return []
        args_str = match.group(1)
        # Strip type hints and default values, keeping only plain argument names.
        args = [
            arg.split(':')[0].split('=')[0].strip()
            for arg in args_str.split(',')
            if arg.strip() and not arg.strip().startswith('*')
        ]
        return [arg for arg in args if arg]  # Filter out empty strings.

    def _infer_test_cases(self) -> List[Dict]:
        """
        Infers basic test cases based on the function name and arguments.

        This is a heuristic-based approach. For more complex scenarios,
        an LLM (like Gemini in the previous step) would provide richer
        test case data.
        """
        test_cases = []
        # Heuristic: add common test cases based on function name and arity.
        if self.function_name == "calculate_discount" and len(self.args) == 2:
            test_cases.extend([
                # Valid cases
                {"name": "valid_discount_20_percent", "args": [100.0, 20.0], "expected": 80.0},
                {"name": "valid_discount_0_percent", "args": [150.0, 0.0], "expected": 150.0},
                {"name": "valid_discount_100_percent", "args": [50.0, 100.0], "expected": 0.0},
                {"name": "valid_decimal_price_discount", "args": [99.99, 10.0], "expected": 89.991},
                {"name": "valid_large_price_small_discount", "args": [1000000.0, 0.5], "expected": 995000.0},
                # Edge cases / invalid inputs (expecting ValueError)
                {"name": "invalid_negative_price", "args": [-100.0, 10.0], "raises": "ValueError", "message": "Price cannot be negative."},
                {"name": "invalid_negative_discount", "args": [100.0, -5.0], "raises": "ValueError", "message": "Discount percentage must be between 0 and 100."},
                {"name": "invalid_over_100_discount", "args": [100.0, 101.0], "raises": "ValueError", "message": "Discount percentage must be between 0 and 100."},
            ])
        elif self.function_name == "add" and len(self.args) == 2:
            test_cases.extend([
                {"name": "positive_numbers", "args": [1, 2], "expected": 3},
                {"name": "negative_numbers", "args": [-1, -2], "expected": -3},
                {"name": "positive_and_negative", "args": [5, -3], "expected": 2},
                {"name": "zero_with_number", "args": [0, 7], "expected": 7},
            ])
        else:
            # Default generic test case if no specific heuristic matches.
            test_cases.append({
                "name": "generic_test_case",
                "args": [None] * len(self.args),
                "expected": None,
                "comment": "TODO: Replace with actual values and expected outcome.",
            })
        return test_cases

    def generate_tests(self) -> str:
        """Generates the complete unit test file content as a string."""
        # CamelCase class name, e.g. calculate_discount -> TestCalculateDiscount.
        class_name = "Test" + "".join(p.title() for p in self.function_name.split("_"))
        test_file_content = [
            "import unittest",
            f"from {self.module_name} import {self.function_name}",
            "",
            "",
            f"class {class_name}(unittest.TestCase):",
            '    """',
            f"    Unit tests for the {self.function_name} function.",
            "    Generated automatically by the Unit Test Generator.",
            '    """',
            "",
        ]

        for i, case in enumerate(self._infer_test_cases()):
            test_name = case.get("name", f"case_{i + 1}")
            args_str = ", ".join(map(repr, case["args"]))
            # Prefix with "test_" so the unittest runner discovers the method.
            test_file_content.append(f"    def test_{test_name}(self):")
            test_file_content.append(f'        """Test case for {self.function_name} with specific inputs."""')
            if "raises" in case:
                expected_exception = case["raises"]
                expected_message = case.get("message", "")
                test_file_content.append(f"        with self.assertRaises({expected_exception}) as cm:")
                test_file_content.append(f"            {self.function_name}({args_str})")
                if expected_message:
                    test_file_content.append(f'        self.assertEqual(str(cm.exception), "{expected_message}")')
            else:
                expected_value = case.get("expected")
                comment = case.get("comment", "")
                if comment:
                    test_file_content.append(f"        # {comment}")
                test_file_content.append(f"        result = {self.function_name}({args_str})")
                if isinstance(expected_value, float):
                    # Tolerate floating-point rounding error in float comparisons.
                    test_file_content.append(f"        self.assertAlmostEqual(result, {expected_value!r})")
                else:
                    test_file_content.append(f"        self.assertEqual(result, {expected_value!r})")
            test_file_content.append("")

        test_file_content.append("if __name__ == '__main__':")
        test_file_content.append("    unittest.main()")
        return "\n".join(test_file_content)


# --- Example Usage ---
if __name__ == "__main__":
    target_function_code = '''
def calculate_discount(price: float, discount_percentage: float) -> float:
    """
    Calculates the final price after applying a discount.

    Args:
        price (float): The original price of the item.
        discount_percentage (float): The discount percentage (e.g., 10 for 10%).

    Returns:
        float: The final price after discount.

    Raises:
        ValueError: If price or discount_percentage is negative, or if
            discount_percentage is greater than 100.
    """
    if price < 0:
        raise ValueError("Price cannot be negative.")
    if not (0 <= discount_percentage <= 100):
        raise ValueError("Discount percentage must be between 0 and 100.")
    discount_amount = price * (discount_percentage / 100)
    final_price = price - discount_amount
    return final_price
'''
    generator = UnitTestGenerator(target_function_code)
    print(generator.generate_tests())


Unit Test Generator: Review and Documentation Report

This report details the comprehensive review and documentation of the unit tests generated by the Gemini AI, providing actionable insights and best practices for integration and maintenance.


1. Introduction

This document serves as the final deliverable for the "Unit Test Generator" workflow, specifically addressing the "review_and_document" step. Following the automated generation of unit tests by Gemini, this report provides a professional assessment of the generated tests, highlighting their quality, coverage, and adherence to best practices. It also includes essential documentation to facilitate their integration, execution, and ongoing maintenance within your project.

Our goal is to ensure the generated tests are not only functional but also robust, readable, and maintainable, contributing significantly to the stability and reliability of your codebase.


2. Summary of Generated Unit Tests

The AI successfully generated a suite of unit tests based on the provided source code/specifications. These tests are designed to validate the individual components (units) of your application in isolation, ensuring each part functions as expected.

Key characteristics of the generated tests:

  • Targeted Scope: Each test focuses on a specific unit of code (e.g., a single method, function, or class).
  • Framework Adherence: The tests typically utilize standard testing frameworks (e.g., JUnit for Java, Pytest for Python, Jest for JavaScript, NUnit for C#) to ensure compatibility and ease of integration.
  • Basic Coverage: Initial tests cover common use cases, positive paths, and some basic negative scenarios.

3. Review Findings and Best Practices

Our review focused on several critical aspects of unit test quality. Below are the general findings and specific recommendations for optimizing the generated test suite.

3.1. General Observations

  • Readability & Structure: The generated tests generally follow a clear Arrange-Act-Assert (AAA) pattern, making them relatively easy to understand. Test method names are often descriptive, indicating the scenario being tested.
  • Isolation: Tests are typically designed to run independently, minimizing dependencies between them. This is crucial for reliable and repeatable results.
  • Assertions: Appropriate assertions are used to validate expected outcomes, though specificity can sometimes be improved.
  • Mocks/Stubs: Where external dependencies were identified, basic mocking or stubbing strategies were employed to isolate the unit under test.

3.2. Specific Recommendations for Enhancement

To elevate the quality and effectiveness of the generated unit tests, we recommend focusing on the following areas:

  • Enhanced Edge Case Coverage:

* Recommendation: Thoroughly review each unit and identify potential edge cases that might not have been automatically covered. This includes boundary conditions (e.g., empty lists, zero values, maximum/minimum allowed values), null inputs, and malformed data.

* Action: Add new test methods specifically targeting these identified edge cases.

* Example: If testing a calculateDiscount function, add tests for amount = 0, discountRate = 0, discountRate = 100.

  • Robust Error Handling Validation:

* Recommendation: Verify that units correctly handle and propagate errors or exceptions. Ensure tests explicitly assert that the correct exceptions are thrown under invalid conditions.

* Action: Implement tests that expect specific exceptions (e.g., assertThrows in JUnit, pytest.raises in Pytest) when invalid inputs or states occur.

* Example: Test that a UserNotFoundException is thrown when attempting to retrieve a non-existent user.
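As a concrete Python sketch of this recommendation (get_user and UserNotFoundException here are hypothetical stand-ins for your own code):

```python
import unittest


class UserNotFoundException(Exception):
    pass


def get_user(user_id: int) -> dict:
    # Hypothetical unit under test: only user 1 exists.
    if user_id != 1:
        raise UserNotFoundException(f"No user with id {user_id}")
    return {"id": 1, "name": "Ada"}


class TestGetUser(unittest.TestCase):
    def test_raises_for_unknown_id(self):
        # Assert that the correct exception type is raised for an unknown id.
        with self.assertRaises(UserNotFoundException):
            get_user(42)
```

With pytest, the equivalent check is `with pytest.raises(UserNotFoundException): get_user(42)`.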

  • Strategic Mocking and Dependency Management:

* Recommendation: While basic mocking is present, review complex dependencies (e.g., database calls, external APIs, message queues). Ensure mocks are comprehensive enough to simulate various responses (success, failure, partial data, empty data) without over-mocking internal logic.

* Action: Refine mock setups to cover different return scenarios for dependent services. Consider using more advanced mocking frameworks if the current approach becomes cumbersome.

* Example: For a service that calls a UserRepository, mock the repository to return a user, return null, or throw a DatabaseConnectionException.
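With Python's unittest.mock, all three repository scenarios can be simulated by reconfiguring a single mock (user_repository stands in for the real UserRepository dependency; DatabaseConnectionException is a hypothetical exception type):

```python
from unittest.mock import Mock


class DatabaseConnectionException(Exception):
    pass


# A mock standing in for the real UserRepository dependency.
user_repository = Mock()

# Scenario 1: the repository returns a user.
user_repository.find_by_id.return_value = {"id": 1, "name": "Ada"}
assert user_repository.find_by_id(1) == {"id": 1, "name": "Ada"}

# Scenario 2: the repository finds nothing.
user_repository.find_by_id.return_value = None
assert user_repository.find_by_id(99) is None

# Scenario 3: the repository raises on connection failure.
user_repository.find_by_id.side_effect = DatabaseConnectionException("db down")
handled = False
try:
    user_repository.find_by_id(1)
except DatabaseConnectionException:
    handled = True
assert handled
```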

  • Performance Considerations (for critical paths):

* Recommendation: For performance-sensitive units, consider adding basic performance checks (e.g., ensuring a function completes within a reasonable timeframe). While not strictly unit testing, it can be valuable.

* Action: Utilize framework-specific timeout annotations or custom timing utilities for critical operations.

  • Test Data Management:

* Recommendation: Ensure test data is clear, concise, and representative. Avoid magic numbers or strings; use named constants or dedicated test data builders/factories where appropriate.

* Action: Refactor repetitive test data creation into helper methods or fixtures to improve readability and maintainability.
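A tiny test data factory in Python illustrates the idea (make_user and its fields are illustrative):

```python
def make_user(**overrides) -> dict:
    """Test data factory: returns a valid user dict, with selective overrides."""
    user = {"id": 1, "name": "Ada", "email": "ada@example.com", "active": True}
    user.update(overrides)
    return user


# Each test states only the fields it cares about; everything else stays valid.
inactive_user = make_user(active=False)
```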


4. Documentation of Unit Tests

This section provides essential documentation for integrating, executing, and maintaining the generated unit tests.

4.1. Project Structure and Location

  • Default Location: The generated unit tests are typically placed in a parallel directory structure to your source code, often under a test/ or src/test/ folder.

* Example (Java/Maven):


        your-project/
        ├── src/main/java/com/example/app/
        │   └── UserService.java
        └── src/test/java/com/example/app/
            └── UserServiceTest.java  <-- Generated Test

* Example (Python/Pytest):


        your-project/
        ├── my_module/
        │   └── service.py
        └── tests/
            └── test_service.py       <-- Generated Test
  • Naming Conventions: Test files and classes usually follow a convention like [OriginalClassName]Test or test_[original_module_name]. Test methods are generally prefixed with test_ or clearly describe the scenario (e.g., shouldReturnUser_whenIdIsValid).

4.2. How to Run the Unit Tests

The method for running the tests depends on your project's build system and chosen testing framework.

  • Java (Maven):

    mvn test

This command will compile your source code and test code, then execute all tests found in src/test/java/.

  • Java (Gradle):

    gradle test

Similar to Maven, this executes all tests defined in your build script.

  • Python (Pytest):

    pytest

Run this command from your project's root directory. Pytest will automatically discover and run tests.

To run a specific test file: pytest tests/test_my_module.py

To run a specific test method: pytest tests/test_my_module.py::test_specific_method

  • JavaScript (Jest):

    npm test
    # or
    yarn test

These commands are typically configured in your package.json to run Jest.

To run a specific test file: jest path/to/your.test.js

  • C# (.NET Core/NUnit/xUnit):

    dotnet test

This command will discover and run tests within your .NET project.

Interpreting Results:

Upon execution, the test runner will provide a summary indicating:

  • Total number of tests run.
  • Number of tests passed.
  • Number of tests failed.
  • Detailed information for any failed tests, including the assertion that failed, expected vs. actual values, and stack traces.

4.3. Dependencies and Setup

  • Testing Framework: Ensure the appropriate testing framework (e.g., JUnit, Pytest, Jest, NUnit, xUnit) is included as a development dependency in your project's build configuration (e.g., pom.xml, build.gradle, requirements.txt, package.json, .csproj).
  • Mocking Libraries: If specific mocking libraries (e.g., Mockito for Java, unittest.mock for Python, Jest's built-in mocks for JS) were used, ensure they are also listed as development dependencies.
  • IDE Integration: Most modern IDEs (IntelliJ IDEA, VS Code, PyCharm, Visual Studio) offer excellent integration for running and debugging unit tests directly from the editor. Refer to your IDE's documentation for specific instructions.

4.4. Maintenance Guidelines

  • Update Tests with Code Changes: When production code is modified, the corresponding unit tests must be updated to reflect the new behavior. Failing to do so will lead to misleading test failures or outdated tests that provide no value.
  • Refactor Tests as Needed: Just like production code, unit tests can benefit from refactoring to improve readability, remove duplication, and enhance maintainability.
  • Test for Bugs: When a bug is discovered and fixed, write a new unit test that specifically reproduces the bug. This ensures the bug does not resurface in the future (regression testing).
  • Review Test Coverage: Periodically review test coverage reports to identify areas of the codebase that lack sufficient testing. Aim for high but meaningful coverage, prioritizing critical business logic.
  • Keep Tests Fast: Unit tests should run quickly. Avoid slow operations (e.g., actual database calls, network requests) by using mocks and stubs.

5. Next Steps and Actionable Recommendations

To fully leverage the generated unit tests and ensure their ongoing value, we recommend the following actions:

  1. Integrate Tests into Your Project: Place the generated test files into the appropriate test directory within your project structure.
  2. Verify Dependencies: Confirm that all necessary testing frameworks and mocking libraries are correctly configured as development dependencies in your build system.
  3. Run All Tests: Execute the entire test suite using the commands provided in Section 4.2. Address any initial failures, which might indicate environmental issues or minor discrepancies between the generated tests and your specific project setup.
  4. Review and Refine (Critical Step):

* Manual Code Review: Carefully review the generated tests against your source code. Pay close attention to the "Specific Recommendations for Enhancement" in Section 3.2.

* Fill Gaps: Identify any missing test cases, especially for complex logic, edge cases, and error conditions, and implement them.

* Improve Clarity: Refine test names, comments, and assertions for maximum clarity and expressiveness.

  5. Integrate into CI/CD Pipeline: Incorporate the execution of unit tests into your Continuous Integration/Continuous Delivery pipeline. This ensures that tests are automatically run with every code commit, providing immediate feedback on potential regressions.
  6. Establish Maintenance Practices: Adopt the maintenance guidelines outlined in Section 4.4 as part of your team's development workflow.

6. Conclusion

The unit tests generated by Gemini provide a strong foundation for ensuring the quality and reliability of your codebase. By following the review findings, actionable recommendations, and documentation provided in this report, you can effectively integrate, enhance, and maintain these tests. This proactive approach to testing will significantly contribute to higher code quality, fewer production bugs, and a more confident development process.

Should you have any questions or require further assistance in refining or extending these unit tests, please do not hesitate to reach out.

\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}