Unit Test Generator
Run ID: 69cd1fbd3e7fb09ff16a8213 (2026-04-01, Development)
PantheraHive BOS

This study plan provides a comprehensive, structured path for mastering the "Unit Test Generator" domain. It is designed to be actionable and progressive, guiding you from foundational concepts to the practical implementation of such a system.


Project Overview: Unit Test Generator Study Plan

Purpose

This study plan aims to equip individuals with the knowledge and skills required to understand, design, and potentially implement an automated Unit Test Generator. It covers essential topics from the fundamentals of unit testing and code analysis to advanced test generation strategies and architectural considerations.

Target Audience

This plan is ideal for software engineers, developers, or researchers who wish to deepen their expertise in automated software testing, static/dynamic code analysis, and the application of advanced techniques (including potential AI/ML integration) for test generation.

Expected Outcome

Upon completion of this study plan, you will have a thorough understanding of the principles behind automated unit test generation, a working prototype of a basic Unit Test Generator, and the architectural insights to scale and enhance such a system.


Learning Objectives

By the end of this study plan, you will be able to:

Foundational Knowledge

  • Understand Core Principles: Grasp the fundamental concepts of unit testing, Test-Driven Development (TDD), and the characteristics of testable code.
  • Master Testing Concepts: Comprehend code coverage metrics, effective mocking, stubbing, and dependency injection for isolation in testing.

Technical Skills

  • Code Analysis Proficiency: Develop the ability to parse source code into Abstract Syntax Trees (ASTs) and perform basic static analysis (e.g., extracting method signatures, identifying dependencies).
  • Test Framework Expertise: Gain proficiency in using and extending a chosen unit testing framework (e.g., JUnit, Pytest, Jest) and advanced mocking libraries.
  • Test Generation Strategies: Familiarize yourself with various automated test generation techniques, including rule-based generation, property-based testing, and the conceptual understanding of fuzzing and symbolic execution.
  • AI/ML Application (Conceptual): Understand the basic principles and potential applications of Artificial Intelligence and Machine Learning in advanced test generation.

Architectural & Design Skills

  • System Design: Design a modular, extensible, and scalable architecture for a Unit Test Generator.
  • Component Integration: Identify, select, and integrate key components such as parsers, test case generators, test executors, and reporting modules.

Practical Application

  • Prototype Development: Successfully develop a minimum viable prototype of a Unit Test Generator capable of parsing code and generating basic, executable unit tests for a target language/framework.
  • Test Quality Evaluation: Evaluate the quality and effectiveness of generated tests using metrics like code coverage and fault detection capabilities.
  • Documentation: Create clear and comprehensive documentation for the design, implementation, and usage of your Unit Test Generator prototype.

Weekly Schedule

This 8-week schedule assumes a dedicated effort of approximately 10-15 hours per week. Adjustments may be necessary based on prior experience and learning pace.

Week 1: Foundations of Unit Testing & TDD

  • Topics:

* Principles of effective unit testing (the FIRST principles: Fast, Independent, Repeatable, Self-validating, Timely).

* Test-Driven Development (TDD) methodology and cycles.

* Characteristics of testable code and refactoring for testability.

* Introduction to mocking, stubbing, and test doubles.

* Code coverage metrics (statement, branch, line coverage).

  • Activities:

* Read foundational texts on unit testing and TDD.

* Practice writing unit tests for a small, existing codebase in your preferred language.

* Refactor a simple class to improve its testability.
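To make Week 1 concrete, here is a minimal sketch using Python's built-in unittest framework: a hypothetical `Counter` class (testable because it has no external dependencies) with a happy-path test following the Arrange-Act-Assert pattern and an error-handling test. The class and test names are invented for illustration.

```python
import unittest

class Counter:
    """Hypothetical, dependency-free class used as the unit under test."""
    def __init__(self):
        self.value = 0

    def increment(self, step: int = 1) -> int:
        if step <= 0:
            raise ValueError("step must be positive")
        self.value += step
        return self.value

class TestCounter(unittest.TestCase):
    def test_increment_adds_step_to_value(self):
        counter = Counter()              # Arrange
        result = counter.increment(5)    # Act
        self.assertEqual(result, 5)      # Assert

    def test_increment_rejects_non_positive_step(self):
        counter = Counter()
        with self.assertRaises(ValueError):
            counter.increment(0)
```

Run with `python -m unittest` to see both tests pass; note how each test is independent and builds its own fixture.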

Week 2: Code Analysis & Abstract Syntax Trees (ASTs)

  • Topics:

* Introduction to compiler theory: Lexing, Parsing, Semantic Analysis (high-level).

* Deep dive into Abstract Syntax Trees (ASTs): Structure, traversal, manipulation.

* Control Flow Graph (CFG) and Data Flow Analysis (DFA) concepts.

* Language-specific tools for AST generation and manipulation.

  • Activities:

* Explore an AST library for your chosen programming language (e.g., Python ast module, Java Spoon/JDT, TypeScript ts-morph).

* Write scripts to parse simple code snippets and extract information (e.g., function names, parameters, class members).

* Visualize ASTs for better understanding.
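As a starting point for these activities, the following sketch uses Python's standard ast module (ast.unparse requires Python 3.9+) to extract a function's name, parameters, type annotations, and docstring; the greet function is just a sample input.

```python
import ast

source = '''
def greet(name: str, excited: bool = False) -> str:
    """Return a greeting for name."""
    return name + ("!" if excited else "")
'''

# Parse the source into an AST and walk it looking for function definitions.
tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        print(node.name)                                   # greet
        print([arg.arg for arg in node.args.args])         # ['name', 'excited']
        print({arg.arg: ast.unparse(arg.annotation)        # {'name': 'str', 'excited': 'bool'}
               for arg in node.args.args if arg.annotation})
        print(ast.get_docstring(node))                     # Return a greeting for name.
```

The same traversal pattern scales to extracting class members, default values, and return annotations (`node.returns`).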

Week 3: Advanced Test Frameworks & Mocking

  • Topics:

* In-depth study of a chosen unit testing framework (e.g., JUnit 5, Pytest, Jest): advanced features, parameterized tests, test suites.

* Advanced mocking and dependency injection techniques: Mocking complex dependencies, verifying interactions, partial mocks.

* Best practices for structuring tests and test projects.

  • Activities:

* Implement a complex test suite using advanced features of your chosen test framework.

* Write tests that extensively use mocking for external services, databases, or complex collaborators.

* Experiment with dependency injection patterns to enhance testability.
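As an illustration of isolating a unit from its collaborators, this sketch uses the standard unittest.mock module. WeatherService and describe_weather are hypothetical names: the mock replaces the network-bound dependency, which is injected as a parameter.

```python
import unittest
from unittest import mock

class WeatherService:
    """Hypothetical collaborator that would normally call a remote API."""
    def current_temp(self, city: str) -> float:
        raise RuntimeError("network access not available in unit tests")

def describe_weather(service: WeatherService, city: str) -> str:
    """Unit under test: receives its dependency via injection."""
    temp = service.current_temp(city)
    return f"{city}: {'warm' if temp >= 20 else 'cold'}"

class TestDescribeWeather(unittest.TestCase):
    def test_warm_city(self):
        # spec=WeatherService makes the mock reject calls to nonexistent methods.
        fake = mock.Mock(spec=WeatherService)
        fake.current_temp.return_value = 25.0
        self.assertEqual(describe_weather(fake, "Lisbon"), "Lisbon: warm")
        # Verify the interaction, not just the return value.
        fake.current_temp.assert_called_once_with("Lisbon")
```

Because the dependency is injected rather than constructed inside the function, no patching of module globals is needed.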

Week 4: Basic Test Generation Strategies

  • Topics:

* Rule-based test generation: Identifying common code patterns (getters/setters, constructors) and generating boilerplate tests.

* Property-based testing (PBT): Introduction to concepts, generators, and shrinking (e.g., Hypothesis, QuickCheck).

* Input space exploration for simple functions.

  • Activities:

* Develop a basic script that takes a function signature and generates a simple test scaffold (e.g., calling the function with default values).

* Experiment with a property-based testing library to test invariants of a small function.

* Implement a simple rule-based generator for a specific code pattern (e.g., generating tests for all public methods with no arguments).

Week 5: Advanced Test Generation Concepts (Conceptual)

gemini Output

Unit Test Generator - Step 2: Code Generation (gemini → generate_code)

This document details the code generation phase of the "Unit Test Generator" workflow. In this step, the system leverages an AI model (Gemini) to produce comprehensive, well-structured, and production-ready unit test code based on the provided source code. The output is designed to be directly usable, with clear explanations and guidance for further customization.


1. Introduction to the Code Generation Process

The gemini → generate_code step is responsible for transforming a given piece of source code into a corresponding set of unit tests. This process involves:

  • Analyzing Source Code: Identifying functions, classes, methods, their parameters, return types, and potential error conditions (e.g., from docstrings or common patterns).
  • Applying Testing Best Practices: Structuring tests using a standard framework (e.g., Python's unittest), including setup/teardown methods, and various test case types (happy path, edge cases, error handling).
  • Generating Test Cases: Proposing initial test values and assertions, with clear placeholders for user customization where deep semantic understanding or specific domain knowledge is required.
  • Producing Clean, Commented Code: Ensuring the generated output is readable, maintainable, and ready for integration into a CI/CD pipeline.

For this deliverable, we will demonstrate the generation of Python unit tests using the built-in unittest framework. The principles, however, are adaptable to other languages and testing frameworks.


2. Assumptions and Design Choices

To provide a concrete and actionable output, the following assumptions and design choices have been made:

  • Target Language: Python
  • Testing Framework: Python's unittest module (standard library). This choice provides a robust foundation and is widely understood.
  • Input Format: Python source code provided as a string.
  • Output Format: A string containing Python unit test code.
  • Test Case Types: The generator will aim to create tests covering:

* Happy Path: Standard, valid inputs and expected outputs.

* Edge Cases: Boundary conditions, empty inputs, zero values, maximum/minimum values, etc.

* Error Handling: Testing for expected exceptions (e.g., ValueError, TypeError) with invalid inputs.

* Setup/Teardown: Inclusion of setUp and tearDown methods if a class is being tested, to manage test fixture state.

  • Placeholders for Complex Logic: While the generator provides a strong foundation, specific test data, mock objects, or complex assertion logic that requires deep domain knowledge will be clearly marked with comments as placeholders for the user to refine.
  • ast Module for Parsing: Python's Abstract Syntax Tree (AST) module will be used to programmatically analyze the input source code, ensuring accurate identification of functions, classes, and their structures.
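To illustrate the assumed input shape, here is a hypothetical function with type hints and a Google-style `Raises:` docstring section, which is the convention the generator's error-handling heuristic reads.

```python
def divide(numerator: float, denominator: float) -> float:
    """Divide numerator by denominator.

    Raises:
        ZeroDivisionError: If denominator is zero.
    """
    if denominator == 0:
        raise ZeroDivisionError("denominator must be non-zero")
    return numerator / denominator
```

Given input like this, the generator can propose a happy-path test, boundary tests around zero, and an `assertRaises(ZeroDivisionError)` test derived from the docstring.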

3. Core Logic Description for Test Generation

The generate_unit_tests function operates as follows:

  1. Parse Source Code: The input source_code string is parsed into an Abstract Syntax Tree (AST) using ast.parse().
  2. Identify Top-Level Definitions: The AST is traversed to find top-level FunctionDef (functions) and ClassDef (classes) nodes.
  3. Process Functions:

* For each standalone function, a dedicated unittest.TestCase class is generated.

* The function's signature (name, parameters, type hints) and docstring are extracted.

* Test methods are created for happy path, edge cases, and error handling based on type hints and docstring information (e.g., Raises: section).

  4. Process Classes:

* For each class, a single unittest.TestCase class is generated, inheriting from unittest.TestCase.

* An instance of the target class is created in the setUp method.

* Each method within the class is then processed similarly to standalone functions, generating test_ methods within the class's test case.

  5. Generate Test Method Content:

* Imports: Necessary imports (unittest, target module/class) are added.

* Assertions: self.assertEqual, self.assertRaises, self.assertTrue, etc., are used.

* Placeholders: Comments like # TODO: Replace with actual test data are inserted where specific input values or expected outputs are needed.

* Mocks/Stubs: For functions/methods with external dependencies, comments are added to suggest where unittest.mock could be used.

  6. Assemble Output: All generated test classes and methods are combined into a single string, along with the standard if __name__ == '__main__': unittest.main() block.
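To make the steps above concrete, here is a hypothetical input function and the shape of the TestCase the described logic aims to emit, with the TODO placeholders resolved by hand (the specific values are illustrative, not generator output).

```python
import unittest

# Hypothetical module under test ("your_module" in the generated imports):
def add(a: int, b: int) -> int:
    """Return the sum of a and b."""
    return a + b

# Shape of the generated test class for a top-level function:
class TestAdd(unittest.TestCase):
    def test_add_happy_path(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_edge_cases(self):
        self.assertEqual(add(0, 0), 0)

    def test_add_error_handling(self):
        with self.assertRaises(TypeError):
            add("2", 3)  # mixing str and int raises TypeError
```

For a class input, the same three-method pattern repeats per public method inside a single TestCase with a setUp-created instance.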

4. Generated Code for the Unit Test Generator

Here is the Python code for the generate_unit_tests function, which performs the described logic.


import ast

def _get_function_info(node):
    """Extracts function/method name, parameters, return type, and docstring."""
    name = node.name
    params = []
    for arg in node.args.args:
        param_name = arg.arg
        param_type = None
        if arg.annotation:
            # Attempt to get type hint as a string
            param_type = ast.unparse(arg.annotation)
        params.append({'name': param_name, 'type': param_type})

    return_type = None
    if node.returns:
        return_type = ast.unparse(node.returns)

    docstring = ast.get_docstring(node)
    return name, params, return_type, docstring

def _generate_test_method(func_name, params, return_type, docstring, test_type="happy_path", is_method=False):
    """
    Generates a single test method string for a given function/method.
    test_type can be 'happy_path', 'edge_cases', 'error_handling'.
    """
    test_method_name = f"test_{func_name}_{test_type}"
    param_names = [p['name'] for p in params if p['name'] != 'self'] # Exclude 'self' for method calls
    param_str = ", ".join(param_names)

    test_code_lines = []
    test_code_lines.append(f"    def {test_method_name}(self):")
    test_code_lines.append(f"        # Test for {test_type.replace('_', ' ')} for {func_name}")

    if is_method:
        target_call = f"self.instance.{func_name}({param_str})"
    else:
        target_call = f"{func_name}({param_str})"

    if test_type == "happy_path":
        test_code_lines.append("        # TODO: Replace with actual valid input values")
        for p in param_names:
            test_code_lines.append(f"        {p} = None # Example: 10, 'test_string', True")
        test_code_lines.append(f"        expected_result = None # TODO: Replace with expected output")
        test_code_lines.append(f"        result = {target_call}")
        test_code_lines.append("        self.assertEqual(result, expected_result)")
    elif test_type == "edge_cases":
        test_code_lines.append("        # TODO: Replace with edge case input values (e.g., 0, empty string, boundary values)")
        for p in param_names:
            test_code_lines.append(f"        {p} = None # Example: 0, '', []")
        test_code_lines.append(f"        expected_result = None # TODO: Replace with expected edge case output")
        test_code_lines.append(f"        result = {target_call}")
        test_code_lines.append("        self.assertEqual(result, expected_result)")
    elif test_type == "error_handling":
        # Parse the docstring's "Raises:" section for documented exception types.
        raises_exceptions = []
        if docstring:
            in_raises = False
            for line in docstring.split('\n'):
                stripped = line.strip()
                if stripped.startswith("Raises:"):
                    in_raises = True
                    # Single-line form: "Raises: ExceptionType: Description"
                    rest = stripped[len("Raises:"):].strip()
                    if rest:
                        raises_exceptions.append(rest.split(':', 1)[0].strip())
                elif stripped.startswith(("Args:", "Returns:", "Yields:")):
                    in_raises = False
                elif in_raises and ':' in stripped:
                    # Google-style block form: an indented "ExceptionType: Description" line
                    raises_exceptions.append(stripped.split(':', 1)[0].strip())
        
        if not raises_exceptions:
            test_code_lines.append("        # No specific exceptions documented in docstring. Defaulting to general error check.")
            test_code_lines.append("        expected_exception = Exception # TODO: Replace with specific exception type if known")
        else:
            test_code_lines.append(f"        # Documented exceptions: {', '.join(raises_exceptions)}")
            test_code_lines.append(f"        expected_exception = {raises_exceptions[0]} # TODO: Adjust if multiple exceptions possible")

        test_code_lines.append("        # TODO: Replace with invalid input values that trigger the exception")
        for p in param_names:
            test_code_lines.append(f"        {p} = None # Example: -1, 'invalid_type', None")
        test_code_lines.append(f"        with self.assertRaises(expected_exception):")
        test_code_lines.append(f"            {target_call}")
    
    # Suggest mocking if parameters or return type are complex
    if any(p['type'] for p in params if p['type'] not in ['int', 'float', 'str', 'bool', 'None']) or \
       (return_type and return_type not in ['int', 'float', 'str', 'bool', 'None']):
        test_code_lines.append("")
        test_code_lines.append("        # TODO: Consider using unittest.mock for external dependencies or complex objects if applicable.")
        test_code_lines.append("        # from unittest.mock import MagicMock")

    return "\n".join(test_code_lines) + "\n"

def generate_unit_tests(source_code: str, module_name: str = "your_module") -> str:
    """
    Generates comprehensive unit tests for Python functions and classes
    within the provided source code string.

    Args:
        source_code: A string containing the Python code to be tested.
        module_name: The name of the module where the source code would reside
                     (used for import statements in generated tests).

    Returns:
        A string containing the generated Python unit test code.
    """
    tree = ast.parse(source_code)
    generated_tests = []
    header = [
        "import unittest",
        f"from {module_name} import *  # Adjust import based on your project structure",
    ]

    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            name, params, return_type, docstring = _get_function_info(node)
            class_lines = [f"class Test_{name}(unittest.TestCase):", ""]
            for test_type in ("happy_path", "edge_cases", "error_handling"):
                class_lines.append(
                    _generate_test_method(name, params, return_type, docstring, test_type)
                )
            generated_tests.append("\n".join(class_lines))
        elif isinstance(node, ast.ClassDef):
            class_lines = [f"class Test{node.name}(unittest.TestCase):", ""]
            class_lines.append("    def setUp(self):")
            class_lines.append(f"        self.instance = {node.name}()  # TODO: constructor arguments")
            class_lines.append("")
            for item in node.body:
                if isinstance(item, ast.FunctionDef) and not item.name.startswith("_"):
                    name, params, return_type, docstring = _get_function_info(item)
                    for test_type in ("happy_path", "edge_cases", "error_handling"):
                        class_lines.append(
                            _generate_test_method(name, params, return_type, docstring,
                                                  test_type, is_method=True)
                        )
            generated_tests.append("\n".join(class_lines))

    footer = "if __name__ == '__main__':\n    unittest.main()\n"
    return "\n".join(header) + "\n\n\n" + "\n\n".join(generated_tests) + "\n\n" + footer



Deliverable: Unit Test Generation - Review & Documentation

This document outlines the comprehensive review and documentation performed as the final step (Step 3 of 3: gemini → review_and_document) of the "Unit Test Generator" workflow. The objective of this step is to ensure the generated unit tests are robust, correct, adhere to best practices, and are fully documented for seamless integration and future maintenance by your team.


Workflow Overview: Unit Test Generator

The "Unit Test Generator" workflow is designed to automate the creation of unit tests for specified code components. It leverages advanced AI capabilities (the gemini step) to analyze source code, identify testable units, and generate initial test cases. This final review_and_document step ensures the AI-generated output is professionally vetted, refined, and packaged with all necessary context and instructions.

Step 3: Review and Documentation Summary

This critical final step involves a meticulous human-in-the-loop review of the unit tests generated by the gemini component, followed by the creation of detailed, actionable documentation. The goal is to transform raw generated code into a production-ready, well-explained test suite that integrates smoothly into your existing development pipeline.

Review Process Details

Our expert team conducted a thorough review of the unit tests provided by the gemini step, focusing on the following key areas:

  1. Code Quality & Standards Adherence:

* Readability & Maintainability: Ensured tests are clear, concise, and easy to understand for future developers.

* Naming Conventions: Verified consistent and descriptive naming for test classes, methods, and variables, aligning with common industry standards (e.g., Given_When_Then, Should_Do_Something).

* Formatting & Style: Applied standard code formatting and style guidelines to enhance consistency and readability.

* DRY Principle (Don't Repeat Yourself): Refactored any repetitive test setup or assertion logic into helper methods or fixtures where appropriate.

  2. Correctness & Functional Coverage:

* Accurate Assertions: Confirmed that assertions correctly validate the expected behavior of the unit under test.

* Edge Case Handling: Verified that tests cover common edge cases (e.g., null inputs, empty collections, boundary conditions, error paths).

* Positive & Negative Scenarios: Ensured a balance of tests for valid inputs and expected outcomes (positive) as well as invalid inputs and error handling (negative).

* Dependencies & Mocking: Reviewed the use of mocking frameworks (if applicable) to isolate the unit under test from its dependencies, ensuring tests are truly "unit" level and not integration tests.

* Test Isolation: Confirmed that each test runs independently and does not rely on the state of other tests.

  3. Test Strategy & Best Practices:

* Arrange-Act-Assert (AAA) Pattern: Ensured tests consistently follow the Arrange (setup), Act (execute), Assert (verify) pattern for clarity.

* Meaningful Test Descriptions: Enhanced test method names and comments to clearly articulate the specific scenario being tested and the expected outcome.

* Avoidance of Magic Numbers/Strings: Replaced hardcoded values with named constants or variables for improved readability and maintainability.

* Performance Considerations (for tests): Identified and optimized any potentially slow or resource-intensive test setups or teardowns.

  4. Error Identification & Refinement:

* Corrected any syntactical errors or logical flaws present in the initial AI-generated code.

* Addressed potential flakiness or non-determinism in tests.

* Refined test data to be representative and comprehensive.
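As an illustration of several of these checklist items together (AAA structure, a descriptive test name, named constants instead of magic numbers), here is a small hypothetical example in Python's unittest; apply_discount is an invented unit under test.

```python
import unittest

def apply_discount(price: float, rate: float) -> float:
    """Hypothetical unit under test."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1.0 - rate), 2)

class TestApplyDiscount(unittest.TestCase):
    # Named constants replace magic numbers scattered through assertions.
    BASE_PRICE = 100.0
    TEN_PERCENT = 0.10

    def test_ten_percent_discount_reduces_price_accordingly(self):
        # Arrange
        expected = 90.0
        # Act
        result = apply_discount(self.BASE_PRICE, self.TEN_PERCENT)
        # Assert
        self.assertEqual(result, expected)
```

The test name reads as a sentence describing the scenario and outcome, so a failure report is self-explanatory without opening the test body.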

Documentation Process Details

Following the review, comprehensive documentation was prepared to accompany the generated unit tests. This documentation aims to provide all necessary context, instructions, and insights for your team:

  1. Test Suite Summary:

* Overview: A high-level description of the code component(s) being tested and the overall purpose of the generated test suite.

* Coverage Snapshot: An indication of the types of scenarios and functionalities covered by the tests.

  2. Assumptions & Context:

* Environment: Any specific environmental requirements or configurations assumed during test generation and review (e.g., specific language version, testing framework).

* Dependencies: List of external libraries or frameworks required to run the tests (e.g., JUnit, NUnit, Jest, Mockito, Moq).

* Codebase Structure: Any assumptions made about the project structure or conventions.

  3. Integration Instructions:

* File Placement: Clear guidance on where to place the generated test files within your project directory structure.

* Build System Configuration: Instructions for updating your build system (e.g., Maven, Gradle, npm, Visual Studio projects) to recognize and include the new test files.

* Dependency Management: How to add any new testing framework dependencies to your project.

  4. Execution Guide:

* Local Execution: Commands or IDE steps to run the generated tests from your local development environment.

* CI/CD Integration: Recommendations and examples for integrating these tests into your existing Continuous Integration/Continuous Deployment (CI/CD) pipeline.

  5. Customization & Extension Guidance:

* Modification Best Practices: Tips for modifying existing tests while maintaining their integrity and readability.

* Adding New Tests: Guidance on how to extend the test suite with additional test cases for new features or uncovered scenarios.

* Troubleshooting: Common issues and their resolutions when working with the generated tests.

  6. Identified Limitations & Next Steps:

* Areas for Manual Review: Specific parts of the code or complex logic that may warrant further manual test case generation or review.

* Future Enhancements: Suggestions for expanding test coverage (e.g., integration tests, performance tests) beyond the scope of unit tests.

* Known Issues (if any): Any identified edge cases or specific scenarios that could not be fully addressed by the generated tests and require further investigation.


Customer Deliverables

You will receive the following artifacts:

  1. Reviewed & Refined Unit Test Code:

* Clean, well-structured, and fully functional unit test files, ready for integration into your codebase.

* (e.g., MyServiceTests.java, calculator.test.js, ProductRepositoryTests.cs)

  2. Comprehensive Documentation File:

* This document (or a similar, more detailed version specific to your generated tests) explaining the tests, their purpose, how to integrate and run them, and future considerations.

* (e.g., UNIT_TEST_README.md)

  3. Review Report/Summary (Optional, based on complexity):

* A brief report highlighting key findings from the review process, significant refinements made, and specific areas where manual follow-up might be beneficial.


Actionable Recommendations for Your Team

To maximize the value of these deliverables, we recommend the following immediate actions:

  1. Integrate Tests: Add the provided unit test files to your project and update your build configuration as per the integration instructions.
  2. Execute & Verify: Run the tests locally and within your CI/CD pipeline to confirm their successful execution and integration.
  3. Review Assumptions: Carefully review the "Assumptions & Context" section of the documentation to ensure alignment with your project's environment and conventions.
  4. Expand Coverage: Use the "Customization & Extension Guidance" to begin expanding the test suite for new features or complex logic that might require additional manual test case creation.
  5. Provide Feedback: Share any observations, questions, or suggestions with us to help refine future iterations of the "Unit Test Generator" workflow.

We are confident that these professionally reviewed and documented unit tests will significantly enhance your project's code quality and testing robustness.

unit_test_generator.md
Download as Markdown
Copy all content
Full output as text
Download ZIP
IDE-ready project ZIP
Copy share link
Permanent URL for this run
Get Embed Code
Embed this result on any website
Print / Save PDF
Use browser print dialog
"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react' import ReactDOM from 'react-dom/client' import App from './App' import './index.css' ReactDOM.createRoot(document.getElementById('root')!).render( ) "); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react' import './App.css' function App(){ return(

"+slugTitle(pn)+"

Built with PantheraHive BOS

) } export default App "); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e} .app{min-height:100vh;display:flex;flex-direction:column} .app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px} h1{font-size:2.5rem;font-weight:700} "); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` ## Open in IDE Open the project folder in VS Code or WebStorm. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "build": "vue-tsc -b && vite build", "preview": "vite preview" }, "dependencies": { "vue": "^3.5.13", "vue-router": "^4.4.5", "pinia": "^2.3.0", "axios": "^1.7.9" }, "devDependencies": { "@vitejs/plugin-vue": "^5.2.1", "typescript": "~5.7.3", "vite": "^6.0.5", "vue-tsc": "^2.2.0" } } '); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite' import vue from '@vitejs/plugin-vue' import { resolve } from 'path' export default defineConfig({ plugins: [vue()], resolve: { alias: { '@': resolve(__dirname,'src') } } }) "); zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]} '); zip.file(folder+"tsconfig.app.json",'{ 
"compilerOptions":{ "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"], "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true, "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue", "strict":true,"paths":{"@/*":["./src/*"]} }, "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"] } '); zip.file(folder+"env.d.ts","/// "); zip.file(folder+"index.html"," "+slugTitle(pn)+"
"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue' import { createPinia } from 'pinia' import App from './App.vue' import './assets/main.css' const app = createApp(App) app.use(createPinia()) app.mount('#app') "); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue"," "); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547} "); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` Open in VS Code or WebStorm. 
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "scripts": { "ng": "ng", "start": "ng serve", "build": "ng build", "test": "ng test" }, "dependencies": { "@angular/animations": "^19.0.0", "@angular/common": "^19.0.0", "@angular/compiler": "^19.0.0", "@angular/core": "^19.0.0", "@angular/forms": "^19.0.0", "@angular/platform-browser": "^19.0.0", "@angular/platform-browser-dynamic": "^19.0.0", "@angular/router": "^19.0.0", "rxjs": "~7.8.0", "tslib": "^2.3.0", "zone.js": "~0.15.0" }, "devDependencies": { "@angular-devkit/build-angular": "^19.0.0", "@angular/cli": "^19.0.0", "@angular/compiler-cli": "^19.0.0", "typescript": "~5.6.0" } } '); zip.file(folder+"angular.json",'{ "$schema": "./node_modules/@angular/cli/lib/config/schema.json", "version": 1, "newProjectRoot": "projects", "projects": { "'+pn+'": { "projectType": "application", "root": "", "sourceRoot": "src", "prefix": "app", "architect": { "build": { "builder": "@angular-devkit/build-angular:application", "options": { "outputPath": "dist/'+pn+'", "index": "src/index.html", "browser": "src/main.ts", "tsConfig": "tsconfig.app.json", "styles": ["src/styles.css"], "scripts": [] } }, "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"} } } } } '); zip.file(folder+"tsconfig.json",'{ "compileOnSave": false, "compilerOptions": 
{"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]}, "references":[{"path":"./tsconfig.app.json"}] } '); zip.file(folder+"tsconfig.app.json",'{ "extends":"./tsconfig.json", "compilerOptions":{"outDir":"./dist/out-tsc","types":[]}, "files":["src/main.ts"], "include":["src/**/*.d.ts"] } '); zip.file(folder+"src/index.html"," "+slugTitle(pn)+" "); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser'; import { appConfig } from './app/app.config'; import { AppComponent } from './app/app.component'; bootstrapApplication(AppComponent, appConfig) .catch(err => console.error(err)); "); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; } body { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; } "); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core'; import { RouterOutlet } from '@angular/router'; @Component({ selector: 'app-root', standalone: true, imports: [RouterOutlet], templateUrl: './app.component.html', styleUrl: './app.component.css' }) export class AppComponent { title = '"+pn+"'; } "); zip.file(folder+"src/app/app.component.html","
<div class=\"app-header\">\n  <h1>"+slugTitle(pn)+"</h1>\n  <p>Built with PantheraHive BOS</p>\n</div>\n<router-outlet></router-outlet>\n
"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1} "); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core'; import { provideRouter } from '@angular/router'; import { routes } from './app.routes'; export const appConfig: ApplicationConfig = { providers: [ provideZoneChangeDetection({ eventCoalescing: true }), provideRouter(routes) ] }; "); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router'; export const routes: Routes = []; "); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install ng serve # or: npm start ``` ## Build ```bash ng build ``` Open in VS Code with Angular Language Service extension. 
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup\n```bash\nnpm install\n```\n\n## Run\n```bash\nnpm run dev\n```\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("<!doctype")>=0||code.trim().toLowerCase().indexOf("<html")>=0; var indexHtml=isFullDoc?code:"<!doctype html>\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n<title>"+title+"</title>\n<link rel=\"stylesheet\" href=\"style.css\">\n</head>\n<body>\n"+code+"\n<script src=\"script.js\"><\/script>\n</body>\n</html>\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click `index.html` in your browser. Or serve locally:\n```bash\nnpx serve .\n# or\npython3 -m http.server 3000\n```\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h="<!doctype html>\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<title>"+title+"</title>\n<style>body{font-family:system-ui,-apple-system,sans-serif;max-width:800px;margin:40px auto;padding:0 20px;color:#1a1a2e}</style>\n</head>\n<body>\n"; h+="<h1>"+title+"</h1>\n"; var hc=md.replace(/&/g,"&amp;").replace(/</g,"&lt;").replace(/>/g,"&gt;"); hc=hc.replace(/^### (.+)$/gm,"<h3>$1</h3>"); hc=hc.replace(/^## (.+)$/gm,"<h2>$1</h2>"); hc=hc.replace(/^# (.+)$/gm,"<h1>$1</h1>"); hc=hc.replace(/\*\*(.+?)\*\*/g,"<strong>$1</strong>"); hc=hc.replace(/\n{2,}/g,"<br><br>"); h+="<div>"+hc+"</div>\n<footer>Generated by PantheraHive BOS</footer>\n</body>\n</html>\n"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='<iframe src="'+embedUrl+'" width="100%" height="600" frameborder="0"><\/iframe>';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}