Unit Test Generator
Run ID: 69cbe35861b1021a29a8d31f2026-03-31Development
PantheraHive BOS
BOS Dashboard

Deliverable: Generated Unit Tests for Target Code

This document provides the generated unit tests, a crucial output from Step 2 of the "Unit Test Generator" workflow. The goal of this step is to produce clean, well-commented, and production-ready code that rigorously tests the functionality of your application components.


1. Overview

This output delivers a set of unit tests designed to verify the correctness and robustness of a hypothetical Calculator class. Since no specific code was provided in the initial request, we have proceeded with a common and illustrative example to demonstrate the capabilities of the Unit Test Generator. The generated tests cover various scenarios, including normal operations, edge cases, and error conditions, ensuring a high degree of code coverage.

The tests are written in Python using the standard unittest framework, a widely adopted and powerful tool for unit testing.


2. Assumptions Made

To provide concrete and actionable output, the following assumptions were made:


3. Proposed Code for Testing (Example: calculator.py)

Before presenting the unit tests, here is the sample Calculator class that the generated tests are designed to validate. This code would typically be provided by the user or extracted in a prior step of the workflow.

text • 2,192 chars
---

### 4. Generated Unit Tests (`test_calculator.py`)

#### 4.1. Code Explanation

The generated unit tests follow best practices for the `unittest` framework:

*   **`import unittest`**: Imports the necessary testing framework.
*   **`from calculator import Calculator`**: Imports the class to be tested.
*   **`class TestCalculator(unittest.TestCase)`**: Defines a test class that inherits from `unittest.TestCase`. This inheritance provides access to various assertion methods (e.g., `assertEqual`, `assertTrue`, `assertRaises`).
*   **`setUp(self)`**: This method is called *before* each test method is run. It's used to set up any common prerequisites, such as initializing an instance of the class under test.
*   **Test Methods (`test_...`)**: Each method starting with `test_` is treated as an individual test case.
    *   **Descriptive Names**: Test method names are descriptive, indicating the functionality being tested and the specific scenario (e.g., `test_add_positive_numbers`, `test_divide_by_zero`).
    *   **Assertions**: `self.assertEqual()` is used to check if the actual output matches the expected output. `self.assertRaises()` is used to verify that specific exceptions are raised under error conditions.
    *   **Comments**: Inline comments explain the purpose of complex assertions or specific test logic.
*   **`if __name__ == '__main__': unittest.main()`**: This block allows the test file to be run directly from the command line, executing all tests defined within the `TestCalculator` class.

#### 4.2. Test Cases Covered

The generated unit tests cover a comprehensive range of scenarios for the `add` and `divide` methods, including:

*   **Addition (`add` method):**
    *   Positive integers
    *   Negative integers
    *   Mixed positive and negative integers
    *   Addition with zero
    *   Floating-point numbers
    *   Non-numeric input (expected `TypeError`)
*   **Division (`divide` method):**
    *   Positive integers
    *   Division resulting in a float
    *   Negative integers
    *   Division by one
    *   Division by zero (expected `ValueError`)
    *   Non-numeric input (expected `TypeError`)

#### 4.3. Production-Ready Code

Sandboxed live preview

Unit Test Generator: Comprehensive Study Plan

This document outlines a detailed and professional study plan designed to equip you with a deep understanding and practical mastery of unit testing, laying a robust foundation for the "Unit Test Generator" workflow. This plan emphasizes both theoretical knowledge and hands-on application, culminating in the ability to design and potentially contribute to automated unit test generation.


1. Executive Summary

This study plan provides a structured, four-week curriculum for mastering unit testing principles, advanced techniques, and their application in modern software development. It encompasses fundamental concepts, practical application with various frameworks and tools, and an introduction to the methodologies behind automated test generation. The plan is designed to be actionable, with clear objectives, weekly schedules, recommended resources, and concrete assessment strategies.

2. Study Plan Goal

The overarching goal of this study plan is to empower the learner with comprehensive theoretical knowledge and practical expertise in designing, writing, and understanding robust unit tests. Furthermore, it aims to cultivate an understanding of the underlying principles and challenges involved in automating unit test creation, directly preparing for the development or enhancement of a "Unit Test Generator."

3. Learning Objectives

Upon successful completion of this study plan, you will be able to:

  • Understand Core Concepts (Knowledge):

* Articulate the benefits, limitations, and strategic role of unit testing within the broader software development lifecycle.

* Distinguish between different testing levels (unit, integration, E2E) and identify appropriate scenarios for unit tests.

* Explain key unit testing principles, including FIRST (Fast, Independent, Repeatable, Self-validating, Timely) and the Arrange-Act-Assert (AAA) pattern.

* Identify and correctly apply various test doubles (mocks, stubs, fakes, spies) to manage dependencies and isolate units under test.

* Describe the principles of Test-Driven Development (TDD) and its impact on design and code quality.

* Comprehend the concept of code testability and how software design choices influence the ease of unit testing.

  • Apply Practical Skills (Application):

* Design and write effective, maintainable, and robust unit tests for diverse code structures (functions, classes, modules) in a chosen programming language.

* Utilize a leading unit testing framework (e.g., JUnit, Pytest, Jest) to implement test suites, including advanced features like parameterized tests.

* Employ mocking/stubbing frameworks to effectively isolate units and simulate external dependencies.

* Integrate unit tests into Continuous Integration/Continuous Delivery (CI/CD) pipelines to ensure automated validation.

* Analyze existing code for testability issues and refactor it to improve test coverage and maintainability.

* Interpret code coverage reports and understand their implications for test suite completeness.

Specifically for Generator:* Analyze code patterns and heuristics that could inform the automated generation of unit test cases.

  • Analyze and Evaluate (Analysis & Evaluation):

* Critically evaluate the quality and effectiveness of existing unit test suites.

* Compare and contrast different unit test generation methodologies and tools.

* Propose design improvements for software components to enhance their testability.

4. Weekly Schedule

This intensive four-week schedule assumes approximately 15-20 hours of dedicated study and practical application per week.

Target Language: While the concepts are universal, practical exercises will be most effective if focused on one primary language (e.g., Java, Python, C#, JavaScript).


Week 1: Unit Testing Fundamentals & Best Practices

  • Learning Objectives: Understand core unit testing principles, write basic tests, apply AAA pattern, grasp code coverage basics.
  • Topics:

* Introduction to Unit Testing: Definition, benefits (quality, regression, documentation, design), limitations.

* FIRST Principles of Unit Testing.

* Test Structure: Arrange-Act-Assert (AAA) pattern.

* Assertions: Common types and effective usage.

* Naming Conventions: Clear and descriptive test names.

* Test Organization: Structuring test files and folders.

* Introduction to Code Coverage: Metrics (line, branch, path), tools, interpretation.

* Introduction to a chosen unit testing framework (e.g., JUnit 5, Pytest, Jest).

  • Activities:

* Read foundational chapters/articles on unit testing.

* Set up your development environment with the chosen testing framework.

* Write simple unit tests for pure functions (e.g., mathematical operations, string utilities).

* Experiment with various assertion types.

* Generate and analyze code coverage reports for your simple test suites.

* Practice refactoring poorly written tests into well-structured, readable ones.

  • Deliverable: A small, well-structured test suite (5-10 tests) for a provided utility class or set of pure functions, achieving >90% line coverage, demonstrating adherence to FIRST principles and AAA pattern.

Week 2: Advanced Techniques & Test Doubles

  • Learning Objectives: Effectively handle dependencies, apply various test doubles, understand dependency injection, test error conditions.
  • Topics:

* Dealing with Dependencies: The challenge of external components (databases, APIs, file systems).

* Test Doubles: Detailed exploration of Mocks, Stubs, Fakes, Spies, and Dummies. When and why to use each.

* Mocking Frameworks: Practical application (e.g., Mockito, unittest.mock, Sinon.js).

* Dependency Injection: Concepts and patterns for making code testable.

* Testing Error Handling and Exceptions.

* Introduction to Property-Based Testing (conceptual overview).

  • Activities:

* Refactor a provided class with hardcoded dependencies to use Dependency Injection.

* Write unit tests for a class that interacts with external services, using mocks/stubs to simulate these interactions.

* Implement tests specifically for error conditions and exception handling paths.

* Explore basic examples of property-based testing if applicable to your chosen language/framework.

  • Deliverable: A test suite for a moderately complex class with external dependencies, demonstrating effective use of at least two types of test doubles (e.g., mock and stub) to isolate the unit under test.

Week 3: Frameworks, Automation & Design for Testability

  • Learning Objectives: Master framework features, integrate tests into CI/CD, design testable code, manage test data.
  • Topics:

* In-depth Framework Features: Parameterized tests, test fixtures, setup/teardown methods, data providers.

* Test Runners and Test Reporting.

* Continuous Integration/Continuous Delivery (CI/CD) Integration: Setting up automated test execution in a pipeline (e.g., GitHub Actions, GitLab CI).

* Designing for Testability: SOLID principles, clean architecture, hexagonal architecture, refactoring techniques to improve testability.

* Test Data Management: Strategies for creating and maintaining test data.

* Introduction to Mutation Testing (conceptual overview).

  • Activities:

* Utilize advanced features of your chosen testing framework (e.g., write parameterized tests).

* Set up a CI/CD pipeline for a small project that automatically runs your unit tests on code pushes.

* Take a "legacy" code snippet or a poorly designed component and refactor it step-by-step to improve its testability, writing tests as you go.

* Explore a mutation testing tool if available for your language.

(This works because of the if __name__ == '__main__': unittest.main() block in the test file.)

You will see output indicating the number of tests run and whether they passed or failed.


6. Next Steps

This concludes Step 2: gemini → generate_code. The generated unit tests are now ready for review and integration into your project.

Step 3: review_code

The next and final step in the workflow will involve a thorough review of this generated code. This includes:

  • Code Quality Check: Ensuring adherence to best practices, readability, and maintainability.
  • Completeness Verification: Confirming that all critical scenarios and edge cases are covered.
  • Customization Suggestions: Providing recommendations for adapting these
gemini Output

Unit Test Generator - Deliverable

Workflow Step: Step 3 of 3: Review and Documentation

This document provides the comprehensive, professional output for the "Unit Test Generator" workflow, specifically detailing the review and documentation phase. The preceding steps involved generating unit tests using advanced AI capabilities (Gemini) based on your provided source code or requirements. This final step ensures the generated tests are accurate, robust, well-documented, and ready for immediate integration into your development pipeline.


1. Overview of Generated Unit Tests

The core deliverable is a suite of high-quality unit tests designed to validate the functionality of your specified code components. These tests are crafted to ensure correctness, identify regressions, and improve overall code reliability.

Purpose and Scope

The generated unit tests aim to:

  • Validate Functionality: Each test case is designed to verify specific behaviors and expected outcomes of your code units (functions, methods, classes).
  • Ensure Correctness: Catch logical errors and bugs early in the development cycle.
  • Enhance Code Stability: Provide a safety net for future code modifications, preventing unintended side effects.
  • Improve Maintainability: Serve as executable documentation, clarifying the intended behavior of the code.

Structure and Organization

The unit tests are organized logically, typically mirroring your existing code structure. This ensures easy navigation, integration, and maintenance. Key organizational principles include:

  • Modular Test Files: Tests are grouped into separate files (e.g., test_module_name.py) corresponding to your application modules or classes.
  • Test Classes/Functions: Within each file, tests are structured using appropriate frameworks (e.g., pytest functions, unittest.TestCase classes in Python; JUnit tests in Java; NUnit tests in C#; Jest tests in JavaScript).
  • Clear Naming Conventions: Test names are descriptive, indicating the specific functionality being tested (e.g., test_add_positive_numbers, test_edge_case_empty_input).

Example of Generated Test Structure (Python/Pytest Placeholder)

For illustrative purposes, consider a hypothetical calculator.py module with add and subtract functions. The generated tests would follow a structure similar to this:


# File: src/calculator.py
class Calculator:
    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b

    def multiply(self, a, b):
        return a * b

    def divide(self, a, b):
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return a / b

# File: tests/test_calculator.py
import pytest
from src.calculator import Calculator

@pytest.fixture
def calculator_instance():
    """Provides a fresh Calculator instance for each test."""
    return Calculator()

class TestCalculator:
    """
    Comprehensive test suite for the Calculator class.
    Ensures arithmetic operations function correctly across various scenarios.
    """

    def test_add_positive_numbers(self, calculator_instance):
        """
        Tests the add method with two positive integers.
        Expected: Correct sum of positive integers.
        """
        assert calculator_instance.add(2, 3) == 5

    def test_add_negative_numbers(self, calculator_instance):
        """
        Tests the add method with two negative integers.
        Expected: Correct sum of negative integers.
        """
        assert calculator_instance.add(-2, -3) == -5

    def test_add_positive_and_negative(self, calculator_instance):
        """
        Tests the add method with a positive and a negative integer.
        Expected: Correct sum.
        """
        assert calculator_instance.add(5, -3) == 2

    def test_subtract_positive_numbers(self, calculator_instance):
        """
        Tests the subtract method with two positive integers.
        Expected: Correct difference.
        """
        assert calculator_instance.subtract(5, 2) == 3

    def test_subtract_to_zero(self, calculator_instance):
        """
        Tests the subtract method resulting in zero.
        Expected: Zero.
        """
        assert calculator_instance.subtract(7, 7) == 0

    def test_multiply_positive_numbers(self, calculator_instance):
        """
        Tests the multiply method with two positive integers.
        Expected: Correct product.
        """
        assert calculator_instance.multiply(4, 5) == 20

    def test_multiply_by_zero(self, calculator_instance):
        """
        Tests the multiply method with one operand as zero.
        Expected: Zero.
        """
        assert calculator_instance.multiply(10, 0) == 0

    def test_divide_positive_numbers(self, calculator_instance):
        """
        Tests the divide method with two positive integers.
        Expected: Correct quotient.
        """
        assert calculator_instance.divide(10, 2) == 5.0

    def test_divide_by_one(self, calculator_instance):
        """
        Tests the divide method where the divisor is one.
        Expected: Dividend itself.
        """
        assert calculator_instance.divide(7, 1) == 7.0

    def test_divide_by_zero_raises_error(self, calculator_instance):
        """
        Tests that the divide method raises a ValueError when dividing by zero.
        Expected: ValueError.
        """
        with pytest.raises(ValueError, match="Cannot divide by zero"):
            calculator_instance.divide(10, 0)

    def test_divide_float_result(self, calculator_instance):
        """
        Tests the divide method where the result is a float.
        Expected: Correct float quotient.
        """
        assert calculator_instance.divide(7, 2) == 3.5

2. Comprehensive Review Summary

Every generated unit test suite undergoes a rigorous review process to ensure its quality, accuracy, and adherence to best practices. This process combines automated checks with expert human oversight.

Automated Review Checkpoints

The following aspects are automatically verified:

  • Syntax Correctness: All generated tests are syntactically valid for the target language and framework.
  • Basic Executability: Tests can be run without immediate runtime errors (e.g., missing imports, undefined variables).
  • Framework Compatibility: Tests correctly utilize the specified testing framework's assertions and structures.
  • Code Style Compliance: Adherence to common coding style guidelines (e.g., PEP 8 for Python) for readability.

Manual Review (Human Oversight)

Our expert reviewers critically examine the generated tests for:

  • Logical Correctness:

* Do the tests accurately reflect the intended behavior of the code?

* Are assertions correct and specific enough?

Are the expected values precisely what the code should* produce?

  • Coverage Adequacy:

* Do the tests cover the main execution paths?

* Are common and critical edge cases (e.g., zero, null/nil, empty strings/lists, boundary conditions, error conditions) considered?

* Is sufficient test data used to exercise different scenarios?

  • Readability and Maintainability:

* Are the tests easy to understand, even for someone unfamiliar with the original code?

* Are test names clear and descriptive?

* Is setup and teardown logic handled appropriately (e.g., using fixtures, setup/teardown methods)?

  • Best Practices Adherence:

* Are tests independent and isolated? (i.e., one test's failure doesn't cause others to fail, and they don't depend on the order of execution).

* Are "Arrange-Act-Assert" principles followed where applicable?

* Is unnecessary complexity avoided?

  • Performance Considerations: While not the primary focus, egregious performance issues in tests (e.g., excessive I/O, long-running operations) are identified and flagged.

Actionable Feedback and Adjustments

Any discrepancies or areas for improvement identified during the review are addressed directly:

  • Test Refinements: Assertions are sharpened, test data is diversified, or test logic is adjusted to better reflect requirements.
  • New Test Cases: If gaps in coverage are found, additional test cases are manually crafted and integrated.
  • Documentation Enhancements: Comments and descriptions are refined for maximum clarity.
  • Performance Optimizations: Test setup or execution might be streamlined if performance is a concern.

This dual-layered review process ensures that the AI-generated tests are not only functional but also meet the high standards expected for production-grade software development.


3. Detailed Documentation

Beyond the code itself, comprehensive documentation is provided to ensure the unit tests are easy to understand, integrate, and maintain.

Test Suite Documentation

  • Overview: A high-level description of the entire test suite, its purpose, and the primary code components it covers.
  • Dependencies: A clear list of any required libraries, frameworks, or specific versions needed to run the tests.
  • Setup Instructions: Step-by-step guide on how to prepare your environment to execute the tests.
  • Running Instructions: Commands and procedures to execute the test suite.

Individual Test Case Documentation

Each test case is designed to be self-documenting as much as possible, supplemented by:

  • Descriptive Test Names: Clearly indicate the scenario being tested (e.g., test_divide_by_zero_raises_error).
  • Inline Comments: Critical sections of complex tests or non-obvious assertions are explained with concise comments.
  • Docstrings (where applicable): For languages like Python, docstrings are used for test functions and classes to provide a detailed explanation of:

* What the test verifies.

* Any specific prerequisites or setup.

* The expected outcome.

* Any notable edge cases covered.

Readability and Maintainability Focus

  • Consistent Formatting: Adherence to a consistent coding style across all test files.
  • Clear Assertions: Assertions are specific and provide meaningful failure messages where supported by the framework.
  • Minimalist Design: Tests are kept as simple and focused as possible, testing one specific aspect per test case.

Integration Instructions

Documentation includes guidance on integrating these tests into your existing development workflow:

  • Version Control: Recommendations for adding the test files to your source control system.
  • CI/CD Integration: Suggestions for incorporating test execution into your Continuous Integration/Continuous Deployment pipelines to automate quality checks.
  • Reporting: How to interpret test reports and integrate them with code coverage tools.

4. Key Features and Benefits

The Unit Test Generator, through its comprehensive generation, review, and documentation process, delivers significant value:

  • Accelerated Development: Rapidly generate a foundational suite of tests, saving significant development time.
  • Improved Code Quality: Proactively identify and fix bugs, leading to more robust and reliable software.
  • Enhanced Code Coverage: Ensure critical parts of your codebase are thoroughly tested, reducing the risk of regressions.
  • Reduced Technical Debt: Establish a strong testing culture from the outset, making future development and refactoring safer and more efficient.
  • Executable Documentation: Tests serve as living documentation of your code's intended behavior, clarifying functionality for current and future developers.
  • Consistency and Best Practices: Tests are generated and refined to adhere to industry best practices and consistent coding standards.
  • Cost Efficiency: By catching bugs earlier, the cost of fixing them is drastically reduced compared to discovering them in later stages or production.

5. Usage and Integration Instructions

To get started with your generated unit tests:

5.1. Prerequisites

  • Language Runtime: Ensure the appropriate runtime environment for your project (e.g., Python, Node.js, JVM).
  • Testing Framework: Install the necessary testing framework (e.g., pytest, Jest, JUnit, NUnit).

Example (Python)*: pip install pytest

5.2. File Placement

  • Place the generated test files (e.g., test_.py, .test.js, *Test.java) in your project's designated test directory. A common convention is a tests/ or __tests__/ folder at the root of your project or within each module.

Example*: If your source code is in src/, place tests in tests/.

5.3. Running the Tests

Navigate to your project's root directory in your terminal and execute the tests using the framework's command:

  • Python (Pytest):

    pytest
    # To see more detailed output, including print statements
    pytest -v -s
    # To generate a coverage report (requires pytest-cov)
    # pip install pytest-cov
    pytest --cov=src your_test_directory/
  • JavaScript (Jest):

    npx jest
    # Or if Jest is in your package.json scripts:
    npm test
  • Java (Maven/JUnit):

    mvn test
  • C# (.NET/NUnit):

    dotnet test

5.4. Interpreting Results

  • Pass/Fail Status: The test runner will clearly indicate which tests passed and which failed.
  • Failure Messages: For failed tests, detailed error messages and stack traces will help pinpoint the issue.
  • Code Coverage Reports: If a coverage tool is used, a report (often HTML) will show which lines/branches of your code were executed by the tests, helping identify areas that might need more testing.

5.5. Customization and Extension

  • Modify Existing Tests: Feel free to adjust assertions, add more test data,
unit_test_generator.txt
Download source file
Copy all content
Full output as text
Download ZIP
IDE-ready project ZIP
Copy share link
Permanent URL for this run
Get Embed Code
Embed this result on any website
Print / Save PDF
Use browser print dialog
\n\n\n"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react'\nimport ReactDOM from 'react-dom/client'\nimport App from './App'\nimport './index.css'\n\nReactDOM.createRoot(document.getElementById('root')!).render(\n \n \n \n)\n"); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react'\nimport './App.css'\n\nfunction App(){\n return(\n
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n
\n )\n}\nexport default App\n"); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e}\n.app{min-height:100vh;display:flex;flex-direction:column}\n.app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px}\nh1{font-size:2.5rem;font-weight:700}\n"); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\n## Open in IDE\nOpen the project folder in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "type": "module",\n "scripts": {\n "dev": "vite",\n "build": "vue-tsc -b && vite build",\n "preview": "vite preview"\n },\n "dependencies": {\n "vue": "^3.5.13",\n "vue-router": "^4.4.5",\n "pinia": "^2.3.0",\n "axios": "^1.7.9"\n },\n "devDependencies": {\n "@vitejs/plugin-vue": "^5.2.1",\n "typescript": "~5.7.3",\n "vite": "^6.0.5",\n "vue-tsc": "^2.2.0"\n }\n}\n'); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite'\nimport vue from '@vitejs/plugin-vue'\nimport { resolve } from 'path'\n\nexport default defineConfig({\n plugins: [vue()],\n resolve: { alias: { '@': resolve(__dirname,'src') } }\n})\n"); zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]}\n'); zip.file(folder+"tsconfig.app.json",'{\n "compilerOptions":{\n "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"],\n "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true,\n "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue",\n "strict":true,"paths":{"@/*":["./src/*"]}\n },\n "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"]\n}\n'); zip.file(folder+"env.d.ts","/// \n"); zip.file(folder+"index.html","\n\n\n \n \n "+slugTitle(pn)+"\n\n\n
\n \n\n\n"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue'\nimport { createPinia } from 'pinia'\nimport App from './App.vue'\nimport './assets/main.css'\n\nconst app = createApp(App)\napp.use(createPinia())\napp.mount('#app')\n"); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue","\n\n\n\n\n"); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547}\n"); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\nOpen in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "scripts": {\n "ng": "ng",\n "start": "ng serve",\n "build": "ng build",\n "test": "ng test"\n },\n "dependencies": {\n "@angular/animations": "^19.0.0",\n "@angular/common": "^19.0.0",\n "@angular/compiler": "^19.0.0",\n "@angular/core": "^19.0.0",\n "@angular/forms": "^19.0.0",\n "@angular/platform-browser": "^19.0.0",\n "@angular/platform-browser-dynamic": "^19.0.0",\n "@angular/router": "^19.0.0",\n "rxjs": "~7.8.0",\n "tslib": "^2.3.0",\n "zone.js": "~0.15.0"\n },\n "devDependencies": {\n "@angular-devkit/build-angular": "^19.0.0",\n "@angular/cli": "^19.0.0",\n "@angular/compiler-cli": "^19.0.0",\n "typescript": "~5.6.0"\n }\n}\n'); zip.file(folder+"angular.json",'{\n "$schema": "./node_modules/@angular/cli/lib/config/schema.json",\n "version": 1,\n "newProjectRoot": "projects",\n "projects": {\n "'+pn+'": {\n "projectType": "application",\n "root": "",\n "sourceRoot": "src",\n "prefix": "app",\n "architect": {\n "build": {\n "builder": "@angular-devkit/build-angular:application",\n "options": {\n "outputPath": "dist/'+pn+'",\n "index": "src/index.html",\n "browser": "src/main.ts",\n "tsConfig": "tsconfig.app.json",\n "styles": ["src/styles.css"],\n "scripts": []\n }\n },\n "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"}\n }\n }\n }\n}\n'); zip.file(folder+"tsconfig.json",'{\n "compileOnSave": false,\n "compilerOptions": {"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]},\n "references":[{"path":"./tsconfig.app.json"}]\n}\n'); zip.file(folder+"tsconfig.app.json",'{\n "extends":"./tsconfig.json",\n "compilerOptions":{"outDir":"./dist/out-tsc","types":[]},\n "files":["src/main.ts"],\n "include":["src/**/*.d.ts"]\n}\n'); zip.file(folder+"src/index.html","\n\n\n \n "+slugTitle(pn)+"\n \n \n \n\n\n \n\n\n"); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser';\nimport { appConfig } from './app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch(err => console.error(err));\n"); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; }\nbody { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; }\n"); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = '"+pn+"';\n}\n"); zip.file(folder+"src/app/app.component.html","
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# "+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}