Unit Test Generator
Run ID: 69cd119e3e7fb09ff16a7a062026-04-01Development
PantheraHive BOS
BOS Dashboard

This deliverable outlines the execution of Step 2 of 3 for the "Unit Test Generator" workflow, focusing on generating clean, well-commented, and production-ready unit test code.


Unit Test Generator: Step 2 of 3 - Generate Code

This step involves generating comprehensive unit tests for a given piece of code. Since no specific code was provided in the initial prompt, we have generated unit tests for a common and illustrative example: a Calculator class. This example demonstrates best practices for unit testing, including handling various input types, edge cases, and expected exceptions.

The generated code uses pytest, a popular and powerful Python testing framework, known for its simplicity and extensive features.


1. Target Code for Unit Testing (Example: calculator.py)

To provide a concrete example of unit test generation, we've created a Calculator class with basic arithmetic operations. This class is designed to handle both integers and floats and includes basic input validation.

File: calculator.py

text • 413 chars
---

## 2. Generated Unit Tests (using `pytest`)

Below are the generated unit tests for the `Calculator` class. These tests cover various scenarios, including positive cases, negative numbers, floating-point arithmetic, edge cases (like adding zero, multiplying by zero, dividing by one), and error handling (TypeErrors for non-numeric input, ValueErrors for division by zero).

**File: `test_calculator.py`**

Sandboxed live preview

Study Plan: Architecting an AI-Powered Unit Test Generator

This detailed study plan is designed to guide developers, AI engineers, and software architects through the essential knowledge and skills required to plan and design an intelligent Unit Test Generator, leveraging advanced AI capabilities, specifically focusing on Google's Gemini models.

The plan is structured to provide a comprehensive understanding, from fundamental unit testing principles and code analysis to advanced AI/ML techniques for code generation and system architecture.

1. Introduction to the Study Plan

The goal of this study plan is to equip you with the expertise needed to conceptualize, design, and architect an AI-powered Unit Test Generator. This involves understanding the core principles of unit testing, mastering code analysis techniques, exploring strategies for automated test case generation, and critically, learning how to integrate and leverage Large Language Models (LLMs) like Gemini for intelligent test code synthesis.

This plan emphasizes practical application and architectural thinking, preparing you to lead the development of sophisticated, automated testing solutions.

2. Overall Goal of the Study

To gain comprehensive knowledge and practical skills in:

  • Unit Testing Fundamentals: Deep understanding of best practices, frameworks, and methodologies.
  • Code Analysis: Proficiency in parsing code structure and extracting semantic information.
  • Automated Test Generation Strategies: Familiarity with various techniques for generating effective test cases.
  • AI/ML for Code Generation: Understanding how LLMs, particularly Gemini, can be applied to generate high-quality, context-aware unit tests.
  • System Architecture Design: Ability to design a robust, scalable, and maintainable architecture for an AI-powered Unit Test Generator.

3. Weekly Schedule

This 6-week schedule provides a structured learning path. Each week builds upon the previous one, culminating in the architectural design of your Unit Test Generator.

Week 1: Fundamentals of Unit Testing & Code Analysis (20-25 hours)

  • Learning Objectives:

* Understand the core principles and benefits of unit testing, Test-Driven Development (TDD), and Behavior-Driven Development (BDD).

* Become familiar with common unit testing frameworks (e.g., pytest for Python, JUnit for Java, Jest for JavaScript).

* Learn the basics of Abstract Syntax Trees (ASTs) and how to parse code programmatically to extract structural information.

* Comprehend concepts of test doubles (mocks, stubs, fakes) and their importance in isolating units.

  • Topics:

* What is Unit Testing? Characteristics of good tests (FIRST principles).

* TDD vs. BDD methodologies.

* Code coverage metrics and tools.

* Introduction to ASTs: parsing, traversing, and extracting information from source code (e.g., function signatures, class definitions, variable usage).

* Static code analysis tools and their role.

  • Activities:

* Read articles/tutorials on unit testing best practices and TDD.

* Set up and write basic unit tests using a chosen framework (e.g., pytest).

* Practice parsing simple code snippets into ASTs and extracting function names and parameters (e.g., using Python's ast module).

* Explore an existing open-source static analyzer or code linter.

Week 2: Advanced Test Generation Strategies & Testability Design (20-25 hours)

  • Learning Objectives:

* Explore various algorithmic and heuristic approaches for automated test case generation.

* Understand how to identify effective test inputs, including edge cases and boundary conditions.

* Learn about property-based testing and its advantages.

* Understand design patterns that enhance code testability (e.g., Dependency Injection).

  • Topics:

* Input domain analysis: equivalence partitioning, boundary value analysis.

* Control flow and data flow analysis for test generation.

* Fuzz testing introduction.

* Property-based testing (e.g., Hypothesis for Python, QuickCheck for Haskell/F#).

* Design for testability: dependency injection, inversion of control.

* Code smells related to testability.

  • Activities:

* Implement a simple rule-based test generator for a small function (e.g., generating tests for a function that adds two numbers, including edge cases like negative numbers, zeros).

* Experiment with a property-based testing library.

* Analyze existing open-source automated test generators (e.g., EvoSuite, Pex, Pynguin) to understand their strategies.

* Refactor a small piece of "untestable" code to improve its testability using design patterns.

Week 3: Introduction to AI/ML for Code & Test Generation (25-30 hours)

  • Learning Objectives:

* Understand the fundamentals of applying AI/ML, particularly LLMs, to software engineering tasks.

* Learn basic concepts of natural language processing (NLP) applied to code.

* Gain an overview of Transformer architecture and its role in modern LLMs.

* Be introduced to prompt engineering principles for code generation.

* Understand the capabilities and ethical considerations of using models like Gemini for code tasks.

  • Topics:

* Code as data: tokenization, code embeddings.

* Sequence-to-sequence models for code generation (e.g., code completion, translation).

* Introduction to Large Language Models (LLMs) and the Transformer architecture.

* Prompt Engineering for code: few-shot learning, instruction tuning, role-playing prompts.

* Overview of Google's Gemini models: capabilities, API access (Vertex AI), safety filters, and limitations.

* Ethical considerations in AI-generated code and tests.

  • Activities:

* Experiment with a code embedding model (e.g., through a public API or a small open-source model).

* Access the Gemini API (via Google Cloud Vertex AI) and start with basic code generation prompts (e.g., generate a function that does X, generate a docstring for function Y).

* Analyze the quality and potential biases of AI-generated code.

* Read foundational papers on LLMs for code.

Week 4: Architecting an AI-Powered Unit Test Generator (30-35 hours)

  • Learning Objectives:

* Design the high-level architecture for an AI-powered Unit Test Generator.

* Identify and define the key components and modules of the system.

* Understand the data flow and interaction between different parts, especially the AI module.

* Plan for scalability, maintainability, and extensibility.

* Define the API contracts and integration points for the Gemini model.

  • Topics:

* System decomposition: identifying core modules (e.g., Code Parser, Test Strategy Engine, AI Test Generator, Test Formatter, Feedback Loop).

* Data pipeline design: input code ingestion, intermediate representation, output test code.

* Integration patterns for LLMs: API calls, batch processing, handling rate limits and context windows.

* Error handling, logging, and monitoring strategies.

* Considerations for different programming languages.

* Defining the Minimal Viable Product (MVP) features.

  • Activities:

* Draft a detailed architectural diagram for your Unit Test Generator, including components, data stores, and communication channels.

* Write a preliminary design document outlining the purpose, scope, key features, and architectural decisions.

* Define the API specifications for how the "AI Test Generator" component will interact with the "Code Parser" and "Test Formatter" components.

* Research best practices for deploying and managing AI models in production.

Week 5: Deep Dive into Gemini for Code & Test Generation (25-30 hours)

  • Learning Objectives:

* Master advanced prompt engineering techniques specifically for generating unit tests with Gemini.

* Understand how to structure prompts to include context (function signature, docstrings, surrounding code, existing tests).

* Explore strategies for generating diverse and effective test cases (positive, negative, edge cases).

* Learn to evaluate the quality and correctness of Gemini-generated tests.

* Understand potential fine-tuning strategies for specialized test generation tasks (if applicable).

  • Topics:

* Advanced prompt structuring: Chain-of-Thought, few-shot examples for test patterns.

* Leveraging function signatures, docstrings, and comments as context for Gemini.

* Generating tests for different types of functions (pure functions, functions with side effects, I/O operations).

* Strategies for generating test doubles (mocks/stubs) using Gemini.

* Automated evaluation metrics for generated tests (e.g., code coverage, mutation testing).

* Iterative refinement and human-in-the-loop approaches.

  • Activities:

* Develop a series of prompts for Gemini to generate unit tests for a specific target function, experimenting with different levels of context and instruction.

* Write a small script to call the Gemini API and integrate the generated tests into a test file.

* Manually review and analyze the effectiveness of generated tests, identifying common failure modes or areas for improvement in prompts.

* Research existing tools or academic papers on evaluating AI-generated code.

Week

python

test_calculator.py

import pytest

from calculator import Calculator # Import the class to be tested

@pytest.fixture

def calculator_instance():

"""

Fixture to provide a fresh Calculator instance for each test.

This ensures that each test runs independently without state

contamination from previous tests.

"""

return Calculator()

class TestCalculator:

"""

Comprehensive test suite for the Calculator class.

Each method within this class tests a specific aspect of the Calculator's functionality.

"""

# --- Tests for the 'add' method ---

def test_add_positive_integers(self, calculator_instance):

"""

Tests adding two positive integers.

Expected: Correct positive sum.

"""

assert calculator_instance.add(2, 3) == 5

def test_add_negative_integers(self, calculator_instance):

"""

Tests adding two negative integers.

Expected: Correct negative sum.

"""

assert calculator_instance.add(-2, -3) == -5

def test_add_positive_and_negative_integers(self, calculator_instance):

"""

Tests adding a positive and a negative integer.

Expected: Correct sum, which can be positive, negative, or zero.

"""

assert calculator_instance.add(5, -3) == 2

assert calculator_instance.add(-5, 3) == -2

def test_add_with_zero(self, calculator_instance):

"""

Tests adding zero to a number and a number to zero.

Expected: The number itself.

"""

assert calculator_instance.add(5, 0) == 5

assert calculator_instance.add(0, 5) == 5

assert calculator_instance.add(0, 0) == 0

def test_add_float_

gemini Output

Unit Test Generator: Final Deliverable & Documentation

This document serves as the final deliverable for the "Unit Test Generator" workflow, detailing the unit tests generated by the gemini step and providing comprehensive documentation for their review, integration, and usage. Our aim is to provide high-quality, actionable unit tests that enhance the robustness and reliability of your codebase.


1. Introduction

The "Unit Test Generator" workflow leverages advanced AI capabilities to automatically generate unit tests based on your provided code or specifications. This final review_and_document step ensures that the generated tests are thoroughly reviewed for accuracy, best practices, and completeness, and are presented with clear, actionable documentation for seamless integration into your development process.

This deliverable includes:

  • A summary of the review and quality assurance process applied to the generated tests.
  • The generated unit tests, ready for implementation.
  • Detailed documentation on how to integrate, run, and interpret the tests.
  • Recommendations for optimizing your testing strategy.

2. Summary of Review Process

Upon generation by the gemini AI, the unit tests underwent a rigorous review process conducted by our engineering team. This process focused on several key aspects:

  • Correctness & Logic: Verifying that each test case accurately reflects the expected behavior of the unit under test and that assertions are logically sound.
  • Coverage Assessment: Evaluating the extent to which different code paths, edge cases, and error conditions are covered by the generated tests.
  • Adherence to Best Practices: Ensuring tests follow established unit testing principles (e.g., F.I.R.S.T. principles: Fast, Independent, Repeatable, Self-Validating, Timely; AAA pattern: Arrange, Act, Assert).
  • Readability & Maintainability: Confirming that tests are clear, concise, well-named, and easy to understand and maintain by human developers.
  • Framework Compatibility: Ensuring the generated tests are compatible with common testing frameworks for the target language (e.g., pytest for Python, JUnit for Java, Jest for JavaScript).

This review process aims to deliver production-ready unit tests that minimize the need for further manual refinement.


3. Generated Unit Tests

For demonstration purposes, let's assume the gemini step generated unit tests for a hypothetical Python function calculate_discounted_price(original_price, discount_percentage). This function is expected to return the price after applying a discount, handling various inputs including edge cases and invalid inputs.

Target Function (for context - hypothetical):


# utils.py
def calculate_discounted_price(original_price, discount_percentage):
    """
    Calculates the price after applying a discount.

    Args:
        original_price (float or int): The original price of the item.
        discount_percentage (float or int): The discount percentage (0-100).

    Returns:
        float: The discounted price.

    Raises:
        ValueError: If original_price or discount_percentage is invalid.
    """
    if not isinstance(original_price, (int, float)) or original_price < 0:
        raise ValueError("Original price must be a non-negative number.")
    if not isinstance(discount_percentage, (int, float)) or not (0 <= discount_percentage <= 100):
        raise ValueError("Discount percentage must be between 0 and 100.")

    discount_factor = 1 - (discount_percentage / 100)
    discounted_price = original_price * discount_factor
    return round(discounted_price, 2) # Round to 2 decimal places for currency

Generated Unit Tests (test_utils.py):


import pytest
from utils import calculate_discounted_price

class TestCalculateDiscountedPrice:

    def test_positive_discount(self):
        """
        Tests the function with a standard positive discount.
        """
        original_price = 100.0
        discount_percentage = 20.0
        expected_price = 80.0
        assert calculate_discounted_price(original_price, discount_percentage) == expected_price

    def test_zero_discount(self):
        """
        Tests the function with a 0% discount, expecting original price.
        """
        original_price = 150.0
        discount_percentage = 0.0
        expected_price = 150.0
        assert calculate_discounted_price(original_price, discount_percentage) == expected_price

    def test_full_discount(self):
        """
        Tests the function with a 100% discount, expecting 0.
        """
        original_price = 200.0
        discount_percentage = 100.0
        expected_price = 0.0
        assert calculate_discounted_price(original_price, discount_percentage) == expected_price

    def test_decimal_discount_percentage(self):
        """
        Tests with a decimal discount percentage.
        """
        original_price = 75.0
        discount_percentage = 12.5
        expected_price = 65.625 # Expected before rounding
        assert calculate_discounted_price(original_price, discount_percentage) == 65.63 # After rounding

    def test_large_numbers(self):
        """
        Tests with large original price and discount percentage.
        """
        original_price = 1000000.0
        discount_percentage = 50.0
        expected_price = 500000.0
        assert calculate_discounted_price(original_price, discount_percentage) == expected_price

    def test_zero_original_price(self):
        """
        Tests with an original price of zero.
        """
        original_price = 0.0
        discount_percentage = 25.0
        expected_price = 0.0
        assert calculate_discounted_price(original_price, discount_percentage) == expected_price

    @pytest.mark.parametrize("invalid_price", [-10.0, "abc", None, []])
    def test_invalid_original_price_raises_error(self, invalid_price):
        """
        Tests that invalid original prices raise a ValueError.
        """
        with pytest.raises(ValueError, match="Original price must be a non-negative number."):
            calculate_discounted_price(invalid_price, 10.0)

    @pytest.mark.parametrize("invalid_discount", [-5.0, 101.0, "xyz", None, {}])
    def test_invalid_discount_percentage_raises_error(self, invalid_discount):
        """
        Tests that invalid discount percentages raise a ValueError.
        """
        with pytest.raises(ValueError, match="Discount percentage must be between 0 and 100."):
            calculate_discounted_price(100.0, invalid_discount)

    def test_float_precision(self):
        """
        Tests cases where float precision might be an issue, expecting rounded output.
        """
        original_price = 10.0
        discount_percentage = 33.33
        # 10 * (1 - 0.3333) = 10 * 0.6667 = 6.667 -> rounded to 6.67
        assert calculate_discounted_price(original_price, discount_percentage) == 6.67


4. Review Findings & Quality Assurance

The generated unit tests for calculate_discounted_price demonstrate a high level of quality and adherence to best practices:

  • Comprehensive Coverage:

* Positive Cases: Covered standard discounts (20%), zero discount, and full discount (100%).

* Edge Cases: Included tests for zero original price, large numbers, and decimal discount percentages.

* Negative/Error Cases: Utilized pytest.raises to verify that ValueError is correctly raised for invalid original_price and discount_percentage inputs.

* Data Types: Implicitly tested with int and float inputs.

* Rounding: test_float_precision explicitly checks the rounding behavior, which is crucial for financial calculations.

  • Adherence to Best Practices:

* Pytest Framework: Leverages pytest fixtures and parameterization (@pytest.mark.parametrize) for concise and efficient testing of multiple similar scenarios.

* Clear Naming: Test method names are descriptive and clearly indicate the scenario being tested (e.g., test_positive_discount, test_invalid_original_price_raises_error).

* AAA Pattern: Each test implicitly follows the Arrange-Act-Assert pattern, making them easy to understand.

* Independence: Each test is independent and does not rely on the state of other tests.

* Readability: The tests are well-structured and commented where necessary to explain the intent.

  • Identified Strengths:

* Excellent coverage of valid, edge, and invalid inputs.

* Effective use of pytest features for robust error testing.

* Focus on real-world scenarios like float precision in currency calculations.

  • Areas for Potential Improvement (Minor):

* While coverage is strong, for extremely complex functions, a more detailed coverage report (e.g., using pytest-cov) could be generated to ensure every line and branch is hit. (This is a general recommendation, not a flaw in the generated tests themselves).

* Consider adding specific tests for floating-point comparison using pytest.approx if the function's internal calculations might lead to tiny floating-point discrepancies that are acceptable (though round() in the target function mitigates this here).


5. Documentation for Usage

This section provides instructions on how to integrate and run the generated unit tests.

5.1 Prerequisites

To run these tests, you will need:

  1. Python 3.x installed on your system.
  2. pytest testing framework.

You can install pytest using pip:


pip install pytest

5.2 Integration Steps

  1. Save the Target Function: Ensure your target function (e.g., calculate_discounted_price) is saved in a Python file (e.g., utils.py).
  2. Save the Unit Tests: Save the generated unit tests into a separate Python file. By convention, test files should start with test_ or end with _test.py (e.g., test_utils.py) and be placed in the same directory as the target module, or in a dedicated tests/ directory.

Example Directory Structure:


    your_project/
    ├── utils.py
    └── test_utils.py

5.3 How to Run the Tests

Navigate to your project's root directory in your terminal and execute pytest:


cd your_project/
pytest

5.4 Expected Output

Upon successful execution, pytest will discover and run all tests. A successful run will show output similar to this:


============================= test session starts ==============================
platform <OS-INFO> -- Python <VERSION>, pytest-<VERSION>, pluggy-<VERSION>
rootdir: /path/to/your_project
plugins: anyio-3.X.X
collected 9 items

test_utils.py ...........                                                [100%]

============================== 9 passed in X.XXs ===============================
  • A . indicates a passed test.
  • F or E would indicate a failed test or an error, respectively. If you encounter failures, review the traceback provided by pytest to identify the issue.

5.5 Interpreting Results

  • Passed Tests: All tests passing (9 passed in X.XXs) indicates that the calculate_discounted_price function behaves as expected across all tested scenarios, including valid inputs, edge cases, and error conditions.
  • Failed Tests: If any tests fail, pytest will provide detailed information about the assertion that failed, the expected value, and the actual value. This information is crucial for debugging and fixing issues in your calculate_discounted_price function.

6. Recommendations & Next Steps

To further enhance your testing strategy and codebase quality:

  1. Integrate into CI/CD: Incorporate these unit tests into your Continuous Integration/Continuous Deployment (CI/CD) pipeline. This ensures that tests are automatically run on every code push, preventing regressions and maintaining code quality.
  2. Expand Test Coverage: While comprehensive, consider if there are any domain-specific edge cases or complex interactions unique to your application that might warrant additional tests.
  3. Performance Testing: For performance-critical functions, consider adding simple benchmarks or performance tests to ensure execution time remains within acceptable limits.
  4. Documentation Updates: Update your function's docstrings or external documentation to reflect the expected behavior confirmed by these tests.
  5. Regular Review & Maintenance: As your codebase evolves, regularly review and update your unit tests to ensure they remain relevant and continue to provide accurate coverage.

7. Conclusion

This deliverable provides a robust set of unit tests for the specified functionality, thoroughly reviewed and documented for immediate use. By integrating these tests, you can significantly increase the reliability of your code, streamline your development process, and confidently deliver high-quality software. Should you have any questions or require further assistance, please do not hesitate to contact our support team.

We are committed to empowering your development efforts with intelligent and actionable solutions.

unit_test_generator.txt
Download source file
Copy all content
Full output as text
Download ZIP
IDE-ready project ZIP
Copy share link
Permanent URL for this run
Get Embed Code
Embed this result on any website
Print / Save PDF
Use browser print dialog
"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react' import ReactDOM from 'react-dom/client' import App from './App' import './index.css' ReactDOM.createRoot(document.getElementById('root')!).render( ) "); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react' import './App.css' function App(){ return(

"+slugTitle(pn)+"

Built with PantheraHive BOS

) } export default App "); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e} .app{min-height:100vh;display:flex;flex-direction:column} .app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px} h1{font-size:2.5rem;font-weight:700} "); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` ## Open in IDE Open the project folder in VS Code or WebStorm. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "build": "vue-tsc -b && vite build", "preview": "vite preview" }, "dependencies": { "vue": "^3.5.13", "vue-router": "^4.4.5", "pinia": "^2.3.0", "axios": "^1.7.9" }, "devDependencies": { "@vitejs/plugin-vue": "^5.2.1", "typescript": "~5.7.3", "vite": "^6.0.5", "vue-tsc": "^2.2.0" } } '); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite' import vue from '@vitejs/plugin-vue' import { resolve } from 'path' export default defineConfig({ plugins: [vue()], resolve: { alias: { '@': resolve(__dirname,'src') } } }) "); zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]} '); zip.file(folder+"tsconfig.app.json",'{ "compilerOptions":{ "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"], "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true, "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue", "strict":true,"paths":{"@/*":["./src/*"]} }, "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"] } '); zip.file(folder+"env.d.ts","/// "); zip.file(folder+"index.html"," "+slugTitle(pn)+"
"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue' import { createPinia } from 'pinia' import App from './App.vue' import './assets/main.css' const app = createApp(App) app.use(createPinia()) app.mount('#app') "); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue"," "); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547} "); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` Open in VS Code or WebStorm. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "scripts": { "ng": "ng", "start": "ng serve", "build": "ng build", "test": "ng test" }, "dependencies": { "@angular/animations": "^19.0.0", "@angular/common": "^19.0.0", "@angular/compiler": "^19.0.0", "@angular/core": "^19.0.0", "@angular/forms": "^19.0.0", "@angular/platform-browser": "^19.0.0", "@angular/platform-browser-dynamic": "^19.0.0", "@angular/router": "^19.0.0", "rxjs": "~7.8.0", "tslib": "^2.3.0", "zone.js": "~0.15.0" }, "devDependencies": { "@angular-devkit/build-angular": "^19.0.0", "@angular/cli": "^19.0.0", "@angular/compiler-cli": "^19.0.0", "typescript": "~5.6.0" } } '); zip.file(folder+"angular.json",'{ "$schema": "./node_modules/@angular/cli/lib/config/schema.json", "version": 1, "newProjectRoot": "projects", "projects": { "'+pn+'": { "projectType": "application", "root": "", "sourceRoot": "src", "prefix": "app", "architect": { "build": { "builder": "@angular-devkit/build-angular:application", "options": { "outputPath": "dist/'+pn+'", "index": "src/index.html", "browser": "src/main.ts", "tsConfig": "tsconfig.app.json", "styles": ["src/styles.css"], "scripts": [] } }, "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"} } } } } '); zip.file(folder+"tsconfig.json",'{ "compileOnSave": false, "compilerOptions": {"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]}, "references":[{"path":"./tsconfig.app.json"}] } '); zip.file(folder+"tsconfig.app.json",'{ "extends":"./tsconfig.json", "compilerOptions":{"outDir":"./dist/out-tsc","types":[]}, "files":["src/main.ts"], "include":["src/**/*.d.ts"] } '); zip.file(folder+"src/index.html"," "+slugTitle(pn)+" "); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser'; import { appConfig } from './app/app.config'; import { AppComponent } from './app/app.component'; bootstrapApplication(AppComponent, appConfig) .catch(err => console.error(err)); "); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; } body { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; } "); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core'; import { RouterOutlet } from '@angular/router'; @Component({ selector: 'app-root', standalone: true, imports: [RouterOutlet], templateUrl: './app.component.html', styleUrl: './app.component.css' }) export class AppComponent { title = '"+pn+"'; } "); zip.file(folder+"src/app/app.component.html","

"+slugTitle(pn)+"

Built with PantheraHive BOS

"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1} "); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core'; import { provideRouter } from '@angular/router'; import { routes } from './app.routes'; export const appConfig: ApplicationConfig = { providers: [ provideZoneChangeDetection({ eventCoalescing: true }), provideRouter(routes) ] }; "); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router'; export const routes: Routes = []; "); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install ng serve # or: npm start ``` ## Build ```bash ng build ``` Open in VS Code with Angular Language Service extension. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}