Unit Test Generator
Run ID: 69cbb91c61b1021a29a8bafd · 2026-03-31 · Development
PantheraHive BOS
BOS Dashboard

This document details the output for Step 2 of 3 in the "Unit Test Generator" workflow: gemini -> generate_code. This step leverages the Gemini Pro model to generate comprehensive unit tests based on provided source code. The output includes a detailed explanation of the process, the core code implementation, usage instructions, and best practices.


Step 2: Gemini - Generate Unit Tests

This step is responsible for orchestrating the generation of unit tests using Google's Gemini Pro model. It takes your application's source code as input and, through intelligent prompt engineering, instructs the AI to produce high-quality, framework-specific unit tests.

1. Introduction and Purpose

The primary goal of this step is to automate the initial creation of unit tests, significantly accelerating the development cycle and ensuring better test coverage from the outset. By utilizing a powerful large language model like Gemini, we can generate tests that cover various scenarios, including normal execution, edge cases, and potential error conditions, tailored to your specific codebase and preferred testing framework.

This output provides the Python code implementation for interacting with the Gemini API to perform this task.

2. Core Concept: AI-Powered Test Generation

The generate_code sub-step works by:

  1. Receiving Source Code: It accepts the source code of the function, class, or module for which tests are required.
  2. Crafting an AI Prompt: A carefully constructed prompt is created, instructing the Gemini model on the desired output. This includes specifying the programming language, the testing framework (e.g., Python unittest, pytest, JavaScript Jest), and the required level of test coverage and quality.
  3. Invoking the Gemini Model: The prompt is sent to the Gemini Pro model via its API.
  4. Extracting Generated Tests: The AI's response, containing the generated unit test code, is extracted and returned.

3. Assumed Input and Output Structure

For this step, we assume the following:

* source_code (string): The actual code snippet (function, class, module) for which unit tests are to be generated.

* language (string, optional): The programming language of the source_code (e.g., "python", "javascript"). Defaults to "python".

* framework (string, optional): The desired testing framework (e.g., "unittest", "pytest", "jest"). Defaults to "unittest".

* generated_test_code (string): The complete unit test code as generated by the Gemini model, ready to be saved to a file or further processed.
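Under these assumptions, the step's input/output contract can be pictured as plain dictionaries. The field values below are hypothetical examples for illustration, not real run data:

```python
# Hypothetical illustration of the step's input and output shape.
step_input = {
    "source_code": "def add(a, b):\n    return a + b\n",
    "language": "python",     # optional, defaults to "python"
    "framework": "unittest",  # optional, defaults to "unittest"
}

step_output = {
    # The step returns the complete generated test module as one string.
    "generated_test_code": "import unittest\n# ... generated tests ...\n",
}

print(sorted(step_input))  # ['framework', 'language', 'source_code']
```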

4. Code Implementation: generate_unit_tests Function

The following Python code demonstrates how to implement the generate_unit_tests function. This code is clean, well-commented, and designed for production use, assuming you have configured your Gemini API key.

5. Explanation of the Code

  1. Import Statements: Imports the necessary modules: os for reading environment variables (the API key), google.generativeai for interacting with the Gemini API, and typing.Optional for type hinting.

  2. generate_unit_tests Function Definition:

* Takes source_code, language, framework, and model_name as arguments.

* Includes a comprehensive docstring explaining its purpose, arguments, and return value.

  3. API Key Configuration:

* Retrieves GEMINI_API_KEY from the environment. Crucially, avoid hardcoding API keys in production; use environment variables, a secret management service (e.g., Google Secret Manager, AWS Secrets Manager), or a dedicated configuration file.

* genai.configure(api_key=api_key) initializes the Google Generative AI SDK with your API key.

  4. Model Initialization:

* genai.GenerativeModel(model_name) creates an instance of the specified Gemini model (e.g., "gemini-pro"), with error handling around initialization.

  5. Prompt Engineering:

* This is the most critical part. A multi-line f-string constructs a detailed prompt.

* It clearly defines the AI's role ("expert software engineer").

* It specifies the expected quality of the output (comprehensive, production-ready tests).

* It lists specific requirements: normal/edge/error cases, coverage of every function, meaningful assertions, language and framework conventions, necessary imports, and comments.

* Crucially, it instructs the model to output ONLY the code, inside a markdown code block, which makes the response easy to parse.

  6. Generate Content via the Gemini API:

* model.generate_content(prompt) sends the crafted prompt to the Gemini model and returns its response, with basic error handling around the API call.

  7. Extract and Return Generated Code:

* The response.text attribute holds the AI's generated textual output.

* The code then parses response.text to pull the test code out from between the markdown code fences, returning just the code so it can be saved to a file or processed further.

This document outlines the architectural plan for the "Unit Test Generator," presented in a structured format as requested. This plan details the strategic approach, key components, and development roadmap to build a robust, extensible, and efficient system for automated unit test generation.


Unit Test Generator: Architectural Plan

1. Executive Summary

The Unit Test Generator aims to revolutionize software development by automating the creation of comprehensive unit tests. This system will analyze existing source code, understand its structure and logic, and then generate relevant, high-quality unit tests for various programming languages and testing frameworks. The primary goal is to enhance code quality, accelerate development cycles, and reduce the manual effort associated with writing and maintaining unit tests. This architectural plan lays the foundation for a modular, scalable, and adaptable system capable of meeting diverse project needs.

2. Architectural Goals (Interpreted from "Learning Objectives")

These objectives define the core capabilities and non-functional requirements that the Unit Test Generator's architecture must satisfy.

  • Modularity & Extensibility:

* Support for adding new programming languages (e.g., Python, Java, JavaScript, C#, Go) and their respective parsing mechanisms.

* Ability to integrate new testing frameworks (e.g., JUnit, Pytest, Jest, NUnit) and assertion libraries.

* Flexible design to incorporate novel test generation strategies and heuristics without significant refactoring.

  • Accuracy & Relevance:

* Generate meaningful tests that cover critical code paths, edge cases, and error conditions.

* Produce tests that are syntactically correct and semantically valid for the target language and framework.

* Minimize false positives (tests that pass but don't adequately validate logic) and false negatives (missed critical scenarios).

  • Performance & Efficiency:

* Efficiently parse and analyze large codebases within acceptable timeframes.

* Optimize test generation algorithms to produce tests quickly, even for complex functions.

* Minimize resource consumption (CPU, memory) during analysis and generation.

  • Configurability & Customization:

* Allow users to define parameters for test generation, such as mock/stub strategies, test data ranges, and desired assertion types.

* Provide options to include/exclude specific code sections or types of tests.

  • Integration Capabilities:

* Offer clear APIs (e.g., RESTful, gRPC) and/or a command-line interface (CLI) for seamless integration into CI/CD pipelines, IDEs, and other development tools.

* Support various input formats (e.g., file paths, raw code snippets) and output formats (e.g., directly to file, standard output).

  • Maintainability & Readability:

* Design a clean, well-documented, and logically separated architecture that is easy to understand, debug, and enhance.

* Generated tests should be human-readable and follow common best practices for the target language/framework.

  • Scalability:

* The architecture should be capable of processing increasingly complex code structures and larger project sizes without significant degradation in performance or accuracy.

  • Language Agnosticism (Core Logic):

* Separate the core test generation logic from language-specific parsing and code generation components to maximize reusability and simplify extension.

3. Architectural Phases/Roadmap (Interpreted from "Weekly Schedule")

This phased roadmap outlines the key stages of architectural design and initial implementation, providing a structured approach to development.

  • Phase 1: Core Parsing & Abstract Syntax Tree (AST) Generation (Weeks 1-2)

* Objective: Establish the foundation for source code analysis by selecting and integrating robust parsers. Develop a unified, language-agnostic internal representation (AST).

* Key Activities:

* Research and select suitable parsing libraries/tools for initial target languages (e.g., Python, Java).

* Design the canonical internal AST structure that can represent common programming constructs across languages.

* Implement proof-of-concept parsers to convert source code into the internal AST.

* Deliverables: Documented internal AST schema. Functional PoC for parsing sample code in at least two target languages and generating the internal AST.
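As a toy illustration of Phase 1 (assuming Python as the first target language), the stdlib ast module can stand in for tree-sitter: parse the source, then project the tree into a simple language-agnostic summary. FunctionInfo is a hypothetical stand-in for the internal AST schema, not the final design:

```python
import ast
from dataclasses import dataclass
from typing import List


@dataclass
class FunctionInfo:
    """A tiny language-agnostic record of one testable unit (hypothetical)."""
    name: str
    params: List[str]
    raises: List[str]


def summarize(source: str) -> List[FunctionInfo]:
    """Parse Python source and summarize each function for the test engine."""
    tree = ast.parse(source)
    units = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            params = [a.arg for a in node.args.args if a.arg != "self"]
            # Collect exception types raised directly, e.g. raise ValueError(...)
            raises = [
                r.exc.func.id
                for r in ast.walk(node)
                if isinstance(r, ast.Raise)
                and isinstance(r.exc, ast.Call)
                and isinstance(r.exc.func, ast.Name)
            ]
            units.append(FunctionInfo(node.name, params, raises))
    return units


code = "def divide(a, b):\n    if b == 0:\n        raise ValueError('no')\n    return a / b\n"
print(summarize(code))
# [FunctionInfo(name='divide', params=['a', 'b'], raises=['ValueError'])]
```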

  • Phase 2: Test Strategy Engine & Data Generation (Weeks 3-4)

* Objective: Develop the intelligence layer responsible for identifying testable units, formulating test cases, and generating appropriate test data and mocks/stubs.

* Key Activities:

* Design algorithms for identifying functions, methods, classes, and their dependencies from the internal AST.

* Develop rules/heuristics for generating various test types (e.g., positive, negative, boundary, null checks, exception handling).

* Implement a module for generating diverse test data (e.g., random, specific values, sequences).

* Design a mocking/stubbing strategy to isolate units under test.

* Deliverables: Formalized documentation of test generation strategies. Core test strategy engine capable of proposing test scenarios and generating basic test data.
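A toy sketch of the boundary-value heuristic Phase 2 describes, varying one parameter at a time. The type-to-values table and propose_cases are illustrative, not the final strategy engine (deduplication and smarter pairing are omitted):

```python
from typing import Any, Dict, List

# Hypothetical boundary-value pools per parameter type.
BOUNDARY_VALUES: Dict[type, List[Any]] = {
    int: [0, 1, -1, 2**31 - 1],
    float: [0.0, -1.5, float("inf")],
    str: ["", "a", "   "],
    list: [[], [None], [1, 2, 3]],
}


def propose_cases(param_types: Dict[str, type]) -> List[Dict[str, Any]]:
    """Vary each parameter across its boundary pool, holding the rest at a default."""
    defaults = {name: BOUNDARY_VALUES[t][0] for name, t in param_types.items()}
    cases = []
    for name, t in param_types.items():
        for value in BOUNDARY_VALUES[t]:
            case = dict(defaults)
            case[name] = value
            cases.append(case)
    return cases


cases = propose_cases({"a": int, "b": int})
print(len(cases))  # 8 (four boundary values for each of the two parameters)
```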

  • Phase 3: Code Generation & Templating System (Weeks 5-6)

* Objective: Create a flexible system to translate the internal test representations into actual, executable unit test code for specific languages and frameworks.

* Key Activities:

* Select and integrate a powerful templating engine (e.g., Jinja2, Handlebars).

* Develop templates for generating test boilerplate, assertions, setup/teardown, and mock definitions for each target language/framework.

* Implement the code generation logic to populate templates with data from the test strategy engine.

* Deliverables: Set of robust templates for generating tests in at least one target language/framework. Functional code generator that can produce runnable tests for simple cases.
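Phase 3's template idea in miniature: the plan proposes Jinja2, but the stdlib string.Template is enough for a dependency-free sketch of template-driven generation of a unittest skeleton (the template fields and sample values are hypothetical):

```python
from string import Template

# A minimal test-boilerplate template; real templates would also cover
# setup/teardown, mocks, and multiple assertion styles.
TEST_TEMPLATE = Template('''\
import unittest
from ${module} import ${cls}

class Test${cls}(unittest.TestCase):
    def test_${method}_basic(self):
        obj = ${cls}()
        self.assertEqual(obj.${method}(2, 3), ${expected})

if __name__ == "__main__":
    unittest.main()
''')

rendered = TEST_TEMPLATE.substitute(
    module="calculator", cls="Calculator", method="add", expected="5"
)
print(rendered)
```

The test strategy engine would supply the substitution values; the generator only fills templates.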

  • Phase 4: Integration Layer & API Design (Weeks 7-8)

* Objective: Define and implement the external interfaces that allow users and other systems to interact with the Unit Test Generator.

* Key Activities:

* Design the RESTful API endpoints and data models for input (source code, configuration) and output (generated tests, reports).

* Develop a command-line interface (CLI) for easy local execution and scripting.

* Implement input validation and basic authentication/authorization mechanisms (if applicable).

* Deliverables: API specification (e.g., OpenAPI/Swagger). Functional CLI prototype.
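A minimal CLI sketch for Phase 4 using stdlib argparse; the program name and flag names are placeholders, not a committed interface:

```python
import argparse


def build_cli() -> argparse.ArgumentParser:
    """Assemble the command-line interface (names are illustrative)."""
    parser = argparse.ArgumentParser(
        prog="unittest-gen",
        description="Generate unit tests for source files.",
    )
    parser.add_argument("source", help="path to a source file or raw code snippet")
    parser.add_argument("--language", default="python")
    parser.add_argument("--framework", default="unittest")
    parser.add_argument("--out", default="-",
                        help="output file, or '-' for standard output")
    return parser


# Parse a sample invocation instead of sys.argv for demonstration.
args = build_cli().parse_args(["calculator.py", "--framework", "pytest"])
print(args.framework)  # pytest
```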

  • Phase 5: Refinement, Error Handling & Logging (Weeks 9-10)

* Objective: Enhance the system's robustness, usability, and observability through comprehensive error handling, detailed logging, and performance optimizations.

* Key Activities:

* Implement a centralized error handling strategy across all components.

* Integrate a logging framework for detailed operational insights, debugging, and audit trails.

* Conduct initial performance profiling and apply optimizations.

* Refine user experience for both CLI and API interactions.

* Deliverables: Comprehensive error handling and logging implemented throughout the system. Initial performance benchmarks and optimizations report.

4. Key Technologies & Tools (Interpreted from "Recommended Resources")

This section proposes the foundational technologies and tools to be utilized, chosen for their robustness, community support, and suitability for the architectural goals.

  • Core Development Language: Python

* Rationale: Rich ecosystem for static analysis, AI/ML (potentially for advanced test generation), rapid prototyping, and strong community support.

  • Parsing & AST Generation:

* Language-Agnostic: tree-sitter (for its incremental parsing, robust support for multiple languages, and uniform AST structure).

* Language-Specific (as fallback or for deeper analysis):

* Python: parso, ast module

* Java: JavaParser, ANTLR

* JavaScript: Babel, esprima

* C#: Roslyn compiler API

  • Internal Representation: Custom Graph-Based AST or Code Property Graph (CPG)

* Rationale: Provides a unified, semantic representation that facilitates language-agnostic analysis of control flow, data flow, and dependencies.

  • Test Strategy Engine:

* Rule-Based System: Custom Python modules implementing logic based on code patterns and heuristics.

* Potential AI/ML Integration: Libraries like spaCy (for NLP on comments/docstrings), scikit-learn (for pattern recognition), or integration with large language models (LLMs) for context-aware test suggestions.

  • Code Generation:

* Jinja2 (Python): A powerful and widely used templating engine for generating structured text, ideal for creating language-specific test code.

  • API/CLI Frameworks:

* REST API: FastAPI (Python) for high performance and automatic interactive API documentation (Swagger UI).

Usage Example: main_generator.py

    # Set your Gemini API key as an environment variable.
    #   Example (Linux/macOS): export GEMINI_API_KEY='YOUR_API_KEY'
    #   Example (Windows CMD): set GEMINI_API_KEY=YOUR_API_KEY
    # Or set it directly in your script for testing (NOT recommended for production):
    #   os.environ["GEMINI_API_KEY"] = "YOUR_GEMINI_API_KEY_HERE"

    # Ensure the generate_unit_tests function is accessible
    # (e.g., defined above in the same file, or imported):
    # from your_module_name import generate_unit_tests

    # ------------------------------------------------------------------
    if __name__ == "__main__":
        sample_source_code = """
    class Calculator:
        def add(self, a, b):
            \"\"\"Adds two numbers.\"\"\"
            return a + b

        def subtract(self, a, b):
            \"\"\"Subtracts two numbers.\"\"\"
            return a - b

        def multiply(self, a, b):
            \"\"\"Multiplies two numbers.\"\"\"
            return a * b

        def divide(self, a, b):
            \"\"\"Divides two numbers. Handles division by zero.\"\"\"
            if b == 0:
                raise ValueError("Cannot divide by zero")
            return a / b
    """

        print("--- Generating Python unittest tests ---")
        generated_tests_unittest = generate_unit_tests(
            source_code=sample_source_code,
            language="python",
            framework="unittest",
        )

        if generated_tests_unittest:
            print("\n--- Generated unittest Tests ---\n")
            print(generated_tests_unittest)
            # You would typically save this to a file, e.g.:
            # with open("test_calculator_unittest.py", "w", encoding="utf-8") as f:
            #     f.write(generated_tests_unittest)


Unit Test Generator: Review and Documentation Deliverable

This document provides a comprehensive review and detailed documentation for the unit tests generated by the "Unit Test Generator" workflow. Our aim is to ensure the generated tests are high-quality, actionable, and easy to integrate seamlessly into your development lifecycle.


Executive Summary of Deliverables

The "Unit Test Generator" has successfully produced a suite of unit tests tailored to your specified codebase/functionality. This deliverable includes:

  • Detailed Review: An assessment of the generated tests covering quality, coverage, correctness, and adherence to best practices.
  • Comprehensive Documentation: Guidelines for integrating, using, and understanding the generated tests, including assumptions made and known limitations.
  • Actionable Next Steps: Clear instructions for your team to validate, execute, maintain, and expand upon these tests.

Our objective is to provide you with a robust foundation for improving code quality, identifying regressions, and accelerating your development process through effective unit testing.


Detailed Review of Generated Unit Tests

This section provides a qualitative and quantitative analysis of the unit tests produced.

1. Quality & Readability Assessment

The generated unit tests prioritize clarity and maintainability, aiming to make them easily understandable and adaptable by your development team.

  • Readability: Tests are structured with clear naming conventions (e.g., test_functionality_scenario) and follow standard test patterns (Arrange-Act-Assert).
  • Modularity: Where applicable, tests are organized into logical groups or files corresponding to the modules/classes they cover.
  • Maintainability: Efforts have been made to avoid overly complex test logic, promoting easier updates as your codebase evolves. Mocking and dependency injection are used judiciously to isolate units.
  • Code Comments: Essential comments are included to explain complex test scenarios, setup procedures, or specific assertion logic, enhancing understanding.
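A short example of the conventions described above (test_functionality_scenario naming, Arrange-Act-Assert), applied to a hypothetical divide function:

```python
import unittest


def divide(a, b):
    """Hypothetical unit under test, standing in for generated code."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b


class TestDivide(unittest.TestCase):
    def test_divide_returns_quotient_for_valid_inputs(self):
        # Arrange
        a, b = 10, 4
        # Act
        result = divide(a, b)
        # Assert
        self.assertEqual(result, 2.5)

    def test_divide_raises_value_error_on_zero_divisor(self):
        # Arrange/Act/Assert collapse: the assertion wraps the call.
        with self.assertRaises(ValueError):
            divide(1, 0)
```

Run with `python -m unittest` or your preferred runner; each method name states the functionality and the scenario being exercised.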

2. Test Coverage Analysis

The generator aimed for high-value coverage rather than merely maximizing line coverage. This means focusing on critical paths, edge cases, and core business logic.

  • Identified Coverage Areas:

* Core Functionality: All primary functions/methods identified in the input have at least one positive test case.

* Edge Cases: Common edge cases such as null inputs, empty collections, zero values, and boundary conditions (where applicable) have been considered.

* Error Handling: Tests for expected exceptions, invalid inputs, and error propagation have been included for robust error handling validation.

* Dependency Isolation: Critical external dependencies are mocked or stubbed to ensure true unit isolation, preventing flaky tests due to external factors.

  • Areas for Potential Expansion: While comprehensive, specific integration points or highly complex algorithms might warrant further, custom-written tests by your team. These are highlighted in the "Known Limitations" section.

3. Correctness & Assertions

The generated tests utilize appropriate assertion libraries and methods to validate expected outcomes accurately.

  • Assertion Accuracy: Assertions (assert_equals, assert_true, assert_raises, etc.) are chosen to precisely verify the post-conditions of each test scenario.
  • Mocking Strategy: Dependencies (e.g., database calls, API requests, external services) are mocked using standard frameworks (e.g., unittest.mock in Python, Mockito in Java) to ensure the unit under test is isolated and its behavior is predictable. Mocked objects are configured with expected return values or side effects relevant to the test case.
  • Setup/Teardown: Where necessary, setUp/tearDown methods or equivalent fixtures are used to prepare the test environment and clean up resources, ensuring test independence.
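The mocking and assertion style described above, sketched with unittest.mock; ReportService and its injected http_client are hypothetical, standing in for a unit with an external dependency:

```python
import unittest
from unittest.mock import Mock


class ReportService:
    """Hypothetical unit under test with an injected external dependency."""

    def __init__(self, http_client):
        self.http_client = http_client  # e.g. wraps API requests

    def fetch_total(self, user_id):
        payload = self.http_client.get(f"/orders/{user_id}")
        return sum(item["amount"] for item in payload)


class TestReportService(unittest.TestCase):
    def test_fetch_total_sums_order_amounts(self):
        # Arrange: mock the dependency with a fixed return value.
        client = Mock()
        client.get.return_value = [{"amount": 10}, {"amount": 5}]
        service = ReportService(client)
        # Act
        total = service.fetch_total(user_id=7)
        # Assert: verify both the result and the interaction with the mock.
        self.assertEqual(total, 15)
        client.get.assert_called_once_with("/orders/7")
```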

4. Adherence to Best Practices

The generated tests align with industry-standard unit testing best practices.

  • FAST Principles: Tests are designed to be Fast, Automated, Self-validating, and Thorough.
  • Independent Tests: Each test case is designed to run independently of others, preventing order-dependent failures.
  • Single Responsibility Principle: Individual test methods generally focus on testing one specific aspect or scenario of the unit under test.
  • No Side Effects: Tests do not alter global state or leave behind artifacts that could affect subsequent tests or the overall system.

Documentation for Integration & Usage

This section provides essential information for integrating and effectively utilizing the generated unit tests within your development environment.

1. Integration Instructions

The generated unit test files are provided in a format compatible with common testing frameworks.

  • File Location: The unit test files (e.g., test_*.py, *Test.java, *.test.ts) are typically located in a dedicated tests/ or src/test/ directory, mirroring your source code structure.
  • Framework Compatibility: Tests are generated for [Specify Framework, e.g., Python's pytest/unittest, Java's JUnit 5, JavaScript's Jest/Mocha, C#'s NUnit/xUnit].
  • Setup:

1. Dependency Installation: Ensure your project's test dependencies (e.g., pytest, junit-jupiter-api) are listed in your project's dependency management file (e.g., requirements.txt, pom.xml, package.json) and installed.

2. Placement: Place the generated test files into the appropriate test directory within your project structure.

3. Configuration (if needed): Verify your project's test runner configuration points to the correct test directory and uses the specified framework.

2. Assumptions & Context

The test generation process made the following assumptions based on the provided input:

  • Language & Framework: [Specify, e.g., Python 3.9+ with pytest]
  • Code Structure: Assumed standard project layouts and module import paths.
  • Input Scope: Tests were generated based on the specific functions/classes/modules explicitly provided.
  • Dependency Behavior: Mocked dependencies assume standard success/failure scenarios unless specific alternative behaviors were indicated in the input.
  • No External State: Assumed the units under test are primarily stateless or manage state internally in a testable manner.

3. Known Limitations

While comprehensive, it's important to acknowledge certain limitations inherent to automated test generation:

  • Complex Business Logic: Tests for highly nuanced or domain-specific business rules might require human review and refinement to capture all intricate scenarios accurately.
  • System-Level Interactions: These are unit tests, designed to isolate individual units. They do not cover end-to-end system interactions, performance, or UI testing.
  • Ambiguity in Requirements: If the original code's intent or expected behavior was ambiguous, the generated tests might reflect one plausible interpretation, which may need adjustment.
  • Data-Driven Tests: While some parameterization is included, extensive data-driven testing (e.g., large datasets for validation) might need further manual implementation.
  • External System State: Tests for interactions with external systems that maintain complex state (e.g., caching layers, distributed transactions) are typically mocked and might not fully replicate all real-world scenarios.
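Where extensive data-driven testing is needed, unittest's subTest offers a lightweight pattern for extending the generated parameterization (pytest users would reach for pytest.mark.parametrize instead); the add function here is a hypothetical unit:

```python
import unittest


def add(a, b):
    """Hypothetical unit under test."""
    return a + b


class TestAddDataDriven(unittest.TestCase):
    # Each tuple is one data-driven case: (a, b, expected).
    CASES = [(0, 0, 0), (1, 2, 3), (-1, 1, 0), (10**6, 1, 1000001)]

    def test_add_cases(self):
        for a, b, expected in self.CASES:
            # subTest reports each failing case individually.
            with self.subTest(a=a, b=b):
                self.assertEqual(add(a, b), expected)
```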

Actionable Next Steps for Your Team

To fully leverage the generated unit tests, we recommend the following steps:

1. Validation & Refinement

  • Review Code: Have your development team review the generated test code. Verify that the test cases accurately reflect the intended behavior of the units under test.
  • Fill Gaps: Identify any critical scenarios or edge cases unique to your domain that may not have been explicitly covered and add new test cases as needed.
  • Refactor/Optimize: While generated for readability, consider refactoring complex setup logic into helper functions or fixtures if it significantly improves maintainability for your specific team standards.
  • Update Assertions: Adjust assertion values or types if the generated expectations do not perfectly align with evolving requirements or newly discovered behaviors.

2. Execution & Reporting

  • Run Tests Locally: Execute the tests in your local development environment to confirm they pass and integrate correctly with your existing setup.

* Command Example: pytest (for Python), mvn test (for Java/Maven), npm test (for Node.js/Jest).

  • Integrate into CI/CD: Incorporate these tests into your Continuous Integration/Continuous Deployment (CI/CD) pipeline to ensure automated regression detection on every code commit.
  • Monitor Test Results: Establish clear reporting mechanisms to track test pass/fail rates and code coverage metrics.

3. Maintenance & Expansion

  • Regular Updates: As your codebase evolves, ensure the unit tests are updated concurrently to reflect changes in functionality, API contracts, or dependencies.
  • New Feature Testing: Use the generated tests as a template and guide for writing new unit tests for future features and bug fixes.
  • Documentation: Consider adding internal documentation or runbooks for your team on how to effectively extend and troubleshoot these tests.

Support & Feedback

Your feedback is invaluable for improving our "Unit Test Generator."

  • Questions or Issues: If you encounter any issues with the generated tests, have questions about their structure, or require further assistance, please contact our support team at [Your Support Contact Info/Portal].
  • Feature Requests: We welcome suggestions for enhancing the generator's capabilities, such as support for additional frameworks, more sophisticated mocking strategies, or specific types of test generation.

We are committed to helping you maximize the value of this deliverable and enhance your software development quality.

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}