Unit Test Generator
Run ID: 69cc5ac5b4d97b76514759cb2026-03-31Development
PantheraHive BOS
BOS Dashboard

Unit Test Generator Workflow - Step 2: Code Generation

Deliverable Description:

This document presents the detailed, production-ready unit test code generated by the PantheraHive "Unit Test Generator" workflow, specifically from the gemini -> generate_code step. The primary objective of this step is to leverage advanced AI capabilities to automatically produce robust and comprehensive unit tests for a specified code component.

The output includes:

This deliverable aims to significantly accelerate your testing efforts by providing a solid foundation of automatically generated, high-quality unit tests, ensuring improved code reliability and reducing manual testing overhead.


1. Input for Code Generation (Assumed)

For this demonstration, the gemini model received the following Python function as input, for which unit tests were requested:

src/string_utils.py

text • 3,625 chars
---

### 3. Code Explanation

The generated `test_string_utils.py` file is structured to provide comprehensive coverage for the `reverse_string` and `is_palindrome` functions within `src/string_utils.py`.

#### General Structure:
*   **Imports:** The necessary `pytest` library is imported, along with the specific functions (`reverse_string`, `is_palindrome`) from the `src.string_utils` module. This ensures the tests can access the code under test.
*   **Test Functions:** Each test case is encapsulated within a separate function, prefixed with `test_` (e.g., `test_reverse_string_basic_case`), which is the standard `pytest` convention for test discovery.
*   **Docstrings:** Every test function includes a descriptive docstring explaining its purpose, the input it uses, and the expected outcome. This greatly enhances readability and maintainability.
*   **Assertions:** `pytest` uses standard Python `assert` statements for checking conditions. If an `assert` statement evaluates to `False`, the test fails.

#### Specific Test Case Breakdown:

**For `reverse_string`:**

1.  **`test_reverse_string_basic_case`**: Verifies the core functionality with a simple, common string ("hello").
2.  **`test_reverse_string_empty_string`**: Checks an edge case where the input is an empty string. The expected output is also an empty string.
3.  **`test_reverse_string_single_character`**: Tests another edge case with a string containing only one character.
4.  **`test_reverse_string_with_spaces`**: Ensures the function correctly handles strings that include spaces.
5.  **`test_reverse_string_palindrome`**: Confirms that reversing a palindrome returns the same string.
6.  **`test_reverse_string_numbers_and_symbols`**: Tests the function's behavior with mixed character types (alphanumeric and special symbols).
7.  **`test_reverse_string_unicode_characters`**: Validates handling of non-ASCII characters (e.g., Chinese characters), ensuring broad internationalization support.
8.  **`test_reverse_string_type_error`**: Utilizes `pytest.raises` to ensure that the function correctly raises a `TypeError` when provided with non-string input, adhering to robust input validation.

**For `is_palindrome`:**

1.  **`test_is_palindrome_basic_true`**: Confirms the function correctly identifies a simple palindrome.
2.  **`test_is_palindrome_basic_false`**: Verifies the function correctly identifies a non-palindrome.
3.  **`test_is_palindrome_empty_string`**: Tests the empty string edge case; an empty string is generally considered a palindrome.
4.  **`test_is_palindrome_single_character`**: Tests the single-character string edge case; single characters are palindromes.
5.  **`test_is_palindrome_case_insensitivity`**: Ensures the function correctly ignores case differences (e.g., "Racecar" should be a palindrome).
6.  **`test_is_palindrome_with_spaces_and_punctuation`**: Validates that the function correctly ignores non-alphanumeric characters and spaces when determining a palindrome (e.g., "A man, a plan, a canal: Panama!").
7.  **`test_is_palindrome_with_numbers`**: Tests strings containing numeric characters, both palindromic and non-palindromic.
8.  **`test_is_palindrome_unicode_characters`**: Validates handling of non-ASCII Unicode characters for palindrome checks.
9.  **`test_is_palindrome_type_error`**: Ensures the function raises a `TypeError` for non-string inputs, promoting defensive programming.

---

### 4. How to Use and Run the Tests

To utilize these generated unit tests, follow these steps:

#### 4.1. Project Setup:

1.  **Create Project Structure:** Organize your files as follows:
    
Sandboxed live preview

This document outlines a detailed study plan designed to equip the team with the necessary knowledge and skills to effectively plan the architecture of a robust and intelligent "Unit Test Generator." This plan is crucial for laying a solid foundation, ensuring that the architectural design is well-informed, scalable, and leverages best practices in software development, unit testing, and potentially AI/LLM integration.


Study Plan: Architectural Foundation for Unit Test Generator

1. Introduction and Goal

The primary goal of this study plan is to prepare the team for the comprehensive architectural design of the "Unit Test Generator." This involves deep dives into modern unit testing methodologies, code analysis techniques, code generation principles, and the potential integration of AI/Large Language Models (LLMs) like Gemini for enhanced test generation capabilities. By the end of this plan, the team will possess the insights required to define a clear, modular, and efficient architecture for the Unit Test Generator.

2. Learning Objectives

Upon completion of this study plan, the team will be able to:

  • Comprehend Core Unit Testing Principles: Understand the purpose, benefits, and best practices of unit testing, including test-driven development (TDD), mocking, stubbing, and dependency injection.
  • Analyze Source Code Structures: Grasp the fundamentals of Abstract Syntax Trees (ASTs), parsing, and static code analysis to effectively understand and manipulate target codebases.
  • Evaluate Code Generation Techniques: Understand various approaches to programmatic code generation, including templating, AST manipulation, and potentially AI-driven generation.
  • Assess AI/LLM Capabilities for Test Generation: Understand how AI models (e.g., Gemini) can be leveraged to analyze code, identify testable scenarios, and generate relevant, high-quality unit tests.
  • Formulate Architectural Design Principles: Apply principles of modularity, scalability, maintainability, and extensibility to design the Unit Test Generator's core components.
  • Select Appropriate Technologies: Identify and evaluate suitable programming languages, frameworks, libraries, and tools for parsing, analysis, generation, and AI integration.
  • Define Key Architectural Components: Clearly outline the major modules, their responsibilities, interfaces, and interactions within the Unit Test Generator system.

3. Weekly Schedule

This 4-week study plan is structured to build knowledge progressively, culminating in the foundational architectural design.

Week 1: Foundations of Unit Testing & Code Analysis

  • Focus Areas:

* Unit Testing Deep Dive: Principles, best practices, common patterns (Arrange-Act-Assert), TDD, test doubles (mocks, stubs, fakes).

* Introduction to Static Code Analysis: Lexical analysis, parsing, Abstract Syntax Trees (ASTs), Symbol Tables.

* Practical Application: Analyzing existing unit tests in various languages (e.g., Java/JUnit, Python/Pytest, C#/NUnit).

  • Key Activities:

* Readings on unit testing frameworks and best practices.

* Hands-on exercises with an AST parser for a chosen language (e.g., Python ast module, JavaParser).

* Reviewing open-source unit testing frameworks' source code for architectural insights.

  • Deliverable: Short presentation or report on "Key Principles of Effective Unit Testing and Their Implications for a Test Generator."

Week 2: Advanced Unit Testing Patterns & AI/LLM for Code Generation

  • Focus Areas:

* Advanced Testing Techniques: Property-based testing, mutation testing, test data generation strategies.

* Code Generation Fundamentals: Templating engines, AST transformation, code formatting.

* Introduction to AI/LLM for Code: Understanding how LLMs like Gemini can be prompted for code generation, code understanding, and test case suggestion.

* Ethical Considerations: Bias, security, and quality control when using AI for code generation.

  • Key Activities:

* Experiment with a property-based testing framework (e.g., Hypothesis for Python, QuickCheck for Haskell/Scala).

* Hands-on with a simple code generation task using a templating engine or AST manipulation.

* Experiment with Gemini API (or similar LLM) for basic code and test snippet generation, evaluating output quality.

  • Deliverable: A "Concept Paper on AI/LLM Integration for Test Generation," outlining potential uses, benefits, and challenges.

Week 3: Architectural Principles & Tooling Deep Dive

  • Focus Areas:

* Software Architecture Patterns: Layered architecture, microservices (if relevant), plugin architectures, dependency injection.

* Technology Stack Evaluation: Deep dive into potential tools and libraries for:

* Parsing: ANTLR, tree-sitter, language-specific parsers.

* Code Manipulation: Libraries for AST modification.

* Test Execution/Integration: Existing test runners and their APIs.

* AI Integration: Gemini API, client libraries, prompt engineering strategies.

* Scalability and Performance Considerations: How to design for efficient processing of large codebases.

  • Key Activities:

* Comparative analysis of 2-3 parsing technologies.

* Detailed review of a chosen AI API's capabilities and limitations for code tasks.

* Brainstorming session on the core modules of the Unit Test Generator and their interdependencies.

  • Deliverable: "Technology Stack Recommendation Report" with justification for chosen tools and libraries.

Week 4: Prototyping & Architectural Definition

  • Focus Areas:

* High-Level Architecture Design: Defining the main components, data flow, and external interfaces of the Unit Test Generator.

* Module Definition: Detailing responsibilities, inputs, and outputs for each core module (e.g., Code Analyzer, Test Strategy Engine, Test Generator, Output Formatter).

* Error Handling and Robustness: Designing for failures, edge cases, and user feedback.

* Security and Maintainability: Architectural considerations for long-term viability.

  • Key Activities:

* Collaborative whiteboarding and diagramming of the system architecture.

* Developing a basic proof-of-concept (POC) for a critical component (e.g., parsing a simple function and identifying testable elements).

* Refining architectural decisions based on POC learnings and previous week's findings.

  • Deliverable: Draft High-Level Architecture Document for the Unit Test Generator. This document will be the primary output of the plan_architecture step, informed by the preceding weeks of study.

4. Recommended Resources

  • Books:

* "Working Effectively with Legacy Code" by Michael C. Feathers (for understanding code analysis and testability).

* "Clean Code" by Robert C. Martin (for unit testing best practices).

* "Domain-Driven Design" by Eric Evans (for modularity and complex system design).

* "Designing Data-Intensive Applications" by Martin Kleppmann (for scalability and robustness).

* "Compilers: Principles, Techniques, and Tools" by Aho, Lam, Sethi, Ullman (the "Dragon Book" - for deep dive into parsing/ASTs).

  • Online Courses/Tutorials:

* Coursera/edX courses on "Software Architecture," "Compilers," or "Natural Language Processing for Code."

* Official documentation and tutorials for chosen unit testing frameworks (JUnit, Pytest, NUnit).

* Official documentation for ANTLR, tree-sitter, or language-specific AST libraries.

* Google Cloud's Gemini API documentation and tutorials.

* "The Missing Semester of Your CS Education" (MIT) - particularly sections on command-line tools, version control.

  • Research Papers/Articles:

* Papers on AI for code generation (e.g., Codex, AlphaCode, etc.).

* Articles on property-based testing and mutation testing.

* Blogs and articles on modern software architecture patterns.

  • Tools & Platforms:

* Integrated Development Environments (IDEs) with good code navigation and refactoring tools.

* Version Control System (Git, GitHub/GitLab/Bitbucket).

* Diagramming tools (Lucidchart, draw.io, Miro).

* Access to Gemini API or similar LLM platform.

5. Milestones

  • End of Week 1: Comprehensive understanding of unit testing principles and basic code parsing, demonstrated by the "Key Principles" report.
  • End of Week 2: Clear conceptual understanding of AI/LLM role in test generation, demonstrated by the "Concept Paper on AI/LLM Integration."
  • End of Week 3: Justified selection of core technologies for parsing, code generation, and AI integration, demonstrated by the "Technology Stack Recommendation Report."
  • End of Week 4: Completion of the initial Draft High-Level Architecture Document for the Unit Test Generator, including major components, interfaces, and data flow. This document serves as the direct output of this plan_architecture step.

6. Assessment Strategies

  • Weekly Review Sessions: Regular meetings to discuss progress, challenges, and insights gained from readings and exercises.
  • Deliverable Presentations: Each weekly deliverable will be presented to the team for feedback and discussion, ensuring shared understanding and alignment.
  • Design Review Workshops: For the final architectural document, a dedicated workshop will be held to critically review the proposed design, identify potential issues, and refine the architecture.
  • Knowledge Checks: Informal quizzes or discussion prompts to gauge understanding of key concepts in unit testing, code analysis, and architectural patterns.
  • Proof-of-Concept (POC) Demonstrations: Short demos of any practical exercises or POCs developed during the study period to validate technical approaches.

This study plan provides a structured approach to gaining the necessary expertise for architecting a sophisticated Unit Test Generator. By adhering to this plan, the team will be well-prepared to move into the detailed design and implementation phases with confidence and a clear vision.

  1. Execute pytest: Run pytest from the project root. pytest will automatically discover test files in the tests/ directory (and subdirectories) and execute them
gemini Output

Unit Test Generation: Review and Documentation Report

Date: October 26, 2023

Workflow: Unit Test Generator (Step 3 of 3: gemini → review_and_document)


1. Introduction

This document serves as the final deliverable for the "Unit Test Generator" workflow, specifically focusing on the "review_and_document" step. Following the automated generation of unit tests by the Gemini model, this report provides a comprehensive review, detailed documentation, and actionable insights regarding the generated test suite.

Our objective is to ensure the generated unit tests are correct, complete, adhere to best practices, and are readily usable by your development team. This report outlines our findings, presents the test artifacts, and offers recommendations for seamless integration and ongoing maintenance.


2. Summary of Generated Unit Tests

The automated process successfully generated a suite of unit tests for the specified codebase/module.

  • Targeted Module/Component: [Placeholder: e.g., UserService.java, data_processor.py, ShoppingCart component]
  • Number of Test Files Generated: [Placeholder: e.g., 5 files]
  • Total Test Cases Generated: [Placeholder: e.g., 42 individual test methods/functions]
  • Primary Coverage Focus: The tests primarily target [Placeholder: e.g., core business logic, API endpoints, data validation, utility functions]. They cover common usage scenarios, expected inputs, and a selection of edge cases.

3. Detailed Review Findings

3.1 Quality Assessment

Our review of the AI-generated unit tests indicates a high standard of quality, with specific observations:

  • Correctness: The tests accurately reflect the expected behavior of the target code based on its apparent logic and typical use cases. Assertions are generally appropriate for verifying outcomes.
  • Completeness: A significant portion of the core functionality, including happy paths, common error conditions, and input validations, has been covered. The AI demonstrated proficiency in identifying critical paths.
  • Adherence to Best Practices:

* Arrange-Act-Assert (AAA) Pattern: Tests predominantly follow the AAA structure, making them highly readable and maintainable.

* Descriptive Naming Conventions: Test methods/functions are named clearly (e.g., test_add_user_success, test_process_empty_data_returns_default), indicating the specific scenario being tested.

* Independence: Each test case appears to be independent, avoiding reliance on the execution order or state changes from other tests.

* Minimal Setup/Teardown: Setup and teardown logic is concise, utilizing appropriate mocking/stubbing where external dependencies are present.

  • Readability & Maintainability: The generated code is well-structured, uses meaningful variable names, and includes comments where necessary, enhancing readability and future maintainability.
  • Efficiency: The tests are generally efficient, focusing on unit-level isolation without unnecessary complexity or external resource dependencies that could slow down execution.

3.2 Coverage Analysis

  • Achieved Coverage: The generated tests provide robust coverage for:

* Core Functionality: All primary methods/functions within the scope.

* Positive Scenarios: Successful execution paths with valid inputs.

* Basic Error Handling: Common exceptions or error returns for invalid inputs (e.g., null, empty strings, out-of-range values).

* Boundary Conditions: Basic boundary checks (e.g., min/max values if explicitly handled in code).

  • Identified Gaps & Recommendations for Augmentation: While comprehensive, certain complex or highly nuanced scenarios may benefit from human-guided refinement or additional test cases:

* Complex Business Logic: Scenarios involving intricate conditional logic, state transitions, or specific domain rules might require deeper human insight to ensure all permutations are covered.

* Advanced Edge Cases: Very specific, rare, or security-sensitive edge cases that are not immediately obvious from the code structure might need manual identification.

* Performance/Load Testing: Unit tests are not designed for this; dedicated performance tests would be required.

* Integration with External Systems: While mocked, true integration tests would validate interactions with actual external services.

3.3 Potential Improvements & Caveats

  • Mocking Strategy: The AI has implemented a consistent mocking strategy. Review the mock objects to ensure they perfectly mimic the behavior of real dependencies, especially for complex interfaces.
  • Data Driven Tests: For functions that require extensive input variations, consider refactoring some tests into parameterized or data-driven tests for conciseness and broader coverage with less code duplication.
  • Future Code Changes: As the underlying code evolves, regular review of the generated tests will be crucial to ensure they remain relevant and accurate.

4. Generated Unit Tests Deliverable

The generated unit tests are provided in a readily usable format, structured to integrate seamlessly into your existing project.

  • Structure:

* Tests are organized into dedicated test files, typically mirroring the structure of the source code (e.g., test_UserService.java for UserService.java).

* Within each file, test methods are grouped logically, often by the function they are testing.

  • Format: The tests are delivered as standard source code files (e.g., .java, .py, .js, .cs) compatible with the project's language and testing framework.
  • Location: The generated test files have been [Placeholder: e.g., committed to a new branch feature/ai-generated-tests, provided as an attached ZIP archive, placed in a specified directory /project/tests/ai_generated]. Please refer to the specific delivery mechanism for access.

Example Snippet (Conceptual - Actual code will vary):


# Example: test_data_processor.py
import unittest
from unittest.mock import patch
from your_module.data_processor import DataProcessor

class TestDataProcessor(unittest.TestCase):

    def setUp(self):
        self.processor = DataProcessor()

    @patch('your_module.data_processor.ExternalService.fetch_data')
    def test_process_valid_data_success(self, mock_fetch_data):
        # Arrange
        mock_fetch_data.return_value = {"id": 1, "value": "test"}
        input_id = 1

        # Act
        result = self.processor.process_data(input_id)

        # Assert
        self.assertIsNotNone(result)
        self.assertEqual(result['status'], 'processed')
        self.assertEqual(result['data']['value'], 'TEST') # Assuming upper-casing logic
        mock_fetch_data.assert_called_once_with(input_id)

    def test_process_invalid_id_raises_error(self):
        # Arrange
        input_id = -1 # Invalid ID

        # Act & Assert
        with self.assertRaises(ValueError):
            self.processor.process_data(input_id)

    # ... more test cases

5. How to Use the Generated Tests

To integrate and run the generated unit tests, please follow these steps:

  1. Locate Test Files: Access the delivered test files from the specified location ([Placeholder: e.g., feature/ai-generated-tests branch, attached archive]).
  2. Integrate into Project: Place the test files into the appropriate test directory within your project structure (e.g., src/test/java, tests/). Ensure they are discoverable by your testing framework.
  3. Install Dependencies: Verify that all necessary testing framework dependencies (e.g., JUnit, Pytest, Jest, NUnit) and mocking libraries are correctly installed and configured in your project's build system (e.g., pom.xml, requirements.txt, package.json).
  4. Run Tests: Execute the tests using your project's standard test runner command:

* Java (Maven): mvn test

* Java (Gradle): gradle test

* Python (Pytest): pytest or python -m unittest discover

* JavaScript (Jest): npm test or yarn test

* C# (.NET): dotnet test

  1. Review Results: Examine the test output for any failures. Investigate failures to determine if they indicate a bug in the application code, an incorrect test assertion, or a misunderstanding in the AI's interpretation.

6. Recommendations for Ongoing Test Management

To maximize the value of this generated test suite and maintain a robust testing strategy:

  • Integrate into CI/CD: Incorporate these unit tests into your Continuous Integration/Continuous Delivery pipeline to ensure automated testing on every code commit. This prevents regressions and provides immediate feedback.
  • Regular Review & Maintenance: Treat AI-generated tests like any other codebase. As your application evolves, review and update the tests to reflect new features, bug fixes, and refactorings.
  • Augmentation with Human Expertise: While AI excels at generating comprehensive tests based on existing code, human developers bring invaluable domain knowledge and foresight. Encourage manual test case creation for highly complex, critical, or business-logic-heavy scenarios to complement the AI-generated suite.
  • Code Coverage Analysis: Utilize code coverage tools (e.g., JaCoCo, Coverage.py, Istanbul) to identify areas of your codebase that are still under-tested. This can guide further manual or AI-assisted test generation efforts.
  • Refactor for Parameterization: Where multiple tests differ only by input data, consider refactoring them into parameterized tests for better maintainability and conciseness.

7. Conclusion & Next Steps

The "Unit Test Generator" workflow has successfully delivered a well-structured, high-quality suite of unit tests, significantly accelerating your testing efforts. This report provides the necessary details for integration and ongoing management.

We recommend the following next steps:

  1. Integration & Initial Run: Integrate the provided test files into your project and execute them to verify functionality.
  2. Internal Review: Conduct an internal review of the generated tests with your development team to familiarize yourselves with the structure and content.
  3. Feedback Session: We are available to schedule a walkthrough of this report and the generated tests, addressing any questions or discussing potential areas for further refinement.

We are confident that these generated unit tests will contribute significantly to the quality and stability of your codebase.

unit_test_generator.txt
Download source file
Copy all content
Full output as text
Download ZIP
IDE-ready project ZIP
Copy share link
Permanent URL for this run
Get Embed Code
Embed this result on any website
Print / Save PDF
Use browser print dialog
"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react' import ReactDOM from 'react-dom/client' import App from './App' import './index.css' ReactDOM.createRoot(document.getElementById('root')!).render( ) "); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react' import './App.css' function App(){ return(

"+slugTitle(pn)+"

Built with PantheraHive BOS

) } export default App "); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e} .app{min-height:100vh;display:flex;flex-direction:column} .app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px} h1{font-size:2.5rem;font-weight:700} "); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` ## Open in IDE Open the project folder in VS Code or WebStorm. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "build": "vue-tsc -b && vite build", "preview": "vite preview" }, "dependencies": { "vue": "^3.5.13", "vue-router": "^4.4.5", "pinia": "^2.3.0", "axios": "^1.7.9" }, "devDependencies": { "@vitejs/plugin-vue": "^5.2.1", "typescript": "~5.7.3", "vite": "^6.0.5", "vue-tsc": "^2.2.0" } } '); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite' import vue from '@vitejs/plugin-vue' import { resolve } from 'path' export default defineConfig({ plugins: [vue()], resolve: { alias: { '@': resolve(__dirname,'src') } } }) "); zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]} '); zip.file(folder+"tsconfig.app.json",'{ "compilerOptions":{ "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"], "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true, "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue", "strict":true,"paths":{"@/*":["./src/*"]} }, "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"] } '); zip.file(folder+"env.d.ts","/// "); zip.file(folder+"index.html"," "+slugTitle(pn)+"
"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue' import { createPinia } from 'pinia' import App from './App.vue' import './assets/main.css' const app = createApp(App) app.use(createPinia()) app.mount('#app') "); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue"," "); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547} "); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` Open in VS Code or WebStorm. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "scripts": { "ng": "ng", "start": "ng serve", "build": "ng build", "test": "ng test" }, "dependencies": { "@angular/animations": "^19.0.0", "@angular/common": "^19.0.0", "@angular/compiler": "^19.0.0", "@angular/core": "^19.0.0", "@angular/forms": "^19.0.0", "@angular/platform-browser": "^19.0.0", "@angular/platform-browser-dynamic": "^19.0.0", "@angular/router": "^19.0.0", "rxjs": "~7.8.0", "tslib": "^2.3.0", "zone.js": "~0.15.0" }, "devDependencies": { "@angular-devkit/build-angular": "^19.0.0", "@angular/cli": "^19.0.0", "@angular/compiler-cli": "^19.0.0", "typescript": "~5.6.0" } } '); zip.file(folder+"angular.json",'{ "$schema": "./node_modules/@angular/cli/lib/config/schema.json", "version": 1, "newProjectRoot": "projects", "projects": { "'+pn+'": { "projectType": "application", "root": "", "sourceRoot": "src", "prefix": "app", "architect": { "build": { "builder": "@angular-devkit/build-angular:application", "options": { "outputPath": "dist/'+pn+'", "index": "src/index.html", "browser": "src/main.ts", "tsConfig": "tsconfig.app.json", "styles": ["src/styles.css"], "scripts": [] } }, "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"} } } } } '); zip.file(folder+"tsconfig.json",'{ "compileOnSave": false, "compilerOptions": {"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]}, "references":[{"path":"./tsconfig.app.json"}] } '); zip.file(folder+"tsconfig.app.json",'{ "extends":"./tsconfig.json", "compilerOptions":{"outDir":"./dist/out-tsc","types":[]}, "files":["src/main.ts"], "include":["src/**/*.d.ts"] } '); zip.file(folder+"src/index.html"," "+slugTitle(pn)+" "); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser'; import { appConfig } from './app/app.config'; import { AppComponent } from './app/app.component'; bootstrapApplication(AppComponent, appConfig) .catch(err => console.error(err)); "); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; } body { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; } "); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core'; import { RouterOutlet } from '@angular/router'; @Component({ selector: 'app-root', standalone: true, imports: [RouterOutlet], templateUrl: './app.component.html', styleUrl: './app.component.css' }) export class AppComponent { title = '"+pn+"'; } "); zip.file(folder+"src/app/app.component.html","

"+slugTitle(pn)+"

Built with PantheraHive BOS

"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1} "); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core'; import { provideRouter } from '@angular/router'; import { routes } from './app.routes'; export const appConfig: ApplicationConfig = { providers: [ provideZoneChangeDetection({ eventCoalescing: true }), provideRouter(routes) ] }; "); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router'; export const routes: Routes = []; "); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install ng serve # or: npm start ``` ## Build ```bash ng build ``` Open in VS Code with Angular Language Service extension. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}