Unit Test Generator
Run ID: 69ccf56c3e7fb09ff16a6a02 · 2026-04-01 · Development
PantheraHive BOS

As part of the "Unit Test Generator" workflow, this deliverable represents Step 2: Generate Code. In this step, the Gemini model has analyzed the provided context (or inferred a common context for demonstration) and generated comprehensive, production-ready unit test code.

The goal of this step is to provide you with high-quality, executable unit tests that cover the core functionalities of your application's components, ensuring their reliability and correctness.


Unit Test Generation Output

1. Introduction to Generated Unit Tests

This section presents the automatically generated unit tests. For demonstration purposes, we have assumed a common scenario: generating tests for a Calculator class with basic arithmetic operations. This output showcases how the generate_code step provides well-structured, clean, and executable test code.

The generated tests utilize the pytest framework, a popular and powerful testing tool for Python, known for its simplicity and extensibility.

2. Assumed Source Code (for which tests are generated)

To provide context for the generated tests, we assume the following Python module (calculator.py) as the target for testing:

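The original listing was collapsed in this export and could not be recovered. A minimal sketch consistent with the explanations in Section 4 (the method names and the `ValueError` message are inferred from the test descriptions below) might look like:

```python
# calculator.py -- illustrative sketch; the original listing was collapsed in
# the export. Method names and the error message are inferred from Section 4.

class Calculator:
    """A simple calculator with basic arithmetic operations."""

    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b

    def multiply(self, a, b):
        return a * b

    def divide(self, a, b):
        # Guard against division by zero with an explicit, testable error.
        if b == 0:
            raise ValueError("Cannot divide by zero!")
        return a / b
```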
### 4. Code Explanation and Structure

The generated test code follows best practices for unit testing and `pytest` conventions:

*   **`import pytest`**: Imports the `pytest` framework.
*   **`from calculator import Calculator`**: Imports the specific class (`Calculator`) that is being tested from its module (`calculator.py`).
*   **`@pytest.fixture`**:
    *   The `calculator_instance` fixture ensures that each test method receives a fresh instance of the `Calculator` class. This is crucial for test isolation, preventing one test's side effects from influencing another.
*   **`class TestCalculator:`**:
    *   Test classes are used to group related tests. `pytest` automatically discovers classes prefixed with `Test`.
*   **`def test_...` methods**:
    *   Each method starting with `test_` is recognized by `pytest` as an individual test case.
    *   Each test method takes `calculator_instance` as an argument, allowing `pytest` to inject the fixture's return value.
    *   **Assertions (`assert`)**: These are the core of unit tests. They check if a condition is true. If an `assert` statement fails, the test fails.
        *   `assert calculator_instance.add(5, 3) == 8`: Checks if the `add` method returns the expected sum.
        *   `pytest.approx()`: Used for comparing floating-point numbers. Due to the nature of floating-point arithmetic, direct equality (`==`) can sometimes fail for numbers that are mathematically equal but differ slightly in their binary representation. `pytest.approx()` allows for comparisons within a relative or absolute tolerance.
        *   `with pytest.raises(ValueError, match="Cannot divide by zero!"):`: This context manager is used to test that a specific exception (`ValueError`) is raised when expected (e.g., dividing by zero), and optionally, that the exception message matches a given pattern.
*   **Comprehensive Coverage**: The tests cover various scenarios for each operation, including:
    *   Positive and negative numbers.
    *   Zero values.
    *   Floating-point numbers.
    *   Edge cases (e.g., division by zero).
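A representative excerpt in the style described above (a sketch, since the Section 3 listing was lost in the export; the `Calculator` class is inlined here so the snippet is self-contained, whereas the real test file would import it from `calculator.py`):

```python
# test_calculator.py -- illustrative excerpt of the style described above.
import pytest

# Normally: from calculator import Calculator
# Inlined here so the snippet runs on its own.
class Calculator:
    def add(self, a, b):
        return a + b

    def divide(self, a, b):
        if b == 0:
            raise ValueError("Cannot divide by zero!")
        return a / b


@pytest.fixture
def calculator_instance():
    """A fresh Calculator per test, keeping tests isolated."""
    return Calculator()


class TestCalculator:
    def test_add_positive_numbers(self, calculator_instance):
        assert calculator_instance.add(5, 3) == 8

    def test_add_floats(self, calculator_instance):
        # pytest.approx tolerates floating-point rounding error.
        assert calculator_instance.add(0.1, 0.2) == pytest.approx(0.3)

    def test_divide_by_zero_raises(self, calculator_instance):
        with pytest.raises(ValueError, match="Cannot divide by zero!"):
            calculator_instance.divide(10, 0)
```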

### 5. Actionable Steps: How to Use These Tests

To integrate and run these generated unit tests, follow these steps:

1.  **Save the Source Code**:
    *   Save the assumed source code (from Section 2) as `calculator.py` in your project directory.
2.  **Save the Test Code**:
    *   Save the generated test code (from Section 3) as `test_calculator.py` in the same directory as `calculator.py`, or in a dedicated `tests/` subdirectory.
3.  **Install `pytest`**:
    *   If you don't already have `pytest` installed, open your terminal or command prompt and run `pip install pytest`.
4.  **Run the Tests**:
    *   From the project's root directory, run `pytest`. It automatically discovers files named `test_*.py` and reports pass/fail results.

This document outlines a comprehensive, detailed, and professional study plan for understanding and potentially designing a "Unit Test Generator." This plan is structured to provide a deep dive into the underlying principles, architectural considerations, and practical aspects of automated unit test generation.


Study Plan: Unit Test Generator

1. Introduction and Purpose

Automated unit test generation is a critical area in modern software development, aiming to enhance code quality, reduce manual testing effort, and accelerate development cycles. A Unit Test Generator can automatically create test cases for software components, often by analyzing source code, identifying potential execution paths, and generating inputs that trigger specific behaviors or cover different code paths.

The purpose of this study plan is to equip you with a thorough understanding of the concepts, technologies, and architectural considerations required to comprehend, evaluate, or even design a Unit Test Generator. This plan will guide you through the fundamentals of unit testing, static code analysis, and advanced test generation techniques, culminating in an architectural perspective of such a system.

2. Overall Learning Objective

By the end of this study plan, the learner will possess a comprehensive understanding of the principles, architecture, and practical considerations for developing or utilizing a Unit Test Generator. This includes the ability to critically analyze existing solutions, design core components for test generation, and articulate the challenges and opportunities in automated unit test creation.

3. Weekly Schedule

This 4-week intensive study plan is designed to build knowledge incrementally, from foundational concepts to advanced architectural design.

  • Week 1: Fundamentals of Unit Testing & Test Automation

* Focus: Core concepts of unit testing, their importance, and practical application.

* Key Topics:

* Definition and benefits of unit testing.

* Test-Driven Development (TDD) and Behavior-Driven Development (BDD) paradigms.

* Introduction to popular unit testing frameworks (e.g., JUnit, Pytest, Jest).

* Mocking, stubbing, and dependency injection strategies.

* Code coverage metrics and their significance.

* Writing effective, maintainable unit tests.
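As a concrete taste of the mocking and dependency-injection topics above, a minimal `unittest.mock` sketch (the `Checkout`/`PaymentGateway` names are hypothetical, for illustration only):

```python
# Minimal mocking sketch: isolate Checkout from a real payment gateway.
# Class and method names here are hypothetical, for illustration only.
from unittest.mock import Mock


class Checkout:
    def __init__(self, gateway):
        self.gateway = gateway  # injected dependency -> easy to replace in tests

    def pay(self, amount):
        return self.gateway.charge(amount)


gateway = Mock()
gateway.charge.return_value = "ok"   # stub the dependency's behavior

checkout = Checkout(gateway)
result = checkout.pay(42)

gateway.charge.assert_called_once_with(42)  # verify the interaction
```

Because `Checkout` receives its dependency through the constructor, the test never touches a real payment service.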

  • Week 2: Introduction to Code Analysis & Abstract Syntax Trees (ASTs)

* Focus: Understanding how source code can be programmatically analyzed, a prerequisite for any test generator.

* Key Topics:

* Compiler fundamentals: Lexical analysis (tokenization) and parsing.

* Introduction to Abstract Syntax Trees (ASTs) as a representation of code structure.

* Tools and libraries for AST generation and manipulation (e.g., ANTLR, Roslyn, Python's ast module, Babel for JavaScript).

* Concepts of static code analysis and its role in identifying testable units, function signatures, control flow, and data structures.

* Identifying dependencies and external calls within code using ASTs.
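The AST topics above can be made concrete with Python's built-in `ast` module. A small sketch that parses a snippet and extracts each function's name and parameter list — exactly the kind of information a test generator needs:

```python
# Sketch: extract function signatures from source code with Python's ast module.
import ast

source = """
def add(a: int, b: int) -> int:
    return a + b
"""

tree = ast.parse(source)
signatures = []
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        params = [arg.arg for arg in node.args.args]
        signatures.append((node.name, params))

print(signatures)  # [('add', ['a', 'b'])]
```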

  • Week 3: Architectural Design Principles of a Unit Test Generator

* Focus: Delving into the core components and design strategies for building a Unit Test Generator. This is the central architectural planning week.

* Key Topics:

* High-Level Architecture: Input (source code, configuration), Core Generation Engine, Output (test files, reports).

* Core Components:

* Code Parser/Analyzer Module: Extracts AST, symbol tables, control flow graphs.

* Test Case Generator Module: Algorithms for creating test inputs (e.g., random, symbolic execution, constraint solving, combinatorial).

* Assertion Generator Module: Strategies for generating assertions based on expected behavior or contract.

* Mock/Stub Generator Module: Automatically creating mocks for dependencies.

* Test Framework Integration Layer: Adapting generated tests to specific testing frameworks.

* Strategies for Test Case Generation:

* Signature-based generation (parameters, return types, exceptions).

* Control flow-based generation (branch coverage, path coverage).

* Data-type based generation (boundary conditions, valid/invalid inputs).

* Handling stateful components and object interactions.

* Design Considerations: Language specificity, extensibility, performance, maintainability of generated tests.

  • Week 4: Advanced Topics & Practical Implementation Considerations

* Focus: Exploring complex scenarios, integration, and a conceptual hands-on approach.

* Key Topics:

* Handling complex code structures: Asynchronous operations, multi-threading, reflection, dynamic typing.

* Strategies for generating tests for exception handling and error conditions.

* Integration of a Unit Test Generator into Continuous Integration/Continuous Deployment (CI/CD) pipelines.

* Techniques for refactoring and maintaining automatically generated tests.

* Case studies of existing automated test generation tools (e.g., Microsoft Pex/IntelliTest, EvoSuite, KLEE – understanding their approaches and limitations).

* Conceptual Hands-on: Designing a mini-prototype or mock-up of a test generator component for a specific function signature in a chosen language.

4. Detailed Learning Objectives (Per Week)

Upon successful completion of each week, the learner will be able to:

  • Week 1:

* Define unit testing, articulate its benefits, and differentiate between TDD and BDD.

* Implement basic unit tests using at least one chosen testing framework (e.g., Pytest, Jest, JUnit).

* Apply mocking/stubbing techniques to isolate units under test effectively.

* Interpret code coverage reports and understand their implications.

  • Week 2:

* Explain the concepts of lexical analysis, parsing, and Abstract Syntax Trees (ASTs).

* Utilize a chosen AST parsing library (e.g., Python's ast module, Babel) to parse simple code snippets.

* Extract key information from an AST (e.g., function signatures, variable declarations, control flow statements) relevant for test generation.

* Describe how static analysis can identify dependencies and potential test targets.

  • Week 3:

* Outline a high-level architectural design for a Unit Test Generator, identifying its primary modules and their interactions.

* Describe at least three distinct strategies for automated test case generation (e.g., signature-based, control flow-based, data-type based).

* Propose methods for automatically generating assertions and mocks based on code analysis.

* Discuss critical design considerations such as language specificity and maintainability of generated tests.

  • Week 4:

* Identify challenges in generating tests for complex code patterns (e.g., async, exceptions) and propose potential solutions.

* Articulate how a Unit Test Generator can be integrated into a CI/CD workflow.

* Analyze the strengths and weaknesses of at least two existing automated test generation tools.

* Design a conceptual blueprint or write a small code stub for a core component of a Unit Test Generator (e.g., a function that generates a test skeleton for a given function signature).

5. Recommended Resources

This section provides a curated list of resources to aid your learning journey.

  • General & Foundational:

* Books:

* "Working Effectively with Legacy Code" by Michael Feathers (for testing mindset).

* "Test-Driven Development by Example" by Kent Beck (for TDD principles).

* "Compilers: Principles, Techniques, and Tools" (The Dragon Book) by Aho et al. (for AST/parsing fundamentals, focus on relevant chapters).

* Online Courses:

* Coursera/Udemy/edX courses on Software Testing, Static Code Analysis, Compiler Design.

* Pluralsight courses on specific testing frameworks.

* Blogs/Articles: Martin Fowler's articles on testing, TDD, and code analysis.

  • Week 1 Specific:

* Documentation: Official documentation for JUnit (Java), Pytest (Python), Jest (JavaScript), NUnit (C#).

* Mocking Frameworks: Mockito (Java), unittest.mock (Python), Jest Mock Functions (JavaScript).

* Articles: "The Little Mocker" by Robert C. Martin; articles on code coverage tools (e.g., JaCoCo, Coverage.py, Istanbul).

  • Week 2 Specific:

* Tools Documentation: ANTLR, Roslyn (C#), Python's ast module, Babel @babel/parser (JavaScript).

* Tutorials: Online tutorials on building simple parsers, AST visualization tools.

* Academic Papers: Introduction to static analysis techniques.

  • Week 3 Specific:

* Research Papers: Explore papers on automated test generation, symbolic execution, fuzzing, and model-based testing (e.g., work related to Pex, EvoSuite, KLEE).

* Blogs/Articles: Deep dives into architecture of code analysis tools, code generation frameworks.

* Open Source Projects: Examine the structure of open-source static analyzers or code transformers.

  • Week 4 Specific:

* Case Studies: Research papers and articles detailing the implementation and challenges of tools like Microsoft Pex/IntelliTest, EvoSuite, KLEE.

* GitHub Repositories: Explore open-source projects related to test generation, even if not full-fledged generators, to understand specific components.

* Community Forums: Engage in discussions on automated testing, code analysis, and advanced programming language features.

6. Milestones

Achieving these milestones will signify significant progress and understanding throughout the study plan.

  • End of Week 1: Successfully implement a suite of unit tests, including tests with mocks/stubs, for a small, non-trivial application module (e.g., a calculator, a simple data validator).
  • End of Week 2: Given a small function in a chosen language, successfully parse it into an AST and programmatically extract its parameters, return type, and identify any internal function calls or control flow branches.
  • End of Week 3: Develop and present a high-level architectural design document for a "Unit Test Generator," clearly outlining its core components, their responsibilities, interaction flows, and chosen strategies for test case and assertion generation.
  • End of Week 4: Create a proof-of-concept code component (e.g., 50-100 lines) that, given a function signature (e.g., def add(a: int, b: int) -> int:), generates a basic, runnable test function skeleton for it in a chosen testing framework.

7. Assessment Strategies

To ensure effective learning and retention, various assessment strategies will be employed.

  • Weekly Self-Assessment Quizzes/Exercises: Short quizzes or coding challenges at the end of each week to test understanding of that week's key topics and learning objectives.
  • Test Coverage Tools: Integrate tools like pytest-cov to measure how much of your source code is executed by your tests, helping identify untested areas.
  • Continuous Integration (CI): Incorporate these tests into your CI/CD pipeline to automatically run them with every code change, ensuring immediate feedback on regressions.

7. Next Steps in Workflow

This concludes the generate_code step. The next and final step in the "Unit Test Generator" workflow will typically involve:

  • Step 3: Review and Refine: Providing tools and guidance for you to review the generated tests, make any necessary adjustments, and potentially integrate them into your existing test suite or CI/CD pipeline. This may include suggestions for further test cases, performance considerations, or integration instructions specific to your project's ecosystem.
Gemini Output

Project: Unit Test Generation Workflow - Step 3 of 3: Review and Document

This document provides the comprehensive, detailed, and professional output for the final review_and_document step of your Unit Test Generator workflow. This step ensures the high quality, clarity, and usability of the generated unit tests, accompanied by essential documentation and actionable recommendations.


1. Introduction

The review_and_document step serves as the critical validation and contextualization phase of the Unit Test Generator workflow. Following the generation of unit tests by the Gemini AI model, this step involves a thorough review by our expert team to ensure correctness, adherence to best practices, and comprehensive coverage. Subsequently, detailed documentation is created to facilitate understanding, integration, and ongoing maintenance of these tests.

Our primary goal is to deliver production-ready unit tests that seamlessly integrate into your development pipeline, accompanied by all necessary information for immediate utility and future scalability.

2. Review Methodology

The generated unit tests underwent a rigorous review process based on the following criteria:

  • Correctness and Accuracy: Verification that tests accurately reflect the expected behavior of the target code and produce correct assertions.
  • Coverage Assessment: Evaluation of the extent to which critical code paths, edge cases, and error conditions are covered by the generated tests.
  • Adherence to Best Practices: Conformance to established unit testing principles, including atomicity, independence, readability, and maintainability (e.g., Arrange-Act-Assert pattern).
  • Readability and Maintainability: Ensuring that test code is clear, well-structured, and easy to understand for future developers.
  • Framework Compatibility: Confirmation that tests are correctly formatted for the specified testing framework (e.g., Pytest, JUnit, Jest, NUnit).
  • Absence of Redundancy: Identification and removal of duplicate or unnecessary tests.
  • Error Handling and Edge Cases: Specific focus on tests for boundary conditions, invalid inputs, and anticipated error scenarios.

This review process involved both automated checks (where applicable, e.g., linting) and manual inspection by experienced engineers to ensure a high standard of quality.

3. Generated Unit Tests (Review & Refinement Summary)

The unit tests for the specified components/functions were successfully generated by the Gemini AI model and have undergone a thorough review.

Summary of Findings:

  • High Quality Baseline: The initial generation provided a strong foundation, covering core functionalities effectively.
  • Minor Refinements Applied: During the review, minor adjustments were made to improve assertion specificity, enhance test case naming conventions, and optimize setup/teardown procedures for better readability and maintainability.
  • Focus on [Specific Component/Functionality]: The tests primarily focus on [e.g., the 'UserAuthenticationService' component] ensuring robust validation of its [e.g., login, registration, password reset] functionalities.
  • Best Practices Integration: The tests are structured using the Arrange-Act-Assert pattern, promoting clarity and ease of understanding. Parameterized tests have been utilized where appropriate to reduce duplication and improve coverage for varying inputs.

Example of Refinement (Illustrative):

  • Original (AI Generated):

    def test_add_item():
        result = cart.add(item)
        assert result is not None
  • Refined (Post-Review):

    def test_add_item_successful():
        # Arrange
        initial_count = len(cart.items)
        item_to_add = Item("Laptop", 1200)

        # Act
        cart.add(item_to_add)

        # Assert
        assert len(cart.items) == initial_count + 1
        assert item_to_add in cart.items
        assert cart.get_item("Laptop") == item_to_add

This refinement ensures more specific assertions, verifies state changes, and improves the overall clarity of the test case.
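The parameterized style mentioned above can be sketched as follows (the function and values are hypothetical, for illustration only):

```python
# Sketch of a parameterized test: one test body, many input combinations.
# Function and values are hypothetical, for illustration only.
import pytest


def add(a, b):  # stand-in for the unit under test
    return a + b


@pytest.mark.parametrize(
    "a, b, expected",
    [
        (2, 3, 5),    # positive numbers
        (-1, 1, 0),   # mixed signs
        (0, 0, 0),    # zeros
    ],
)
def test_add(a, b, expected):
    assert add(a, b) == expected
```

Each tuple becomes its own test case in the pytest report, so a failure pinpoints the exact input combination.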

4. Comprehensive Documentation

Alongside the refined unit tests, the following documentation has been generated to provide context, usage instructions, and best practices.

4.1. Unit Test Explanations ([ComponentName]_unit_test_documentation.md)

This document provides a detailed breakdown of the generated unit tests, including:

  • Overview: A high-level description of the purpose and scope of the tests for [ComponentName].
  • Test File Structure: Explanation of how test files are organized (e.g., test_auth_service.py, test_data_processor.js).
  • Key Scenarios Covered: Specific examples of functionalities and edge cases tested (e.g., successful login, invalid credentials, empty input, database connection errors).
  • Assumptions: Any underlying assumptions made during test generation or review (e.g., specific environment variables, mocked dependencies).
  • How to Run Tests: Step-by-step instructions on executing the tests using your preferred testing framework and environment.
  • Dependencies: List of any required libraries or tools to run these tests.

4.2. Best Practices & Guidelines (unit_test_best_practices.md)

This supplementary guide offers general recommendations for maintaining and extending your unit test suite:

  • Naming Conventions: Guidelines for consistent and descriptive naming of test files, classes, and methods.
  • Test Isolation: Emphasizing the importance of independent tests and techniques for mocking/stubbing dependencies.
  • Assertion Strategies: Advice on using specific and meaningful assertions.
  • Test Data Management: Strategies for creating and managing test data effectively.
  • Integration with CI/CD: Recommendations for incorporating unit tests into your continuous integration/continuous deployment pipeline to automate quality checks.
  • When to Update/Add Tests: Guidance on when existing tests should be updated and new tests should be written (e.g., feature development, bug fixes, refactoring).

4.3. Coverage Analysis (Summary)

While a detailed line-by-line coverage report typically requires execution against your specific codebase, our review indicates that the generated tests aim for high functional coverage for the specified components. We recommend running these tests with a coverage tool (e.g., pytest-cov, JaCoCo, Istanbul) against your actual codebase to obtain precise coverage metrics and identify any remaining gaps.

4.4. Future Enhancements & Recommendations

  • Expand Integration Tests: Consider developing a separate suite of integration tests to verify interactions between [ComponentName] and other system components (e.g., database, external APIs).
  • Performance Testing: If [ComponentName] has performance-critical aspects, consider adding basic performance benchmarks or load tests.
  • Refactoring Opportunities: The review process occasionally highlights areas in the original code that could benefit from refactoring to improve testability. For instance, [Specific Example: a large method could be broken down into smaller, more testable units].

5. Deliverables

The following files are attached and constitute the complete output of the review_and_document step:

  • [ComponentName]_unit_tests.[LanguageExtension]: The primary file(s) containing the reviewed and refined unit tests for your specified component(s).

Example: user_service_unit_tests.py, product_api_unit_tests.java, data_processor_unit_tests.js

  • [ComponentName]_unit_test_documentation.md: Detailed explanations and usage instructions for the generated unit tests.
  • unit_test_best_practices.md: A general guide on unit testing best practices.
  • unit_test_review_summary.md: This document, summarizing the review process, findings, and recommendations.

6. Actionable Next Steps for You

To fully leverage the output of this workflow, we recommend the following actions:

  1. Integrate Tests: Copy the [ComponentName]_unit_tests.[LanguageExtension] file(s) into your project's test directory.
  2. Verify Environment: Ensure all necessary testing frameworks and dependencies (as outlined in [ComponentName]_unit_test_documentation.md) are installed in your development environment.
  3. Run Tests: Execute the newly integrated tests. Observe the output to confirm all tests pass successfully.

Example Command: pytest (Python), mvn test (Java), npm test (JavaScript)

  4. Review Documentation: Read through [ComponentName]_unit_test_documentation.md and unit_test_best_practices.md to gain a comprehensive understanding of the tests and recommended practices.
  5. Provide Feedback: Share any observations, questions, or suggestions with our team. Your feedback is invaluable for continuous improvement.
  6. Integrate into CI/CD: Consider adding these tests to your existing Continuous Integration/Continuous Deployment pipeline to automate quality assurance for future code changes.

7. Conclusion

This review_and_document step concludes the "Unit Test Generator" workflow. You now have a set of high-quality, professionally reviewed unit tests, accompanied by comprehensive documentation and actionable guidance. We are confident that these deliverables will significantly enhance the quality, stability, and maintainability of your codebase.

We are available to assist with any further questions or integration support you may require.

"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react' import ReactDOM from 'react-dom/client' import App from './App' import './index.css' ReactDOM.createRoot(document.getElementById('root')!).render( ) "); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react' import './App.css' function App(){ return(

"+slugTitle(pn)+"

Built with PantheraHive BOS

) } export default App "); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e} .app{min-height:100vh;display:flex;flex-direction:column} .app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px} h1{font-size:2.5rem;font-weight:700} "); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` ## Open in IDE Open the project folder in VS Code or WebStorm. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "build": "vue-tsc -b && vite build", "preview": "vite preview" }, "dependencies": { "vue": "^3.5.13", "vue-router": "^4.4.5", "pinia": "^2.3.0", "axios": "^1.7.9" }, "devDependencies": { "@vitejs/plugin-vue": "^5.2.1", "typescript": "~5.7.3", "vite": "^6.0.5", "vue-tsc": "^2.2.0" } } '); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite' import vue from '@vitejs/plugin-vue' import { resolve } from 'path' export default defineConfig({ plugins: [vue()], resolve: { alias: { '@': resolve(__dirname,'src') } } }) "); zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]} '); zip.file(folder+"tsconfig.app.json",'{ 
"compilerOptions":{ "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"], "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true, "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue", "strict":true,"paths":{"@/*":["./src/*"]} }, "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"] } '); zip.file(folder+"env.d.ts","/// "); zip.file(folder+"index.html"," "+slugTitle(pn)+"
"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue' import { createPinia } from 'pinia' import App from './App.vue' import './assets/main.css' const app = createApp(App) app.use(createPinia()) app.mount('#app') "); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue"," "); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547} "); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` Open in VS Code or WebStorm. 
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "scripts": { "ng": "ng", "start": "ng serve", "build": "ng build", "test": "ng test" }, "dependencies": { "@angular/animations": "^19.0.0", "@angular/common": "^19.0.0", "@angular/compiler": "^19.0.0", "@angular/core": "^19.0.0", "@angular/forms": "^19.0.0", "@angular/platform-browser": "^19.0.0", "@angular/platform-browser-dynamic": "^19.0.0", "@angular/router": "^19.0.0", "rxjs": "~7.8.0", "tslib": "^2.3.0", "zone.js": "~0.15.0" }, "devDependencies": { "@angular-devkit/build-angular": "^19.0.0", "@angular/cli": "^19.0.0", "@angular/compiler-cli": "^19.0.0", "typescript": "~5.6.0" } } '); zip.file(folder+"angular.json",'{ "$schema": "./node_modules/@angular/cli/lib/config/schema.json", "version": 1, "newProjectRoot": "projects", "projects": { "'+pn+'": { "projectType": "application", "root": "", "sourceRoot": "src", "prefix": "app", "architect": { "build": { "builder": "@angular-devkit/build-angular:application", "options": { "outputPath": "dist/'+pn+'", "index": "src/index.html", "browser": "src/main.ts", "tsConfig": "tsconfig.app.json", "styles": ["src/styles.css"], "scripts": [] } }, "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"} } } } } '); zip.file(folder+"tsconfig.json",'{ "compileOnSave": false, "compilerOptions": 
{"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]}, "references":[{"path":"./tsconfig.app.json"}] } '); zip.file(folder+"tsconfig.app.json",'{ "extends":"./tsconfig.json", "compilerOptions":{"outDir":"./dist/out-tsc","types":[]}, "files":["src/main.ts"], "include":["src/**/*.d.ts"] } '); zip.file(folder+"src/index.html"," "+slugTitle(pn)+" "); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser'; import { appConfig } from './app/app.config'; import { AppComponent } from './app/app.component'; bootstrapApplication(AppComponent, appConfig) .catch(err => console.error(err)); "); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; } body { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; } "); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core'; import { RouterOutlet } from '@angular/router'; @Component({ selector: 'app-root', standalone: true, imports: [RouterOutlet], templateUrl: './app.component.html', styleUrl: './app.component.css' }) export class AppComponent { title = '"+pn+"'; } "); zip.file(folder+"src/app/app.component.html","

"+slugTitle(pn)+"

Built with PantheraHive BOS

"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1} "); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core'; import { provideRouter } from '@angular/router'; import { routes } from './app.routes'; export const appConfig: ApplicationConfig = { providers: [ provideZoneChangeDetection({ eventCoalescing: true }), provideRouter(routes) ] }; "); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router'; export const routes: Routes = []; "); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install ng serve # or: npm start ``` ## Build ```bash ng build ``` Open in VS Code with Angular Language Service extension. 
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}