Unit Test Generator
Run ID: 69cb9cf961b1021a29a8aa812026-03-31Development
PantheraHive BOS

This document details the execution of Step 2 of 3 in the "Unit Test Generator" workflow. In this step, the Gemini model generates comprehensive, production-ready unit test code based on the provided (or implicitly understood) source code.


Step 2 of 3: Gemini → Generate Code

Workflow Description: "Unit Test Generator"

Current Step: gemini → generate_code

Objective: To generate clean, well-commented, and production-ready unit test code for a given source code module or component. This output is designed to be directly usable by developers to validate their software.


1. Introduction & Purpose

This phase leverages the advanced code generation capabilities of the Gemini model to produce high-quality unit tests. The primary goal is to provide a robust set of tests that verify the functionality, edge cases, and error handling of the target code. The generated output focuses on best practices for unit testing, ensuring readability, maintainability, and effective coverage.

For this demonstration, we will assume the input to this step was a Python class Calculator. The Gemini model has analyzed this class and will now generate unittest framework-based tests for it.


2. Assumed Input Code (for Context)

To provide clear context for the generated unit tests, here is the hypothetical Python code for a Calculator class that Gemini would have analyzed in a prior step (or was provided as direct input):

File: calculator.py

---
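The snippet itself was not preserved in this export (only its character count survived). Below is a plausible reconstruction consistent with the tests discussed in Sections 3 and 4, which reference an `add` method and a `divide` method that rejects division by zero. The exact method set, signatures, and exception type are assumptions, not the original input:

```python
# Hypothetical reconstruction of calculator.py -- the actual input was
# not captured. subtract/multiply and the ValueError are assumptions.

class Calculator:
    """A simple arithmetic calculator."""

    def add(self, a, b):
        """Return the sum of a and b."""
        return a + b

    def subtract(self, a, b):
        """Return a minus b."""
        return a - b

    def multiply(self, a, b):
        """Return the product of a and b."""
        return a * b

    def divide(self, a, b):
        """Return a divided by b; raise ValueError on division by zero."""
        if b == 0:
            raise ValueError("Cannot divide by zero.")
        return a / b
```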

## 3. Generated Unit Tests

Below is the production-ready Python unit test code generated by Gemini for the `Calculator` class, using the standard `unittest` framework.

**File: `test_calculator.py`**

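The generated test file did not survive in this export. The sketch below reconstructs it from the structure described in Section 4 (`setUp`/`tearDown`, `test_add_positive_numbers`, `test_divide_by_zero_raises_error`); any test names beyond those mentioned there, and the choice of `ValueError`, are assumptions. An inline `Calculator` fallback is included only so the sketch runs standalone:

```python
# Sketch of the generated tests, reconstructed from the description in
# Section 4 -- not the original model output.
import unittest

try:
    from calculator import Calculator
except ImportError:
    # Inline fallback so this sketch is self-contained.
    class Calculator:
        def add(self, a, b):
            return a + b

        def divide(self, a, b):
            if b == 0:
                raise ValueError("Cannot divide by zero.")
            return a / b


class TestCalculator(unittest.TestCase):
    def setUp(self):
        # A fresh instance before every test keeps tests isolated.
        self.calc = Calculator()

    def tearDown(self):
        # Nothing to clean up for a simple Calculator; shown as good practice.
        self.calc = None

    def test_add_positive_numbers(self):
        self.assertEqual(self.calc.add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(self.calc.add(-2, -3), -5)

    def test_divide_by_zero_raises_error(self):
        with self.assertRaises(ValueError):
            self.calc.divide(1, 0)


# Run with: python -m unittest test_calculator
```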

This document outlines a structured study plan for understanding, and potentially developing, a "Unit Test Generator." It covers foundational concepts, advanced techniques, and practical application.


Comprehensive Study Plan: Unit Test Generator

1. Introduction and Overall Goal

This study plan guides you from the fundamental principles of unit testing to advanced techniques that leverage artificial intelligence and static analysis.

The overall goal is to equip you with the knowledge and skills to:

  • Understand the principles, benefits, and challenges of unit testing.
  • Explore various methodologies for automated unit test generation, including property-based testing, fuzzing, symbolic execution, and AI-driven approaches.
  • Design a high-level architecture for an intelligent unit test generator system.
  • Evaluate existing tools and research in the field.
  • Formulate strategies for integrating test generation into a CI/CD pipeline.

2. Weekly Schedule

This 4-week schedule provides a structured progression, building knowledge incrementally.


Week 1: Foundations of Unit Testing & Code Analysis

Learning Objectives:

  • Deep Dive into Unit Testing: Understand the core principles of unit testing, TDD (Test-Driven Development), BDD (Behavior-Driven Development), and their importance in software quality.
  • Test Anatomy: Learn about test doubles (mocks, stubs, spies, fakes), assertions, and test setup/teardown.
  • Code Coverage Metrics: Understand different types of code coverage (line, branch, path, mutation) and their significance.
  • Introduction to Static Analysis: Grasp the basics of parsing code, Abstract Syntax Trees (ASTs), Control Flow Graphs (CFGs), and Data Flow Analysis.
  • Programming Language Specifics: Review unit testing frameworks and practices in a chosen primary language (e.g., Python unittest/pytest, Java JUnit/Mockito, JavaScript Jest/Mocha).

Recommended Resources:

  • Books:
    * "Working Effectively with Legacy Code" by Michael C. Feathers (focus on testing principles and refactoring for testability).
    * "Unit Testing Principles, Practices, and Patterns" by Vladimir Khorikov.
    * "Test Driven Development: By Example" by Kent Beck.
  • Online Courses/Documentation:
    * Official documentation for pytest (Python), JUnit (Java), or Jest (JavaScript).
    * Tutorials on AST parsing for your chosen language (e.g., the ast module in Python, JDT in Java, Babel in JavaScript).
    * Articles on static code analysis principles (e.g., from OWASP, various academic papers).
  • Tools:
    * Your chosen language's unit testing framework.
    * Code coverage tools (e.g., coverage.py, JaCoCo, Istanbul).
    * An IDE with AST visualization plugins, if available.

Practical Exercises/Activities:

  • Write comprehensive unit tests for a small, existing codebase without tests.
  • Refactor a module to improve its testability.
  • Experiment with different test doubles to understand their use cases.
  • Analyze the AST of a simple function in your chosen language.
  • Achieve 100% line and branch coverage for a given function.
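The AST exercise above can be started with nothing but Python's built-in ast module. A minimal sketch (the clamp function is illustrative):

```python
# Minimal AST exploration with Python's stdlib ast module.
import ast

source = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""

tree = ast.parse(source)

# Count branch points: each `if` adds a branch that tests must cover.
branches = sum(isinstance(node, ast.If) for node in ast.walk(tree))
print(f"branch points: {branches}")   # 2

# List every function defined in the module.
funcs = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
print(f"functions: {funcs}")          # ['clamp']
```

Counting `ast.If` nodes like this is exactly the information a branch-coverage tool needs to know how many paths exist through `clamp`.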

Week 2: Traditional Test Generation Strategies

Learning Objectives:

  • Property-Based Testing (PBT): Understand the concept of properties, generators, and shrinking. Learn how PBT can find edge cases traditional unit tests miss.
  • Fuzzing/Fuzz Testing: Explore the principles of input mutation, stateful fuzzing, and how fuzzers identify crashes or vulnerabilities.
  • Symbolic Execution & Concolic Testing: Grasp the theoretical foundations of symbolic execution, path exploration, constraint solving, and its application in test case generation.
  • Test Data Generation: Learn techniques for generating realistic and diverse test data programmatically.

Recommended Resources:

  • Books/Papers:
    * "The Fuzzing Book" by Andreas Zeller et al. (online resource).
    * Academic papers on symbolic execution (e.g., KLEE, S2E).
    * Documentation for PBT libraries.
  • Online Courses/Documentation:
    * Documentation for Hypothesis (Python), jqwik (Java), or fast-check (JavaScript).
    * Tutorials on using fuzzing tools (e.g., AFL++, libFuzzer).
    * Introductory articles/videos on symbolic execution.
  • Tools:
    * A property-based testing library for your chosen language.
    * A basic fuzzer (e.g., simple input mutators you can build, or AFL++ for advanced exploration).
    * (Optional) Explore tools like KLEE or angr if interested in deep symbolic execution.

Practical Exercises/Activities:

  • Implement property-based tests for a data structure (e.g., a list, a map) or an algorithm.
  • Build a simple fuzzer to test a small command-line utility.
  • Manually trace the symbolic execution of a small function with a few branches.
  • Generate a diverse set of test inputs for a function using programmatic techniques.
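The "build a simple fuzzer" exercise can be sketched with the standard library alone: stack random mutations on a seed input and record any unexpected exception as a crash. The parse_int target and its deliberate bug are illustrative inventions for this sketch:

```python
# A toy mutation-based fuzzer. The target deliberately contains a bug:
# it assumes its input is non-empty.
import random

def parse_int(text: str) -> int:
    """Illustrative buggy target: an empty string raises IndexError
    instead of the documented ValueError."""
    sign = -1 if text[0] == "-" else 1      # crashes on "" (IndexError)
    digits = text.lstrip("-")
    if not digits.isdigit():
        raise ValueError(f"not an integer: {text!r}")
    return sign * int(digits)

def mutate(data: str, rng: random.Random) -> str:
    """Apply one random mutation: delete, duplicate, or replace a char."""
    if not data:
        return data
    i = rng.randrange(len(data))
    op = rng.choice(["delete", "duplicate", "replace"])
    if op == "delete":
        return data[:i] + data[i + 1:]
    if op == "duplicate":
        return data[:i] + data[i] + data[i:]
    return data[:i] + chr(rng.randrange(32, 127)) + data[i + 1:]

def fuzz(target, seed: str, trials: int = 1000, rng_seed: int = 42):
    """Stack 1-3 mutations per trial; ValueError is the documented
    failure mode, so anything else counts as a crash."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(trials):
        data = seed
        for _ in range(rng.randrange(1, 4)):
            data = mutate(data, rng)
        try:
            target(data)
        except ValueError:
            pass
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_int, "-1")
print("crashing inputs found:", len(crashes))
```

Even this toy version shows the core fuzzing loop (mutate, execute, triage by exception type) that tools like AFL++ implement at far greater sophistication.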

Week 3: AI/ML for Test Generation (LLMs & Beyond)

Learning Objectives:

  • AI in Testing Overview: Understand the different ways AI/ML can augment or automate testing processes.
  • LLMs for Test Generation: Explore how Large Language Models (LLMs) can be prompted to generate unit tests, test data, and even test cases from requirements.
  • Prompt Engineering for Testing: Learn effective strategies for crafting prompts to elicit high-quality, relevant tests from LLMs.
  • Challenges and Limitations: Identify the inherent challenges in using AI for testing, such as hallucination, context window limitations, and ensuring correctness.
  • Other ML Approaches: Briefly touch upon other ML techniques (e.g., evolutionary algorithms, reinforcement learning) for optimizing test suites or finding bugs.

Recommended Resources:

  • Books/Papers:
    * Research papers on AI for software testing, LLMs for code generation, and automated test case generation using machine learning.
    * Relevant chapters from books on MLOps or AI engineering.
  • Online Courses/Documentation:
    * API documentation for LLMs (e.g., OpenAI, Google Gemini, Anthropic Claude).
    * Courses on prompt engineering.
    * Articles and blogs discussing AI-driven code generation.
  • Tools:
    * Access to an LLM API (e.g., via Python client libraries).
    * An IDE with AI code generation features (e.g., GitHub Copilot, Cursor).

Practical Exercises/Activities:

  • Use an LLM to generate unit tests for a given function or class description. Evaluate the quality and correctness of the generated tests.
  • Experiment with different prompt engineering techniques (e.g., few-shot learning, chain-of-thought) to improve test generation quality.
  • Manually correct and refine tests generated by an LLM.
  • Research and summarize a recent academic paper on AI-driven test generation.
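The prompt-engineering exercises above can start from a plain template builder. The wording, the few-shot example, and the function names below are illustrative assumptions; the actual call to an LLM client is deliberately left out:

```python
# Build a few-shot prompt asking an LLM to generate unit tests.
# Template wording is illustrative -- adapt to your model and framework.

FEW_SHOT_EXAMPLE = '''\
Source:
    def double(x):
        return 2 * x
Tests:
    def test_double_positive():
        assert double(3) == 6
    def test_double_zero():
        assert double(0) == 0
'''

def build_test_prompt(source_code: str, framework: str = "pytest") -> str:
    return (
        "You are an expert Python test engineer.\n"
        f"Write {framework} unit tests for the code below.\n"
        "Cover the happy path, edge cases, and error handling.\n"
        "Reply with test code only.\n\n"
        f"Example:\n{FEW_SHOT_EXAMPLE}\n"
        f"Source:\n{source_code}\nTests:\n"
    )

prompt = build_test_prompt("def is_even(n):\n    return n % 2 == 0")
print(prompt)
```

The few-shot example anchors the output format, and ending the prompt with "Tests:" invites the model to complete the pattern rather than chat about it.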

Week 4: Architectural Design & Integration

Learning Objectives:

  • High-Level System Architecture: Design the conceptual architecture for a "Unit Test Generator" system, considering input (source code), processing (analysis, generation strategies), and output (test code).
  • Component Design: Identify key components such as Code Parser, Test Strategy Selector, Test Code Generator, Feedback Loop Mechanism, and Reporting Module.
  • Integration Points: Understand how a test generator would integrate into a Continuous Integration/Continuous Deployment (CI/CD) pipeline, IDEs, and version control systems.
  • Evaluation & Refinement: Learn how to evaluate the effectiveness of generated tests (e.g., mutation testing, code coverage) and strategies for continuous improvement.
  • Ethical Considerations: Discuss the ethical implications and responsibilities when using AI for code generation.

Recommended Resources:

  • Books/Papers:
    * "Designing Data-Intensive Applications" by Martin Kleppmann (for system design principles).
    * Articles on software architecture patterns (e.g., microservices, plugin architectures).
    * Case studies of existing test generation tools (e.g., Jazzer, Microsoft's Pex/IntelliTest).
  • Online Courses/Documentation:
    * Courses on system design.
    * Documentation for CI/CD platforms (e.g., Jenkins, GitLab CI, GitHub Actions).
    * Articles on mutation testing frameworks (e.g., PIT, MutPy).
  • Tools:
    * Diagramming tools (e.g., draw.io, Lucidchart) for architectural design.
    * A mutation testing framework for your chosen language.

Practical Exercises/Activities:

  • Draw a high-level architectural diagram for a Unit Test Generator, including data flow and key components.
  • Propose different test generation strategies and how the system would select among them.
  • Outline a CI/CD pipeline step that integrates a hypothetical unit test generator.
  • Perform mutation testing on a set of manually written tests and analyze the results.
  • Write a short design document outlining the core features, challenges, and potential solutions for your proposed generator.
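The mutation-testing exercise above can be demonstrated in miniature with Python's stdlib ast module: mutate one operator in the code under test, re-run the tests, and check whether they "kill" the mutant. The add function and the single +→- mutation operator are illustrative:

```python
# Miniature mutation testing: swap + for - in the function under test
# and check whether the test suite detects (kills) the mutant.
import ast

SOURCE = """
def add(a, b):
    return a + b
"""

class AddToSub(ast.NodeTransformer):
    """Mutation operator: replace every + with -."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

def run_tests(add):
    """A tiny 'suite': True if all assertions pass."""
    try:
        assert add(2, 3) == 5
        assert add(-1, 1) == 0
        return True
    except AssertionError:
        return False

# The original code passes the suite.
original_ns = {}
exec(compile(ast.parse(SOURCE), "<orig>", "exec"), original_ns)
assert run_tests(original_ns["add"])

# The mutant should make the suite fail, i.e. the mutant is killed.
mutant_tree = ast.fix_missing_locations(AddToSub().visit(ast.parse(SOURCE)))
mutant_ns = {}
exec(compile(mutant_tree, "<mutant>", "exec"), mutant_ns)
killed = not run_tests(mutant_ns["add"])
print("mutant killed:", killed)   # True -> the tests catch this mutation
```

A mutant that survives would indicate a gap in the suite; full frameworks such as MutPy automate exactly this loop across many mutation operators.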

3. Milestones

  • End of Week 1: Proficient in unit testing principles, able to write effective tests, and understand basic AST structures.
  • End of Week 2: Able to apply property-based testing and understand the concepts of fuzzing and symbolic execution for test generation.
  • End of Week 3: Capable of leveraging LLMs for basic test generation, understanding prompt engineering, and aware of AI's limitations in testing.
  • End of Week 4: Developed a high-level architectural design for a Unit Test Generator, identified key components, and understood integration strategies.

4. Assessment Strategies

  • Weekly Self-Assessments: Review learning objectives at the end of each week and honestly gauge your understanding.
  • Practical Project Submissions:
    * Week 1: Submit a set of unit tests for a given legacy code module, demonstrating high coverage and proper use of test doubles.
    * Week 2: Submit code demonstrating the use of a property-based testing library for a non-trivial function.
    * Week 3: Submit a report detailing experiments with an LLM for test generation, including prompts used, generated tests, and analysis of quality.
    * Week 4: Submit a detailed architectural diagram and a brief design document for a Unit Test Generator.

  • Peer Review (Optional): Share your weekly practical exercises with a peer for constructive feedback.
  • Knowledge Quizzes: Short quizzes at the end of each week to test theoretical understanding.
  • Final Presentation/Report: A comprehensive report or presentation summarizing the insights gained, the architectural design, and future considerations for building or implementing a Unit Test Generator.

5. General Tips for Success

  • Active Learning: Don't just read; actively try out the concepts and tools.
  • Consistent Practice: Regular coding and experimentation are crucial.
  • Document Your Learning: Keep a log of your findings, challenges, and solutions.
  • Engage with the Community: Participate in forums, read blogs, and follow experts in software testing and AI.
  • Stay Curious: The field of automated test generation is rapidly evolving, especially with AI. Stay updated with new research and tools.
  • Focus on a Primary Language: While concepts are universal, practicing in one language initially will solidify understanding.

This detailed study plan provides a robust framework for mastering the "Unit Test Generator" domain. By diligently following this schedule and actively engaging with the resources and exercises, you will gain a profound and practical understanding of automated test generation.


4. Explanation of Generated Code & Methodology

The generated unit test code adheres to standard Python unittest best practices and covers a wide range of scenarios for the Calculator class.

4.1. Key Components and Structure

  • import unittest: Imports the necessary testing framework.
  • from calculator import Calculator: Imports the class under test. This assumes calculator.py is accessible in the Python path, typically in the same directory.
  • class TestCalculator(unittest.TestCase):: Defines a test suite class that inherits from unittest.TestCase. This provides access to various assertion methods (assertEqual, assertRaises, etc.).
  • setUp(self):
    * Automatically called before each individual test method runs.
    * Sets up the test environment, in this case by creating a fresh instance of Calculator. This ensures each test runs with a clean slate, preventing test interactions and maintaining isolation.
  • tearDown(self):
    * Automatically called after each individual test method runs.
    * Used for cleanup operations; for a simple Calculator class this is less critical, but it demonstrates good practice for more complex scenarios (e.g., closing database connections, deleting temporary files).
  • Test Methods (test_add_positive_numbers, test_divide_by_zero_raises_error, etc.):
    * Each method whose name starts with test_ is recognized and executed by the unittest test runner.
    * Each method focuses on testing a specific aspect or scenario of a function within the Calculator class.
    * Descriptive names clearly indicate what each test verifies, making failures easy to diagnose.

Gemini Output

Unit Test Generator Workflow: Step 3 - Review and Document

This document outlines the final deliverable for the "Unit Test Generator" workflow, focusing on the review, documentation, and actionable next steps for the unit tests generated in the previous step. Our aim is to provide you with a comprehensive package that enables seamless integration, effective validation, and long-term maintainability of your generated tests.


1. Workflow Summary

The "Unit Test Generator" workflow is designed to automate the creation of robust unit tests for your specified code. This process leverages advanced AI capabilities (such as Gemini) to analyze your source code, identify testable components, and generate relevant test cases, including positive, negative, and edge-case scenarios.

Workflow Steps:

  1. Input Code Analysis: Your provided source code is analyzed to understand its structure, functions, classes, and dependencies.
  2. Test Generation (gemini): AI models generate unit tests based on the analysis, targeting high code coverage and scenario diversity.
  3. Review and Document (Current Step): The generated tests are packaged, documented, and presented with clear guidelines for review, integration, and ongoing maintenance.

2. Generated Unit Tests Overview

In the preceding step, our system successfully generated a suite of unit tests tailored to your codebase. While the specific tests are delivered separately (e.g., as a downloadable archive, code snippet, or integrated pull request), this section describes their general characteristics and structure.

Expected Output Characteristics:

  • Language & Framework Specific: Tests are generated in the same programming language as your source code (e.g., Python, Java, JavaScript, C#) and utilize common testing frameworks (e.g., pytest, JUnit, Jest, NUnit).
  • Modular & Organized: Tests are typically organized into files or classes that mirror your source code structure, making them easy to locate and manage.
  • Comprehensive Coverage: Attempts to cover various aspects, including:
    * Happy Path: Valid inputs and expected successful outcomes.
    * Edge Cases: Boundary conditions, empty inputs, maximum/minimum values.
    * Error Handling: Invalid inputs, exceptions, and error propagation.
    * Dependency Mocking: Where applicable, mocks or stubs are suggested or implemented for external dependencies to isolate unit logic.
  • Readability: Tests are designed to be human-readable, with clear test names and assertions.

Example Structure (Conceptual):


/tests
    /my_module_name
        test_function_a.py  # Tests for functions/methods in function_a.py
        test_function_b.py  # Tests for functions/methods in function_b.py
        ...
    /another_module
        test_class_x.java   # Tests for ClassX.java
        ...

3. Detailed Review Guidelines

It is critical to thoroughly review the generated unit tests before integration. This step ensures accuracy, completeness, and adherence to your project's specific requirements and coding standards.

Actionable Review Points:

  1. Correctness and Logic:
    * Assertions: Verify that the assertions (assert_equals, assert_true, etc.) correctly reflect the expected behavior of your code.
    * Input Data: Check if the test input data (fixtures, parameters) is appropriate and representative for each test case.
    * Expected Outputs: Confirm that the expected outputs in assertions are accurate based on your code's logic.
    * Test Setup/Teardown: Ensure any setup or teardown logic (e.g., setUp, tearDown, @BeforeEach, @AfterEach) is correct and cleans up resources properly.
  2. Coverage and Completeness:
    * Function/Method Coverage: Confirm that all critical functions, methods, and public interfaces are covered by at least one test.
    * Branch Coverage: Review if conditional branches (if/else, switch, loops) have sufficient test cases to exercise different paths.
    * Edge Cases: Specifically look for tests that handle:
      - Null/empty inputs
      - Zero/negative values (if applicable)
      - Maximum/minimum allowable values
      - Boundary conditions (e.g., a list with one item, a full buffer)
    * Error Paths: Verify that error conditions and exception handling are adequately tested.
  3. Code Quality and Maintainability:
    * Readability: Are the test names descriptive? Is the test code easy to understand?
    * Adherence to Standards: Do the tests follow your project's coding style, naming conventions, and best practices for unit testing?
    * DRY Principle: Identify and refactor any redundant test code (e.g., using fixtures, parameterized tests, or helper functions).
    * Isolation: Ensure each unit test is independent and doesn't rely on the state of other tests. Mocking should be used effectively to isolate units.
    * Dependencies: Check for any hard-coded dependencies that should be mocked or injected.
  4. Performance (if applicable):
    * For performance-critical code, ensure tests don't introduce significant overhead or slow down the test suite excessively.

Recommendation: Perform a peer review or team review of these tests to leverage collective expertise and catch potential issues.
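The DRY review point above can be implemented in stock unittest with subTest, which runs a table of cases inside one test method while still reporting each failing case separately. The is_even function is illustrative:

```python
# Parameterized cases via unittest's subTest keep repeated assertions DRY.
import unittest

def is_even(n: int) -> bool:
    return n % 2 == 0

class TestIsEven(unittest.TestCase):
    def test_is_even_cases(self):
        cases = [
            (0, True),     # boundary: zero
            (2, True),
            (7, False),
            (-4, True),    # negative even
        ]
        for n, expected in cases:
            # Each subTest is reported individually on failure.
            with self.subTest(n=n):
                self.assertEqual(is_even(n), expected)
```

In pytest the same table would be expressed with @pytest.mark.parametrize; in JUnit 5, with @ParameterizedTest.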


4. Documentation and Usage Instructions

This section provides practical guidance on how to integrate, run, and maintain the generated unit tests within your development workflow.

4.1 Integration Steps

  1. Locate Test Files: The generated test files (e.g., *.py, *.java, *.js) will be provided in a structured format.
  2. Place in Project: Copy these test files into the designated test directory of your project (e.g., project_root/tests/, src/test/java/).
  3. Update Build System (if necessary):
    * Maven/Gradle (Java): Ensure your pom.xml or build.gradle is configured to include the test directory and the relevant testing framework (e.g., JUnit, Mockito).
    * Node.js (JavaScript): Verify package.json scripts are set up to run tests with your chosen framework (e.g., jest, mocha).
    * Python: Ensure pytest or unittest can discover the new test files.
    * C# (.NET): Add the test project reference to your solution and ensure necessary NuGet packages are installed.
  4. Install Dependencies: Make sure all necessary testing framework dependencies (e.g., pytest, junit, jest, mockito) are installed and configured in your project's environment.

4.2 Running the Tests

Once integrated, you can run the tests using your project's standard testing commands:

  • Python (pytest):

    pytest
  • Java (Maven):

    mvn test
  • Java (Gradle):

    gradle test
  • JavaScript (Jest):

    npm test
    # or
    yarn test
  • C# (.NET CLI):

    dotnet test
  • Integrated Development Environment (IDE): Most modern IDEs (e.g., VS Code, IntelliJ IDEA, Visual Studio, PyCharm) have built-in support for running unit tests directly from the interface.

4.3 Interpreting Test Results

  • Green: All tests passed. Your code behaves as expected for the tested scenarios.
  • Red: One or more tests failed. Review the failure messages to identify the discrepancy between expected and actual behavior. This often indicates a bug in the application code or an incorrect assertion in the test.
  • Skipped/Ignored: Some tests might be intentionally skipped. Ensure this is desired behavior.

4.4 Test Maintenance

  • Refactoring: As your application code evolves, remember to update the corresponding unit tests to reflect changes in functionality or API.
  • New Features: When adding new features, generate or write new unit tests to cover the new functionality.
  • Bug Fixes: For every bug identified and fixed, write a new unit test that specifically reproduces the bug and then passes once the fix is applied (Test-Driven Development principle).
  • CI/CD Integration: Integrate your test suite into your Continuous Integration/Continuous Deployment (CI/CD) pipeline to ensure tests are run automatically on every code commit, providing immediate feedback.

5. Key Benefits & Value Proposition

By leveraging the "Unit Test Generator" workflow and following these guidelines, you gain:

  • Accelerated Development: Significantly reduces the manual effort and time required to write comprehensive unit tests.
  • Enhanced Code Quality: Provides a foundational test suite that helps catch bugs early and ensures code reliability.
  • Improved Maintainability: Well-structured and documented tests make it easier to understand code behavior and refactor with confidence.
  • Increased Confidence: A robust test suite instills confidence in your codebase, enabling faster iterations and deployment.
  • Consistency: Generated tests often follow consistent patterns, promoting a standardized approach to testing within your project.

6. Next Steps & Recommendations

  1. Immediate Review: Begin the detailed review of the generated unit tests as outlined in Section 3.
  2. Integrate and Run: Once reviewed and any minor adjustments made, integrate the tests into your project and run them locally.
  3. Refine and Extend: Use the generated tests as a strong baseline. Feel free to refine them, add more specific domain-knowledge tests, or extend coverage to areas not fully addressed by the initial generation.
  4. CI/CD Integration: Work with your DevOps team to integrate the new test suite into your existing CI/CD pipeline.
  5. Ongoing Maintenance: Establish a process for regular test maintenance and updates as your codebase evolves.

7. Feedback Mechanism

Your feedback is invaluable for improving our AI-driven generation capabilities. Please provide any observations, suggestions, or issues encountered during the review and integration process.

  • Direct Feedback: [Link to Feedback Form/Email Address]
  • Issue Reporting: [Link to Issue Tracker/Support Portal]

We are committed to ensuring the success of your project and are available to assist with any questions or challenges you may encounter.

\n\n\n"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react'\nimport ReactDOM from 'react-dom/client'\nimport App from './App'\nimport './index.css'\n\nReactDOM.createRoot(document.getElementById('root')!).render(\n \n \n \n)\n"); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react'\nimport './App.css'\n\nfunction App(){\n return(\n
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n
\n )\n}\nexport default App\n"); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e}\n.app{min-height:100vh;display:flex;flex-direction:column}\n.app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px}\nh1{font-size:2.5rem;font-weight:700}\n"); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\n## Open in IDE\nOpen the project folder in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "type": "module",\n "scripts": {\n "dev": "vite",\n "build": "vue-tsc -b && vite build",\n "preview": "vite preview"\n },\n "dependencies": {\n "vue": "^3.5.13",\n "vue-router": "^4.4.5",\n "pinia": "^2.3.0",\n "axios": "^1.7.9"\n },\n "devDependencies": {\n "@vitejs/plugin-vue": "^5.2.1",\n "typescript": "~5.7.3",\n "vite": "^6.0.5",\n "vue-tsc": "^2.2.0"\n }\n}\n'); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite'\nimport vue from '@vitejs/plugin-vue'\nimport { resolve } from 'path'\n\nexport default defineConfig({\n plugins: [vue()],\n resolve: { alias: { '@': resolve(__dirname,'src') } }\n})\n"); 
zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]}\n'); zip.file(folder+"tsconfig.app.json",'{\n "compilerOptions":{\n "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"],\n "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true,\n "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue",\n "strict":true,"paths":{"@/*":["./src/*"]}\n },\n "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"]\n}\n'); zip.file(folder+"env.d.ts","/// \n"); zip.file(folder+"index.html","\n\n\n \n \n "+slugTitle(pn)+"\n\n\n