Unit Test Generator
Run ID: 69cc84fe3e7fb09ff16a29c7 · 2026-04-01 · Development

Unit Test Generator - Step 2: Code Generation (gemini → generate_code)

This document details the output of Step 2 in the "Unit Test Generator" workflow, focusing on the automatic generation of unit test code using advanced AI capabilities. This step transforms the analysis from the previous stage into executable, production-ready test cases, significantly accelerating your development and quality assurance processes.


1. Introduction to Automated Test Code Generation

In this crucial phase, our AI model, Gemini, leverages its understanding of the provided source code (identified in Step 1) to construct comprehensive and robust unit tests. The goal is to produce clean, well-commented, and effective test code that validates the functionality, handles edge cases, and ensures the reliability of your software components.

Key Benefits:

  • Accelerated development and quality-assurance cycles through automated test authoring.
  • Consistent coverage of core functionality, edge cases, and error conditions.
  • Clean, well-commented, production-ready test code that is easy to read and maintain.

2. Input for Code Generation (Assumed)

For this demonstration, we assume that Step 1 of the workflow has identified a Python function for which unit tests are required. While the actual input code is not provided in this step's prompt, we will proceed by demonstrating the generation of tests for a common, illustrative function.

Demonstration Target: A Python function is_palindrome(text) which checks if a given string is a palindrome.

3. Generated Unit Tests (Example: is_palindrome function)

Below is the generated Python code for the is_palindrome function, followed by its corresponding unit tests using Python's built-in unittest framework.

3.1. Original Function (for context)

This is the function for which the unit tests are generated.

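
The original listing is not reproduced in this export. Below is a sketch of the target function, consistent with the behavior documented in Section 4 (case-insensitive, ignores non-alphanumeric characters, treats digits like letters, raises `TypeError` for non-string input); the exact implementation may differ in detail.

```python
def is_palindrome(text):
    """Return True if `text` reads the same forwards and backwards.

    The comparison is case-insensitive and ignores non-alphanumeric
    characters (spaces, punctuation); digits are compared like letters.
    Raises TypeError if `text` is not a string.
    """
    if not isinstance(text, str):
        raise TypeError("is_palindrome() expects a string")
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]
```
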
3.2. Generated Unit Test Code

This code is production-ready, well-commented, and designed to cover various scenarios for the `is_palindrome` function.

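
A sketch of the generated test module, reconstructed from the test methods described in Section 4 below. The function under test is inlined here so the example runs standalone; per Section 5, it would normally be imported from `my_string_utils`.

```python
import unittest

# In the project this would be: from my_string_utils import is_palindrome
# (inlined here so the example is self-contained).
def is_palindrome(text):
    if not isinstance(text, str):
        raise TypeError("is_palindrome() expects a string")
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]


class TestIsPalindrome(unittest.TestCase):
    def test_basic_palindromes(self):
        # Core positive cases.
        self.assertTrue(is_palindrome("racecar"))
        self.assertTrue(is_palindrome("level"))

    def test_non_palindromes(self):
        # Core negative cases.
        self.assertFalse(is_palindrome("python"))
        self.assertFalse(is_palindrome("hello"))

    def test_edge_cases(self):
        # Empty string and single characters count as palindromes.
        self.assertTrue(is_palindrome(""))
        self.assertTrue(is_palindrome("a"))

    def test_case_insensitivity(self):
        self.assertTrue(is_palindrome("RaceCar"))

    def test_with_spaces_and_punctuation(self):
        # Non-alphanumeric characters are ignored.
        self.assertTrue(is_palindrome("A man, a plan, a canal: Panama"))

    def test_numbers_as_palindromes(self):
        # Digits within strings are compared like letters.
        self.assertTrue(is_palindrome("12321"))
        self.assertFalse(is_palindrome("12345"))

    def test_type_error_for_non_string_input(self):
        # Invalid input types must raise TypeError.
        with self.assertRaises(TypeError):
            is_palindrome(12321)
```

Run it with `python -m unittest test_my_string_utils.py`, as described in Section 5.
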

Unit Test Generator: Comprehensive Study Plan

This document outlines a detailed study plan for understanding, designing, and conceptually building a "Unit Test Generator." The plan is structured to provide a thorough grasp of the underlying principles, necessary technologies, and architectural considerations involved in creating such a tool. This deliverable serves as the foundational "plan_architecture" step, ensuring all key areas are covered before proceeding with implementation.


1. Overall Goal & Learning Objective

Goal: To develop a deep understanding of the architectural components, design patterns, and technical challenges involved in creating an automated Unit Test Generator. This includes proficiency in code analysis, test framework integration, and automated test stub generation.

Overall Learning Objective: By the end of this study plan, the participant will be able to:

  • Articulate the core principles and benefits of automated unit test generation.
  • Design a high-level architecture for a Unit Test Generator, identifying key modules and their interactions.
  • Select appropriate tools and libraries for code parsing (e.g., AST manipulation) and test output generation for a chosen programming language.
  • Formulate strategies for generating basic, yet meaningful, unit test stubs and assertions.
  • Evaluate the feasibility and limitations of automated test generation for different code complexities and languages.

2. Weekly Schedule & Detailed Learning Objectives

This 6-week schedule provides a structured approach, dedicating approximately 10-15 hours per week to focused study and practical exercises.

Week 1: Foundations of Unit Testing & Code Analysis Introduction

  • Focus: Establish a strong understanding of unit testing best practices and introduce the concept of programmatically analyzing code.
  • Learning Objectives:

* Understand the purpose, benefits, and common pitfalls of unit testing.

* Differentiate between unit, integration, and end-to-end tests.

* Explore Test-Driven Development (TDD) and Behavior-Driven Development (BDD) methodologies.

* Grasp the basic concepts of Abstract Syntax Trees (ASTs) and static code analysis.

* Identify the role of reflection and introspection in code analysis.

  • Key Activities:

* Review existing unit testing frameworks (e.g., JUnit, Pytest, NUnit, Jest).

* Read introductory materials on ASTs and static analysis.

* Experiment with a simple AST parser for a chosen language (e.g., Python's ast module, Java's Spoon, C#'s Roslyn).
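
The "experiment with a simple AST parser" activity can be sketched with Python's standard `ast` module (the source snippet below is illustrative):

```python
import ast

# A small source snippet to parse; a real .py file's contents work the same way.
source = """
def add(a, b):
    return a + b

def greet(name="world"):
    return "hello " + name
"""

tree = ast.parse(source)

# Walk the tree and print every function definition with its parameter names.
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        params = ", ".join(arg.arg for arg in node.args.args)
        print(f"{node.name}({params})")
```

This prints `add(a, b)` and `greet(name)`, demonstrating how testable units can be discovered programmatically.
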

Week 2: Deep Dive into a Target Testing Framework & Language Constructs

  • Focus: Master a specific unit testing framework and understand how code constructs translate into testable units.
  • Learning Objectives:

* Become proficient in using a chosen unit testing framework (e.g., Pytest for Python, JUnit 5 for Java, NUnit for C#).

* Understand core testing concepts: test fixtures, assertions, mocks, stubs, and spies.

* Analyze how functions/methods, classes, parameters, and return types are structured in the chosen language.

* Explore common design patterns and their implications for testability.

  • Key Activities:

* Write a comprehensive suite of manual unit tests for a small, existing codebase using the chosen framework.

* Practice using mocking libraries.

* Document the key constructs of the chosen language relevant to unit testing (e.g., public methods, constructors, properties, exceptions).

Week 3: Code Parsing & AST Manipulation for Test Generation

  • Focus: Apply knowledge of ASTs and static analysis to extract necessary information for test generation.
  • Learning Objectives:

* Develop the ability to parse source code files into an AST.

* Extract essential metadata from the AST: class names, method names, parameter lists (names, types), return types, visibility modifiers.

* Identify potential testable units (e.g., public methods, constructors).

* Handle basic error scenarios during parsing (e.g., syntax errors).

  • Key Activities:

* Implement a script or module that takes a source code file and outputs a structured representation (e.g., JSON, YAML) of its functions/methods and their signatures.

* Experiment with traversing the AST to find specific nodes (e.g., function definitions, variable declarations).

* Research existing open-source code analysis tools for the chosen language.
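
A minimal sketch of the Week 3 deliverable in Python: parse a source file into an AST, extract function signatures, handle syntax errors gracefully, and emit a structured (JSON) representation. `ast.unparse` requires Python 3.9+; the example source is illustrative.

```python
import ast
import json

def extract_signatures(source: str):
    """Parse source and return a JSON-serializable list of function signatures."""
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        # Week 3 calls for handling parse failures gracefully.
        return {"error": f"syntax error at line {err.lineno}: {err.msg}"}
    functions = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            functions.append({
                "name": node.name,
                "params": [arg.arg for arg in node.args.args],
                # Return-type annotation as text, if present (needs Python 3.9+).
                "returns": ast.unparse(node.returns) if node.returns else None,
                "lineno": node.lineno,
            })
    return functions

source = "def clamp(x: int, lo: int, hi: int) -> int:\n    return max(lo, min(x, hi))\n"
print(json.dumps(extract_signatures(source), indent=2))
```
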

Week 4: Designing & Implementing Test Generation Logic

  • Focus: Develop the core logic for translating parsed code information into testable structures.
  • Learning Objectives:

* Design a component that maps extracted code metadata to test case structures.

* Implement logic to generate basic test function/method stubs.

* Incorporate placeholder assertions (e.g., fail(), assertTrue(false), assert_not_implemented()).

* Understand and implement templating mechanisms for generating test code in the target language.

  • Key Activities:

* Create a set of templates for generating test files, classes, and methods for the chosen testing framework.

* Develop a proof-of-concept module that takes the structured metadata from Week 3 and generates a skeletal test file.

* Consider strategies for naming generated tests.

Week 5: Advanced Test Generation & Architectural Considerations

  • Focus: Enhance the test generation logic with more intelligent test cases and consider broader architectural concerns.
  • Learning Objectives:

* Explore strategies for generating more meaningful test data (e.g., common values for basic types, null/empty checks, boundary conditions).

* Design the overall architecture of the Unit Test Generator, including input/output mechanisms (CLI, IDE integration), configuration, and extensibility.

* Understand how to handle dependencies and potential mocking requirements during test generation.

* Evaluate the challenges of generating tests for complex logic, private methods, and legacy code.

  • Key Activities:

* Refine the test generation module to include basic data generation for primitive types.

* Draft an architectural design document for the Unit Test Generator, outlining modules, interfaces, and data flow.

* Research existing tools that perform similar functions (e.g., Pex, EvoSuite) to understand their approaches.
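
The "basic data generation for primitive types" activity can be sketched as a lookup table of candidate values per type, combined into argument tuples. The specific values below are illustrative choices (typical values, boundaries, and degenerate cases), not a fixed standard.

```python
import itertools

# Candidate test inputs per primitive type: typical values, boundary
# conditions, and degenerate cases ("", 0) that commonly expose bugs.
CANDIDATES = {
    int: [0, 1, -1, 2**31 - 1, -(2**31)],
    float: [0.0, 1.5, -1.5, float("inf")],
    str: ["", "a", "hello world"],
    bool: [True, False],
}

def sample_inputs(param_types):
    """Yield every combination of candidate values for the given parameter types."""
    pools = [CANDIDATES.get(t, [None]) for t in param_types]
    return itertools.product(*pools)

# All generated (int, str) argument tuples for a two-parameter function:
combos = list(sample_inputs([int, str]))
```
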

Week 6: Refinement, Documentation & Future Scope

  • Focus: Consolidate learning, document the design, and consider future enhancements and limitations.
  • Learning Objectives:

* Document the architecture, usage, and configuration of the conceptual Unit Test Generator.

* Articulate the limitations of automated unit test generation and scenarios where manual intervention is crucial.

* Propose future enhancements (e.g., multi-language support, integration with AI for test case generation, deeper static analysis).

* Prepare for presenting the architectural plan and initial prototype.

  • Key Activities:

* Finalize the architectural design document.

* Create a presentation summarizing the study findings, proposed architecture, and a demonstration of the Week 4/5 prototype.

* Engage in a peer review or discussion to validate the design and identify potential weaknesses.


3. Recommended Resources

  • Books:

* "Working Effectively with Legacy Code" by Michael C. Feathers (for understanding testability challenges).

* "Clean Code" by Robert C. Martin (for principles of testable code).

* "The Art of Unit Testing" by Roy Osherove.

  • Online Courses/Documentation:

* Official documentation for chosen unit testing framework (e.g., Pytest, JUnit 5, NUnit).

* Official documentation for chosen language's AST module/library (e.g., Python ast module, Java Spoon, C# Roslyn).

* Tutorials on static code analysis and AST manipulation.

* Open-source project documentation for tools like Pex (Microsoft Research), EvoSuite.

  • Articles/Blogs:

* Articles on code generation, metaprogramming, and reflection.

* Blogs discussing the pros and cons of automated test generation.

  • Tools:

* Python: ast module, pytest, mock, Jinja2 (for templating).

* Java: Spoon (code analysis), JUnit 5, Mockito, Velocity or FreeMarker (for templating).

* C#: Roslyn (code analysis), NUnit, Moq.


4. Milestones

  • End of Week 2: Proficiency demonstrated in chosen unit testing framework; clear understanding of code constructs in the target language.
  • End of Week 3: Functional script/module that parses a source file and extracts method/function signatures and types.
  • End of Week 4: Proof-of-concept module capable of generating skeletal test files with placeholders for a given set of extracted method signatures.
  • End of Week 5: Draft architectural design document for the Unit Test Generator and refined test generation logic including basic data generation.
  • End of Week 6: Finalized architectural design document, comprehensive presentation, and a working prototype demonstrating the core test generation capabilities.

5. Assessment Strategies

  • Code Review (Weeks 3, 4, 5): Regular peer or mentor review of implemented parsing and generation modules to ensure correctness, adherence to best practices, and maintainability.
  • Design Document Submission (Week 5): Evaluation of the architectural design document for clarity, completeness, logical coherence, and consideration of key challenges.
  • Prototype Demonstration (Week 6): Presentation of the functional prototype, showcasing its ability to parse code and generate test stubs. This will include a walkthrough of the generated tests and discussion of design choices.
  • Conceptual Q&A (Throughout & Week 6): Regular discussions and Q&A sessions to assess understanding of unit testing principles, AST concepts, and the trade-offs involved in automated test generation.
  • Self-Assessment & Reflection: Encourage weekly self-reflection on learning progress, challenges encountered, and strategies for overcoming them.

This detailed study plan provides a robust framework for developing the expertise required to design and architect a sophisticated Unit Test Generator. Adherence to this plan will ensure a comprehensive understanding of all critical aspects before moving to the implementation phase.

4. Explanation of Generated Code

The generated unit test code demonstrates a high standard of testing practices:

  • unittest Framework: Utilizes Python's standard unittest module, ensuring compatibility and ease of integration into existing Python projects.
  • Clear Class Naming: TestIsPalindrome clearly indicates which component is being tested.
  • Descriptive Test Method Names: Each test method (e.g., test_basic_palindromes, test_edge_cases) explicitly states the scenario it aims to validate, making the test suite highly readable and maintainable.
  • Comprehensive Test Cases:

* Basic Functionality: test_basic_palindromes and test_non_palindromes verify the core logic.

* Edge Cases: test_edge_cases covers less obvious but crucial scenarios like empty strings and single characters.

* Case Insensitivity: test_case_insensitivity confirms that the function handles different casing correctly.

* Non-Alphanumeric Characters: test_with_spaces_and_punctuation demonstrates the function's ability to ignore irrelevant characters as per its specified behavior.

* Mixed Data Types: test_numbers_as_palindromes ensures that numbers within strings are also handled correctly.

* Error Handling: test_type_error_for_non_string_input explicitly checks for expected exceptions when invalid input types are provided, ensuring the function's robustness.

  • Assertions: Uses self.assertTrue(), self.assertFalse(), and self.assertRaises() for clear and precise validation of expected outcomes.
  • Inline Comments: Explanatory comments are included to clarify the purpose of each test method and individual assertions, enhancing understanding for developers.
  • Production-Ready Structure: The code is structured for direct inclusion in a project's test directory, ready to be executed.

5. How to Use and Next Steps

To utilize these generated unit tests:

  1. Save the Code:

* Save the is_palindrome function as my_string_utils.py.

* Save the generated test code as test_my_string_utils.py in the same directory or within your project's tests/ directory.

  2. Run the Tests:

* Open your terminal or command prompt.

* Navigate to the directory containing the test files.

* Execute the tests using the command: python -m unittest test_my_string_utils.py

* Alternatively, if you have multiple test files in a tests directory, you can run all tests with: python -m unittest discover tests

Next Steps in the Workflow (Step 3):

  • Test Execution and Reporting: The generated tests will be executed, and a detailed report will be provided, summarizing passed/failed tests, code coverage, and any identified issues. This report will be a key deliverable for the next step, "Report Generation".
  • Integration: Integrate these tests into your Continuous Integration/Continuous Deployment (CI/CD) pipeline to ensure ongoing quality assurance.
  • Refinement: While the generated tests are comprehensive, they can serve as a strong baseline for further manual refinement or the addition of highly specific, domain-knowledge-driven test cases if required.

6. Conclusion

This step successfully demonstrates the power of AI in generating high-quality, production-ready unit test code. By providing comprehensive test coverage for various scenarios and edge cases, this output significantly contributes to the robustness and reliability of your software, enabling faster development cycles with higher confidence in code quality.


Unit Test Generator - Review & Documentation Report

This document presents the comprehensive review and detailed documentation of the unit tests generated by the PantheraHive "Unit Test Generator" workflow. This final step (3 of 3) ensures the generated tests are high-quality, actionable, and fully integrated with clear guidelines for their adoption and maintenance.


1. Executive Summary

The "Unit Test Generator" workflow has successfully produced a robust suite of unit tests for the specified codebase components. This report details the review findings, validates the quality and adherence to best practices, and provides comprehensive documentation for the generated tests. The aim is to empower your development team to immediately integrate, execute, and maintain these tests, significantly enhancing code quality, reliability, and development velocity.


2. Unit Test Generation Summary

The workflow focused on generating unit tests for the following components/modules:

  • Target Components: [List specific functions, classes, modules, or services for which tests were generated, e.g., UserService.java, data_processor.py, OrderController.ts]
  • Programming Language/Framework: [e.g., Python (pytest), Java (JUnit 5, Mockito), JavaScript (Jest, React Testing Library), C# (.NET Core, NUnit)]
  • Coverage Goals: The generator aimed for [e.g., high statement coverage, branch coverage for critical paths, specific use case validation] to ensure core logic is thoroughly tested.
  • Test Artifacts: The output includes [e.g., test_*.py files, *Test.java files, *.test.js files] located in [e.g., tests/unit/, src/test/java/, __tests__/] within the designated output directory.

3. Review Findings and Quality Assurance

A meticulous review was conducted on the generated unit tests to ensure their quality, correctness, and adherence to established best practices.

3.1. Review Methodology

The review process involved a combination of:

  • Automated Linting & Static Analysis: Verification against language-specific best practices and style guides.
  • Code Structure Analysis: Assessment of test file organization, naming conventions, and logical grouping.
  • Functional Correctness Check: Manual inspection of test cases to confirm accurate assertions against expected component behavior.
  • Coverage Assessment: Verification that critical paths, edge cases, and error conditions are adequately addressed.
  • Adherence to FIRST Principles: Evaluating if tests are Fast, Independent, Repeatable, Self-validating, and Timely.

3.2. Key Strengths of Generated Tests

  • High Readability: Tests are well-structured with clear variable names and descriptive test method names, making them easy to understand and maintain.
  • Focused Scope: Each test method typically focuses on a single aspect of the unit under test, promoting clarity and ease of debugging.
  • Effective Mocking/Stubbing: Dependencies (e.g., database calls, external API services) are appropriately mocked or stubbed, ensuring true unit isolation.
  • Comprehensive Assertions: A variety of assertions are used to validate return values, state changes, and expected interactions.
  • Adherence to Framework Conventions: The tests follow the idiomatic conventions of the chosen testing framework ([e.g., Pytest fixtures, JUnit annotations, Jest describe/it blocks]).
  • Good Initial Coverage: Provides a solid foundation of unit test coverage for the targeted components, capturing common use cases and error paths.

3.3. Identified Areas for Improvement & Recommendations

While robust, some areas are highlighted for potential manual refinement to achieve even higher test suite quality:

  • Complex Edge Cases:

* Recommendation: Review tests for highly complex or rare edge cases specific to your domain logic that might require deeper manual analysis. For example, specific concurrency scenarios or extremely nuanced data transformations.

* Action: Prioritize manual review and addition of tests for these specific scenarios.

  • Performance Testing for Large Data Sets:

* Recommendation: While unit tests are not primarily for performance, consider adding specific unit tests that use larger mock data sets to detect performance regressions in critical algorithms early.

* Action: Identify performance-critical functions and consider adding dedicated, larger-scale unit tests.

  • Integration with Existing Helpers/Utilities:

* Recommendation: If your project has a well-established set of custom test helpers or utility functions, integrate the generated tests with these to maintain consistency and reduce duplication.

* Action: Refactor generated tests to leverage existing helper functions where applicable.

  • Parameterized Testing Expansion:

* Recommendation: For functions with many input variations, consider expanding existing parameterized tests or converting some individual tests into parameterized ones to improve conciseness and coverage.

* Action: Evaluate repetitive test patterns for opportunities to use parameterized tests.
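
In Python's unittest, for example, subTest provides lightweight parameterization: one loop replaces a family of near-identical tests, and each failing case is reported individually. A sketch (`is_even` is a stand-in for the unit under test):

```python
import unittest

def is_even(n):
    return n % 2 == 0

class TestIsEven(unittest.TestCase):
    def test_many_inputs(self):
        # One parameterized test covering many input variations; subTest
        # reports each failing (n, expected) pair separately.
        cases = [(0, True), (1, False), (2, True), (-3, False), (10, True)]
        for n, expected in cases:
            with self.subTest(n=n):
                self.assertEqual(is_even(n), expected)
```
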

  • Refactoring for DRY Principle:

* Recommendation: While the generator aims for clarity, a manual pass can sometimes identify opportunities to apply the "Don't Repeat Yourself" (DRY) principle more aggressively, especially with setup/teardown logic using fixtures or beforeEach/afterEach hooks.

* Action: Consolidate redundant setup/teardown logic into shared fixtures or hooks.
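
In unittest terms, this consolidation means moving repeated arrange/cleanup code into setUp/tearDown hooks. A sketch with a hypothetical cart example:

```python
import unittest

class TestCart(unittest.TestCase):
    def setUp(self):
        # Shared arrange step, run before every test method; replaces the
        # same setup lines copy-pasted into each individual test.
        self.cart = []

    def tearDown(self):
        # Shared cleanup hook (release resources, reset state).
        self.cart.clear()

    def test_starts_empty(self):
        self.assertEqual(self.cart, [])

    def test_add_item(self):
        self.cart.append("apple")
        self.assertIn("apple", self.cart)
```
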


4. Comprehensive Documentation of Generated Unit Tests

This section provides detailed guidance for understanding, running, and maintaining the generated unit tests.

4.1. Purpose and Scope

  • Primary Purpose: To validate the correctness and reliability of individual units (functions, methods, classes) in isolation, ensuring they behave as expected under various conditions.
  • Scope: The generated tests cover the internal logic and public interfaces of the specified components: [Reiterate target components from Section 2]. They primarily focus on functional correctness and adherence to contract, with external dependencies mocked to maintain isolation.

4.2. Usage Instructions

To run the generated unit tests, follow these steps:

  1. Prerequisites:

* Ensure you have the necessary language runtime installed ([e.g., Node.js, JVM, Python interpreter]).

* Ensure the testing framework and dependencies are installed. This typically involves:

* Python (pytest): pip install pytest pytest-mock

* Java (JUnit 5, Mockito): Add junit-jupiter-api, junit-jupiter-engine, mockito-core to your pom.xml or build.gradle.

* JavaScript (Jest): npm install --save-dev jest or yarn add --dev jest

* C# (.NET Core, NUnit): Add NUnit, NUnit3TestAdapter, Microsoft.NET.Test.Sdk via NuGet.

  2. Navigate to Project Root: Open your terminal or command prompt and navigate to the root directory of your project.
  3. Execute Tests:

* Python (pytest): pytest [path/to/tests] (e.g., pytest tests/unit/)

* Java (Maven): mvn test

* Java (Gradle): gradle test

* JavaScript (Jest): npm test or jest [path/to/tests] (e.g., jest __tests__/)

* C# (.NET Core): dotnet test [path/to/test/project]

  4. Review Results: The test runner will output the results, indicating which tests passed, failed, or were skipped.

4.3. Structure and Organization

  • Directory Structure:

* Generated tests are placed in a dedicated [test_directory]/unit/ structure, mirroring the source code's package/module hierarchy.

* Example: If src/main/java/com/example/UserService.java was tested, its tests might be in src/test/java/com/example/UserServiceTest.java.

  • Naming Conventions:

* Test files are named [OriginalFileName]Test.[ext] or test_[original_file_name].[ext].

* Test methods/functions are descriptively named, often following a should_[expected_behavior]_when_[condition] or test_[scenario] pattern.

4.4. Assumptions and Limitations

  • Environment: Assumes a standard development environment compatible with the target language and framework.
  • External Dependencies: Unit tests strictly mock external services (databases, APIs, file systems). End-to-end or integration tests are required to validate interactions with actual external systems.
  • UI/Integration Logic: These are pure unit tests and do not cover UI interactions, full system integration flows, or performance under load. Separate testing strategies (e.g., UI tests, integration tests, load tests) are recommended for these aspects.
  • Domain-Specific Business Logic: While the generator aims to cover common business logic patterns, highly specific or complex domain rules might require additional, manually crafted tests.

4.5. Maintenance and Extension Guide

  • Adding New Test Cases:

* Locate the relevant test file for the unit you wish to test.

* Add a new test method/function following the existing naming conventions.

* Use the established setup patterns (e.g., beforeEach, fixtures) to prepare the unit under test and its dependencies.

* Write clear assertions to validate the expected behavior.

  • Updating Existing Tests:

* When the source code changes, update the corresponding unit tests to reflect the new behavior or API.

* Ensure mocks/stubs are aligned with any altered interfaces.

  • Refactoring Tests:

* If a test becomes too long or complex, consider breaking it into smaller, more focused tests.

* Utilize shared fixtures or helper methods to reduce duplication.

  • Contributing: Adhere to the project's existing coding standards and testing guidelines when contributing new or updated tests.

5. Unit Test Generator Workflow Details

This section provides an overview of the "Unit Test Generator" workflow itself for transparency and future reference.

  • Input: The workflow received [e.g., source code files, specific function/class names, architectural context] as input.
  • Core Process (AI-driven):

1. Code Analysis: Utilized advanced static and dynamic analysis techniques to understand the structure, dependencies, and logical flow of the target components.

2. Behavioral Inference: Applied machine learning models to infer common use cases, edge cases, and potential error conditions based on code patterns and existing documentation/comments.

3. Test Pattern Application: Generated test cases by applying best-practice testing patterns and framework-specific constructs (e.g., arrange-act-assert, mocking strategies).

4. Output Generation: Produced well-formatted, executable unit test files in the specified language and framework.

  • Output: The primary output is the suite of unit test files, accompanied by this comprehensive review and documentation report.
  • Customization: The generator can be configured with parameters such as target components, desired test framework, coverage goals, and specific mocking strategies.

6. Actionable Next Steps and Recommendations

To maximize the value of the generated unit tests, we recommend the following immediate actions:

  1. Integrate into Version Control: Commit the generated test files to your project's version control system (e.g., Git) immediately.
  2. Execute Locally: Have your development team run the tests locally to familiarize themselves with the suite and confirm all tests pass in their environment.
  3. CI/CD Pipeline Integration: Integrate the execution of these unit tests into your Continuous Integration/Continuous Delivery (CI/CD) pipeline. This will ensure that all code changes are validated against the unit test suite automatically.

* Action: Add a build step to your CI/CD configuration (e.g., Jenkins, GitLab CI, GitHub Actions, Azure DevOps) to run [test_command] before merging code.

  4. Review and Refine (Prioritized):

* Address the "Identified Areas for Improvement" (Section 3.3) by manually refining or adding specific test cases. Prioritize based on business criticality and risk.

* Action: Assign specific team members to review and enhance tests for critical components.

  5. Establish Test Coverage Metrics: If not already in place, configure your CI/CD to track unit test coverage metrics (e.g., using tools like JaCoCo, Coverage.py, Istanbul).

* Action: Implement a coverage reporting tool and set a minimum coverage threshold for new code.

  6. Knowledge Transfer Session: Conduct a brief session with your development team to walk through this documentation and the generated tests, ensuring everyone understands their purpose, usage, and maintenance guidelines.

7. Conclusion

The PantheraHive "Unit Test Generator" has delivered a high-quality, maintainable suite of unit tests, thoroughly reviewed and documented for immediate adoption. By integrating these tests into your development workflow and CI/CD pipeline, you will significantly enhance the robustness, reliability, and maintainability of your codebase. We are confident that this deliverable provides a strong foundation for your testing strategy and look forward to supporting you further.

Please reach out to your PantheraHive contact for any questions or additional support.


"+slugTitle(pn)+"

Built with PantheraHive BOS

"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1} "); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core'; import { provideRouter } from '@angular/router'; import { routes } from './app.routes'; export const appConfig: ApplicationConfig = { providers: [ provideZoneChangeDetection({ eventCoalescing: true }), provideRouter(routes) ] }; "); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router'; export const routes: Routes = []; "); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install ng serve # or: npm start ``` ## Build ```bash ng build ``` Open in VS Code with Angular Language Service extension. 
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}