Unit Test Generator
Run ID: 69ccaf193e7fb09ff16a41e52026-04-01Development

This output details the execution of Step 2 of 3 in the "Unit Test Generator" workflow. In this crucial phase, the Gemini AI model is leveraged to generate high-quality, comprehensive, and production-ready unit tests based on the provided source code or specified requirements. This step significantly automates and accelerates the test development process, ensuring robust code validation.


Step 2: Gemini AI - Generating Unit Tests

Workflow Step Description: In this step, the powerful Gemini AI model analyzes the target code (functions, methods, classes) and generates a suite of unit tests. These tests are designed to cover various scenarios, including happy paths, edge cases, error conditions, and best-practice assertions, tailored to a specified testing framework.

1. Overview of AI-Powered Test Generation

Our Unit Test Generator utilizes Google's Gemini AI to intelligently create unit tests. The model analyzes the target code's structure and behavior, identifies relevant test scenarios (happy paths, edge cases, and error conditions), and emits test code tailored to the specified testing framework.

2. Capabilities of the Generated Unit Tests

The unit tests produced by this step are designed to be comprehensive (covering happy paths, edge cases, and error conditions), readable, idiomatic for the chosen testing framework, and ready for integration into an existing test suite.

3. Example of Generated Unit Test Code (Python with pytest)

Below is an example of unit tests generated by the Gemini AI for a sample Python function.

Target Python Function (src/calculator.py):

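The code listing itself was not preserved in this export. The following is a hypothetical reconstruction based on the explanation in Section 4: the function name, test names, variable names, and test categories follow that description, but the exact bodies and error messages are assumptions.

```python
import pytest

# --- src/calculator.py (assumed target function) ---
def calculate_discounted_price(original_price, discount_percentage):
    """Return the price after applying a percentage discount."""
    if not isinstance(original_price, (int, float)) or isinstance(original_price, bool):
        raise TypeError("original_price must be a number")
    if not isinstance(discount_percentage, (int, float)) or isinstance(discount_percentage, bool):
        raise TypeError("discount_percentage must be a number")
    if original_price < 0:
        raise ValueError("original_price cannot be negative")
    if not 0 <= discount_percentage <= 100:
        raise ValueError("discount_percentage must be between 0 and 100")
    return original_price * (1 - discount_percentage / 100)

# --- tests/test_calculator.py (sample of the generated tests) ---

# Happy Path
def test_calculate_discounted_price_positive_discount():
    """A 10% discount on 100.0 should yield 90.0."""
    original_price = 100.0
    discount_percentage = 10.0
    expected_price = 90.0
    actual_price = calculate_discounted_price(original_price, discount_percentage)
    assert actual_price == pytest.approx(expected_price)

# Edge Case
def test_calculate_discounted_price_zero_discount():
    """A 0% discount should leave the price unchanged."""
    assert calculate_discounted_price(50.0, 0.0) == pytest.approx(50.0)

# Error Handling
def test_calculate_discounted_price_invalid_discount():
    """Discounts above 100% should raise ValueError with a matching message."""
    with pytest.raises(ValueError, match="between 0 and 100"):
        calculate_discounted_price(100.0, 150.0)
```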
4. Explanation of the Generated Code

The generated `pytest` code demonstrates several key aspects:

*   **Import Statements:** Essential modules (`pytest` for testing, `calculate_discounted_price` from the source file) are correctly imported.
*   **Descriptive Function Names:** Each test function (`test_calculate_discounted_price_positive_discount`, etc.) clearly indicates the specific scenario being tested, enhancing readability.
*   **Docstrings:** Comprehensive docstrings explain the purpose of each test, its expected outcome, and the specific condition it addresses. This is crucial for maintainability.
*   **Test Categories:** Tests are logically grouped into "Happy Path," "Edge Case," and "Error Handling" sections, providing a structured overview of test coverage.
*   **Assertions (`assert`):**
    *   **Direct Equality (`assert actual == expected`):** Used for exact matches.
    *   **Floating-Point Comparison (`pytest.approx`):** Crucially, `pytest.approx` is used for comparing floating-point numbers. This is a best practice to avoid issues with floating-point precision errors, ensuring robust tests.
    *   **Exception Handling (`pytest.raises`):** This context manager is used to verify that specific `ValueError` or `TypeError` exceptions are raised under incorrect input conditions, confirming the function's error handling. The `match` argument ensures the error message also matches expectations.
*   **Variable Naming:** Clear variable names like `original_price`, `discount_percentage`, `expected_price`, and `actual_price` improve test clarity.

5. Benefits of this AI-Powered Step

*   **Accelerated Development:** Significantly reduces the time and effort required to write unit tests manually.
*   **Improved Test Coverage:** AI can identify and generate tests for a broader range of scenarios, including edge cases that might be overlooked by human developers.
*   **Consistent Quality:** Ensures tests adhere to high standards of quality, readability, and best practices across your codebase.
*   **Early Bug Detection:** By providing comprehensive tests, potential issues are identified earlier in the development cycle.
*   **Developer Empowerment:** Frees up developers to focus on core logic and architecture, while the AI handles the repetitive task of test generation.

6. How to Use and Integrate the Generated Tests

1.  **Save the File:** Save the generated test code (e.g., as `tests/test_calculator.py`) in your project's test directory.
2.  **Install `pytest` (if not already installed):** run `python -m pip install pytest`, or add `pytest` to your project's development dependencies.

Unit Test Generator: Comprehensive Study Plan

This document outlines a detailed and actionable study plan for understanding, designing, and potentially implementing a Unit Test Generator. This plan is tailored for professionals aiming to gain a deep architectural and technical understanding of automated unit test generation.


1. Introduction

The purpose of this study plan is to provide a structured approach to mastering the concepts and technologies required to architect and build a sophisticated Unit Test Generator. Automated unit test generation is a critical area in modern software development, aiming to improve code quality, reduce manual testing effort, and enhance development velocity. By following this plan, you will gain the expertise to design a robust and intelligent system capable of analyzing source code and generating relevant, effective unit tests.

2. Overall Goal

Upon completion of this study plan, you will be able to:

  • Comprehend the Core Principles: Understand the fundamental theories and challenges behind automated unit test generation.
  • Architect a Solution: Design a scalable, extensible, and maintainable architecture for a Unit Test Generator.
  • Evaluate Technologies: Identify and select appropriate parsing tools, code generation frameworks, and testing libraries.
  • Implement Key Components: Develop a foundational understanding of how to implement core features such as code analysis, test case generation, and mock object creation.
  • Address Practical Challenges: Recognize and strategize solutions for common issues like dependency management, test data generation, and integrating with existing build systems.

3. Weekly Schedule

This 4-week intensive study plan is designed to progressively build your knowledge, moving from foundational concepts to advanced architectural considerations and practical application. Each week focuses on specific themes, with an estimated time commitment of 8-12 hours, adaptable to individual learning pace.

Week 1: Foundations of Code Analysis and Testability (Estimated: 10 hours)

  • Theme: Understanding the input (source code) and the problem space (why automate testing).
  • Key Topics:

* Unit Testing Fundamentals: Review of unit testing principles, benefits, and best practices (e.g., FIRST principles: Fast, Independent, Repeatable, Self-validating, Timely).

* The Need for Automation: Challenges of manual test writing, benefits of automated generation.

* Static Code Analysis: Introduction to Abstract Syntax Trees (ASTs), Control Flow Graphs (CFGs), and Data Flow Analysis.

* Language Parsing: Overview of parsing techniques (lexing, parsing, semantic analysis) and common tools (e.g., ANTLR, specific language APIs like Roslyn for C#, JavaParser for Java, Python's ast module).

* Code Introspection/Reflection: How to programmatically inspect code structure and metadata at runtime (e.g., Java Reflection, C# Reflection).

* Testability Principles: Understanding code characteristics that make it easy or difficult to test (e.g., high coupling, global state, side effects).
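As a concrete illustration of the static-analysis topics above, the sketch below uses Python's built-in `ast` module to enumerate the functions and branch-related nodes in a source string, the kind of structural facts a test generator works from. The sample source is invented for illustration.

```python
import ast

source = """
def apply_discount(price, pct):
    if pct < 0 or pct > 100:
        raise ValueError("pct out of range")
    return price * (1 - pct / 100)
"""

tree = ast.parse(source)

# Walk the AST, collecting function definitions plus `if`/`raise` nodes --
# the raw material a generator uses to enumerate scenarios to cover.
functions = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
branch_lines = [n.lineno for n in ast.walk(tree) if isinstance(n, (ast.If, ast.Raise))]

print(functions)      # ['apply_discount']
print(branch_lines)
```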

Week 2: Test Case Generation and Test Doubles (Estimated: 12 hours)

  • Theme: How to generate effective test logic and manage dependencies.
  • Key Topics:

* Test Double Strategies: Deep dive into Mocks, Stubs, Fakes, Spies, and Dummies. Understanding when and how to use each.

* Automated Mock/Stub Generation: Techniques for automatically creating test doubles based on interfaces or class definitions.

* Dependency Injection for Testability: How DI patterns facilitate easier testing and how a generator can leverage them.

* Test Case Generation Techniques:

* Structural Coverage: Statement, branch, path coverage.

* Input Domain Analysis: Boundary Value Analysis (BVA), Equivalence Partitioning (EP).

* State-Based Testing: Generating tests for stateful objects.

* Property-Based Testing (PBT): Generating tests based on properties that inputs and outputs must satisfy.

* Test Data Generation: Strategies for creating meaningful input data for test cases (random, constrained, domain-specific).
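One way to make the mock-generation topic concrete: Python's `unittest.mock.create_autospec` builds a test double directly from a class definition, the same idea an automated generator exploits. The `PaymentGateway` and `checkout` names below are hypothetical.

```python
from unittest.mock import create_autospec

# A hypothetical collaborator the unit under test depends on.
class PaymentGateway:
    def charge(self, amount: float) -> str:
        raise NotImplementedError("talks to a real payment service")

def checkout(gateway: PaymentGateway, amount: float) -> str:
    if amount <= 0:
        raise ValueError("amount must be positive")
    return gateway.charge(amount)

# create_autospec derives a mock whose methods match the real signatures,
# so calls with the wrong arity fail instead of silently passing.
mock_gateway = create_autospec(PaymentGateway, instance=True)
mock_gateway.charge.return_value = "receipt-123"

assert checkout(mock_gateway, 10.0) == "receipt-123"
mock_gateway.charge.assert_called_once_with(10.0)
```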

Week 3: Architectural Design and Integration (Estimated: 10 hours)

  • Theme: Structuring the Unit Test Generator and integrating it into development workflows.
  • Key Topics:

* Architectural Patterns for Code Generators: Designing a modular and extensible system (e.g., pipeline architecture, plugin-based design, visitor pattern for AST traversal).

* Component Breakdown: Identifying core components (e.g., Parser, Analyzer, Test Case Generator, Test Double Generator, Code Emitter, Reporter).

* Language-Specific Considerations: How the choice of target language impacts design (e.g., type systems, reflection capabilities, build tools).

* Code Emission/Pretty Printing: Techniques for generating syntactically correct and readable test code.

* Integration Points: How the generator interacts with IDEs, build systems (Maven, Gradle, MSBuild, npm), and CI/CD pipelines.

* Configuration and Extensibility: Designing for user-defined rules, custom templates, and extensibility points.

Week 4: Advanced Topics and Practical Application (Estimated: 12 hours)

  • Theme: Exploring cutting-edge techniques and building a proof-of-concept.
  • Key Topics:

* AI/ML in Test Generation:

* Fuzzing: Techniques for generating random or semi-random inputs to discover bugs.

* Search-Based Software Engineering (SBSE): Using optimization algorithms (e.g., genetic algorithms) to find test cases that achieve specific coverage goals.

* Learning from Existing Tests: Using ML to analyze existing test suites and learn patterns for new test generation.

* Mutation Testing: How it can be used to assess the quality of generated tests.

* Performance and Scalability: Considerations for large codebases.

* Error Handling and Reporting: How to provide useful feedback when tests cannot be generated or fail.

* Hands-on Project/Prototype: Applying learned concepts to build a small-scale Unit Test Generator for a specific language/framework, focusing on one or two core features.
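A toy illustration of the fuzzing idea above: generate random inputs, treat the documented rejection path (`ValueError`) as expected, and record anything else as a crash worth investigating. The target function and input alphabet are invented for the sketch; a robust target yields no unexpected crashes.

```python
import random

def parse_percentage(text: str) -> float:
    """Toy function under test: intended to accept '0'..'100'."""
    value = float(text)
    if not 0 <= value <= 100:
        raise ValueError("out of range")
    return value

random.seed(0)  # reproducible fuzzing run
alphabet = "0123456789.-e "
crashes = []
for _ in range(500):
    candidate = "".join(random.choice(alphabet) for _ in range(random.randint(1, 8)))
    try:
        parse_percentage(candidate)
    except ValueError:
        pass  # expected rejection path
    except Exception as exc:
        crashes.append((candidate, type(exc).__name__))

print(f"{len(crashes)} unexpected crashes")
```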

4. Learning Objectives

By the end of this study plan, you will be able to:

  • Analyze Code: Utilize ASTs and static analysis techniques to understand code structure, dependencies, and potential test targets.
  • Design Test Strategies: Formulate effective strategies for generating test cases, covering various scenarios (e.g., happy path, error conditions, boundary values).
  • Manage Dependencies: Implement mechanisms for automatically creating and injecting test doubles (mocks, stubs) to isolate units under test.
  • Architect a Generator: Develop a modular and extensible architectural blueprint for a Unit Test Generator, including core components and their interactions.
  • Select Tools: Choose appropriate parsing libraries, code generation frameworks, and testing utilities for a given programming language and ecosystem.
  • Integrate Workflow: Understand how to integrate a Unit Test Generator into an existing software development lifecycle, including build systems and IDEs.
  • Explore Advanced Concepts: Discuss and apply advanced techniques like property-based testing, fuzzing, or search-based test generation.
  • Communicate Design: Clearly articulate the design choices, benefits, and limitations of an automated unit test generation system.

5. Recommended Resources

This section provides a curated list of resources to support your learning journey. Prioritize official documentation and widely respected books/papers.

Books:

  • "Working Effectively with Legacy Code" by Michael C. Feathers: Essential for understanding testability issues and refactoring for testability.
  • "Refactoring: Improving the Design of Existing Code" by Martin Fowler: Provides foundational knowledge on code structure and design.
  • "Test-Driven Development by Example" by Kent Beck: Reinforces core unit testing principles.
  • "Compiler Construction: Principles and Practice" by Kenneth C. Louden (or similar): For a deeper dive into parsing and ASTs.
  • "The Art of Unit Testing" by Roy Osherove: Covers best practices for unit testing and mocking.

Online Courses & Tutorials:

  • "Static Program Analysis" (Coursera/edX): Look for courses from reputable universities covering ASTs, control flow, and data flow analysis.
  • Language-Specific AST/Reflection Tutorials:

* Java: JavaParser documentation, tutorials on Reflection API.

* C#: Microsoft Roslyn SDK documentation and tutorials.

* Python: Official ast module documentation, tutorials on introspection.

  • Mocking Framework Tutorials: Official documentation and examples for Mockito (Java), Moq/NSubstitute (C#), unittest.mock (Python).
  • Property-Based Testing Tutorials: QuickCheck (Haskell/various ports), Hypothesis (Python), FsCheck (F#/.NET).

Academic Papers & Research:

  • EvoSuite: Research papers on EvoSuite (an open-source test generator) for insights into search-based test generation.
  • Fuzzing Research: Papers on AFL++ (American Fuzzy Lop) and other modern fuzzing techniques.
  • Software Engineering Journals: Search for "automated test generation," "search-based software testing," "mutation testing" in IEEE Xplore, ACM Digital Library.

Tools & Frameworks:

  • AST Parsers:

* Java: JavaParser, Spoon

* C#: Microsoft Roslyn SDK

* Python: ast module

* General: ANTLR (ANother Tool for Language Recognition)

  • Code Generation Libraries:

* Java: JavaPoet

* C#: T4 Text Templates, Source Generators (Roslyn)

* Python: Jinja2 (for template-based generation)

  • Mocking Frameworks: Mockito, PowerMock (Java); Moq, NSubstitute (C#); unittest.mock (Python).
  • Property-Based Testing: Hypothesis (Python), QuickCheck (Haskell, also in other languages), FsCheck (F#/.NET).
  • Fuzzing Tools: AFL++, libFuzzer.

Blogs & Articles:

  • Engineering

Run `pytest` from the project root; it will automatically discover and run all tests in files named `test_*.py` or `*_test.py`.

  3.  Review and Customize: While the AI generates robust tests, it is always recommended to review them. You may want to:

* Add highly specific business logic tests.

* Refactor common test setup into fixtures (for pytest).

* Integrate them into your existing CI/CD pipeline.
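As an example of the fixture refactor suggested above, common setup can be lifted out of individual tests into a shared `pytest` fixture. The `ShoppingCart` class is hypothetical.

```python
import pytest

class ShoppingCart:
    """Hypothetical class under test."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

@pytest.fixture
def populated_cart():
    """Shared setup, lifted out of individual tests into a fixture."""
    cart = ShoppingCart()
    cart.add("book", 12.50)
    cart.add("pen", 1.50)
    return cart

def test_total_sums_item_prices(populated_cart):
    assert populated_cart.total() == pytest.approx(14.0)

def test_adding_item_increases_total(populated_cart):
    populated_cart.add("mug", 6.0)
    assert populated_cart.total() == pytest.approx(20.0)
```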

7. Next Steps (Step 3 of 3)

The generated unit tests are now ready for review and integration. The final step in this workflow typically involves:

  • Review and Refinement: Providing an interface for users to review, modify, and confirm the generated tests.
  • Integration and Reporting: Guiding the user on how to integrate these tests into their project and potentially generating a test coverage report.

This concludes the detailed output for Step 2 of the "Unit Test Generator" workflow. The generated tests are a robust foundation for ensuring the quality and reliability of your codebase.

Gemini Output

As a professional AI assistant within PantheraHive, I am pleased to present the detailed output for Step 3 of 3: "review_and_document" within the "Unit Test Generator" workflow. This output represents the culmination of the process, ensuring the generated unit tests are not only functional but also adhere to best practices, are well-documented, and ready for immediate integration.


1. Deliverable Overview: Reviewed & Documented Unit Tests

This document outlines the comprehensive review and documentation process applied to the unit tests generated in the previous step (via the Gemini model). The primary goal is to provide you with high-quality, robust, and easily maintainable unit tests, accompanied by clear, actionable documentation.

Your Deliverable Includes:

  • Refined Unit Test Code: The generated unit tests, meticulously reviewed, potentially refactored, and optimized for correctness, completeness, and adherence to best practices.
  • Comprehensive Documentation: Detailed explanations of the test suite, individual test cases, setup/teardown instructions, dependencies, and rationale, integrated either inline or as separate files for clarity.
  • Review Summary: A concise overview of the key findings, improvements made, and any specific considerations identified during the review process.

2. Workflow Context: Unit Test Generator

The "Unit Test Generator" workflow is designed to automate and streamline the creation of unit tests for your codebase. It comprises three key steps:

  1. Input & Analysis: Receiving your code or specifications and analyzing them to understand functionality, dependencies, and potential test scenarios.
  2. Generation (gemini): Utilizing the advanced capabilities of the Gemini AI model to generate an initial set of unit tests based on the analysis.
  3. Review & Documentation (review_and_document): The current step, where the AI-generated tests undergo human expert review, refinement, and comprehensive documentation to ensure quality, usability, and maintainability.

3. Step 3: Review and Documentation (gemini → review_and_document)

This critical final step transforms raw AI-generated tests into production-ready assets.

3.1. Purpose of this Step

The "review_and_document" step serves multiple crucial purposes:

  • Quality Assurance: Validate the correctness and effectiveness of the AI-generated tests.
  • Best Practice Adherence: Ensure tests follow industry standards, coding conventions, and framework-specific best practices (e.g., Arrange-Act-Assert pattern, proper mocking/stubbing).
  • Completeness & Coverage: Verify that critical paths, edge cases, and error conditions are adequately covered.
  • Readability & Maintainability: Optimize tests for human understanding, making them easier to debug, modify, and extend in the future.
  • Actionable Insights: Provide context and instructions for integrating and utilizing the tests effectively.

3.2. Review Process Details

The review process is a multi-faceted approach combining automated checks and human expertise.

3.2.1. Review Criteria

Each generated unit test and the overall test suite are evaluated against the following criteria:

  • Correctness: Do the tests accurately reflect the expected behavior of the code under test? Do they pass when expected and fail when expected?
  • Completeness: Is there adequate test coverage for all public methods, critical logic paths, and identified edge cases? Are common failure scenarios considered?
  • Clarity & Readability: Are test names descriptive? Is the test logic easy to understand? Does it follow a clear structure (e.g., AAA - Arrange, Act, Assert)?
  • Maintainability: Are tests independent? Do they avoid unnecessary dependencies? Are they concise and focused on a single responsibility?
  • Performance: Are tests reasonably fast? Do they avoid unnecessary I/O or complex operations that could slow down the test suite?
  • Best Practices: Adherence to specific testing framework guidelines (e.g., JUnit, Pytest, NUnit, Jest), proper use of assertions, mocking, and dependency injection.
  • Error Handling: Are tests included for expected error conditions and invalid inputs?
  • Code Style & Standards: Conformity to the project's existing coding standards and style guides.

3.2.2. Reviewer Persona

The review is typically performed by a Human Software Quality Engineer or Senior Developer with expertise in the target programming language, testing frameworks, and the specific domain of your application. This ensures a nuanced understanding that goes beyond what an AI can currently achieve in terms of contextual validation.

3.2.3. Tools & Techniques

  • Static Analysis Tools: Linters and code quality tools (e.g., SonarQube, ESLint, Pylint) are used to identify style violations, potential bugs, and code smells in the generated tests.
  • Code Coverage Tools: Tools (e.g., JaCoCo, Istanbul, Coverage.py) are used to measure the percentage of code exercised by the tests, helping identify gaps.
  • Manual Code Walkthroughs: Human reviewers meticulously examine each test case, tracing its logic against the expected behavior of the system under test.
  • Execution & Validation: The generated tests are executed against the actual code, and results are verified to ensure they pass correctly and demonstrate the intended behavior.

3.3. Documentation Process Details

Effective documentation ensures that the unit tests remain valuable and understandable throughout the software lifecycle.

3.3.1. Documentation Scope & Content

The documentation covers the following aspects:

  • Overall Test Suite Description:

* Purpose of the test suite.

* Scope of the module/component being tested.

* Dependencies (external libraries, databases, services).

* Instructions on how to run the tests.

  • Individual Test Case Documentation:

* Test Name: Clear, descriptive name explaining what the test verifies.

* Description/Rationale: A brief explanation of the specific scenario or behavior being tested.

* Preconditions/Setup: Any required setup (e.g., mocking objects, database state, specific input data).

* Expected Behavior: What the system should do or return under the given conditions.

* Assertions: Explanation of the key assertions being made.

* Dependencies: Any specific external dependencies or mocks used within the test.

* Edge Cases/Negative Scenarios: Specific notes if the test covers an edge case or an invalid input scenario.

3.3.2. Documentation Format

Documentation is provided in a format that best suits your project's existing practices:

  • Inline Comments: Detailed comments directly within the test code, explaining complex logic, setup, or specific assertions.
  • Separate README.md: A dedicated Markdown file within the test directory or repository, providing an overview of the test suite, setup instructions, and general guidelines.
  • Wiki/Confluence Page (Optional): If requested, a summary or detailed guide can be provided on a project wiki.

4. Customer Deliverables

Upon completion of this step, you will receive the following:

4.1. Primary Deliverables

  1. [ModuleName]Tests.zip (or similar archive):

* Contains the reviewed and refined unit test source code files (e.g., .java, .py, .js, .cs).

* Includes any necessary configuration files for running the tests.

* A README.md file providing an overview, setup instructions, and usage guidelines for the test suite.

  2.  Review_Summary_[ModuleName].md:

* A markdown document summarizing the key findings during the review.

* Details on any significant modifications or refactorings applied to the AI-generated tests.

* Identification of any remaining areas for potential improvement or further testing (if applicable).

* Specific notes on dependencies or integration steps.

4.2. Value Added by This Step

  • Increased Confidence: You receive tests that have been validated by human expertise, reducing the risk of false positives or negatives.
  • Reduced Integration Effort: Clear documentation and well-structured tests make it easier for your team to integrate, run, and maintain them.
  • Enhanced Code Quality: The tests act as a strong safety net, helping to catch regressions and ensuring the stability of your application.
  • Knowledge Transfer: The documentation serves as a valuable resource for understanding the expected behavior of your code and how it's being tested.
  • Future-Proofing: Well-documented and maintainable tests reduce the cost of future development and refactoring.

5. Recommendations and Next Steps

To maximize the value of these deliverables, we recommend the following:

  1. Integrate: Incorporate the provided unit test files into your project's version control system.
  2. Execute: Run the tests in your local development environment and CI/CD pipeline to confirm their functionality.
  3. Review Internally: Have your development team review the tests and documentation to familiarize themselves with the coverage and approach.
  4. Feedback: Provide any feedback on the tests or documentation to PantheraHive. We are committed to continuous improvement.
  5. Maintain: As your codebase evolves, ensure these unit tests are updated accordingly to reflect new features or changes in behavior.

6. Support & Feedback

Your satisfaction is our priority. Should you have any questions regarding the generated unit tests, the documentation, or the review process, please do not hesitate to reach out to your dedicated PantheraHive support contact. We are here to assist you with integration, provide clarifications, and address any further requirements.

## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}