Unit Test Generator
Run ID: 69cc76df3e7fb09ff16a21b52026-04-01Development

Deliverable: Unit Test Generation

This document outlines the detailed process and provides an illustrative example of generating unit tests for a given code artifact. This step (gemini -> generate_code) focuses on leveraging AI capabilities to produce comprehensive, well-structured, and production-ready unit tests.


1. Introduction

As part of the "Unit Test Generator" workflow, this step is crucial for ensuring code quality, reliability, and maintainability. By automatically generating unit tests, we aim to significantly reduce manual effort, increase test coverage, and catch potential bugs early in the development cycle. The output provided below demonstrates the standard and quality of unit tests generated by this system.

2. Purpose of this Step

The primary purpose of this step is to:

  • Demonstrate the standard, structure, and quality of AI-generated unit tests.
  • Reduce the manual effort required to achieve meaningful test coverage.
  • Catch potential bugs early by covering happy paths, edge cases, and error conditions.

3. Understanding the Unit Test Generation Process

The AI-driven unit test generation process typically involves the following conceptual steps:

  1. Code Analysis: The AI first analyzes the provided source code to understand its structure, function signatures, input parameters, return types, internal logic, and dependencies.
  2. Scenario Identification: Based on the analysis, the AI identifies potential test scenarios. This includes:

* Happy Path: Typical, expected inputs leading to correct outputs.

* Edge Cases: Boundary conditions (e.g., zero, maximum values, empty inputs, single-element lists).

* Negative Cases: Invalid inputs (e.g., incorrect types, out-of-range values) designed to trigger errors or specific error handling.

* Error Handling: Testing how the code responds to expected exceptions or error conditions.

  3. Test Case Generation: For each identified scenario, the AI constructs specific test inputs and predicts the expected outputs or behaviors.
  4. Code Generation: The AI then translates these test cases into executable unit test code using a chosen testing framework (e.g., pytest for Python, JUnit for Java, NUnit for C#).
  5. Documentation & Comments: Comprehensive comments are added to explain the purpose of each test, the specific scenario it covers, and the expected outcome.
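The scenario-identification and test-case-construction steps (2 and 3) can be sketched as follows. This is a conceptual illustration only; the function name and the numeric-range heuristic are hypothetical, not part of the actual generator.

```python
# Conceptual sketch of mapping identified scenarios to concrete test
# inputs for a parameter with a valid numeric range. Illustrative only.
def identify_scenarios(param_range):
    low, high = param_range
    return {
        "happy_path": (low + high) // 2,  # a typical, expected value
        "edge_low": low,                  # boundary condition
        "edge_high": high,                # boundary condition
        "negative": low - 1,              # out-of-range input
    }

# identify_scenarios((0, 100)) yields inputs for four scenario families:
# {"happy_path": 50, "edge_low": 0, "edge_high": 100, "negative": -1}
```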

4. Critical Prerequisite: Target Code for Testing

Please Note: To generate actual, executable unit tests, the target code (the function, class, or module that needs to be tested) must be provided as input.

In this instance, the specific code to be tested was not supplied. Therefore, for the purpose of this deliverable, we will demonstrate the unit test generation capability by using a hypothetical, common utility function. This example will illustrate the quality, structure, and detail you can expect from the generated unit tests.

To proceed with generating tests for your specific code, please provide the source code in the next interaction.

5. Illustrative Example: Generating Unit Tests for a Hypothetical Function (Python)

Let's assume we have a simple Python function that calculates the discounted price of an item. We will first present the hypothetical target code, followed by the AI-generated unit tests for it.

5.1. Hypothetical Target Code (Python)

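The original code listing did not survive export, so the following is an illustrative reconstruction of the hypothetical calculate_discounted_price function, consistent with the behaviors described in Section 6 (validation errors for invalid inputs, edge cases at zero and full discount, rounding for floating-point accuracy). The exact signature and error messages are assumptions.

```python
# src/pricing_calculator.py -- hypothetical target code (reconstruction)
def calculate_discounted_price(price: float, discount_percent: float) -> float:
    """Return the price after applying a percentage discount, rounded to 2 decimals."""
    if price < 0:
        raise ValueError("price cannot be negative")
    if not 0 <= discount_percent <= 100:
        raise ValueError("discount_percent must be between 0 and 100")
    return round(price * (1 - discount_percent / 100), 2)
```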
5.2. Generated Unit Tests (Python - using `pytest`)

Below are the comprehensive unit tests generated for the `calculate_discounted_price` function. These tests cover various scenarios, including valid inputs, edge cases, and error conditions, using the `pytest` framework for clarity and conciseness.

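The generated test listing also did not survive export; the following is a representative sketch of pytest tests for the hypothetical function from Section 5.1. The target function is inlined here so the example is self-contained; in a real project it would be imported from src/pricing_calculator.py. The specific values and messages are assumptions.

```python
# tests/test_pricing_calculator.py -- illustrative generated tests
import pytest


def calculate_discounted_price(price, discount_percent):
    # Hypothetical target (Section 5.1), inlined for self-containment;
    # normally: from src.pricing_calculator import calculate_discounted_price
    if price < 0:
        raise ValueError("price cannot be negative")
    if not 0 <= discount_percent <= 100:
        raise ValueError("discount_percent must be between 0 and 100")
    return round(price * (1 - discount_percent / 100), 2)


@pytest.mark.parametrize("price, discount, expected", [
    (100.0, 20.0, 80.0),   # happy path: typical discount
    (100.0, 0.0, 100.0),   # edge case: no discount
    (100.0, 100.0, 0.0),   # edge case: full discount
    (0.0, 50.0, 0.0),      # edge case: zero price
    (19.99, 15.0, 16.99),  # floating-point rounding to 2 decimals
])
def test_valid_inputs(price, discount, expected):
    assert calculate_discounted_price(price, discount) == expected


def test_negative_price_raises():
    with pytest.raises(ValueError, match="price cannot be negative"):
        calculate_discounted_price(-1.0, 10.0)


def test_discount_out_of_range_raises():
    with pytest.raises(ValueError, match="between 0 and 100"):
        calculate_discounted_price(100.0, 150.0)
```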

As part of the PantheraHive workflow "Unit Test Generator," this document outlines a comprehensive and detailed study plan. This plan is designed to provide a deep understanding of the principles, technologies, and practical considerations involved in developing and utilizing an AI-powered Unit Test Generator.

The "Unit Test Generator" aims to automate the creation of unit tests for software components, leveraging advanced AI capabilities to understand code context, identify testable scenarios, and generate robust test cases. This study plan will guide you through the necessary foundational knowledge in unit testing, AI/ML concepts relevant to code generation, and the architectural considerations for building such a system.


Unit Test Generator: Detailed Study Plan

This study plan is structured over six weeks, covering foundational concepts to advanced architectural design, ensuring a holistic understanding of an AI-powered Unit Test Generator.

Overall Goal

To acquire the knowledge and skills necessary to understand, design, and potentially implement an AI-powered system capable of generating effective unit tests for software components.

Target Audience

Software Engineers, AI/ML Engineers, QA Professionals, or anyone interested in the intersection of AI and software quality assurance.


Weekly Schedule & Learning Objectives


Week 1: Fundamentals of Unit Testing & Software Quality

  • Learning Objectives:

* Understand the core principles of unit testing, its purpose, and benefits in the software development lifecycle.

* Differentiate between unit, integration, and end-to-end tests.

* Identify characteristics of good unit tests (e.g., fast, isolated, repeatable, self-validating, timely).

* Learn common unit testing patterns (Arrange-Act-Assert - AAA).

* Become proficient in writing basic unit tests for simple functions/methods in a chosen programming language (e.g., Python unittest/pytest, Java JUnit, JavaScript Jest).

  • Recommended Resources:

* Books:

* "The Art of Unit Testing" by Roy Osherove.

* "Clean Code" by Robert C. Martin (Chapter on Unit Tests).

* Online Courses/Documentation:

* Official documentation for your chosen language's unit testing framework (e.g., Python pytest docs, JUnit 5 docs).

* Udemy/Coursera courses on "Software Testing Fundamentals" or "Unit Testing Best Practices."

* Articles/Blogs:

* "Why Unit Tests Are Important" by Martin Fowler.

  • Milestones:

* Successfully set up a unit testing environment for a chosen language.

* Write at least 5 unit tests for simple, independent functions (e.g., calculator functions, string manipulation).

* Demonstrate understanding of the AAA pattern in written tests.

  • Assessment Strategies:

* Code Review: Submit a set of basic unit tests for peer or instructor review, focusing on adherence to AAA, clarity, and correctness.

* Quiz: Short quiz on unit testing terminology and principles.
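The Arrange-Act-Assert pattern from this week's objectives can be sketched as a minimal pytest-style test; the add function is a trivial stand-in for a real unit under test.

```python
# Minimal illustration of the Arrange-Act-Assert (AAA) pattern.
def add(a, b):
    return a + b


def test_add_two_positive_numbers():
    # Arrange: prepare the inputs
    a, b = 2, 3
    # Act: invoke the unit under test
    result = add(a, b)
    # Assert: verify the outcome
    assert result == 5
```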


Week 2: Advanced Unit Testing Techniques & Test Frameworks

  • Learning Objectives:

* Master techniques for handling dependencies: mocking, stubbing, and dependency injection.

* Understand Test-Driven Development (TDD) principles and practice the TDD cycle (Red-Green-Refactor).

* Explore advanced features of a specific unit testing framework (e.g., parameterized tests, test fixtures, test suites).

* Learn about code coverage metrics and tools to measure test effectiveness.

* Understand the importance of testing edge cases, error conditions, and boundary values.

  • Recommended Resources:

* Books:

* "Test-Driven Development by Example" by Kent Beck.

* "Effective Java" by Joshua Bloch (Chapter on Testing).

* Online Courses/Documentation:

* Documentation for mocking libraries (e.g., Mockito for Java, unittest.mock/MagicMock for Python, Jest mocks for JavaScript).

* Tutorials on TDD for your chosen language.

* Code coverage tools documentation (e.g., JaCoCo, Coverage.py, Istanbul).

* Articles/Blogs:

* "Mocks Aren't Stubs" by Martin Fowler.

  • Milestones:

* Implement unit tests for a function with external dependencies, using mocking/stubbing techniques.

* Apply the TDD cycle to develop a small feature or refactor existing code.

* Generate and interpret code coverage reports for a project.

  • Assessment Strategies:

* Practical Exercise: Develop a small application feature using TDD, demonstrating the full Red-Green-Refactor cycle.

* Presentation: Explain the difference between mocks and stubs, and when to use each.
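As a minimal sketch of the mocking techniques covered this week, the following uses Python's standard unittest.mock; the notify_user function and its email-client interface are hypothetical, chosen only to show dependency injection plus behavior verification.

```python
# Mocking an injected dependency: the mock is verified for *behavior*
# (how it was called), which is what distinguishes a mock from a stub.
from unittest.mock import Mock


def notify_user(user_id, email_client):
    """Unit under test: depends on an injected email client."""
    email_client.send(to=user_id, body="Hello!")
    return True


def test_notify_user_sends_exactly_one_email():
    # Arrange: a Mock stands in for the real email service
    fake_client = Mock()
    # Act
    result = notify_user("user-42", fake_client)
    # Assert: verify the interaction, not just the return value
    assert result is True
    fake_client.send.assert_called_once_with(to="user-42", body="Hello!")
```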


Week 3: Introduction to AI for Code & Natural Language Processing (NLP)

  • Learning Objectives:

* Understand the fundamental concepts of how AI (especially Large Language Models - LLMs) can interact with and generate code.

* Grasp basic Natural Language Processing (NLP) concepts relevant to code (tokenization, parsing, Abstract Syntax Trees - ASTs).

* Learn how code can be represented for AI models (e.g., as sequences of tokens, ASTs, graph representations).

* Explore the capabilities and limitations of LLMs in understanding code structure and logic.

* Familiarize with common AI/ML libraries and platforms for code analysis (e.g., Tree-sitter, ANTLR).

  • Recommended Resources:

* Books:

* "Deep Learning for Coders with fastai & PyTorch" by Jeremy Howard and Sylvain Gugger (relevant chapters).

* "Speech and Language Processing" by Jurafsky and Martin (chapters on tokenization, parsing).

* Online Courses/Documentation:

* Introduction to NLP courses (e.g., Stanford's CS224N, Coursera's NLP Specialization).

* Documentation for AST parsing libraries (e.g., Python ast module, Tree-sitter).

* Articles on "Code as Data" or "AI for Code Generation."

  • Milestones:

* Successfully parse a simple code snippet into an AST using a chosen library and visualize/print its structure.

* Articulate how an LLM might "understand" a piece of code.

* Identify potential challenges in using AI for code generation.

  • Assessment Strategies:

* Mini-Project: Parse a given code file and extract specific information (e.g., function names, parameters) using AST.

* Discussion: Participate in a discussion on the pros and cons of using LLMs for code understanding and generation.
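The AST mini-project above can be sketched with Python's standard ast module; extract_functions is an illustrative helper, not a prescribed solution.

```python
# Parse source code into an AST and extract function names and parameters.
import ast


def extract_functions(source: str) -> list:
    """Return (name, parameter_names) pairs for every function definition."""
    tree = ast.parse(source)
    return [
        (node.name, [arg.arg for arg in node.args.args])
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
    ]


code = "def greet(name, greeting='Hi'):\n    return greeting + ', ' + name"
# extract_functions(code) -> [('greet', ['name', 'greeting'])]
```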


Week 4: Generative AI for Code & Prompt Engineering

  • Learning Objectives:

* Learn how to interact with Generative AI models (e.g., Google Gemini, OpenAI GPT) via APIs for code generation.

* Master the principles of prompt engineering for generating high-quality, relevant, and correct unit tests.

* Understand how to provide context (code snippets, function signatures, requirements) to an LLM for better test generation.

* Explore techniques for refining prompts to handle different testing scenarios (e.g., positive cases, negative cases, edge cases).

* Familiarize with tools and best practices for evaluating AI-generated code and tests.

  • Recommended Resources:

* Documentation:

* Google Gemini API documentation.

* OpenAI API documentation (specifically for code generation models).

* Online Courses/Tutorials:

* Courses on "Prompt Engineering for LLMs."

* Tutorials specifically focused on using LLMs for code generation.

* Articles/Blogs:

* "Best Practices for Prompt Engineering with LLMs."

* Case studies of AI-powered code assistants (e.g., GitHub Copilot internals).

  • Milestones:

* Successfully use an LLM API to generate a simple function and its corresponding unit tests based on a prompt.

* Experiment with different prompt structures (e.g., few-shot prompting) to improve test generation quality.

* Critically evaluate the correctness and usefulness of AI-generated tests.

  • Assessment Strategies:

* Practical Exercise: Given a function description, generate unit tests using an LLM and then manually refine/evaluate them.

* Report: Document the prompt engineering process, including iterations and observations on how prompt changes affect output quality.
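Prompt construction for test generation might be sketched as below. No real API is called here, and the prompt wording is only one plausible starting point for iteration, not a recommended template.

```python
# Hypothetical prompt builder for LLM-based unit test generation.
# The provider call itself (Gemini, GPT, etc.) is deliberately left out.
def build_test_generation_prompt(source_code: str, framework: str = "pytest") -> str:
    return (
        "You are an expert test engineer.\n"
        f"Write {framework} unit tests for the code below.\n"
        "Cover happy paths, edge cases, and error handling.\n"
        "Use descriptive test names and the Arrange-Act-Assert pattern.\n\n"
        f"```python\n{source_code}\n```"
    )


prompt = build_test_generation_prompt("def add(a, b):\n    return a + b")
# The prompt string would then be sent to the chosen model's API and the
# response parsed into test files for review.
```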


Week 5: Architecture & Design of a Unit Test Generator

  • Learning Objectives:

* Design the high-level architecture of an AI-powered Unit Test Generator system.

* Identify key components: input parser, code analyzer, test scenario generator, test code synthesizer, test executor, feedback loop.

* Consider different approaches for integrating LLMs into the architecture (e.g., direct API calls, fine-tuned models, local models).

* Understand data flow and interaction between components.

* Evaluate potential challenges and trade-offs in architectural choices (e.g., performance, scalability, cost, accuracy).

* Explore how to handle different programming languages and testing frameworks.

  • Recommended Resources:

* Books:

* "Designing Data-Intensive Applications" by Martin Kleppmann (relevant chapters on system design).

* "Software Architecture in Practice" by Len Bass et al.

* Articles/Blogs:

* Case studies of existing AI code generation tools' architectures.

* Articles on microservices architecture and API design.

  • Milestones:

* Create a detailed architectural diagram of a Unit Test Generator, including all major components and their interactions.

* Write a brief design document explaining the rationale behind architectural choices.

* Propose strategies for handling multilingual code inputs.

  • Assessment Strategies:

* Architectural Review: Present the proposed architecture to peers or an instructor, defending design choices and addressing potential issues.

* Design Document Submission: Submit the design document for formal review.


Week 6: Implementation Considerations & Evaluation

  • Learning Objectives:

* Explore practical implementation challenges: setting up the development environment, managing dependencies, error handling.

* Understand strategies for parsing complex codebases and extracting relevant context for test generation.

* Learn methods for executing generated tests automatically and capturing results.

* Develop strategies for evaluating the quality, correctness, and completeness of generated unit tests (e.g., mutation testing, coverage analysis, human review).

* Consider integration with existing CI/CD pipelines.

* Discuss ethical considerations and biases in AI-generated code.

  • Recommended Resources:

* Documentation:

* Tools for mutation testing (e.g., PIT (pitest) for Java, MutPy for Python, Stryker for JavaScript).

* CI/CD platform documentation (e.g., Jenkins, GitHub Actions, GitLab CI).

* Articles/Blogs:

* "Testing AI-Generated Code" best practices.

* "The Role of Mutation Testing in Software Quality."

* Discussions on "Ethical AI in Software Development."

  • Milestones:

* Outline a plan for building a Minimal Viable Product (MVP) of a Unit Test Generator.

* Define a set of key performance indicators (KPIs) and metrics for evaluating the generated tests.

* Propose a mechanism for incorporating human feedback into the generator's improvement loop.

  • Assessment Strategies:

* MVP Proposal: Present a detailed proposal for an MVP, including scope, technology stack, and a phased implementation plan.

* Evaluation Plan: Develop a comprehensive plan for assessing the effectiveness of the generated tests, including tools and metrics.


General Recommended Resources

  • Online Learning Platforms: Coursera, edX, Udemy, Pluralsight (for AI/ML, Software Testing, and specific language courses).
  • AI/ML Frameworks Documentation: TensorFlow, PyTorch, Hugging Face Transformers.
  • Developer Blogs: Martin Fowler's blog, Google AI Blog, OpenAI Blog, Towards Data Science.
  • Community Forums: Stack Overflow, Reddit communities for AI, software engineering, and specific programming languages.

Assessment Strategy Summary

Throughout this study plan, assessment will be continuous and multi-faceted, including:

  • Practical Exercises & Code Submissions: Directly demonstrating coding and testing skills.
  • Design Documents & Architectural Reviews: Showcasing system design and critical thinking.
  • Presentations & Discussions: Evaluating communication skills and conceptual understanding.

6. Key Considerations for Comprehensive Unit Testing

The generated unit tests adhere to best practices and consider the following:

  • Readability and Maintainability: Tests are clearly structured, well-commented, and use descriptive names, making them easy to understand and maintain.
  • Isolation: Each test focuses on a single aspect or behavior of the function, ensuring that failures are easy to pinpoint.
  • Determinism: Tests are designed to produce the same result every time they are run, regardless of external factors.
  • Parameterized Testing: Utilizing pytest.mark.parametrize to efficiently test multiple input-output combinations with a single test function, reducing boilerplate.
  • Error Handling Verification: Explicitly testing that the function raises the correct exceptions with the expected messages when provided with invalid inputs.
  • Edge Case Coverage: Specific tests for boundary conditions (e.g., zero, maximum values, minimum values for discount).
  • Floating-Point Accuracy: Addressing potential issues with floating-point arithmetic by asserting rounded values where appropriate.
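The floating-point accuracy point above can be illustrated with pytest.approx, which compares within a tolerance instead of demanding exact equality; apply_discount is a hypothetical helper.

```python
# Comparing floating-point results with a tolerance via pytest.approx.
import pytest


def apply_discount(price, percent):
    # hypothetical helper, for illustration only (no rounding applied)
    return price * (1 - percent / 100)


def test_discount_with_float_tolerance():
    # 19.99 * 0.85 = 16.9915 mathematically; binary floating point may
    # not represent it exactly, so an exact == comparison is fragile.
    assert apply_discount(19.99, 15) == pytest.approx(16.9915)
```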

7. How to Proceed

To utilize these generated unit tests:

  1. Save the Target Code: Ensure your src/pricing_calculator.py (or your actual target code) is saved in the specified directory structure.
  2. Save the Test Code: Save the generated unit test code as tests/test_pricing_calculator.py (or test_<your_module_name>.py) in a tests directory at the root of your project.
  3. Install pytest: If it is not already installed, run pip install pytest.
  4. Run the Tests: Execute pytest from the project root to run the generated suite.

Unit Test Generator: Review and Documentation Report (Step 3 of 3)

This document provides a comprehensive review and detailed documentation for the unit tests generated for your project. This final step ensures the delivered tests are high-quality, adhere to best practices, and are ready for immediate integration into your development workflow.


1. Overall Goal

The primary goal of this "review_and_document" step is to deliver thoroughly reviewed, well-structured, and comprehensively documented unit tests. Our aim is to provide you with a robust testing suite that enhances code quality, accelerates development cycles, and serves as a clear blueprint for your application's expected behavior.


2. Summary of Deliverables

You are receiving a complete package designed for immediate utility and long-term maintainability:

  • Generated Unit Test Codebase: Production-ready unit tests written in the specified language (e.g., Python, Java, JavaScript, C#) and framework (e.g., Pytest, JUnit, Jest, NUnit). These tests are structured to mirror your application's architecture.
  • Detailed Test Documentation: For each test file and individual test case, comprehensive documentation is provided. This includes descriptions, expected behaviors, dependencies, and setup instructions.
  • Review Report & Recommendations: A summary of our quality assurance findings, highlighting adherence to best practices, coverage areas, and actionable recommendations for further enhancements or manual verification.
  • Integration Instructions: Clear guidance on how to incorporate these tests into your existing development and CI/CD pipelines.

3. Key Features and Benefits of Delivered Tests

The unit tests and accompanying documentation are designed to provide significant value:

  • Enhanced Code Quality: Tests are designed to catch regressions early, ensuring new features don't break existing functionality.
  • Accelerated Development: With a reliable test suite, developers can refactor and add new features with greater confidence and speed.
  • Improved Code Understanding: The tests act as living documentation, clearly demonstrating how individual units of code are expected to behave.
  • Maintainability & Scalability: Well-structured and documented tests are easier to understand, maintain, and extend as your project grows.
  • Reduced Manual Testing Effort: Automating unit-level verification significantly reduces the need for tedious manual checks.
  • Actionable Insights: The review report provides specific guidance, allowing you to optimize your testing strategy and codebase further.

4. Review Findings and Quality Assurance Report

Our review process focused on ensuring the generated tests meet professional standards and integrate seamlessly with your project.

4.1. Core Quality Metrics

  • Adherence to Framework Standards: All tests strictly follow the conventions and best practices of the chosen testing framework (e.g., pytest fixtures, JUnit annotations, Jest matchers).
  • Test Case Granularity: Each test method focuses on verifying a single, distinct unit of behavior, ensuring clear isolation and easier debugging.
  • Descriptive Naming Conventions: Test file and method names are highly descriptive, clearly indicating the functionality being tested (e.g., test_calculate_total_price_with_discount, shouldReturnTrueForValidEmail).
  • Clear Assertions: Assertions are precise and explicit, making it immediately clear what condition is being verified and what constitutes a pass or fail.
  • Setup and Teardown Logic: Appropriate use of setup and teardown methods (or equivalent fixtures) ensures tests run in a clean, isolated environment, preventing side effects.
  • Mocking and Dependency Management: External dependencies (e.g., databases, APIs, file systems) are effectively mocked to ensure unit tests remain fast, isolated, and deterministic.
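Setup and teardown via a pytest fixture, as described above, might look like this minimal sketch; the record data is illustrative.

```python
# A pytest fixture providing per-test setup (and, after yield, teardown),
# so each test runs against clean, isolated data.
import pytest


@pytest.fixture
def sample_records():
    # Setup: build a fresh data set for each test
    records = [{"id": 1, "active": True}, {"id": 2, "active": False}]
    yield records
    # Teardown: release resources (nothing real to clean up in this sketch)
    records.clear()


def test_active_records_are_filtered(sample_records):
    active = [r for r in sample_records if r["active"]]
    assert [r["id"] for r in active] == [1]
```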

4.2. Coverage Overview

The generated tests aim to cover the critical paths and functionalities of the provided source code.

  • Function/Method Coverage: Aims for high coverage of public methods and functions within the target units.
  • Branch Coverage: Attempts to cover different conditional branches (if/else, switch statements) where feasible and logical at the unit level.
  • Edge Case Consideration: Includes tests for common edge cases such as empty inputs, zero values, boundary conditions, and null inputs where applicable.
  • Error Handling Paths: Tests are included to verify that error handling mechanisms within the unit behave as expected (e.g., specific exceptions are raised).
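Verifying an error-handling path typically uses pytest.raises; the parse_user_id validator below is hypothetical, chosen only to show the pattern.

```python
# Asserting that a specific exception (with a matching message) is raised.
import pytest


def parse_user_id(raw: str) -> int:
    if not raw:
        raise ValueError("user id must not be empty")
    return int(raw)


def test_empty_input_raises_value_error():
    with pytest.raises(ValueError, match="must not be empty"):
        parse_user_id("")
```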

4.3. Recommendations for Further Enhancement (Manual Review)

While the generated tests are comprehensive, we recommend the following areas for your team's manual review and potential enhancement:

  • Domain-Specific Negative Testing: For critical business logic, consider adding more highly specific negative test cases that might require deeper domain knowledge.
  • Complex Integration Scenarios: While these are unit tests, reviewing how they might interact with integration tests (if any) can ensure a holistic testing strategy.
  • Performance Benchmarking: If performance is a critical factor for specific units, consider adding micro-benchmarks (beyond the scope of standard unit testing).
  • Data-Driven Test Expansion: For functions with many input variations, explore expanding tests using data-driven approaches if not already fully implemented.
  • Review of Mocking Strategies: Ensure that the level of mocking accurately reflects the desired isolation without over-mocking critical internal logic.

5. Detailed Test Documentation Guidelines

Each generated test file and individual test case comes with comprehensive documentation to ensure clarity and maintainability.

5.1. Documentation Structure

  • File-Level Documentation:

* Purpose: A high-level description of the module or class being tested.

* Target Component: Specifies the exact source file/class/module under test.

* Dependencies: Lists any external services or components that are mocked or required.

* Setup/Teardown Notes: General instructions or considerations for running tests in this file.

  • Test Case-Level Documentation (for each test function/method):

* Test Name: A clear, descriptive name (e.g., test_successful_user_registration).

* Description: Explains what the test verifies and why it is important.

* Preconditions/Setup: Details any specific data setup or state required before the test runs.

* Input: The specific input data or parameters provided to the unit under test.

* Expected Behavior/Output: The precise outcome or return value expected from the unit.

* Assertions: Explains what assertions are made to confirm the expected behavior.

* Mocks Used: Lists any specific mocks applied within this test case.

5.2. Example Documentation Snippet (Python pytest example)


# test_user_service.py

"""
File-Level Documentation:
-------------------------
Purpose: Unit tests for the `UserService` class, focusing on user management operations.
Target Component: `src/services/user_service.py`
Dependencies: Mocks `UserRepository` for database interactions, `EmailService` for sending emails.
Setup/Teardown Notes: Uses pytest fixtures for mocking dependencies.
"""

# ... (imports and fixtures) ...

def test_register_new_user_success(mock_user_repository, mock_email_service):
    """
    Test Case-Level Documentation:
    ------------------------------
    Test Name: test_register_new_user_success

    Description: Verifies that a new user can be successfully registered,
                 including saving to the repository and sending a welcome email.

    Preconditions/Setup:
        - `UserRepository.get_by_email` is mocked to return None (user does not exist).
        - `UserRepository.create` is mocked to return a new User object.
        - `EmailService.send_welcome_email` is mocked.

    Input:
        - email: "test@example.com"
        - password: "securepassword123"

    Expected Behavior/Output:
        - Returns a `User` object with the given email.
        - `UserRepository.create` is called exactly once with the correct user data.
        - `EmailService.send_welcome_email` is called exactly once for the new user.

    Assertions:
        - `user.email == "test@example.com"`
        - `mock_user_repository.create.assert_called_once()`
        - `mock_email_service.send_welcome_email.assert_called_once()`

    Mocks Used:
        - `mock_user_repository` (pytest fixture)
        - `mock_email_service` (pytest fixture)
    """
    user_service = UserService(mock_user_repository, mock_email_service)
    user = user_service.register_user("test@example.com", "securepassword123")

    assert user.email == "test@example.com"
    mock_user_repository.create.assert_called_once()
    mock_email_service.send_welcome_email.assert_called_once_with(user)

# ... (other test cases) ...

6. Actionable Next Steps for Your Team

To maximize the value of these deliverables, we recommend the following actions:

  1. Review the Generated Tests: Take time to manually inspect the generated code and documentation. Familiarize your team with the testing patterns and coverage.
  2. Integrate into Version Control: Add the generated test files to your project's version control system (e.g., Git) alongside your application code.
  3. Execute the Tests: Run the tests in your local development environment to confirm they execute successfully and pass as expected.

* Example Command (Python pytest): pytest tests/

* Example Command (Java JUnit): Use your IDE's test runner or mvn test / gradle test

  4. Integrate into CI/CD Pipeline: Add a step to your Continuous Integration/Continuous Delivery (CI/CD) pipeline to automatically run these unit tests on every code push or pull request. This ensures continuous quality assurance.

  5. Extend and Maintain: Use the provided tests as a foundation. As your application evolves, extend the test suite with new tests for new features and update existing tests as functionality changes.

  6. Provide Feedback: We encourage you to provide feedback on the generated tests and documentation. Your input will help us refine and improve future test generation processes.

7. Support and Contact Information

Should you have any questions, require further clarification, or encounter any issues during the integration or execution of these tests, please do not hesitate to reach out to our support team.

Contact: support@pantherahive.com

Documentation: [Link to relevant PantheraHive documentation portal, if applicable]

We are committed to ensuring your success with these generated unit tests!


"+slugTitle(pn)+"

Built with PantheraHive BOS

"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1} "); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core'; import { provideRouter } from '@angular/router'; import { routes } from './app.routes'; export const appConfig: ApplicationConfig = { providers: [ provideZoneChangeDetection({ eventCoalescing: true }), provideRouter(routes) ] }; "); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router'; export const routes: Routes = []; "); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install ng serve # or: npm start ``` ## Build ```bash ng build ``` Open in VS Code with Angular Language Service extension. 
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}