Unit Test Generator
Run ID: 69cc93083e7fb09ff16a31db · 2026-04-01 · Development
PantheraHive BOS

This document details the execution of Step 2 of 3 of the "Unit Test Generator" workflow: the gemini → generate_code phase, in which comprehensive, production-quality unit test code is generated from the provided context.


Step 2: gemini → generate_code - Unit Test Generation

1. Introduction to the generate_code Step

In this phase, the system translates its understanding of the target code (or a representative example, if none is provided) into executable unit test code. The objective is to produce high-quality, production-ready tests that verify the functionality, robustness, and correctness of the component under test. The output is designed to integrate directly into your development pipeline.

2. Context and Assumptions for Test Generation

Since no specific code was provided in the initial prompt for this demonstration, we will proceed by generating unit tests for a common and illustrative example: a Calculator class. This allows us to showcase a wide range of testing scenarios, including basic functionality, edge cases, and error handling, making the output broadly applicable and easy to understand.

Target Component: A simple Calculator class with basic arithmetic operations (addition, subtraction, multiplication, division).

Testing Framework: We will utilize pytest, a popular and powerful Python testing framework known for its simplicity and extensibility, to generate the unit tests.

3. Target Code for Unit Testing

Below is the Python code for the Calculator class that our unit tests will target. This code will be saved as calculator.py.

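The original snippet did not survive this export. The class below is a plausible reconstruction, not the original code, based solely on the description above: four arithmetic operations, with division guarded against zero.

```python
# calculator.py -- reconstruction for illustration; the original code
# was lost in export, so method names and the ValueError on division
# by zero are assumptions based on the surrounding description.

class Calculator:
    """A simple calculator with basic arithmetic operations."""

    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b

    def multiply(self, a, b):
        return a * b

    def divide(self, a, b):
        if b == 0:
            raise ValueError("Cannot divide by zero.")
        return a / b
```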
4. Generated Unit Test Code (using Pytest)

Below is the generated unit test code for the `Calculator` class. This code should be saved in a file named `test_calculator.py` in the same directory as `calculator.py` (or within a `tests/` subdirectory, following `pytest` conventions).

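The generated suite itself was replaced by a live-preview widget in this export. The sketch below shows what a pytest suite of the kind described would look like; the Calculator interface (add, subtract, multiply, divide) is assumed from the description above, and a minimal stand-in class is inlined so the sketch runs on its own.

```python
# test_calculator.py -- a sketch of the generated suite (the actual
# generated code was lost in export). In a real project the class
# below lives in calculator.py and is imported with
# `from calculator import Calculator`; it is inlined here so the
# sketch is self-contained.
import pytest


class Calculator:
    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b

    def multiply(self, a, b):
        return a * b

    def divide(self, a, b):
        if b == 0:
            raise ValueError("Cannot divide by zero.")
        return a / b


@pytest.fixture
def calculator():
    """Provide a fresh Calculator to each test, keeping tests isolated."""
    return Calculator()


def test_add(calculator):
    assert calculator.add(2, 3) == 5
    assert calculator.add(-1, 1) == 0


def test_subtract(calculator):
    assert calculator.subtract(10, 4) == 6


def test_multiply_by_zero(calculator):
    assert calculator.multiply(5, 0) == 0


def test_divide(calculator):
    assert calculator.divide(10, 4) == 2.5


def test_divide_by_zero_raises(calculator):
    with pytest.raises(ValueError):
        calculator.divide(1, 0)
```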

This document outlines a comprehensive study plan for understanding and potentially developing a "Unit Test Generator." This plan is designed for individuals or teams with a foundational understanding of software development, looking to delve into advanced topics in automated testing, code analysis, and AI-driven code generation.


Study Plan: Unit Test Generator

This study plan is structured to provide a deep dive into the principles, techniques, and technologies required to build or significantly enhance a Unit Test Generator. It spans 10 weeks, balancing theoretical learning with practical application.

I. Overall Goal

To gain a comprehensive understanding of automated unit test generation, including traditional and AI-driven approaches, and to develop the skills necessary to design, implement, and evaluate a prototype Unit Test Generator.

II. Weekly Schedule

This schedule allocates approximately 15-20 hours per week, including reading, hands-on exercises, and project work.


Week 1: Foundations of Unit Testing & Code Quality

  • Focus: Core principles of unit testing, test-driven development (TDD), and understanding what makes a good unit test. Introduction to code analysis concepts.
  • Activities:

* Review unit testing best practices in a chosen language (e.g., Java with JUnit, Python with Pytest, C# with NUnit, JavaScript with Jest).

* Hands-on exercises writing unit tests for existing codebases.

* Introduction to Abstract Syntax Trees (ASTs) and basic static analysis.

Week 2: Code Analysis & Representation

  • Focus: Deeper dive into static code analysis techniques essential for understanding source code structure and behavior.
  • Activities:

* Explore AST parsing libraries (e.g., ast in Python, ANTLR, Roslyn for C#).

* Practice building simple code analysis tools (e.g., counting lines of code per function, identifying method calls).

* Understand Control Flow Graphs (CFGs) and Data Flow Analysis (DFA).
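The Week 2 exercises can be sketched with Python's standard-library ast module. This toy analyzer parses a source string and records, per function, the names of the functions it calls directly:

```python
# A minimal static-analysis sketch using the stdlib ast module: walk
# the tree, find function definitions, and list each function's
# direct calls (attribute calls like name.title() are skipped for
# simplicity, since their func node is an Attribute, not a Name).
import ast


def summarize(source: str) -> dict:
    tree = ast.parse(source)
    summary = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = [
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            ]
            summary[node.name] = calls
    return summary


code = """
def greet(name):
    return format_name(name)

def format_name(name):
    return name.title()
"""
print(summarize(code))  # {'greet': ['format_name'], 'format_name': []}
```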

Week 3: Traditional Test Case Generation - Fuzzing & Random Testing

  • Focus: Understanding basic automated test generation techniques.
  • Activities:

* Learn about fuzzing and property-based testing.

* Experiment with existing fuzzing tools (e.g., AFL++, Hypothesis for Python, QuickCheck for Haskell/variants).

* Implement a basic fuzzer for a simple function.
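A minimal version of the Week 3 fuzzer exercise, assuming a hypothetical parse_age target chosen for its realistic failure modes: the fuzzer simply throws short random strings at the target and records every input that raises.

```python
# A toy random fuzzer: generate random strings, feed them to the
# target, and collect any input that triggers an exception.
# `parse_age` is a hypothetical target for illustration.
import random


def parse_age(text: str) -> int:
    value = int(text)  # raises ValueError on non-numeric input
    if value < 0:
        raise ValueError("age cannot be negative")
    return value


def fuzz(target, trials=1000, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    alphabet = "0123456789-abc "
    failures = []
    for _ in range(trials):
        length = rng.randint(0, 6)
        candidate = "".join(rng.choice(alphabet) for _ in range(length))
        try:
            target(candidate)
        except Exception as exc:
            failures.append((candidate, type(exc).__name__))
    return failures


crashes = fuzz(parse_age)
print(len(crashes))  # most random strings are not valid ages
```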

Week 4: Traditional Test Case Generation - Symbolic Execution & Constraint Solving

  • Focus: Advanced traditional methods for path exploration and input generation.
  • Activities:

* Study the principles of symbolic execution and its application in testing.

* Explore tools like KLEE or Z3 (SMT solver).

* Attempt to model a simple program for symbolic execution or constraint solving to generate inputs.
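The core idea of Week 4 can be illustrated without a real SMT solver: to reach a guarded branch, find inputs that satisfy its path condition. The sketch below brute-forces a small integer domain where a tool such as Z3 would solve the constraints symbolically.

```python
# Constraint solving by brute force, standing in for an SMT solver.
# To cover the `return "bug"` branch, we search for inputs that
# satisfy the path condition (x > 10) and (x + y == 20).
from itertools import product


def target(x: int, y: int) -> str:
    if x > 10 and x + y == 20:
        return "bug"
    return "ok"


def solve_path_condition(domain=range(-50, 51)):
    for x, y in product(domain, repeat=2):
        if x > 10 and x + y == 20:  # the path condition, written out
            return x, y
    return None


x, y = solve_path_condition()
print(target(x, y))  # prints "bug"
```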

Week 5: Introduction to AI/ML for Code & LLMs

  • Focus: Understanding the potential of AI, particularly Large Language Models (LLMs), in code generation and analysis.
  • Activities:

* Review basic concepts of neural networks and transformer architecture.

* Explore how LLMs are trained on code (e.g., tokenization, code embeddings).

* Experiment with prompt engineering for code generation tasks using publicly available LLMs (e.g., OpenAI GPT, Gemini).

Week 6: AI-Driven Test Case Generation - Prompt Engineering & Fine-tuning

  • Focus: Applying LLMs specifically to generate test cases.
  • Activities:

* Design effective prompts for generating unit tests for given functions/methods.

* Explore techniques for improving LLM output quality (e.g., few-shot learning, chain-of-thought prompting).

* Investigate the concept of fine-tuning smaller models on specific codebases for test generation.
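A Week 6 starting point, sketched as plain string assembly; the call to an actual LLM API is provider-specific and intentionally omitted, so only the prompt construction is shown.

```python
# A prompt template for LLM-driven test generation. The template text
# is illustrative, not a recommended canonical prompt; sending it to a
# model (OpenAI, Gemini, etc.) is left to the provider's SDK.
TEMPLATE = (
    "You are an expert Python test engineer.\n"
    "Write pytest unit tests for the function below.\n"
    "Cover normal inputs, boundary values, and expected exceptions.\n"
    "Return only code, no explanations.\n\n"
    "Function under test:\n{source}\n"
)


def build_prompt(source: str) -> str:
    return TEMPLATE.format(source=source.strip())


print(build_prompt("def divide(a, b):\n    return a / b"))
```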

Week 7: Assertion Generation & Test Oracle Problem

  • Focus: Addressing the critical challenge of determining expected outputs and generating assertions.
  • Activities:

* Study techniques for inferring assertions (e.g., mutation testing, metamorphic testing, pre/post-conditions).

* Explore how LLMs can assist in generating assertions based on function documentation or common patterns.

* Experiment with tools or methods to automatically check generated tests for correctness and coverage.
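Mutation testing, mentioned above, can be illustrated in a few lines: mutate an operator in the source, re-run the suite, and check whether any test fails ("kills" the mutant). This toy sketch executes the code from a string; real tools such as mutmut or PIT automate this across many mutation operators.

```python
# A toy mutation-testing sketch: apply a relational-operator mutation
# to the source of a function, then check whether the test suite
# detects (kills) the mutant.
ORIGINAL = "def absolute(x):\n    return x if x >= 0 else -x\n"


def run_tests(source: str) -> bool:
    """Return True if the suite passes against the given implementation."""
    namespace = {}
    exec(source, namespace)
    absolute = namespace["absolute"]
    try:
        assert absolute(5) == 5
        assert absolute(-3) == 3
        assert absolute(0) == 0
        return True
    except AssertionError:
        return False


mutant = ORIGINAL.replace(">=", "<=")  # relational operator replacement
print(run_tests(ORIGINAL))  # True  -- suite passes on the original
print(run_tests(mutant))    # False -- absolute(5) now returns -5
```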

Week 8: Integration & Evaluation Metrics

  • Focus: How to integrate a test generator into a development workflow and evaluate its effectiveness.
  • Activities:

* Research different metrics for evaluating test suites (e.g., code coverage, mutation score, fault detection rate).

* Consider how a Unit Test Generator would integrate with IDEs, CI/CD pipelines, and build systems.

* Begin planning the architecture for a simple prototype Unit Test Generator.
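The metrics mentioned above reduce to simple ratios. A sketch, with the usual caveat that equivalent mutants (those indistinguishable from the original) are excluded from the mutation-score denominator:

```python
# Two common test-suite metrics as simple ratios: line coverage and
# mutation score.
def line_coverage(executed: set, executable: set) -> float:
    """Fraction of executable lines hit by the suite."""
    return len(executed & executable) / len(executable)


def mutation_score(killed: int, total_mutants: int, equivalent: int = 0) -> float:
    """Killed mutants over non-equivalent mutants."""
    return killed / (total_mutants - equivalent)


print(line_coverage({1, 2, 3, 5}, {1, 2, 3, 4, 5}))        # 0.8
print(mutation_score(killed=18, total_mutants=25, equivalent=5))  # 0.9
```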

Week 9: Prototype Development & Implementation

  • Focus: Hands-on implementation of a basic Unit Test Generator.
  • Activities:

* Choose a target language and framework.

* Implement a core component of the generator (e.g., AST parsing, basic test scaffolding, a simple test input generator).

* Integrate a chosen test generation strategy (e.g., basic fuzzing, or a simple LLM prompt integration).

Week 10: Refinement, Testing & Documentation

  • Focus: Improving the prototype, evaluating its performance, and documenting the work.
  • Activities:

* Refine the prototype based on initial testing and feedback.

* Run generated tests against target code and analyze results (coverage, failures).

* Document the design, implementation, and future improvements for the prototype.

* Prepare a presentation or report on the learning journey and prototype.


III. Learning Objectives

Upon successful completion of this study plan, participants will be able to:

  1. Fundamental Understanding: Articulate the core principles of unit testing, test-driven development, and software testing best practices.
  2. Code Analysis Proficiency: Analyze source code using Abstract Syntax Trees (ASTs), Control Flow Graphs (CFGs), and Data Flow Analysis (DFA) to extract relevant information for test generation.
  3. Traditional Test Generation: Explain and apply traditional automated test generation techniques, including fuzzing, property-based testing, symbolic execution, and constraint solving.
  4. AI/ML for Testing: Understand the role and potential of Large Language Models (LLMs) and other AI/ML techniques in generating unit tests and assertions.
  5. Prompt Engineering: Design effective prompts for LLMs to generate high-quality, relevant unit tests and assertions.
  6. Assertion Generation: Identify and apply strategies for addressing the test oracle problem, including methods for inferring or generating test assertions.
  7. System Design: Design a high-level architecture for a Unit Test Generator, considering components like code parsers, test generators, assertion generators, and integration points.
  8. Prototyping: Implement a functional prototype of a Unit Test Generator, demonstrating a chosen test generation strategy.
  9. Evaluation: Evaluate the effectiveness of generated unit tests using metrics such as code coverage, mutation score, and fault detection rate.
  10. Critical Thinking: Critically assess the strengths and limitations of various unit test generation approaches and identify suitable contexts for their application.

IV. Recommended Resources

This list includes a mix of foundational texts, online courses, research papers, and practical tools.

A. Books & Foundational Texts:

  • "The Art of Unit Testing" by Roy Osherove: Excellent for unit testing principles.
  • "Working Effectively with Legacy Code" by Michael C. Feathers: Practical advice on testing existing code.
  • "Design Patterns: Elements of Reusable Object-Oriented Software" by Gamma et al. (Gang of Four): For understanding common code structures.
  • "Software Testing and Analysis: Process, Principles and Techniques" by Mauro Pezzè and Michal Young: Comprehensive overview of testing.
  • "Engineering Software for AI and ML: A Guide to Building and Integrating Intelligent Applications" by Daniel J. Dubois: For integrating AI/ML into software.

B. Online Courses & Tutorials:

  • Coursera/edX: Courses on Software Testing, Static Analysis, Compilers, Machine Learning, Deep Learning (e.g., Andrew Ng's ML course).
  • Udemy/Pluralsight: Practical courses on specific unit testing frameworks (JUnit, Pytest, Jest, NUnit) and AST parsing.
  • Hugging Face Transformers Course: For understanding LLMs and their application.
  • Google AI for Developers / OpenAI API Documentation: Practical guides for using LLMs.

C. Academic Papers & Research:

  • Key Conferences: ICSE, FSE, ASE, ISSTA, PLDI, POPL (search for "automated test generation," "symbolic execution," "fuzzing," "LLM for code").
  • Survey Papers: Look for recent surveys on automated unit test generation, covering both traditional and AI-based methods.
  • Google Scholar / ACM Digital Library / IEEE Xplore: Essential for finding relevant research.

D. Tools & Libraries:

  • Unit Testing Frameworks: JUnit (Java), Pytest (Python), NUnit (C#), Jest (JavaScript), Go testing.
  • AST Parsers:

* Python: ast module, libcst

* Java: ANTLR, JavaParser

* C#: Roslyn API

* JavaScript: @babel/parser, esprima

  • Fuzzing/Property-Based Testing: AFL++ (successor to American Fuzzy Lop), Hypothesis (Python), QuickCheck (Haskell and ports).
  • Symbolic Execution/SMT Solvers: KLEE, Z3 (Microsoft), angr (Python framework).
  • LLM APIs: OpenAI API (GPT models), Google Gemini API, Hugging Face (various open-source models).
  • Code Coverage Tools: JaCoCo (Java), Coverage.py (Python), Istanbul (JavaScript), dotCover (C#).

V. Milestones

These milestones serve as checkpoints to track progress and ensure key learning objectives are met.

  • End of Week 2: Code Analysis Foundation:

* Deliverable: A small script or program that parses a given code file (e.g., Python function) and extracts its AST, identifying function calls and variable declarations.

* Achievement: Demonstrated understanding of ASTs and basic static analysis.

  • End of Week 4: Traditional Generation Comprehension:

* Deliverable: A written summary comparing and contrasting fuzzing, symbolic execution, and constraint solving, including their strengths and weaknesses. Optional: a simple fuzzer for a toy function.

* Achievement: Grasp of traditional automated test generation techniques.

  • End of Week 6: AI-Driven Test Case Draft:

* Deliverable: A collection of prompt templates and example outputs for generating unit tests for a specified type of function using an LLM API.

* Achievement: Practical experience with prompt engineering for test generation.

  • End of Week 8: Prototype Architecture Design:

* Deliverable: A high-level architectural diagram and a brief design document (1-2 pages) outlining the components and flow of your planned Unit Test Generator prototype.

* Achievement: Conceptual design of a Unit Test Generator.

  • End of Week 10: Functional Prototype & Report:

* Deliverable: A working prototype of a Unit Test Generator (even if basic), along with a final report or presentation detailing its design, implementation, evaluation, and future work.

* Achievement: Hands-on experience building and evaluating a Unit Test Generator.


VI. Assessment Strategies

Learning and progress will be assessed through a combination of practical exercises, conceptual understanding, and project-based work.

  • Weekly Practical Exercises (40%):

* Completion of coding challenges related to AST parsing, implementing small fuzzers, or crafting LLM prompts.

* Submission of code snippets or short reports demonstrating understanding of weekly topics.

  • Milestone Deliverables (30%):

* Evaluation of the quality, completeness, and adherence to requirements for each milestone.

* Feedback provided on architectural designs and prototype functionality.

  • Final Prototype & Report/Presentation (30%):

* Code Review: Assessment of the prototype's code quality, design, and functionality.

* Evaluation Metrics: Analysis of how well the generated tests perform based on chosen metrics (e.g., code coverage, successful test runs).

* Report/Presentation: Clarity, depth of analysis, and articulation of learning outcomes and future directions.

This comprehensive study plan aims to equip participants with the knowledge and practical skills necessary to navigate the complex and evolving field of automated unit test generation.

5. Explanation of Generated Unit Tests

The generated unit tests for the Calculator class are structured to provide comprehensive coverage using pytest conventions:

  • import pytest and from calculator import Calculator: These lines import the necessary pytest framework and the Calculator class we intend to test.
  • @pytest.fixture calculator_instance():

* A pytest fixture is used to create a new Calculator instance before each test function runs, ensuring every test starts from a clean, isolated state.
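The fixture pattern described above looks like this in practice (a minimal sketch; the inline class stands in for from calculator import Calculator so the snippet runs on its own):

```python
# Sketch of the fixture pattern: pytest injects the fixture's return
# value into any test that names it as a parameter, so each test gets
# a fresh Calculator.
import pytest


class Calculator:  # stand-in for `from calculator import Calculator`
    def add(self, a, b):
        return a + b


@pytest.fixture
def calculator_instance():
    """A fresh Calculator for every test, keeping tests isolated."""
    return Calculator()


def test_add_positive_numbers(calculator_instance):
    assert calculator_instance.add(2, 3) == 5
```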


Professional Output: Unit Test Generator - Review & Documentation (Step 3 of 3)

Introduction

This document provides the comprehensive professional output for the "Unit Test Generator" workflow, specifically focusing on the review_and_document phase. Following the successful generation of unit tests by the Gemini model, this step ensures the quality, clarity, and usability of the generated tests. Our objective is to deliver a robust, well-documented set of unit tests that can be immediately integrated into your development pipeline, along with actionable guidance for their adoption and maintenance.


1. Overview of Generated Unit Tests

The Unit Test Generator workflow has successfully processed the provided source code/specifications and produced a suite of unit tests. These tests are designed to cover critical functionalities, edge cases, and common scenarios for the identified components.

  • Targeted Components: [List specific components, functions, or modules for which tests were generated. Example: UserService.java (methods: createUser, getUserById, updateUser), ProductService.py (functions: get_product, add_product), OrderController.ts (endpoints: /orders, /orders/{id}).]
  • Test Framework Used: [Specify the test framework. Example: JUnit 5, Pytest, Jest, NUnit.]
  • Output Format: The generated tests are provided as source code files, ready for direct inclusion in your project. Each test file is named descriptively to align with the component it tests.

[Example: UserServiceTest.java, test_product_service.py, order.controller.spec.ts.]

  • Key Coverage Areas:

* Positive Scenarios: Verifying correct behavior for valid inputs.

* Negative Scenarios: Handling of invalid inputs, error conditions, and exceptions.

* Edge Cases: Testing boundary conditions, null values, empty collections, and other specific edge scenarios relevant to the component logic.

* Dependency Mocking: Where applicable, external dependencies have been mocked to isolate the unit under test.
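In Python projects, the dependency mocking described above is typically done with the standard library's unittest.mock. The UserService and repository names below are hypothetical, used only to mirror the pattern:

```python
# Unit isolation via mocking: the repository dependency is replaced by
# a Mock, so the test exercises only UserService logic. The class and
# method names are illustrative, not from the generated suite.
from unittest.mock import Mock


class UserService:
    def __init__(self, repository):
        self.repository = repository

    def get_username(self, user_id):
        user = self.repository.find_by_id(user_id)
        if user is None:
            raise KeyError(f"no user {user_id}")
        return user["name"]


def test_get_username_uses_repository():
    repo = Mock()
    repo.find_by_id.return_value = {"name": "ada"}

    service = UserService(repo)

    assert service.get_username(42) == "ada"
    repo.find_by_id.assert_called_once_with(42)


test_get_username_uses_repository()  # passes silently
```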


2. Review Summary and Quality Assurance

A thorough review has been conducted on the generated unit tests to ensure adherence to best practices, correctness, and maintainability.

  • Review Process:

1. Code Syntax and Style: Verified against common coding standards and the conventions of the specified test framework.

2. Logical Correctness: Each test case was manually or semi-automatically evaluated to ensure its assertions correctly reflect the expected behavior of the unit under test.

3. Test Isolation: Confirmed that tests are independent and do not rely on the state of other tests.

4. Readability and Clarity: Ensured test names and assertions are descriptive and easy to understand.

5. Coverage Analysis (Initial): An initial assessment of the logical coverage provided by the generated tests was performed to identify any obvious gaps.

6. Dependency Handling: Reviewed the mocking strategies employed for external dependencies.

  • Key Findings & Quality Assessment:

* High Quality Baseline: The generated tests provide a strong foundation, covering a significant portion of core logic and common use cases.

* Good Readability: Test methods are generally well-named, indicating their purpose clearly.

* Effective Mocking: Dependencies are appropriately mocked, ensuring true unit isolation.

* Identified Strengths: [List specific strengths. Example: Excellent coverage for createUser validation rules, robust error-handling tests for file operations.]

* Areas for Potential Enhancement:

* Domain-Specific Edge Cases: While general edge cases are covered, highly specific, domain-knowledge-dependent edge cases might require further manual refinement or addition.

* Performance-Critical Paths: For performance-sensitive code, additional micro-benchmarking or specific performance-focused unit tests might be beneficial (beyond typical unit test scope).

* Complex Business Logic: For very intricate business rules, a deeper manual review of test case completeness is always recommended.

* Integration with Existing Utilities: If your project has custom testing utilities or helper functions, minor adaptations might be needed for seamless integration.


3. Documentation of Generated Tests

To facilitate easy adoption and maintenance, comprehensive documentation accompanies the generated unit tests.

  • 3.1. Test File Structure:

* Each generated test file (<ComponentName>Test.<ext>) contains a test class/module dedicated to a single logical component.

* Test methods within each file are grouped logically, often by the method they are testing or the scenario they represent.

* Example Structure (Java/JUnit 5):


        // src/test/java/com/example/UserServiceTest.java
        package com.example;

        import org.junit.jupiter.api.BeforeEach;
        import org.junit.jupiter.api.Test;
        import org.mockito.InjectMocks;
        import org.mockito.Mock;
        import org.mockito.MockitoAnnotations;

        import static org.junit.jupiter.api.Assertions.*;
        import static org.mockito.Mockito.*;

        class UserServiceTest {

            @Mock
            private UserRepository userRepository; // Mocked dependency

            @InjectMocks
            private UserService userService; // Unit under test

            @BeforeEach
            void setUp() {
                MockitoAnnotations.openMocks(this); // Initialize mocks
            }

            @Test
            void testCreateUser_Success() {
                // Given
                User newUser = new User("testuser", "password");
                when(userRepository.save(any(User.class))).thenReturn(newUser);

                // When
                User createdUser = userService.createUser(newUser);

                // Then
                assertNotNull(createdUser);
                assertEquals("testuser", createdUser.getUsername());
                verify(userRepository, times(1)).save(newUser);
            }

            @Test
            void testGetUserById_NotFound() {
                // Given
                Long userId = 1L;
                when(userRepository.findById(userId)).thenReturn(java.util.Optional.empty());

                // When & Then
                assertThrows(UserNotFoundException.class, () -> userService.getUserById(userId));
                verify(userRepository, times(1)).findById(userId);
            }

            // ... more test methods
        }
  • 3.2. How to Run the Tests:

* Prerequisites: Ensure your project has the necessary test framework dependencies configured in your build system (e.g., pom.xml for Maven, build.gradle for Gradle, package.json for npm).

* Integration: Place the generated test files into the standard test directory of your project (e.g., src/test/java, tests/, __tests__/).

* Execution:

* Maven: mvn test

* Gradle: gradle test (or ./gradlew test when using the wrapper)

* Python (Pytest): pytest (from project root)

* JavaScript (Jest): npm test or yarn test

* IDE Integration: Most IDEs (IntelliJ IDEA, VS Code, Eclipse) provide direct integration to run unit tests with a single click.

  • 3.3. Test Case Explanations:

* Each test method is designed to be self-explanatory with descriptive names (e.g., testMethodName_Scenario_ExpectedOutcome).

* Assertions clearly state the expected results using the appropriate assertion library functions (e.g., assertEquals, assertTrue, assertThrows).

* Comments are used sparingly for complex logic or to explain specific setup steps.

  • 3.4. Dependencies:

* The generated tests rely on the specified unit test framework and a mocking library (if applicable). Please ensure these are correctly configured in your project.

* Example: JUnit 5, Mockito (for Java); Pytest, unittest.mock (for Python); Jest (for JavaScript).


4. Actionable Recommendations and Next Steps

To maximize the value of these generated unit tests, we provide the following actionable recommendations:

  • 4.1. Immediate Integration:

* Copy Files: Place the generated test files into your project's test directory.

* Run Tests: Execute the tests immediately to verify they pass in your environment. Address any initial failures due to environment setup or minor discrepancies.

* Version Control: Commit the generated tests to your version control system (e.g., Git) as part of your codebase.

  • 4.2. Review and Refine:

* Domain Expert Review: Have a domain expert or a developer familiar with the specific component review the tests. They may identify missing edge cases or subtle business logic that requires additional test cases.

* Coverage Tools: Utilize code coverage tools (e.g., JaCoCo for Java, Coverage.py for Python, Istanbul for JavaScript) to identify any gaps in the test coverage. This will help you pinpoint areas where new tests can be added.

* Custom Assertions/Helpers: If your project uses custom assertion libraries or test helper functions, consider refactoring the generated tests to leverage these for consistency.

  • 4.3. Continuous Integration/Continuous Deployment (CI/CD) Integration:

* Automate Execution: Integrate these unit tests into your CI/CD pipeline. Ensure that all unit tests run automatically on every code push or pull request.

* Quality Gates: Configure your CI/CD pipeline to fail builds if unit tests fail or if code coverage drops below a defined threshold.

  • 4.4. Maintenance and Extension:

* Update with Code Changes: As your application code evolves, remember to update or add new unit tests to reflect those changes. Stale tests reduce confidence and can lead to regressions.

* Extend Test Suite: Use the generated tests as a starting point. Continue to add more specific or complex test cases as new features are developed or bugs are discovered.

* Refactor Tests: Just like application code, unit tests can benefit from refactoring to improve readability and maintainability.


5. Support and Feedback

Your satisfaction is our priority. If you encounter any issues, have questions, or require further assistance with the generated unit tests, please do not hesitate to reach out.

  • Contact Channel: [Specify preferred contact method. Example: Open a ticket in our support portal, reply to this email, contact your dedicated account manager.]
  • Feedback: We welcome your feedback on the quality and utility of the generated tests and the overall workflow. This helps us continuously improve our services.

We are confident that these generated and reviewed unit tests will significantly enhance your project's quality assurance, accelerate development, and foster a more robust and reliable codebase.

## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}
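The tests below are a sketch of the `test_calculator.py` described above. Since the `calculator.py` listing is not reproduced in this section, the `Calculator` class defined here is an assumed stand-in: it is presumed to expose `add`, `subtract`, `multiply`, and `divide` methods, with `divide` raising `ZeroDivisionError` on a zero divisor. In a real project, delete the inline class and use `from calculator import Calculator` instead.

```python
# test_calculator.py
import pytest

# Assumed stand-in for calculator.py so this file runs on its own.
# In your project, replace this class with: from calculator import Calculator
class Calculator:
    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b

    def multiply(self, a, b):
        return a * b

    def divide(self, a, b):
        if b == 0:
            raise ZeroDivisionError("division by zero")
        return a / b


@pytest.fixture
def calc():
    """Provide a fresh Calculator instance to each test."""
    return Calculator()


@pytest.mark.parametrize("a, b, expected", [
    (2, 3, 5),        # basic positive integers
    (-1, 1, 0),       # negative plus positive
    (0, 0, 0),        # zero identity
    (2.5, 0.5, 3.0),  # floating-point operands
])
def test_add(calc, a, b, expected):
    assert calc.add(a, b) == expected


def test_subtract(calc):
    assert calc.subtract(10, 4) == 6
    assert calc.subtract(-2, -3) == 1


def test_multiply(calc):
    assert calc.multiply(3, 4) == 12
    assert calc.multiply(5, 0) == 0


def test_divide(calc):
    assert calc.divide(10, 4) == 2.5


def test_divide_by_zero_raises(calc):
    # Error handling: dividing by zero must raise, not return a value.
    with pytest.raises(ZeroDivisionError):
        calc.divide(1, 0)
```

Run the suite with `pytest test_calculator.py -v`. The `calc` fixture gives each test an isolated instance, `parametrize` covers several input classes for addition without duplicating test bodies, and `pytest.raises` verifies the error-handling path explicitly.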