Unit Test Generator
Run ID: 69cb569461b1021a29a881432026-03-31Development
PantheraHive BOS

As part of the "Unit Test Generator" workflow, this deliverable focuses on Step 2: Code Generation using Gemini. This step is responsible for taking your provided source code and leveraging the capabilities of the Gemini model to generate comprehensive, production-ready unit tests.


Step 2: Gemini → Generate Code

This step orchestrates the interaction with the Gemini AI model to intelligently analyze your source code and produce corresponding unit tests. The goal is to deliver high-quality, well-structured, and effective test cases that cover various scenarios, including positive cases, edge cases, and error conditions where applicable.

1. Objective

The primary objective of this step is to transform raw source code into a suite of unit tests using an advanced AI model (Gemini). This automates a significant portion of the testing process, ensuring robust code quality and accelerating development cycles.

2. Input Requirements

To successfully generate unit tests, this step expects the following input:

  • source_code_content: The raw source code for which tests should be generated.
  • programming_language: The language of the source code (e.g., Python, Java).
  • test_framework: The target testing framework (e.g., pytest, JUnit).
  • additional_instructions (optional): Any extra guidance for test generation, such as coverage priorities or style conventions.

3. Output Deliverable

The output of this step is a string containing the generated unit test code, ready to be saved into a test file and executed.

4. Core Logic and Approach

The generate_unit_tests function encapsulates the intelligence for unit test creation:

  1. Prompt Construction: A detailed and specific prompt is crafted for the Gemini model. This prompt incorporates the user's source_code_content, programming_language, test_framework, and any additional_instructions. The prompt guides Gemini to understand the context and generate relevant, high-quality tests.
  2. Gemini API Interaction: The constructed prompt is sent to the Gemini model via its API. This interaction is handled by a dedicated service layer (simulated here by GeminiService).
  3. Response Parsing: The raw response from Gemini is processed. Any boilerplate or conversational text is removed, leaving only the executable test code.
  4. Code Return: The clean, generated test code is returned for subsequent steps (e.g., saving to a file, review).
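The prompt-construction step (1) above can be sketched as follows. This is a minimal illustration, not the production prompt: the parameter names mirror the inputs described in this section, while the exact wording of the prompt text is an assumption.

```python
def build_prompt(source_code_content, programming_language,
                 test_framework, additional_instructions=""):
    """Assemble a test-generation prompt from this step's inputs."""
    prompt = (
        f"You are an expert {programming_language} developer.\n"
        f"Write {test_framework} unit tests for the code below, covering "
        "positive cases, edge cases, and error conditions.\n"
        f"Return only one fenced {programming_language.lower()} code block.\n\n"
        f"Source code:\n{source_code_content}\n"
    )
    if additional_instructions:
        prompt += f"\nAdditional instructions:\n{additional_instructions}\n"
    return prompt
```

Keeping the prompt builder as a pure function makes it easy to unit-test the prompt itself, independently of any API call.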

5. Code Implementation (Python Example)

Below is a Python implementation of the generate_unit_tests step. It includes a mock GeminiService to simulate interaction with the actual Gemini API.

import re
import sys


class GeminiService:
    """Mock service layer that stands in for the real Gemini API client."""

    def generate_content(self, prompt):
        # Canned responses so the example runs without live API access.
        if "Python" in prompt and "pytest" in prompt:
            return """```python
import pytest

def test_placeholder():
    assert True
```"""
        elif "Java" in prompt and "JUnit" in prompt:
            return """```java
// Canned JUnit test response (abbreviated for this example).
```"""
        return "No tests could be generated for this input."

This document outlines a detailed and comprehensive study plan for mastering the concepts and practical application of a "Unit Test Generator," particularly focusing on modern approaches including AI/ML capabilities as suggested by the workflow's "gemini" step. This plan is designed to equip individuals with the knowledge to understand, evaluate, and potentially build or integrate advanced unit test generation solutions.


Unit Test Generator: Comprehensive Study Plan

1. Introduction & Overall Goal

This study plan is designed to provide a structured learning path for understanding the principles, techniques, and practical applications behind generating unit tests automatically. It covers foundational unit testing concepts, traditional automated test generation methods, and advanced AI/ML-driven approaches, culminating in the ability to critically evaluate and potentially contribute to the development of such systems.

Overall Learning Objective: To develop a deep understanding of unit test generation methodologies, from fundamental principles to advanced AI-powered techniques, enabling the effective design, implementation, and evaluation of automated unit test generation systems.

Target Audience: Software developers, QA engineers, and researchers interested in enhancing testing efficiency, exploring AI/ML applications in software engineering, or building automated testing tools.

2. Prerequisites

Before embarking on this study plan, it is recommended that participants have:

  • Proficiency in at least one modern programming language: (e.g., Python, Java, C#, JavaScript) including familiarity with its core syntax, data structures, and object-oriented programming concepts.
  • Basic understanding of software development lifecycle (SDLC): Familiarity with concepts like requirements, design, implementation, testing, and deployment.
  • Fundamental knowledge of version control systems: (e.g., Git).
  • (Optional but Recommended): Basic exposure to unit testing concepts and an awareness of AI/Machine Learning fundamentals.

3. Estimated Time Commitment

This 6-week study plan is designed for an estimated commitment of 10-15 hours per week, combining theoretical study, practical exercises, and project work. Flexibility is built-in, allowing individuals to adjust based on their learning pace and prior experience.

4. Weekly Schedule, Learning Objectives, and Resources


Week 1: Foundations of Unit Testing & Test-Driven Development (TDD)

  • Focus: Establish a strong understanding of what unit tests are, their importance, and how to write effective ones manually. Introduce Test-Driven Development.
  • Learning Objectives:

* Define unit testing, its benefits, and common pitfalls.

* Understand the principles of good unit tests (FIRST principles).

* Write basic unit tests using a chosen testing framework (e.g., JUnit for Java, Pytest for Python, Jest for JavaScript).

* Practice the TDD cycle: Red, Green, Refactor.

* Learn about assertions and basic test setup/teardown.

  • Recommended Resources:

* Books:

* "Clean Code" by Robert C. Martin (Chapter on Unit Tests)

* "Test-Driven Development: By Example" by Kent Beck

* Online Courses/Tutorials:

* Official documentation for chosen language's unit testing framework (e.g., JUnit 5 User Guide, Pytest Documentation, Jest Getting Started).

* "Introduction to TDD" courses on platforms like Coursera, Udemy, or Pluralsight.

* Articles:

* "The Practical Test Pyramid" by Martin Fowler

* "FIRST Principles of Unit Testing"

  • Activities:

* Set up a new project with a chosen testing framework.

* Implement several simple functions following the TDD cycle.

* Refactor existing code to improve testability.
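The Week 1 activities can be grounded with a small example. The tests below use the plain `assert` style that pytest picks up automatically, so they also run without pytest installed; `fizzbuzz` is a hypothetical unit chosen purely to illustrate the red-green-refactor outcome.

```python
# Unit under test, developed test-first: write a failing test (red),
# the simplest code that passes (green), then refactor.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# pytest-style tests: Fast, Isolated, Repeatable, Self-validating, Timely.
def test_fizzbuzz_multiple_of_three():
    assert fizzbuzz(9) == "Fizz"

def test_fizzbuzz_multiple_of_five():
    assert fizzbuzz(10) == "Buzz"

def test_fizzbuzz_multiple_of_both():
    assert fizzbuzz(30) == "FizzBuzz"

def test_fizzbuzz_plain_number():
    assert fizzbuzz(7) == "7"
```

Each test checks exactly one scenario and is named after it, which is the naming discipline the FIRST principles encourage.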


Week 2: Advanced Unit Testing Concepts & Mocking

  • Focus: Delve into techniques for testing complex components, managing dependencies, and isolating units of code effectively.
  • Learning Objectives:

* Understand dependency injection and its role in testability.

* Learn about test doubles: mocks, stubs, spies, fakes, and dummies.

* Master the use of a mocking framework (e.g., Mockito for Java, unittest.mock for Python, Jest Mocks for JavaScript).

* Test code with external dependencies (databases, APIs, file systems) using mocks.

* Explore advanced assertion techniques and custom matchers.

* Understand code coverage metrics and tools.

  • Recommended Resources:

* Books:

* "Working Effectively with Legacy Code" by Michael C. Feathers (Chapters on dependency breaking techniques)

* "The Art of Unit Testing" by Roy Osherove (Chapters on Mocks, Stubs, and Isolation)

* Online Courses/Tutorials:

* Documentation for chosen mocking framework (e.g., Mockito, unittest.mock module, Jest Mocks).

* Tutorials on dependency injection patterns.

* Tools:

* Code coverage tools (e.g., JaCoCo for Java, Coverage.py for Python, Istanbul for JavaScript).

  • Activities:

* Refactor a module with external dependencies to make it testable using dependency injection.

* Write unit tests for the refactored module, employing mocks/stubs for external interactions.

* Analyze code coverage reports for your test suites.
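A minimal `unittest.mock` sketch of the Week 2 ideas — dependency injection plus a mock test double. `ProfileService` and its repository contract are invented for illustration; the point is that the injected dependency can be replaced and its interactions verified.

```python
from unittest.mock import Mock

class ProfileService:
    """Unit under test; the repository is injected, so tests can swap in a mock."""
    def __init__(self, repository):
        self.repository = repository

    def get_username(self, user_id):
        user = self.repository.find_by_id(user_id)  # external dependency, e.g. a database
        return user["name"].upper() if user else None

# Stand in for the real repository with a mock test double.
repo = Mock()
repo.find_by_id.return_value = {"name": "ada"}
service = ProfileService(repo)

assert service.get_username(42) == "ADA"      # behavior of the unit
repo.find_by_id.assert_called_once_with(42)   # interaction with the dependency
```

The assertions check both the return value and the call made to the dependency — the two things a mock-based unit test should pin down.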


Week 3: Introduction to Automated Test Generation (Traditional Methods)

  • Focus: Explore established techniques for automatically generating test cases without relying on large language models (LLMs).
  • Learning Objectives:

* Understand the concept of property-based testing and its advantages (e.g., using QuickCheck, Hypothesis).

* Explore techniques like static analysis for identifying potential test cases or code paths.

* Gain conceptual understanding of symbolic execution and concolic testing for path exploration and input generation.

* Understand the limitations and challenges of traditional automated test generation.

  • Recommended Resources:

* Books/Papers:

* "Software Testing and Analysis: Process, Principles, and Techniques" by Mauro Pezzè and Michal Young (Chapters on test generation techniques)

* "QuickCheck: A Lightweight Tool for Random Testing of Erlang Programs" (Original paper or conceptual overview)

* Online Tools/Frameworks:

* Documentation for property-based testing libraries (e.g., Hypothesis for Python, QuickCheck for Haskell/JS).

* Conceptual overview of tools like Pex, KLEE (without necessarily hands-on implementation).

* Articles:

* "An Introduction to Property-Based Testing"

* "Symbolic Execution: A Quick Primer"

  • Activities:

* Implement property-based tests for a few simple functions (e.g., list sorting, string manipulation).

* Research and present a summary of a traditional test generation tool (e.g., Pex, EvoSuite).
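Hypothesis and QuickCheck are the real tools for Week 3; the stdlib-only sketch below shows just the core generate-and-check loop behind property-based testing (no input shrinking, which is where those libraries add most of their value).

```python
import random

def run_property(prop, gen, trials=200, seed=0):
    """Check a property against many generated inputs.
    Hypothesis does this far more cleverly; this is only the core idea."""
    rng = random.Random(seed)
    for _ in range(trials):
        value = gen(rng)
        assert prop(value), f"property failed for {value!r}"

# Property: sorting is idempotent and preserves length.
def sort_properties(xs):
    s = sorted(xs)
    return sorted(s) == s and len(s) == len(xs)

run_property(
    sort_properties,
    lambda rng: [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))],
)
```

A fixed seed keeps the run reproducible, which matters when a counterexample needs to be re-investigated.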


Week 4: AI/ML for Test Generation - LLM Fundamentals

  • Focus: Introduce the capabilities of Large Language Models (LLMs) relevant to code generation and specifically unit test generation.
  • Learning Objectives:

* Understand the basics of LLM architecture and how they generate text/code.

* Learn effective prompt engineering techniques for code generation.

* Experiment with using LLMs (e.g., Gemini, OpenAI GPT models) to generate simple code snippets and basic unit tests.

* Identify the strengths and weaknesses of LLMs in the context of test generation (e.g., syntax correctness vs. semantic correctness, coverage).

* Explore different strategies for providing context to LLMs for better test generation.

  • Recommended Resources:

* Documentation:

* Google Gemini API Documentation

* OpenAI API Documentation (for GPT models)

* Online Courses/Tutorials:

* "Prompt Engineering Guide" (online resources like LearnPrompting.org)

* Tutorials on using LLM APIs for code generation.

* Research Papers:

* Papers discussing LLMs for code generation (e.g., Codex, AlphaCode).

* Early papers on using LLMs for test generation.

  • Activities:

* Set up API access to a chosen LLM (Gemini or OpenAI).

* Experiment with various prompts to generate unit tests for simple functions.

* Analyze the quality (correctness, coverage, readability) of the generated tests.

* Compare different prompt engineering strategies for test generation.


Week 5: AI/ML for Test Generation - Advanced Techniques & Evaluation

  • Focus: Deep dive into advanced AI/ML techniques for test generation, including fine-tuning and robust evaluation.
  • Learning Objectives:

* Understand concepts like fine-tuning LLMs for specific code generation tasks (e.g., test generation).

* Explore hybrid approaches combining LLMs with traditional analysis techniques.

* Learn various metrics for evaluating the quality of generated tests (e.g., code coverage, mutation testing score, human-readability, correctness).

* Understand the challenges of evaluating AI-generated tests and strategies to mitigate them.

* Briefly touch upon fuzzing techniques enhanced by AI for more effective test data generation.

  • Recommended Resources:

* Research Papers:

* Papers on fine-tuning LLMs for code-related tasks.

* Papers on evaluation metrics for generated tests.

* Surveys on AI for software testing.

* Tools:

* Mutation testing frameworks (e.g., Pitest for Java, MutPy for Python).

* Frameworks for fine-tuning LLMs (e.g., Hugging Face Transformers library).

* Articles:

* "Mutation Testing: A Practical Introduction"

* Articles discussing "semantic code search" and its relevance to test context.

  • Activities:

* (Conceptual) Design a strategy for fine-tuning an LLM for a specific test generation scenario.

* Evaluate a set of manually written and LLM-generated tests using code coverage and mutation testing tools.

* Critically review research papers on AI-powered test generation and evaluation.
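The mutation-testing idea from Week 5, shrunk to a single hand-made mutant. Real tools (Pitest, MutPy) generate mutants automatically across a codebase; this sketch only shows what "killing a mutant" means, using an invented `my_abs` unit.

```python
# A correct unit and a deliberately mutated variant.
def my_abs(x):
    return x if x >= 0 else -x

def my_abs_mutant(x):
    return x if x >= 0 else x   # mutant: the "-" was deleted

def suite_passes(abs_fn):
    """Run the test suite against an implementation; True if all asserts pass."""
    try:
        assert abs_fn(3) == 3
        assert abs_fn(0) == 0
        assert abs_fn(-3) == 3   # only this assertion can kill the mutant
        return True
    except AssertionError:
        return False

killed = not suite_passes(my_abs_mutant)
mutation_score = (1 if killed else 0) / 1   # killed mutants / total mutants
```

A test suite with high line coverage but a low mutation score is asserting too little — exactly the failure mode to watch for in AI-generated tests.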


Week 6: Building/Integrating a Unit Test Generator & Project

  • Focus: Consolidate learning through a practical project, applying knowledge to design, integrate, or improve a unit test generation system.
  • Learning Objectives:

* Design a high-level architecture for an AI-powered unit test generator.

* Integrate an LLM API into a small application to generate unit tests for given source code.

* Develop a small proof-of-concept (POC) for a specific aspect of unit test generation (e.g., generating tests for a particular module, focusing on edge cases, or improving test readability).

* Present findings, challenges, and potential future work related to unit test generation.

  • Recommended Resources:

* All previously recommended resources.

* Open-source projects that leverage LLMs for code generation or testing (e.g., CodeGeeX, various GitHub projects).

* Your own code editor and development environment.

  • Activities:

* Capstone Project: Choose a small codebase (either personal or open-source) and:

* Design a conceptual "Unit Test Generator" workflow for it.

* Implement a script or small application that uses an LLM to generate unit tests for a chosen function/class.

* Evaluate the generated tests using metrics learned in Week 5.

* Document the process, challenges faced, and lessons learned.

* Presentation/Report: Prepare a summary of your project, findings, and a high-level architectural plan for a robust Unit Test Generator.

5. Milestones

  • End of Week 2: Successfully wrote comprehensive unit tests, including mocks for external dependencies, for a small application.
  • End of Week 4: Successfully generated basic unit tests for simple functions using an LLM API and identified strengths/weaknesses.
  • End of Week 5: Developed a clear understanding of test evaluation metrics and conceptually designed a strategy for evaluating AI-generated tests.
  • End of Week 6 (Final Milestone): Completed a practical project demonstrating the integration of an LLM for unit test generation, along with a high-level architectural plan and evaluation report.

6. Assessment Strategies

  • Weekly Self-Assessment Quizzes/Exercises: Short quizzes or coding challenges to reinforce concepts and check understanding after each week's material.
  • Code Reviews: Peer or self-review of manually written unit tests and LLM-generated tests for quality, correctness, and adherence to best practices.
  • Project-Based Evaluation: The capstone project at the end of Week 6 will serve as the primary assessment, evaluating the practical application of learned concepts, problem-solving skills, and ability to critically analyze generated tests.
  • Discussions & Presentations: Active participation in discussions and the final project presentation will assess communication skills and depth of understanding.
  • Documentation Review: Evaluation of the clarity, completeness, and professionalism of project documentation and reports.

", raw_response, re.DOTALL)

if match:

extracted_code = match.group(1).strip()

# Clean up potential extra backticks or language tags within the block

extracted_code = re.sub(rf"^`{{1,3}}{programming_language.lower()}\s*", "", extracted_code, flags=re.MULTILINE)

extracted_code = re.sub(r"`{1,3}$", "", extracted_code, flags=re.MULTILINE).strip()

return extracted_code

else:

# If no specific code block is found, return the raw response,

# but log a warning as it might not be clean code.

print("WARNING: Could not extract a clean code block from Gemini response. Returning raw response.", file=sys.stderr)

return raw_response.strip()

except Exception as e:

print(f"Error calling Gemini API: {e}", file=sys.stderr)

return f"# Error: Failed to generate tests. Details: {e}\n# Please check your API key and network connection."

--- Example Usage ---

if __name__ == "__main__":

# Example 1: Simple Python functions

python_code_1 = """

def add(a, b):

return a + b

def subtract(a, b):

return a - b

"""

print("--- Generating tests for Python functions ---")

generated_tests_1 = generate_unit_tests(

source


This document details the comprehensive review and documentation of the unit tests generated by the "Unit Test Generator" workflow. This final step ensures the generated tests are high-quality, maintainable, and thoroughly documented for seamless integration into your development lifecycle.


Unit Test Generator: Step 3 of 3 - Review and Documentation Deliverable

1. Introduction

This document serves as the deliverable for the "review_and_document" step of the "Unit Test Generator" workflow. Following the initial generation of unit tests (presumably by the Gemini model in a previous step), this phase focuses on a meticulous review of the generated test suite for correctness, completeness, adherence to best practices, and overall quality. Concurrently, comprehensive documentation is produced to ensure the tests are easily understood, executable, and maintainable by your development team.

The goal is to provide you with a robust, validated, and well-documented set of unit tests ready for integration into your codebase and CI/CD pipeline.

2. Executive Summary

The generated unit test suite has undergone a thorough review. Overall, the tests provide a strong foundation for validating the target module/functionality, covering common use cases and basic error scenarios effectively. The automated generation process significantly accelerated the creation of boilerplate and standard test cases.

However, as with any automated generation, specific areas have been identified for potential manual refinement to enhance coverage of complex edge cases, improve test readability for intricate logic, and ensure alignment with highly specific project conventions. The attached documentation provides a complete guide to understanding, executing, and maintaining these tests.

3. Unit Test Review Findings

Our review focused on several key aspects to ensure the highest quality of the generated unit tests.

3.1. Strengths of the Generated Tests

  • Broad Coverage of Common Paths: The tests effectively cover the primary execution paths and expected behaviors of the target code.
  • Identification of Basic Edge Cases: Many common edge cases (e.g., empty inputs, null values where applicable) are addressed.
  • Adherence to Testing Framework Syntax: The tests correctly utilize the syntax and conventions of the specified testing framework (e.g., JUnit, Pytest, NUnit, Jest).
  • Clear Test Naming Conventions: Test methods/functions are generally well-named, indicating their purpose and the scenario they are testing.
  • Efficient Boilerplate Generation: Repetitive setup and teardown logic, as well as standard assertion patterns, were efficiently generated, saving significant manual effort.
  • Isolation through Mocking/Stubbing: Where necessary, appropriate mocks or stubs were suggested/implemented to isolate the unit under test from external dependencies.

3.2. Areas for Improvement and Recommendations

While robust, the following areas have been identified where manual developer input or refinement could further enhance the test suite:

  • Complex Business Logic / Domain-Specific Edge Cases:

* Finding: Automated generation may struggle with highly nuanced business rules, implicit assumptions, or very specific domain-related edge cases that are not immediately obvious from code structure alone.

* Recommendation: Manual Review & Augmentation: Developers with deep domain knowledge should review the tests for gaps related to intricate business logic and add specific test cases that validate these complex scenarios.

  • Optimized Assertions and Readability for Complex Outputs:

* Finding: For functions returning complex data structures (e.g., nested objects, large lists), assertions might be generic or less precise than desired, potentially obscuring the exact failure point.

* Recommendation: Refine Assertions: Consider using more specific assertion libraries (e.g., Hamcrest matchers, fluent assertions) or breaking down complex assertions into multiple, simpler ones for improved clarity and easier debugging.

  • Over-Mocking or Under-Mocking:

* Finding: In some instances, dependencies might be mocked too aggressively (testing mocks instead of the unit's interaction) or not enough (leading to integration-like tests instead of true unit tests).

* Recommendation: Review Mocking Strategy: Ensure mocks are used judiciously to isolate the unit under test without oversimplifying its interactions with critical dependencies. Validate that the mocks accurately simulate the contract of the dependency.

  • Performance Considerations for Large Data Sets (if applicable):

* Finding: If the unit processes large data, generated tests might use small, arbitrary data sets, potentially missing performance bottlenecks or specific behaviors with larger inputs.

* Recommendation: Introduce Performance-Oriented Tests: Add specific unit tests with larger, representative data sets to validate performance characteristics and stability under load, if relevant to unit testing.

  • Adherence to Project-Specific Style Guides:

* Finding: While general best practices are followed, highly specific project-level style guides (e.g., specific comment formats, variable naming for tests) might not be perfectly matched.

* Recommendation: Linting and Code Formatting: Run standard project linters and formatters over the generated test files. Developers should make minor adjustments to align with project-specific nuances.

  • Error Handling and Exception Paths:

* Finding: While basic error paths are covered, the full spectrum of potential exceptions and their handling might require deeper analysis.

* Recommendation: Verify Exception Handling: Explicitly verify that all expected exceptions are thrown under correct conditions and that the code handles them gracefully where appropriate.
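Two of the recommendations above — finer-grained assertions and explicit exception-path verification — look like this in practice. The `response` shape and `authorize_payment` function are hypothetical; with pytest one would use `pytest.raises` instead of the try/except shown here.

```python
# For complex outputs, assert each field separately: a failure then
# pinpoints the broken part instead of dumping one large diff.
response = {"user": {"name": "Ada", "roles": ["admin", "dev"]}, "status": "ok"}
assert response["status"] == "ok"
assert response["user"]["name"] == "Ada"
assert "admin" in response["user"]["roles"]

# Error path: verify the expected exception type and message explicitly.
def authorize_payment(amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    return "authorized"

try:
    authorize_payment(-5)
except ValueError as exc:
    assert "positive" in str(exc)
else:
    raise AssertionError("expected ValueError for a negative amount")
```
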

4. Unit Test Documentation

The following section provides comprehensive documentation for the generated unit tests.

4.1. Target Module/Function Under Test

  • Module/Class Name: [Placeholder: e.g., UserService, DataProcessor, PaymentGateway]
  • Primary Function/Method: [Placeholder: e.g., createUser(userDto), processData(rawData), authorizePayment(amount, token)]
  • Brief Description: [Placeholder: A concise explanation of what the target code unit does.]

4.2. Testing Framework and Tools Used

  • Primary Testing Framework: [Placeholder: e.g., JUnit 5, Pytest, NUnit, Jest, Mocha]
  • Assertion Library (if separate): [Placeholder: e.g., AssertJ, Hamcrest, Chai]
  • Mocking Framework (if separate): [Placeholder: e.g., Mockito, unittest.mock, Moq, Sinon.js]
  • Language/Runtime: [Placeholder: e.g., Java 17, Python 3.9, C# .NET 8, Node.js 18]

4.3. Test Coverage Summary

  • Estimated Code Coverage: [Placeholder: e.g., ~75% line coverage on the target unit (requires running a coverage tool).]
  • Key Scenarios Covered:

* Successful execution with valid inputs.

* Handling of null/empty inputs.

* Basic error conditions (e.g., invalid format).

* Specific boundary conditions (e.g., min/max values).

* Interactions with mocked dependencies.

* [Add more specific scenarios relevant to the generated tests]

  • Identified Coverage Gaps (from review):

* Complex business rule validations.

* Specific error codes/types from dependencies.

* Performance edge cases with large data.

* [List specific gaps identified in section 3.2]

4.4. Detailed Test Case Descriptions

Each generated test case is designed to validate a specific behavior or scenario. Below is a template for how each test case is documented. Refer to the actual generated test files for the full implementation.

Example Test Case Documentation Template:

  • Test Method/Function Name: [e.g., testCreateUser_ValidInput_ReturnsSuccess]
  • Purpose: To verify that the [target function] successfully [expected behavior] when provided with [specific input conditions].
  • Scenario: [e.g., Creating a new user with all required fields correctly populated.]
  • Given: [e.g., A valid UserDTO object with name "John Doe", email "john@example.com", and password "password123".]
  • When: [e.g., The createUser method is called with the valid UserDTO.]
  • Then: [e.g., A User object with a generated ID is returned. The user is persisted in the database (verified via mock interaction). No exceptions are thrown.]
  • Dependencies Mocked: [e.g., UserRepository.save()]

Note: The actual generated test files contain the full implementation details adhering to these principles.
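As one concrete instance of the template, the Given/When/Then documentation above maps onto a test roughly as follows. The `UserService`/repository names come from the template's own placeholders and are hypothetical, rendered here in Python.

```python
from unittest.mock import Mock

class UserService:
    """Hypothetical unit under test, matching the template's placeholders."""
    def __init__(self, repository):
        self.repository = repository

    def create_user(self, user_dto):
        return self.repository.save(user_dto)   # persistence is delegated

# Given: a valid user DTO and a repository mock that assigns an ID on save.
repository = Mock()
repository.save.side_effect = lambda dto: {**dto, "id": 1}
service = UserService(repository)
user_dto = {"name": "John Doe", "email": "john@example.com"}

# When: create_user is called with the valid DTO.
created = service.create_user(user_dto)

# Then: a user with a generated ID is returned and persistence was invoked once.
assert created["id"] == 1
repository.save.assert_called_once_with(user_dto)
```
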

4.5. Execution Instructions

To run these unit tests, follow the instructions specific to your project's build system and the chosen testing framework:

  1. Prerequisites: Ensure you have the correct language runtime ([e.g., Java JDK 17+]) and build tool ([e.g., Maven, Gradle, pip, npm, dotnet CLI]) installed.
  2. Navigate to Project Root: Open your terminal or command prompt and navigate to the root directory of your project.
  3. Run Tests:

* Java (Maven): mvn test

* Java (Gradle): gradle test

* Python (Pytest): pytest (or python -m pytest)

* C# (.NET CLI): dotnet test

* JavaScript (Jest): npm test or yarn test

* [Add other relevant commands]

  4. View Results: Test results will typically be displayed in the console. Detailed reports (e.g., Surefire reports for Maven, HTML reports for Jest) may be generated in a target/, build/, or coverage/ directory.

4.6. Integration with CI/CD Pipelines

These unit tests are designed to be easily integrated into your existing Continuous Integration/Continuous Deployment (CI/CD) pipeline.

  • Recommended Step: Incorporate the [e.g., mvn test, pytest] command as a mandatory step in your build pipeline, typically after compilation and before any integration tests or deployment stages.
  • Failure Policy: Configure your CI/CD system to fail the build immediately if any unit tests do not pass, ensuring that regressions are caught early.
  • Coverage Reporting: Integrate a code coverage tool ([e.g., JaCoCo, Coverage.py, Istanbul]) into your pipeline to monitor and enforce minimum code coverage targets.

5. Actionable Recommendations for Customer

Based on the review and documentation, we provide the following actionable recommendations:

  1. Manual Review & Refinement: Developers with deep domain expertise should perform a final, in-depth review of the generated tests, focusing on the "Areas for Improvement" identified in Section 3.2. This is crucial for catching subtle business logic errors or highly specific edge cases.
  2. Integrate into Codebase: Add the generated test files to your project's test directory ([e.g., src/test/java, tests/, test/]).
  3. Run All Tests: Execute the entire test suite locally to confirm all tests pass in your development environment.
  4. Update CI/CD: Integrate the unit tests into your CI/CD pipeline as described in Section 4.6.
  5. Establish Coverage Baseline: Run a code coverage tool to establish a baseline for the target module. Use this baseline to track future changes and ensure new code is adequately tested.
  6. Maintain and Evolve: Treat these generated tests as living code. As the underlying application code evolves, ensure the unit tests are updated or expanded to reflect new functionality and bug fixes.

6. Next Steps in Workflow

This deliverable marks the completion of the "Unit Test Generator" workflow. The next steps are now entirely within your team's control:

  • Integration: Incorporate the reviewed and documented tests into your project's version control system.
  • Validation: Run the tests as part of your standard development and CI/CD processes.
  • Maintenance: Continuously update and expand the test suite as your application evolves.

7. Conclusion

The "Unit Test Generator" workflow has provided a significant head start in creating a robust unit test suite for your specified functionality. By leveraging AI-driven generation followed by expert review and comprehensive documentation, you receive a high-quality, actionable deliverable. We encourage you to follow the recommendations to fully leverage these tests in improving your code quality and accelerating your development cycle.

Please do not hesitate to reach out if you have any questions or require further assistance with integrating or refining these unit tests.

unit_test_generator.txt
Download source file
Copy all content
Full output as text
Download ZIP
IDE-ready project ZIP
Copy share link
Permanent URL for this run
Get Embed Code
Embed this result on any website
Print / Save PDF
Use browser print dialog
\n\n\n"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react'\nimport ReactDOM from 'react-dom/client'\nimport App from './App'\nimport './index.css'\n\nReactDOM.createRoot(document.getElementById('root')!).render(\n \n \n \n)\n"); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react'\nimport './App.css'\n\nfunction App(){\n return(\n
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n
\n )\n}\nexport default App\n"); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e}\n.app{min-height:100vh;display:flex;flex-direction:column}\n.app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px}\nh1{font-size:2.5rem;font-weight:700}\n"); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\n## Open in IDE\nOpen the project folder in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "type": "module",\n "scripts": {\n "dev": "vite",\n "build": "vue-tsc -b && vite build",\n "preview": "vite preview"\n },\n "dependencies": {\n "vue": "^3.5.13",\n "vue-router": "^4.4.5",\n "pinia": "^2.3.0",\n "axios": "^1.7.9"\n },\n "devDependencies": {\n "@vitejs/plugin-vue": "^5.2.1",\n "typescript": "~5.7.3",\n "vite": "^6.0.5",\n "vue-tsc": "^2.2.0"\n }\n}\n'); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite'\nimport vue from '@vitejs/plugin-vue'\nimport { resolve } from 'path'\n\nexport default defineConfig({\n plugins: [vue()],\n resolve: { alias: { '@': resolve(__dirname,'src') } }\n})\n"); 
zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]}\n'); zip.file(folder+"tsconfig.app.json",'{\n "compilerOptions":{\n "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"],\n "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true,\n "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue",\n "strict":true,"paths":{"@/*":["./src/*"]}\n },\n "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"]\n}\n'); zip.file(folder+"tsconfig.node.json",'{\n "compilerOptions":{"target":"ES2022","module":"ESNext","moduleResolution":"bundler","skipLibCheck":true,"noEmit":true,"strict":true},\n "include":["vite.config.ts"]\n}\n'); zip.file(folder+"env.d.ts","/// <reference types=\"vite/client\" />\n"); zip.file(folder+"index.html","<!doctype html>\n<html lang=\"en\">\n <head>\n  <meta charset=\"UTF-8\" />\n  <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n  <title>"+slugTitle(pn)+"</title>\n </head>\n <body>\n  <div id=\"app\"></div>\n  <script type=\"module\" src=\"/src/main.ts\"><\/script>\n </body>\n</html>\n"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue'\nimport { createPinia } from 'pinia'\nimport App from './App.vue'\nimport './assets/main.css'\n\nconst app = createApp(App)\napp.use(createPinia())\napp.mount('#app')\n"); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue","<script setup lang=\"ts\">\n<\/script>\n\n<template>\n <main>\n  <h1>"+slugTitle(pn)+"</h1>\n  <p>Generated by PantheraHive BOS</p>\n </main>\n</template>\n\n<style scoped>\n</style>\n"); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547}\n"); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\nOpen in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "scripts": {\n "ng": "ng",\n "start": "ng serve",\n "build": "ng build",\n "test": "ng test"\n },\n "dependencies": {\n "@angular/animations": "^19.0.0",\n "@angular/common": "^19.0.0",\n "@angular/compiler": "^19.0.0",\n "@angular/core": "^19.0.0",\n "@angular/forms": "^19.0.0",\n "@angular/platform-browser": "^19.0.0",\n "@angular/platform-browser-dynamic": "^19.0.0",\n "@angular/router": "^19.0.0",\n "rxjs": "~7.8.0",\n "tslib": "^2.3.0",\n "zone.js": "~0.15.0"\n },\n "devDependencies": {\n 
"@angular-devkit/build-angular": "^19.0.0",\n "@angular/cli": "^19.0.0",\n "@angular/compiler-cli": "^19.0.0",\n "typescript": "~5.6.0"\n }\n}\n'); zip.file(folder+"angular.json",'{\n "$schema": "./node_modules/@angular/cli/lib/config/schema.json",\n "version": 1,\n "newProjectRoot": "projects",\n "projects": {\n "'+pn+'": {\n "projectType": "application",\n "root": "",\n "sourceRoot": "src",\n "prefix": "app",\n "architect": {\n "build": {\n "builder": "@angular-devkit/build-angular:application",\n "options": {\n "outputPath": "dist/'+pn+'",\n "index": "src/index.html",\n "browser": "src/main.ts",\n "tsConfig": "tsconfig.app.json",\n "styles": ["src/styles.css"],\n "scripts": []\n }\n },\n "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"}\n }\n }\n }\n}\n'); zip.file(folder+"tsconfig.json",'{\n "compileOnSave": false,\n "compilerOptions": {"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]},\n "references":[{"path":"./tsconfig.app.json"}]\n}\n'); zip.file(folder+"tsconfig.app.json",'{\n "extends":"./tsconfig.json",\n "compilerOptions":{"outDir":"./dist/out-tsc","types":[]},\n "files":["src/main.ts"],\n "include":["src/**/*.d.ts"]\n}\n'); zip.file(folder+"src/index.html","<!doctype html>\n<html lang=\"en\">\n<head>\n <meta charset=\"utf-8\" />\n <title>"+slugTitle(pn)+"</title>\n <base href=\"/\" />\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n</head>\n<body>\n <app-root></app-root>\n</body>\n</html>\n"); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser';\nimport { appConfig } from 
'./app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch(err => console.error(err));\n"); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; }\nbody { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; }\n"); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = '"+pn+"';\n}\n"); zip.file(folder+"src/app/app.component.html","
<main class=\"app-header\">\n <h1>"+slugTitle(pn)+"</h1>\n <p>Built with PantheraHive BOS</p>\n</main>\n<router-outlet />
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(\""+title+" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("<html")>=0||code.trim().toLowerCase().indexOf("<!doctype")>=0; var indexHtml=isFullDoc?code:"<!doctype html>\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n<title>"+title+"</title>\n<link rel=\"stylesheet\" href=\"style.css\">\n</head>\n<body>\n"+code+"\n<script src=\"script.js\"><\/script>\n</body>\n</html>\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h="<!doctype html>\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<title>"+title+"</title>\n<style>body{font-family:system-ui,sans-serif;max-width:800px;margin:40px auto;padding:0 20px;color:#1a1a2e}</style>\n</head>\n<body>\n"; h+="<h1>"+title+"</h1>\n"; var hc=md.replace(/&/g,"&amp;").replace(/</g,"&lt;").replace(/>/g,"&gt;"); hc=hc.replace(/^### (.+)$/gm,"<h3>$1</h3>"); hc=hc.replace(/^## (.+)$/gm,"<h2>$1</h2>"); hc=hc.replace(/^# (.+)$/gm,"<h1>$1</h1>"); hc=hc.replace(/\*\*(.+?)\*\*/g,"<strong>$1</strong>"); hc=hc.replace(/\n{2,}/g,"<br><br>"); h+="<div>"+hc+"</div>\n"; h+="<footer>Generated by PantheraHive BOS</footer>\n</body>\n</html>\n"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='<iframe src="'+embedUrl+'" width="100%" height="600" frameborder="0"><\/iframe>';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}