Unit Test Generator
Run ID: 69cc3ea56beabe319cec8d57 · 2026-03-31 · Development
PantheraHive BOS
BOS Dashboard

This deliverable provides a comprehensive set of unit tests, generated by the AI, for a sample Python module. These tests are designed using the pytest framework, demonstrating best practices for writing clean, maintainable, and effective unit tests. This output will help you understand how to structure your tests, cover various scenarios, and ensure the robustness of your code.


Unit Test Generator: Step 2 - Code Generation

1. Introduction

This step delivers the core output of the "Unit Test Generator" workflow: production-ready unit test code. We have generated unit tests for a common scenario – a simple utility module – to illustrate best practices and provide a tangible example for your specific needs. The generated code emphasizes clarity, coverage, and adherence to modern testing standards using the pytest framework.

2. Assumptions for Code Generation

To provide concrete, actionable code, we've made the following assumptions:

  • Target language: Python 3.
  • Testing framework: pytest.
  • Code under test: a simple utility module, calculator.py, exposing four functions – add, subtract, multiply, and divide – where divide raises a ValueError ("Cannot divide by zero.") for a zero denominator.

3. Generated Code

Below are the two Python files: calculator.py (the code under test) and test_calculator.py (the generated unit tests).

3.1. calculator.py (Code Under Test)

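The calculator.py listing did not survive this export. The sketch below is reconstructed from the behavior exercised by the generated tests in Section 3.2 – assume it approximates, rather than reproduces, the original module:

```python
# calculator.py -- reconstructed sketch, inferred from the test suite

def add(num1, num2):
    """Return the sum of num1 and num2."""
    return num1 + num2


def subtract(num1, num2):
    """Return num1 minus num2."""
    return num1 - num2


def multiply(num1, num2):
    """Return the product of num1 and num2."""
    return num1 * num2


def divide(num1, num2):
    """Return num1 divided by num2.

    Raises:
        ValueError: if num2 is zero.
    """
    if num2 == 0:
        raise ValueError("Cannot divide by zero.")
    return num1 / num2
```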
3.2. test_calculator.py (Generated Unit Tests)


Unit Test Generator - Comprehensive Study Plan

This document outlines a detailed and structured study plan designed to equip individuals with the knowledge and skills necessary to understand, design, and potentially implement a "Unit Test Generator." This plan is tailored for professionals seeking a deep dive into the underlying principles, technologies, and strategies involved in automated test generation.


1. Overview and Goal

The primary goal of this study plan is to provide a robust framework for learning the core concepts and advanced techniques required to develop or significantly contribute to a Unit Test Generator. This includes understanding unit testing fundamentals, code analysis, various test generation strategies, and the architectural considerations for such a system.

Overall Learning Objective: By the end of this study plan, the learner will be able to articulate the challenges and solutions in automated unit test generation, design a high-level architecture for a unit test generator, and have practical experience with key components and techniques.

Target Audience: Software Engineers, QA Automation Engineers, Researchers, or Technical Leads interested in advanced testing methodologies, static/dynamic analysis, and automated code generation.

Prerequisites:

  • Solid understanding of at least one modern programming language (e.g., Python, Java, C#, JavaScript).
  • Familiarity with basic software development principles and data structures.
  • Basic understanding of unit testing concepts and experience with a testing framework (e.g., JUnit, Pytest, NUnit).

2. Weekly Schedule

This 10-week schedule is designed for dedicated study, assuming approximately 10-15 hours of focused effort per week, including reading, hands-on exercises, and project work.


Week 1: Foundations of Unit Testing & TDD

  • Theme: Re-establishing strong fundamentals in unit testing and exploring best practices.
  • Specific Learning Objectives:

* Deepen understanding of unit test principles (isolation, determinism, speed).

* Differentiate between various types of tests (unit, integration, end-to-end) and their scope.

* Understand the benefits and challenges of Test-Driven Development (TDD).

* Explore common unit testing patterns (Arrange-Act-Assert, mocks, stubs, fakes).

  • Key Activities:

* Review existing unit test suites in a chosen language/project.

* Practice TDD by developing a small feature from scratch.

* Experiment with mocking frameworks.

  • Recommended Resources:

* Book: "Working Effectively with Legacy Code" by Michael C. Feathers (Chapters on testing legacy code).

* Book: "Test-Driven Development by Example" by Kent Beck.

* Docs: Official documentation for JUnit, Pytest, NUnit, or equivalent.

* Article: "The Practical Test Pyramid" by Martin Fowler.
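The Arrange-Act-Assert pattern and test doubles from Week 1 can be sketched with the standard library's unittest.mock. The fetch_and_double function and its client are hypothetical examples, not taken from any real project:

```python
from unittest.mock import Mock

def fetch_and_double(client):
    """Unit under test: doubles whatever the client returns."""
    return client.get_value() * 2

# Arrange: replace the real dependency with a mock returning a canned value.
mock_client = Mock()
mock_client.get_value.return_value = 21

# Act: exercise the unit in isolation.
result = fetch_and_double(mock_client)

# Assert: check both the outcome and the interaction with the dependency.
assert result == 42
mock_client.get_value.assert_called_once()
```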


Week 2: Introduction to Code Analysis & Abstract Syntax Trees (ASTs)

  • Theme: Understanding how source code is represented and manipulated programmatically.
  • Specific Learning Objectives:

* Grasp the concept of an Abstract Syntax Tree (AST) and its role in compilers and code analysis.

* Learn how to parse source code into an AST using language-specific tools/libraries.

* Navigate and query AST structures to extract information (e.g., function definitions, variable declarations).

  • Key Activities:

* Use a language-specific AST parser (e.g., Python ast module, Java's Spoon, Roslyn for C#) to parse simple code snippets.

* Write scripts to extract function names, parameters, or class structures from an AST.

* Visualize ASTs for better comprehension.

  • Recommended Resources:

* Article/Tutorial: Introduction to ASTs (search for language-specific tutorials).

* Library Docs: Python ast module documentation, Spoon (Java), Roslyn (C#).

* Book (Advanced): "Compilers: Principles, Techniques, and Tools" (The Dragon Book) by Aho et al. (Chapters on lexical analysis and parsing).
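As a taste of the Week 2 activities, Python's built-in ast module can parse a snippet and extract function names and parameter lists; the snippet itself is an illustrative example:

```python
import ast

source = """
def add(a, b):
    return a + b

class Calculator:
    def divide(self, a, b):
        return a / b
"""

tree = ast.parse(source)

# Walk the tree and collect every function definition with its parameter names.
functions = [
    (node.name, [arg.arg for arg in node.args.args])
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef)
]

print(functions)  # [('add', ['a', 'b']), ('divide', ['self', 'a', 'b'])]
```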


Week 3: Static Code Analysis & Metrics

  • Theme: Analyzing code without execution to identify potential issues and gather insights.
  • Specific Learning Objectives:

* Understand different types of static analysis (linting, complexity metrics, security analysis).

* Learn about common code metrics (Cyclomatic Complexity, Lines of Code, Coupling).

* Explore tools for static analysis and their application in identifying testability issues.

  • Key Activities:

* Apply static analysis tools (e.g., Pylint, ESLint, SonarQube, Checkstyle) to a project.

* Analyze the reports generated by these tools, focusing on code smells that hinder testability.

* Implement custom static analysis rules using AST traversal to detect specific patterns.

  • Recommended Resources:

* Tools: Pylint, ESLint, SonarQube documentation.

* Book: "Code Complete" by Steve McConnell (Chapters on code quality and maintainability).

* Article: "Cyclomatic Complexity" by Wikipedia/various sources.
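One of the Week 3 activities – a custom AST-based rule – can be sketched as a rough cyclomatic-complexity counter. This is a simplified approximation of McCabe's metric, counting only a few branch node types:

```python
import ast

def cyclomatic_complexity(func_source):
    """Approximate McCabe complexity: 1 + number of branching points."""
    tree = ast.parse(func_source)
    branches = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)
    return 1 + sum(isinstance(node, branches) for node in ast.walk(tree))

source = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for _ in range(3):
        x -= 1
    return "positive"
"""

# Two If nodes (the elif is a nested If) plus one For: 1 + 3 = 4.
print(cyclomatic_complexity(source))  # 4
```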


Week 4: Test Case Generation - Basic Strategies (Random & Property-Based Testing)

  • Theme: Exploring initial, simpler methods for generating test inputs and assertions.
  • Specific Learning Objectives:

* Understand the concept of random test data generation and its limitations.

* Learn about property-based testing (PBT) and its advantages for finding edge cases.

* Be able to define properties for a given function or component.

  • Key Activities:

* Implement simple random test data generators for various data types.

* Use a PBT framework (e.g., Hypothesis for Python, QuickCheck for Haskell/others, JUnit-Quickcheck for Java) to write property-based tests for a small library function.

* Compare the effectiveness of random vs. property-based testing.

  • Recommended Resources:

* Library Docs: Hypothesis (Python), QuickCheck (Haskell), JUnit-Quickcheck (Java).

* Talks/Articles: "Property-Based Testing for Better Code" (various online resources).

* Book: "Programming in Haskell" by Graham Hutton (Chapters on QuickCheck).
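A full PBT workflow would use a framework such as Hypothesis; the core idea – asserting a round-trip property over many generated inputs rather than a few hand-picked examples – can be sketched with the stdlib alone. The run-length encoder here is an illustrative stand-in:

```python
import random

def run_length_encode(s):
    """Unit under test: naive run-length encoding."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1
        else:
            out.append([ch, 1])
    return out

def decode(pairs):
    """Inverse of run_length_encode."""
    return "".join(ch * n for ch, n in pairs)

# Property: decoding the encoding must reproduce the original string,
# for *any* input -- the generator explores inputs we never thought of.
random.seed(0)
for _ in range(200):
    s = "".join(random.choice("ab") for _ in range(random.randint(0, 20)))
    assert decode(run_length_encode(s)) == s
```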


Week 5: Advanced Test Case Generation - Symbolic Execution & Concolic Testing

  • Theme: Diving into sophisticated techniques for achieving high path coverage.
  • Specific Learning Objectives:

* Understand the principles of symbolic execution and how it explores program paths.

* Grasp the concept of path conditions and their role in constraint solving.

* Learn about concolic testing (concrete + symbolic) as a practical approach.

* Understand the role of Satisfiability Modulo Theories (SMT) solvers.

  • Key Activities:

* Study examples of symbolic execution tools (e.g., KLEE, angr).

* Experiment with an SMT solver (e.g., Z3) to solve simple logical/arithmetic constraints.

* (Optional, advanced) Set up and run a symbolic execution tool on a small program.

  • Recommended Resources:

* Research Papers: Key papers on symbolic execution (e.g., KLEE project papers).

* Tools: Z3 SMT solver documentation and tutorials.

* Lectures/Courses: Online courses on program analysis or software verification.
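A real symbolic executor hands path conditions to an SMT solver such as Z3; the toy sketch below answers the same question by brute force over a small domain, to make "solving a path condition" concrete. The target function is hypothetical:

```python
from itertools import product

def target(x, y):
    # Two nested branches; the innermost path is hard to hit with random inputs.
    if x > 10:
        if x + y == 25:
            return "deep path"
    return "shallow path"

# Path condition for the deep path: (x > 10) AND (x + y == 25).
# An SMT solver derives a satisfying model directly; here we search a
# small integer domain to illustrate what such a model is.
solution = next(
    (x, y)
    for x, y in product(range(-50, 51), repeat=2)
    if x > 10 and x + y == 25
)

print(solution)  # (11, 14) -- first satisfying assignment in search order
assert target(*solution) == "deep path"
```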


Week 6: Coverage-Guided Fuzzing & Mutation Testing

  • Theme: Techniques for guiding test generation towards uncovered code and assessing test suite effectiveness.
  • Specific Learning Objectives:

* Understand how code coverage metrics (line, branch, path) are collected and used.

* Learn about coverage-guided fuzzing (CGF) and its mechanism for exploring new paths.

* Grasp the concept of mutation testing and how it measures test suite quality.

* Understand mutation operators and mutation score.

  • Key Activities:

* Use a code coverage tool (e.g., coverage.py, JaCoCo, Istanbul) to analyze test coverage of a project.

* Experiment with a fuzzing tool (e.g., AFL++, libFuzzer) if feasible, focusing on basic setup.

* Run a mutation testing tool (e.g., Pitest for Java, MutPy for Python) on a small code base and interpret the results.

  • Recommended Resources:

* Tools: AFL++, libFuzzer, Pitest, MutPy documentation.

* Article: "Fuzzing: The Most Powerful, Least Understood Testing Technique" by H. S. Miller.

* Research Papers: On mutation testing and coverage-guided fuzzing.
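The mutation-testing loop can be sketched end-to-end with the stdlib ast module: apply one mutation operator, re-run a toy suite, and check that the mutant is killed. This is a minimal illustration, not a replacement for tools like MutPy or Pitest:

```python
import ast

original_source = "def add(a, b):\n    return a + b\n"

class AddToSub(ast.NodeTransformer):
    """Mutation operator: replace every '+' with '-'."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

def run_suite(namespace):
    """A tiny test suite; returns True if all assertions pass."""
    try:
        assert namespace["add"](2, 3) == 5
        return True
    except AssertionError:
        return False

# Build the mutant, then run the suite against original and mutant.
mutant_tree = ast.fix_missing_locations(AddToSub().visit(ast.parse(original_source)))

for label, tree in [("original", ast.parse(original_source)), ("mutant", mutant_tree)]:
    ns = {}
    exec(compile(tree, "<src>", "exec"), ns)
    print(label, "suite passed:", run_suite(ns))
# A good suite passes on the original and fails on (kills) the mutant.
```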


Week 7: AI/ML for Test Generation (Optional/Advanced)

  • Theme: Exploring the application of machine learning techniques to enhance test generation.
  • Specific Learning Objectives:

* Understand how AI/ML can be applied to learn test patterns, generate inputs, or predict assertions.

* Explore techniques like neural networks, reinforcement learning, or genetic algorithms in the context of testing.

* Identify the challenges and opportunities of using AI/ML for test generation.

  • Key Activities:

* Review academic papers and projects that use AI/ML for test generation.

* (Optional, if ML background exists) Experiment with a basic ML model (e.g., a simple neural network) to generate sequences or patterns that could resemble test inputs.

* Consider how to frame test generation as an optimization or learning problem.

  • Recommended Resources:

* Research Papers: Search for "AI for software testing," "ML for test generation."

* Online Courses: Basics of Machine Learning or Deep Learning (if needed).

* Frameworks: TensorFlow, PyTorch (for conceptual understanding).


Week 8: Designing a Unit Test Generator Architecture

  • Theme: Synthesizing all learned concepts into a coherent architectural design for a generator.
  • Specific Learning Objectives:

* Identify key components of a unit test generator (e.g., Code Parser, Analysis Engine, Test Case Generation Engine, Oracle/Assertion Generator, Test Runner Integration).

* Design data flow and interactions between components.

* Consider extensibility, configurability, and performance aspects.

* Evaluate trade-offs between different architectural choices.

  • Key Activities:

* Develop a high-level architectural diagram for a hypothetical Unit Test Generator.

* Document the responsibilities of each component and their interfaces.

* Write a design document outlining the chosen approach, rationale, and potential challenges.

* Consider how different test generation strategies (e.g., symbolic, PBT) could be integrated.

  • Recommended Resources:

* Book: "Designing Data-Intensive Applications" by Martin Kleppmann (Chapters on system design).

* Online Courses/Articles: Software Architecture Principles and Design Patterns.

* Case Studies: Analyze existing open-source test generation tools (e.g., EvoSuite, Randoop) for their architecture.


Week 9: Prototyping & Integration

  • Theme: Beginning the practical implementation of core generator components and integrating with existing frameworks.
  • Specific Learning Objectives:

* Implement a basic AST parser for a chosen language/subset.

* Develop a rudimentary test input generation module (e.g., using random or simple PBT).

* Integrate the generated inputs with a standard unit testing framework.

  • Key Activities:

* Choose a target language and a simple function/class to generate tests for.

* Build a minimal prototype:

1. Parse the target code into an AST.

2. Identify function signatures.

3. Generate simple inputs based on parameter types.

4. Create test files using the chosen unit testing framework.

* Run the generated tests and observe the outcomes.

  • Recommended Resources:

* IDEs/Editors: Your preferred development environment.

* Language Specific Tools: unittest or pytest for Python, JUnit for Java, etc.


Week 10: Evaluation, Refinement & Documentation

  • Theme: Assessing the generated tests, improving the generator, and documenting the work.
  • Specific Learning Objectives:

* Learn methods to evaluate the quality of automatically generated tests (e.g., coverage, fault detection).

* Understand how to refine the generator based on evaluation results (e.g., improving input generation or assertion quality).

# test_calculator.py
"""
Unit tests for the calculator.py module using pytest.

This file demonstrates testing various functionalities including:
  - Basic arithmetic operations (add, subtract, multiply, divide)
  - Edge cases (e.g., division by zero)
  - Use of pytest.mark.parametrize for data-driven testing
  - Testing for expected exceptions
"""
import pytest

from calculator import add, subtract, multiply, divide


# --- Test Cases for the 'add' function ---
@pytest.mark.parametrize("num1, num2, expected", [
    (1, 2, 3),                    # Positive integers
    (-1, -2, -3),                 # Negative integers
    (5, -3, 2),                   # Mixed signs
    (0, 0, 0),                    # Zeros
    (1.5, 2.5, 4.0),              # Floating-point numbers
    (1000000, 2000000, 3000000),  # Large numbers
    (0.1, 0.2, 0.3),              # Floating-point precision (direct comparison might fail)
])
def test_add_valid_inputs(num1, num2, expected):
    """
    Tests the add function with various valid inputs using parametrization.
    Ensures that the sum is correctly calculated for different number types and signs.
    """
    assert add(num1, num2) == pytest.approx(expected)  # pytest.approx for float comparison


# --- Test Cases for the 'subtract' function ---
@pytest.mark.parametrize("num1, num2, expected", [
    (5, 3, 2),         # Positive integers
    (3, 5, -2),        # Result is negative
    (-5, -3, -2),      # Negative numbers
    (5, -3, 8),        # Subtracting a negative number
    (0, 0, 0),         # Zeros
    (10.0, 5.5, 4.5),  # Floating-point numbers
    (7, 0, 7),         # Subtracting zero
])
def test_subtract_valid_inputs(num1, num2, expected):
    """
    Tests the subtract function with various valid inputs.
    Verifies correct subtraction across different number types and scenarios.
    """
    assert subtract(num1, num2) == pytest.approx(expected)


# --- Test Cases for the 'multiply' function ---
@pytest.mark.parametrize("num1, num2, expected", [
    (2, 3, 6),        # Positive integers
    (-2, 3, -6),      # Mixed signs (negative result)
    (-2, -3, 6),      # Two negative numbers (positive result)
    (0, 5, 0),        # Multiplication by zero
    (5, 0, 0),        # Zero multiplied by a number
    (1.5, 2.0, 3.0),  # Floating-point numbers
    (100, 0.5, 50.0), # Decimal multiplication
])
def test_multiply_valid_inputs(num1, num2, expected):
    """
    Tests the multiply function with various valid inputs.
    Checks for correct product calculation, including zero and negative numbers.
    """
    assert multiply(num1, num2) == pytest.approx(expected)


# --- Test Cases for the 'divide' function ---
@pytest.mark.parametrize("num1, num2, expected", [
    (6, 3, 2.0),                  # Positive integers
    (-6, 3, -2.0),                # Mixed signs (negative result)
    (6, -3, -2.0),                # Mixed signs (negative result)
    (5, 2, 2.5),                  # Floating-point result
    (0, 5, 0.0),                  # Division of zero by a non-zero number
    (10, 3, 3.3333333333333335),  # Recurring decimal, precise float comparison
    (10.0, 2.5, 4.0),             # Floating-point division
])
def test_divide_valid_inputs(num1, num2, expected):
    """
    Tests the divide function with various valid inputs.
    Verifies correct division results, including floating-point outcomes.
    """
    assert divide(num1, num2) == pytest.approx(expected)


def test_divide_by_zero_raises_error():
    """
    Tests that the divide function correctly raises a ValueError
    when the denominator is zero.
    """
    with pytest.raises(ValueError, match="Cannot divide by zero."):
        divide(10, 0)

Gemini Output

This document details the professional output for the review_and_document step, the final stage of the "Unit Test Generator" workflow. This step ensures that the unit tests generated by the AI (Gemini) are thoroughly reviewed, validated, and presented with clear documentation and actionable insights for immediate use.


Unit Test Generator: Review and Documentation (Step 3 of 3)

1. Introduction and Workflow Context

This deliverable marks the successful completion of the "Unit Test Generator" workflow. In this final review_and_document step, the unit tests previously generated by the Gemini model have undergone a comprehensive review to ensure their correctness, coverage, adherence to best practices, and overall utility. The aim is to provide you with ready-to-use, high-quality unit tests along with detailed instructions and recommendations for integration into your development lifecycle.

For this demonstration, we will assume the Gemini model was tasked with generating unit tests for a simple Python function add(a, b) that adds two numbers and includes basic type checking.

2. Generated Unit Tests

Below are the unit tests generated by the Gemini model, formatted for use with the pytest framework.

Original Function (my_module.py):


# my_module.py

def add(a, b):
    """
    Adds two numbers (integers or floats) and returns their sum.
    Raises TypeError if inputs are not numbers.
    """
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("Inputs must be numbers (int or float).")
    return a + b

Generated Unit Tests (test_my_module.py):


# test_my_module.py
import pytest
from my_module import add # Assuming my_module.py is in the same directory or accessible

def test_add_positive_integers():
    """Test with two positive integers."""
    assert add(1, 2) == 3

def test_add_negative_integers():
    """Test with two negative integers."""
    assert add(-1, -2) == -3

def test_add_mixed_integers():
    """Test with a positive and a negative integer."""
    assert add(-1, 2) == 1

def test_add_floats():
    """Test with two floating-point numbers."""
    assert add(1.5, 2.5) == 4.0
    assert add(0.1, 0.2) == pytest.approx(0.3) # Using pytest.approx for float comparison

def test_add_zero_values():
    """Test cases involving zero."""
    assert add(0, 5) == 5
    assert add(5, 0) == 5
    assert add(0, 0) == 0

def test_add_large_numbers():
    """Test with large integer values."""
    assert add(1000000, 2000000) == 3000000

def test_add_with_type_error_string():
    """Test type error when a string is passed."""
    with pytest.raises(TypeError, match="Inputs must be numbers"):
        add("1", 2)

def test_add_with_type_error_list():
    """Test type error when a list is passed."""
    with pytest.raises(TypeError, match="Inputs must be numbers"):
        add([1], 2)

def test_add_with_type_error_none():
    """Test type error when None is passed."""
    with pytest.raises(TypeError, match="Inputs must be numbers"):
        add(None, 2)

def test_add_with_negative_float():
    """Test with mixed positive and negative floats."""
    assert add(1.0, -0.5) == 0.5

3. Review and Analysis of Generated Tests

The generated unit tests are of high quality, demonstrating a robust understanding of the target function's requirements and best practices for unit testing.

3.1. Identified Strengths

  • Comprehensive Coverage: The tests cover various scenarios:

* Positive Cases: Positive integers, negative integers, mixed integers, floats, zero values, and large numbers.

* Edge Cases: Testing with zero, and large numbers.

* Error Handling: Explicitly testing the TypeError for invalid input types (string, list, None), and matching the error message.

  • Adherence to Best Practices:

* pytest Framework: Leverages pytest, a popular and powerful testing framework for Python, known for its simplicity and extensibility.

* Clear Naming Conventions: Test function names are descriptive (test_add_positive_integers), making it easy to understand what each test validates.

* Appropriate Assertions: Uses assert statements effectively.

* Exception Testing: Correctly uses pytest.raises context manager for testing expected exceptions, including matching the error message for precision.

* Float Comparison: Includes pytest.approx for robust comparison of floating-point numbers, which is crucial due to potential precision issues.

  • Readability and Maintainability: The tests are well-structured, easy to read, and can be easily maintained or extended.
  • Docstrings: Each test function includes a docstring explaining its purpose, further enhancing readability and documentation.

3.2. Potential Improvements & Considerations

While the generated tests are excellent, here are some considerations for further enhancement, depending on the complexity of your actual functions:

  • Parameterization (pytest.mark.parametrize): For functions with many similar test cases (e.g., different combinations of numbers), using pytest.mark.parametrize could make the test suite more concise and easier to manage. For instance, test_add_positive_integers, test_add_negative_integers, test_add_mixed_integers, and test_add_zero_values could potentially be combined into one parameterized test.
  • Boundary Conditions (if applicable): For functions dealing with specific ranges (e.g., minimum/maximum values, array bounds), explicit tests for these boundaries are critical.
  • Performance Testing (Beyond Unit): While not strictly a unit test, for performance-critical functions, consider integrating separate performance benchmarks.
  • Dependency Mocking: If the add function (or other functions you test) had external dependencies (e.g., database calls, API requests), mock objects would be necessary to isolate the unit under test. The current function is self-contained, so this isn't relevant here.
  • Test Fixtures: For more complex setups or reusable resources, pytest fixtures could be introduced to manage test context efficiently.
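For instance, the separate example-based tests for add could collapse into a single parameterized test. In the sketch below, add is redefined inline (with the same behavior shown above) so the snippet is self-contained:

```python
import pytest

def add(a, b):
    """Inline stand-in for my_module.add, matching the behavior shown above."""
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("Inputs must be numbers (int or float).")
    return a + b

# One parameterized test replaces four near-identical example-based tests.
@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),     # positive integers
    (-1, -2, -3),  # negative integers
    (-1, 2, 1),    # mixed signs
    (0, 5, 5),     # zero operand
    (0, 0, 0),     # both zero
])
def test_add_parametrized(a, b, expected):
    assert add(a, b) == expected
```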

4. How to Use the Generated Tests

Follow these steps to integrate and run the generated unit tests in your project:

4.1. Prerequisites

  1. Python: Ensure you have Python 3.6+ installed.
  2. pytest: Install the pytest framework if you haven't already:

    pip install pytest

4.2. File Structure

  1. Save your original function code in a file named my_module.py (or your chosen module name).
  2. Save the generated unit tests in a separate file named test_my_module.py (or test_<your_module_name>.py) in the same directory as my_module.py or in a designated tests/ subdirectory.

    your_project/
    ├── my_module.py
    └── test_my_module.py

4.3. Running the Tests

  1. Navigate to your project directory in your terminal or command prompt.
  2. Execute pytest:

    pytest

* To see more detailed output (e.g., print statements within tests), use pytest -s.

* To see which tests passed and failed, use pytest -v (verbose).

* To stop on the first failure, use pytest -x.

4.4. Interpreting Results

  • PASSED: Indicates the test ran successfully and all assertions were true.
  • FAILED: Indicates one or more assertions in the test were false, or an unexpected exception occurred.
  • ERROR: Indicates an exception occurred outside of the test function itself (e.g., setup issue, syntax error).
  • SKIPPED: Indicates the test was intentionally skipped (e.g., using @pytest.mark.skip).
  • XFAIL: Indicates the test was expected to fail (e.g., marked for a known bug) and did fail; an expected failure that unexpectedly passes is reported as XPASS.

pytest will provide a summary at the end, showing the number of passed, failed, and other status tests.

5. Actionable Recommendations & Best Practices

To maximize the value of these generated unit tests and foster a robust development process, we recommend the following:

  1. Integrate into CI/CD Pipeline: Incorporate the execution of your unit tests into your Continuous Integration/Continuous Deployment (CI/CD) pipeline. This ensures that every code change is automatically validated, catching regressions early.
  2. Maintain and Update Test Suite: As your my_module.py evolves, remember to update test_my_module.py accordingly. New features require new tests, and changes to existing logic may require modifications to current tests.
  3. Adopt Test-Driven Development (TDD): Consider adopting TDD principles, where you write tests before writing the code. This approach often leads to better-designed, more modular, and more testable code.
  4. Version Control: Store your test files alongside your source code in your version control system (e.g., Git).
  5. Code Coverage Analysis: Use tools like pytest-cov to measure test coverage. This helps identify parts of your code that are not being tested, guiding where to add more tests.

    pip install pytest-cov
    pytest --cov=my_module
  6. Review Regularly: Periodically review your test suite to ensure it remains effective, relevant, and free from redundant or outdated tests.

6. Conclusion

The "Unit Test Generator" workflow has successfully provided a well-structured and comprehensive set of unit tests for your function. These tests are designed for immediate use with pytest and adhere to high standards of quality and best practices. By following the provided instructions and recommendations, you can confidently integrate these tests into your development workflow, enhancing code quality, reliability, and maintainability.

Should you require further unit test generation for other functions or modules, or need assistance with integrating these tests into more complex CI/CD environments, please initiate a new request.

\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}
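As a hedged illustration of the deliverable described above, the sketch below shows what a minimal `calculator.py` module and its matching `pytest` suite might look like. The function names (`add`, `divide`), the error message, and the specific test cases are illustrative assumptions, not the exact generated output.

```python
# calculator.py -- illustrative code under test (names are assumptions)

def add(a: float, b: float) -> float:
    """Return the sum of a and b."""
    return a + b


def divide(a: float, b: float) -> float:
    """Return a divided by b, raising ValueError on division by zero."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b


# test_calculator.py -- illustrative generated unit tests
# (in a separate file this would start with: from calculator import add, divide)
import pytest


def test_add_returns_sum():
    assert add(2, 3) == 5


def test_add_handles_negatives():
    assert add(-2, -3) == -5


@pytest.mark.parametrize("a,b,expected", [(10, 2, 5), (9, 3, 3), (1, 4, 0.25)])
def test_divide_valid_inputs(a, b, expected):
    assert divide(a, b) == expected


def test_divide_by_zero_raises():
    # pytest.raises verifies both the exception type and its message
    with pytest.raises(ValueError, match="Cannot divide by zero"):
        divide(10, 0)
```

Run with `pytest test_calculator.py -v`. Note the pattern: one behavior per test, descriptive test names, `pytest.mark.parametrize` to cover several inputs without duplicating code, and `pytest.raises` to assert on expected failures.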