AI Code Review
Run ID: 69cbaa2361b1021a29a8b215 (2026-03-31, Development)
PantheraHive BOS

AI Code Review: Step 1 of 2 - Code Analysis

Workflow: AI Code Review

Current Step: collab → analyze_code

Description: Comprehensive code review with suggestions and refactoring.


1. Introduction & Scope

Welcome to the initial phase of your AI Code Review. In this step, our advanced AI systems have performed a deep analysis of the provided codebase. Since no specific code was provided in your request, we have generated a representative Python code snippet that simulates a common data processing task. This allows us to demonstrate the comprehensive nature of our AI code review capabilities, covering aspects like readability, performance, error handling, security, and adherence to best practices.

The goal of this analysis is to identify potential issues, suggest improvements, and provide a refactored version of the code that is cleaner, more efficient, robust, and production-ready.


2. Original Code Submitted for Review

For demonstration purposes, we are analyzing the following Python function designed to process data from a CSV file.

[Original code snippet (5,381 characters) not preserved in this export.]
---
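The snippet itself was collapsed in this export. For orientation only, here is a minimal sketch consistent with the findings in Section 3 (hardcoded column indices, an intermediate list, printed error messages); the function name matches the review, but the exact column positions and messages are assumptions:

```python
import csv

def process_data_from_file(filepath, filter_value):
    # First pass: read the file and collect matching rows into an intermediate list
    data = []
    try:
        with open(filepath, 'r', newline='') as f:
            reader = csv.reader(f)
            next(reader)  # skip the header row
            for row in reader:
                if len(row) > 1 and row[1] == filter_value:  # magic index 1
                    data.append(row)
    except FileNotFoundError:
        print(f"Error: file not found: {filepath}")
        return []

    # Second pass: convert column 2 to int, printing (not raising) on failure
    processed_results = []
    for item in data:
        if len(item) > 2:
            try:
                item[2] = int(item[2])  # magic index 2
                processed_results.append(item)
            except ValueError:
                print(f"Warning: could not convert {item[2]!r}")
    return processed_results
```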

### **3. Comprehensive Code Analysis & Findings**

Our AI has performed a detailed review of the provided `process_data_from_file` function. Below are the key findings, categorized for clarity, along with actionable suggestions for improvement.

#### **3.1. Readability & Maintainability**

*   **Finding 1: Magic Numbers for Column Indices.**
    *   **Issue:** The code uses hardcoded integer indices (e.g., `row[1]`, `item[2]`) to access specific columns. This makes the code difficult to understand without prior knowledge of the CSV structure and prone to errors if the column order changes.
    *   **Suggestion:** Use the header row to create a mapping (e.g., a dictionary or namedtuple) from column names to their indices. This makes the code self-documenting and more robust to schema changes.
*   **Finding 2: Function Monolith.**
    *   **Issue:** The `process_data_from_file` function is responsible for multiple distinct operations: reading the file, filtering data, and processing specific columns. This violates the Single Responsibility Principle.
    *   **Suggestion:** Decompose the function into smaller, more focused functions (e.g., `read_csv_data`, `filter_rows`, `process_row`). This improves modularity, testability, and reusability.
*   **Finding 3: Lack of Type Hinting.**
    *   **Issue:** Function parameters and return types are not annotated, reducing code clarity and making it harder for static analysis tools or IDEs to provide assistance.
    *   **Suggestion:** Add type hints to function signatures (e.g., `filepath: str, filter_value: str`) and for internal variables where appropriate.
*   **Finding 4: Inconsistent Error Handling Output.**
    *   **Issue:** Error messages are printed directly to `stdout` using `print()`. In a production environment, errors should typically be logged, raised as exceptions, or handled more gracefully than just printing.
    *   **Suggestion:** Implement a proper logging mechanism (e.g., Python's `logging` module) for warnings and errors. Consider raising specific exceptions for critical errors instead of just printing.
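Findings 1 through 4 can be sketched together: `csv.DictReader` gives name-based column access, type hints document the interface, and the `logging` module replaces bare `print` calls. The function and column names below (`read_rows`, `parse_amount`, `amount`) are illustrative, not taken from the reviewed code:

```python
import csv
import logging
from typing import Dict, Iterator

logger = logging.getLogger(__name__)

def read_rows(filepath: str) -> Iterator[Dict[str, str]]:
    """Yield each CSV row as a dict keyed by column name, not by index."""
    with open(filepath, newline='', encoding='utf-8') as f:
        for row in csv.DictReader(f):
            yield row

def parse_amount(row: Dict[str, str]) -> int:
    """Access columns by name; log (rather than print) conversion problems."""
    try:
        return int(row["amount"])  # 'amount' is an assumed column name
    except (KeyError, ValueError):
        logger.warning("Bad or missing 'amount' in row: %r", row)
        return 0
```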

#### **3.2. Performance & Efficiency**

*   **Finding 5: Multiple Iterations Over Data.**
    *   **Issue:** The code first iterates through the file to `data.append(row)` and then iterates over `data` again to `processed_results.append(item)`. This involves creating an intermediate `data` list, which can be inefficient for very large files, as it requires storing all filtered rows in memory before processing.
    *   **Suggestion:** Combine the filtering and processing steps into a single loop over the CSV reader. This allows for processing data as it's read, reducing memory footprint and improving performance for large datasets. Use generator expressions or list comprehensions for more concise and potentially efficient code.
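The single-pass idea can be sketched as a generator chain, assuming name-based rows: filtering and conversion happen as the file streams, so no intermediate list of filtered rows is ever materialized (`category` and `value` are assumed column names):

```python
import csv
from typing import Any, Dict, Iterator

def process_in_one_pass(filepath: str, filter_value: str) -> Iterator[Dict[str, Any]]:
    """Filter and convert while streaming the file; rows are never stored wholesale."""
    with open(filepath, newline='', encoding='utf-8') as f:
        reader = csv.DictReader(f)
        # Generator expression: rows are filtered lazily, one at a time
        matching = (row for row in reader if row.get("category") == filter_value)
        for row in matching:
            try:
                yield {**row, "value": int(row["value"])}  # convert in the same pass
            except (KeyError, ValueError):
                continue  # skip malformed rows
```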

#### **3.3. Error Handling & Robustness**

*   **Finding 6: Incomplete File Existence Check.**
    *   **Issue:** The code directly attempts to open the `filepath` without checking if the file actually exists, which could lead to a `FileNotFoundError`.
    *   **Suggestion:** Add a check for file existence using `os.path.exists()` and raise a custom error or handle it gracefully.
*   **Finding 7: Potential `IndexError` on `row[1]` or `item[2]` (only partially mitigated).**
    *   **Issue:** The `len(row) > 1` and `len(item) > 2` guards prevent a direct `IndexError`, but the handling of malformed rows is unclear: a row with fewer columns than expected triggers only a `print` statement and is silently dropped, so the function's return value gives no indication of why rows were skipped.
    *   **Suggestion:** Define an explicit strategy for malformed rows: skip, log, or raise an exception. If skipping, consider returning a tuple of `(processed_data, errors)` so callers can inspect failures. Looking up columns by name rather than by index also handles missing columns more gracefully than direct index access.
*   **Finding 8: Generic `ValueError` and `IndexError` Handling.**
    *   **Issue:** The `try-except` blocks are quite broad. While catching `ValueError` for `int()` conversion is good, the specific message printed might not be sufficient for debugging. The `IndexError` block is redundant if `len(item) > 2` is correctly checked.
    *   **Suggestion:** Be more specific with error messages. For `ValueError`, include the column name if possible. Consider if `IndexError` should lead to a different path or if the `len()` check is robust enough.

#### **3.4. Security Considerations**

*   **Finding 9: No Sanity Check on `filter_value`.**
    *   **Issue:** While less critical for this specific function, in scenarios where `filter_value` could come from external user input, it's good practice to sanitize or validate inputs to prevent potential injection attacks or unexpected behavior.
    *   **Suggestion:** For general best practice, consider adding input validation for `filter_value` if its source is untrusted, though for this specific CSV comparison, it's less of a direct security vulnerability.

---

### **4. Refactored & Improved Code**

Based on the analysis and suggestions above, here is the refactored version of the `process_data_from_file` function. This version aims for improved readability, efficiency, robustness, and adherence to best practices.

    import csv
    import logging
    import os
    from typing import List, Dict, Any, Tuple, Generator

    # Configure logging for better error management
    logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')


    class DataProcessingError(Exception):
        """Custom exception for data processing errors."""
        pass


    def read_csv_data(filepath: str) -> Generator[Dict[str, str], None, None]:
        """
        Reads a CSV file and yields rows as dictionaries, using the header as keys.

        Args:
            filepath: The path to the CSV file.

        Yields:
            A dictionary for each row, where keys are column headers.

        Raises:
            FileNotFoundError: If the specified file does not exist.
            DataProcessingError: If the CSV file is empty or malformed (no header).
        """
        if not os.path.exists(filepath):
            logging.error(f"File not found: {filepath}")
            raise FileNotFoundError(f"The file '{filepath}' does not exist.")

        with open(filepath, 'r', newline='', encoding='utf-8') as file:
            reader = csv.reader(file)
            try:
                header = [h.strip() for h in next(reader)]  # Read and clean header
                if not header:
                    raise DataProcessingError(f"CSV file '{filepath}' is empty or has no valid header.")
            except StopIteration:
                logging.warning(f"CSV file '{filepath}' is empty.")
                return  # Yield nothing if file is empty
            except Exception as e:
                logging.error(f"Error reading header from '{filepath}': {e}")
                raise DataProcessingError(f"Failed to read CSV header from '{filepath}': {e}") from e

            for i, row in enumerate(reader, start=2):  # Start counting from line 2 for data rows
                if len(row) != len(header):
                    logging.warning(f"Row {i} in '{filepath}' has {len(row)} columns, expected {len(header)}. Skipping row: {row}")
                    continue
                yield dict(zip(header, row))


    def filter_and_process_data(
        data_generator: Generator[Dict[str, str], None, None],
        filter_column: str,
        filter_value: str,
        process_column: str,
        target_type: type = int
    ) -> Tuple[List[Dict[str, Any]], List[Dict[str, Any]]]:
        """
        Filters and processes data from a generator of dictionaries.

        Args:
            data_generator: A generator yielding dictionaries (rows).
            filter_column: The name of the column to filter by.
            filter_value: The value to match in the filter_column.
            process_column: The name of the column to process (type convert).
            target_type: The target type for the process_column (e.g., int, float).

        Returns:
            A tuple containing:
            - A list of successfully processed data dictionaries.
            - A list of dictionaries representing rows that failed processing.
        """
        processed_results: List[Dict[str, Any]] = []
        failed_rows: List[Dict[str, Any]] = []

        for row_num, row_dict in enumerate(data_generator, start=1):
            # 1. Check if filter_column exists and matches filter_value
            if filter_column not in row_dict:
                logging.warning(f"Row {row_num}: Filter column '{filter_column}' not found. Skipping row: {row_dict}")
                failed_rows.append({"reason": f"Missing filter column '{filter_column}'", "data": row_dict})
                continue

            if row_dict[filter_column] != filter_value:
                continue  # Skip rows that don't match the filter

            # 2. Check if process_column exists and attempt type conversion
            if process_column not in row_dict:
                logging.warning(f"Row {row_num}: Process column '{process_column}' not found. Skipping row: {row_dict}")
                failed_rows.append({"reason": f"Missing process column '{process_column}'", "data": row_dict})
                continue

            try:
                # Create a mutable copy to modify
                processed_row = row_dict.copy()
                processed_row[process_column] = target_type(row_dict[process_column])
                processed_results.append(processed_row)
            except ValueError:
                logging.warning(f"Row {row_num}: Could not convert value '{row_dict[process_column]}' in column '{process_column}' to {target_type.__name__}. Skipping row: {row_dict}")
                failed_rows.append({"reason": f"Type conversion failed for column '{process_column}'", "data": row_dict})

        return processed_results, failed_rows


AI Code Review & Refactoring Report

Workflow Step: collab → ai_refactor

Description: Comprehensive code review with suggestions and refactoring opportunities, generated by AI.


1. Executive Summary

This report provides a comprehensive AI-driven code review, identifying key areas for improvement in terms of correctness, performance, security, maintainability, and adherence to best practices. Our analysis has highlighted specific refactoring opportunities designed to enhance the codebase's robustness, readability, and long-term viability. The aim is to empower your development team with actionable insights to elevate code quality and accelerate future development cycles.


2. Key Findings & Recommendations

The AI's review process covered a wide array of code quality metrics and potential issues. Below are the primary categories of findings:

2.1. Potential Bugs & Correctness Issues

  • Identified: Common patterns that often lead to bugs, such as off-by-one errors, unhandled edge cases, incorrect loop conditions, or improper variable initialization.
  • Recommendation:

* Thoroughly review identified code blocks for logic errors.

* Implement comprehensive unit and integration tests for critical paths.

* Consider property-based testing for complex algorithms to uncover edge cases.
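As an illustration of the testing recommendation (the function under test here is hypothetical), edge cases such as empty input and boundary values are exactly where off-by-one bugs hide:

```python
def last_n(items, n):
    """Return the last n items of a list.

    n <= 0 must be handled explicitly: items[-0:] is items[0:], which would
    silently return the WHOLE list -- a classic off-by-one trap.
    """
    if n <= 0:
        return []
    return items[-n:]

# Edge cases first: empty input, zero, boundary, and beyond-length values of n
assert last_n([], 3) == []
assert last_n([1, 2, 3], 0) == []
assert last_n([1, 2, 3], 1) == [3]
assert last_n([1, 2, 3], 3) == [1, 2, 3]
assert last_n([1, 2, 3], 10) == [1, 2, 3]
```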

2.2. Performance Bottlenecks

  • Identified: Inefficient algorithms, excessive database queries within loops, unoptimized data structures, redundant computations, or resource-heavy operations.
  • Recommendation:

* Profile critical sections of the application to confirm performance hotspots.

* Optimize data access patterns (e.g., batching database calls, indexing, caching).

* Refactor computationally intensive functions to use more efficient algorithms or parallel processing where appropriate.

2.3. Security Vulnerabilities

  • Identified: Common security risks such as SQL injection possibilities, cross-site scripting (XSS) vectors, insecure API key handling, improper input validation, or inadequate authentication/authorization checks.
  • Recommendation:

* Sanitize and validate all user inputs rigorously.

* Implement parameterized queries for database interactions.

* Ensure secure storage and retrieval of sensitive credentials.

* Regularly update dependencies to patch known vulnerabilities.

* Adhere to the principle of least privilege for access controls.
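The parameterized-query recommendation can be sketched with the standard-library `sqlite3` module; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# UNSAFE: string formatting splices input directly into the SQL text
# query = f"SELECT * FROM users WHERE name = '{user_input}'"

# SAFE: the driver binds the value; input is treated as data, never as SQL
user_input = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT id, name FROM users WHERE name = ?", (user_input,)
).fetchall()
# The injection attempt matches no row, because the whole string is compared literally
```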

2.4. Code Quality & Maintainability

  • Identified: High cyclomatic complexity, excessive code duplication (DRY principle violations), unclear variable/function names, large functions/classes, and lack of comments for complex logic.
  • Recommendation:

* Reduce Complexity: Break down large functions and classes into smaller, more focused units (Single Responsibility Principle).

* Eliminate Duplication: Abstract common logic into reusable functions or modules.

* Improve Readability: Adopt consistent naming conventions, add meaningful comments for non-obvious code, and ensure proper indentation and formatting.

* Modularize: Enhance separation of concerns to make components easier to understand, test, and maintain independently.

2.5. Adherence to Best Practices & Style Guides

  • Identified: Deviations from established language-specific best practices, inconsistent coding styles, and potential anti-patterns.
  • Recommendation:

* Integrate static analysis tools (linters, formatters) into the CI/CD pipeline to enforce consistent style and identify common pitfalls automatically.

* Review and align with recognized community standards (e.g., PEP 8 for Python, ESLint rules for JavaScript, etc.).

* Adopt design patterns where appropriate to solve recurring problems in a structured way.


3. Refactoring Opportunities

Based on the findings, the AI has identified specific areas ripe for refactoring to improve the overall architecture and long-term health of the codebase.

3.1. Structural Refactoring

  • Goal: Improve the organization and modularity of the code.
  • Opportunities:

* Extract Method/Function: Identify overly long methods/functions and extract logical sub-sections into new, smaller, well-named methods.

* Extract Class/Module: For classes or modules with too many responsibilities, identify distinct sets of responsibilities and move them into new classes/modules.

* Move Method/Field: Relocate methods or fields to the classes where they are most relevant, improving cohesion.

* Introduce Parameter Object: Replace long lists of parameters with a single parameter object for better readability and maintainability.
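The "Introduce Parameter Object" refactoring can be sketched with a dataclass; the shipping-quote domain and names below are purely illustrative:

```python
from dataclasses import dataclass

# Before: a long, easy-to-misorder positional parameter list
def quote_before(weight_kg, length_cm, width_cm, height_cm, express, insured):
    base = weight_kg * 2.0 + (length_cm * width_cm * height_cm) / 5000
    return base * (1.5 if express else 1.0) + (3.0 if insured else 0.0)

# After: one parameter object groups the related values under descriptive names
@dataclass
class ShipmentSpec:
    weight_kg: float
    length_cm: float
    width_cm: float
    height_cm: float
    express: bool = False
    insured: bool = False

def quote(spec: ShipmentSpec) -> float:
    base = spec.weight_kg * 2.0 + (spec.length_cm * spec.width_cm * spec.height_cm) / 5000
    return base * (1.5 if spec.express else 1.0) + (3.0 if spec.insured else 0.0)
```

Call sites now read `quote(ShipmentSpec(weight_kg=1.0, length_cm=10, width_cm=10, height_cm=10))`, and adding a new option no longer touches every caller.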

3.2. Design Pattern Application

  • Goal: Introduce proven solutions to common design problems, enhancing flexibility and extensibility.
  • Opportunities:

* Strategy Pattern: For conditional logic that varies frequently, consider replacing large if/else or switch statements with the Strategy pattern.

* Factory Method/Abstract Factory: To decouple object creation from client code, especially when dealing with multiple related product families.

* Observer Pattern: For scenarios requiring one-to-many dependency between objects, where changes in one object notify others.

* Decorator Pattern: To add responsibilities to objects dynamically without modifying their structure.
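The Strategy suggestion above can be sketched compactly in Python: the branch-per-case `if/elif` chain becomes a lookup of interchangeable callables (the discount names are invented):

```python
from typing import Callable, Dict

# Each strategy is a callable with the same signature
def no_discount(total: float) -> float:
    return total

def percent_off_10(total: float) -> float:
    return total * 0.9

def flat_5_over_50(total: float) -> float:
    return total - 5 if total >= 50 else total

# Replaces an if/elif chain: adding a strategy is one dict entry, not a new branch
DISCOUNTS: Dict[str, Callable[[float], float]] = {
    "none": no_discount,
    "loyalty": percent_off_10,
    "coupon": flat_5_over_50,
}

def checkout(total: float, discount: str = "none") -> float:
    return DISCOUNTS[discount](total)
```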

3.3. Dependency Management & Inversion of Control

  • Goal: Reduce tight coupling between components and improve testability.
  • Opportunities:

* Dependency Injection: Refactor hard-coded dependencies to be injected, allowing for easier testing and greater flexibility in component configuration.

* Interface-based Programming: Program to interfaces rather than concrete implementations to facilitate easier swapping of components.


4. Detailed Suggestions & Examples (Illustrative)

(Note: Without specific code, this section outlines the type of detailed suggestions and refactored examples that would be provided.)

For each identified issue, the report would include:

  • Original Code Snippet:

    # Example of a highly coupled function
    def process_order_and_notify(order_id):
        order = get_order_from_db(order_id) # Direct DB dependency
        if order.status == 'PENDING':
            # Complex logic for processing
            result = calculate_shipping(order.items)
            # ... more processing ...
            send_email_notification(order.customer_email, "Order Processed") # Direct email dependency
            update_order_status_in_db(order_id, 'PROCESSED')
            return "Order processed and customer notified."
        return "Order not processed."
  • AI Analysis: "This function violates the Single Responsibility Principle by handling order retrieval, processing logic, shipping calculation, email notification, and database updates. It also has tight coupling to specific database and email notification implementations, making it hard to test and maintain."
  • Refactored Code Snippet (Suggested):

    # Example of refactored, more modular code
    class OrderService:
        def __init__(self, order_repo, notification_service):
            self.order_repo = order_repo
            self.notification_service = notification_service

        def process_order(self, order_id):
            order = self.order_repo.get_order_by_id(order_id)
            if order and order.status == 'PENDING':
                # Delegate complex processing
                processed_order = self._execute_processing_logic(order)
                self.order_repo.update_order_status(processed_order.id, 'PROCESSED')
                self.notification_service.send_order_processed_notification(processed_order.customer_email)
                return "Order processed and customer notified."
            return "Order not processed or already processed."

        def _execute_processing_logic(self, order):
            # Extracted complex logic into a private helper or a separate service
            shipping_cost = ShippingCalculator.calculate(order.items)
            # ... other processing steps ...
            order.add_shipping_cost(shipping_cost)
            return order

    # Usage with dependency injection
    # order_repository = DatabaseOrderRepository()
    # email_notifier = EmailNotificationService()
    # order_service = OrderService(order_repository, email_notifier)
    # order_service.process_order(123)
  • Justification & Impact: "The refactored code separates concerns into OrderService, OrderRepository, NotificationService, and ShippingCalculator. This enhances modularity, improves testability through dependency injection, reduces complexity within a single function, and makes the system more flexible to changes in database or notification implementations."

5. Next Steps & Continuous Improvement

To fully leverage this AI-driven review:

  1. Prioritize: Review the findings and prioritize issues based on severity (security, correctness) and impact (performance, maintainability).
  2. Implement: Integrate the suggested refactoring and fixes into your development roadmap.
  3. Automate: Incorporate static analysis tools, linters, and code formatters into your CI/CD pipeline to automatically catch many of these issues in the future.
  4. Educate: Use this report as a learning opportunity for the development team to understand common pitfalls and best practices.
  5. Re-evaluate: Regularly re-run AI code reviews, especially after significant changes or before major releases, to ensure sustained code quality.

6. Disclaimer

This report is generated by an Artificial Intelligence system. While it provides comprehensive and actionable insights based on best practices and common patterns, it should be used as a valuable tool to assist human developers, not replace their critical judgment. Human review remains essential to fully understand context, business requirements, and nuanced design decisions.
