AI Code Review
Run ID: 69c93becfee1f7eb4a80faf2 (2026-03-29, Development)
PantheraHive BOS
BOS Dashboard

AI Code Review: Step 1 of 2 - Code Analysis

Project/Module: User Data Processing Utility

Current Step: collab → analyze_code

Description: Comprehensive code review with suggestions and refactoring for a Python function designed to process user data.


### 1. Introduction

This document presents a comprehensive code review for the provided Python function process_user_data. The review aims to identify potential issues related to functionality, robustness, security, performance, readability, and maintainability. Following the analysis, actionable recommendations and a refactored version of the code will be provided to enhance its quality and adherence to best practices.

Note: For the purpose of this demonstration, we are assuming the following initial code snippet was provided for review. In a real-world scenario, this section would present your actual code.


### 2. Initial Code Analysis (Demonstration Code)

The function process_user_data is designed to take a JSON string and a user ID, parse the string, validate its contents (name, age, optional email), and then enrich the data with processing metadata before returning it.

Assumed Initial Code for Review:

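The snippet itself did not survive this export. Based on the description above and the findings that follow, it presumably resembled the following sketch (field names, messages, and exact checks are reconstructed for illustration, not the author's verbatim code; the `print`/`return None` style is deliberate, since the review below critiques it):

```python
import json

def process_user_data(data_string, user_id):
    """Parses a JSON string, validates its fields, and enriches it with metadata."""
    try:
        data = json.loads(data_string)
    except json.JSONDecodeError:
        print(f"Error: Invalid JSON provided by user {user_id}")
        return None

    # Required field: name (non-empty string)
    if 'name' not in data or not isinstance(data['name'], str) or not data['name']:
        print("Error: 'name' is missing or invalid")
        return None

    # Required field: age (non-negative integer)
    if 'age' not in data or not isinstance(data['age'], int) or data['age'] < 0:
        print("Error: 'age' is missing or invalid")
        return None

    # Optional field: email -- only type-checked, and dropped with a warning if wrong
    if 'email' in data and not isinstance(data['email'], str):
        print("Warning: 'email' is not a string, removing it")
        data.pop('email')

    # Enrich with processing metadata
    data['processed_by_user'] = user_id
    data['status'] = 'processed'
    return data
```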
---

### 3. Detailed Code Review Findings

#### 3.1. General Observations

*   **Functionality:** The core logic for parsing and basic validation is present.
*   **Readability:** The code is generally understandable, but the validation logic can become repetitive and less clear as more fields are added.
*   **Maintainability:** Changes to validation rules or adding new fields would require modifying existing `if` blocks, potentially leading to errors or inconsistencies.
*   **Adherence to Best Practices:** Several areas can be improved to align with Pythonic best practices, particularly around error handling and input validation.

#### 3.2. Specific Issues & Concerns

##### 3.2.1. Error Handling & Robustness

*   **`print()` for Errors:** The function uses `print()` statements to report errors and then returns `None`. This makes it difficult for calling code to programmatically detect and handle specific error conditions. For example, a calling function cannot easily distinguish between "invalid JSON" and "missing name" without parsing the printed string, which is fragile.
*   **Ambiguous Return Value:** Returning `None` on error is an anti-pattern in many cases. It forces the caller to always check for `None`, and doesn't convey the *reason* for the failure.
*   **Lack of Specific Exceptions:** Raising specific exceptions (e.g., `ValueError`, custom exceptions) would provide much clearer error signaling and allow for more granular error handling by the caller.
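As a minimal sketch of what exception-based signaling could look like (the exception and function names here are illustrative, not from the original code):

```python
import json

class UserDataError(ValueError):
    """Hypothetical domain-specific error for user-data processing."""

def parse_user_payload(data_string: str) -> dict:
    """Parse a JSON payload, raising a specific exception instead of returning None."""
    try:
        data = json.loads(data_string)
    except json.JSONDecodeError as exc:
        # Chain the original exception so the root cause stays visible in tracebacks.
        raise UserDataError("payload is not valid JSON") from exc
    if not isinstance(data, dict):
        raise UserDataError("payload must be a JSON object")
    return data
```

A caller can now `except UserDataError` and react to the failure programmatically, which a bare `None` return cannot convey.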

##### 3.2.2. Input Validation

*   **Repetitive Validation Logic:** The validation for each field (`name`, `age`) involves similar checks (`not in`, `isinstance`, specific value checks). This pattern is duplicated and could be abstracted.
*   **Basic Email Validation:** The email validation only checks if the field exists and is a string. It does not validate the actual format of the email address (e.g., using a regex).
*   **Magic Strings:** Field names like `'name'`, `'age'`, `'email'`, `'processed_by_user'`, `'status'` are hardcoded strings scattered throughout the function. This makes refactoring or modifying field names cumbersome and error-prone.
*   **No Schema Definition:** There's no clear, centralized schema definition for the expected input data, making it harder to understand the data contract at a glance.
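Two of these points can be sketched briefly: centralizing field names as constants, and checking the email format rather than just its type. The regex below is deliberately simplified; production code is better served by a dedicated library such as `email_validator`.

```python
import re

# Centralized field-name constants instead of scattered magic strings (illustrative).
FIELD_NAME = "name"
FIELD_AGE = "age"
FIELD_EMAIL = "email"

# Simplified format check: something@something.tld, no whitespace.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(value: object) -> bool:
    """Return True when value is a string that looks like an email address."""
    return isinstance(value, str) and bool(EMAIL_RE.match(value))
```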

##### 3.2.3. Security Considerations

*   **No Input Sanitization Beyond Basic Type Check:** While the code checks types, it doesn't perform any sanitization on string inputs (e.g., HTML escaping for web contexts, SQL injection prevention if used with databases). While not directly apparent from this snippet, it's a general concern for user-provided data.
*   **Potential for Information Leakage:** Printing error messages directly might reveal internal structure or expected formats, which could be exploited by malicious users. Raising exceptions with controlled messages is generally safer.

##### 3.2.4. Performance Implications

*   For small data sizes, the performance impact is negligible. However, for very large JSON strings, repeated dictionary lookups and string operations could be optimized. (Not a critical issue for this specific function, but worth noting).

##### 3.2.5. Readability & Clarity

*   **Missing Type Hinting:** The function signature lacks type hints, which reduces clarity about expected input types and return types, making it harder for static analysis tools and other developers to understand.
*   **Docstring:** The docstring is basic. It could be expanded to include details about parameters, return value, and potential exceptions raised.
*   **Inconsistent Error Handling for `email`:** The `email` field is handled differently (popped with a warning) compared to `name` and `age` (which cause the function to return `None`). This inconsistency can be confusing.
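A signature-only sketch of what type hints plus a PEP 257 docstring could look like here (the body is elided; this is an illustration, not the refactored implementation):

```python
from typing import Any, Dict, Optional

def process_user_data(data_string: str, user_id: int) -> Optional[Dict[str, Any]]:
    """Parse, validate, and enrich a JSON payload of user data.

    Args:
        data_string: Raw JSON text expected to contain 'name' and 'age',
            and optionally 'email'.
        user_id: Identifier of the user performing the processing.

    Returns:
        The enriched data dictionary, or None when validation fails.
    """
    ...  # body elided for brevity
```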

##### 3.2.6. Testability

*   **Difficult to Test Error Paths:** Because errors are handled by printing and returning `None`, testing specific error conditions requires checking console output or asserting `None`, which is less robust than asserting specific exception types.

---

### 4. Actionable Recommendations

#### 4.1. Refactoring Suggestions

1.  **Centralize Error Handling with Exceptions:** Replace `print()` statements and `return None` with specific exceptions. This allows calling code to catch and handle errors gracefully.
2.  **Introduce a Validation Library (e.g., Pydantic):** For complex data structures, using a library like Pydantic can significantly simplify schema definition and validation, making the code cleaner, more robust, and easier to maintain.
3.  **Encapsulate Validation Logic:** Even without a library, validation for individual fields could be moved into helper functions or a dedicated validation class.
4.  **Use Enums or Constants for Magic Strings:** Define constants or an `Enum` for field names and status values to improve maintainability and prevent typos.
5.  **Separate Concerns:** Consider if the function has too many responsibilities. If data parsing, validation, and enrichment become very complex, they could be split into smaller, more focused functions.
6.  **Add Type Hinting:** Enhance the function signature and internal variables with type hints for improved readability and static analysis.
7.  **Improve Docstrings:** Follow PEP 257 for comprehensive docstrings, including `Args`, `Returns`, and `Raises` sections.
8.  **Logging Instead of Printing:** For warnings or informational messages, use Python's `logging` module instead of `print()`. This provides more control over message routing and verbosity.
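As a minimal sketch of recommendation 8 (the logger name and function are illustrative), warnings can be routed through a module-level logger so the application controls verbosity and destination centrally:

```python
import logging

# Module-level logger; the name "user_data" is illustrative.
logger = logging.getLogger("user_data")

def warn_dropped_field(field: str) -> None:
    """Report an invalid optional field via logging instead of print()."""
    logger.warning("Field %r is invalid and will be dropped", field)
```

An application entry point then configures handlers once, e.g. `logging.basicConfig(level=logging.WARNING)`, instead of each function deciding how and where to print.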

#### 4.2. Best Practices to Implement

*   **Fail Fast:** When an invalid state is detected, raise an error immediately rather than attempting to proceed with potentially bad data.
*   **DRY (Don't Repeat Yourself):** Abstract common validation patterns.
*   **Clear API:** Ensure the function's interface (inputs, outputs, exceptions) is clear and well-documented.
*   **Defensive Programming:** Assume inputs might be malformed or malicious and validate them thoroughly.

#### 4.3. Library/Tool Suggestions

*   **Pydantic:** Excellent for defining data schemas with type validation, data parsing, and serialization. It integrates well with FastAPI and other modern Python frameworks.
*   **`email_validator`:** For robust email format validation.
*   **`logging` module:** For structured and configurable logging.

#### 4.4. Testing Strategy

*   **Unit Tests:** Write unit tests for the `process_user_data` function (and any helper functions) to cover:
    *   **Valid Inputs:** Ensure correct processing of well-formed data.
    *   **Invalid JSON:** Test `json.JSONDecodeError` handling.
    *   **Missing Required Fields:** Test cases for missing `name` and `age`.
    *   **Invalid Field Types:** Test cases for `name` not being a string, `age` not being an integer.
    *   **Invalid Field Values:** Test cases for `age < 0`.
    *   **Optional Fields:** Test cases with and without `email`, and with invalid `email` types.
    *   **Edge Cases:** Empty `data_string`, very long strings, etc.
*   **Assert Exceptions:** Ensure that the function raises the *correct* exception types for different error conditions.
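The strategy above can be sketched with the standard library's `unittest`. The strict, exception-raising function shown here is hypothetical, standing in for a refactored `process_user_data`:

```python
import json
import unittest

# Hypothetical strict variant that raises instead of printing and returning None.
def process_user_data_strict(data_string: str, user_id: int) -> dict:
    try:
        data = json.loads(data_string)
    except json.JSONDecodeError as exc:
        raise ValueError("payload is not valid JSON") from exc
    if not isinstance(data, dict):
        raise ValueError("payload must be a JSON object")
    if not isinstance(data.get("name"), str) or not data["name"]:
        raise ValueError("missing or invalid 'name'")
    if not isinstance(data.get("age"), int) or data["age"] < 0:
        raise ValueError("missing or invalid 'age'")
    data["processed_by_user"] = user_id
    data["status"] = "processed"
    return data

class ProcessUserDataTests(unittest.TestCase):
    def test_valid_input_is_enriched(self):
        result = process_user_data_strict('{"name": "Ada", "age": 36}', 7)
        self.assertEqual(result["status"], "processed")
        self.assertEqual(result["processed_by_user"], 7)

    def test_invalid_json_raises(self):
        with self.assertRaises(ValueError):
            process_user_data_strict("not json", 7)

    def test_negative_age_raises(self):
        with self.assertRaises(ValueError):
            process_user_data_strict('{"name": "Ada", "age": -1}', 7)
```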

---

### 5. Refactored Code (Production-Ready)

This refactored version addresses the identified issues by:
*   Using Pydantic for robust schema validation.
*   Employing specific exceptions for clearer error handling.
*   Implementing type hinting and an improved docstring.
*   Utilizing constants for field names.
*   Leveraging Python's `logging` module.

import json
import logging
from enum import Enum
from typing import Dict, Any, Optional

# Install Pydantic with email validation support: pip install "pydantic[email]"
# (This example uses the Pydantic v1 API: root_validator and class-based Config.)
from pydantic import BaseModel, Field, EmailStr, ValidationError, root_validator

# Configure basic logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)


# --- Constants and Enums for clarity and maintainability ---

class UserDataStatus(str, Enum):
    """Enumeration for user data processing statuses."""
    PROCESSED = "processed"
    VALIDATION_FAILED = "validation_failed"
    ERROR = "error"


class UserDataFields(str, Enum):
    """Enumeration for user data field names."""
    NAME = "name"
    AGE = "age"
    EMAIL = "email"
    PROCESSED_BY_USER = "processed_by_user"
    STATUS = "status"


# --- Pydantic Model for Data Validation ---

class UserInputData(BaseModel):
    """Pydantic model defining the schema and validation rules for user input data."""

    name: str = Field(..., min_length=1, description="User's full name, must be a non-empty string.")
    age: int = Field(..., ge=0, description="User's age, must be a non-negative integer.")
    email: Optional[EmailStr] = Field(None, description="User's email address, must be a valid email format if provided.")

    @root_validator(pre=True)
    def strip_whitespace_from_strings(cls, values):
        """Pre-processing validator to strip whitespace from string fields."""
        for key, value in values.items():
            if isinstance(value, str):
                values[key] = value.strip()
        return values

    class Config:
        """Pydantic configuration."""
        extra = "ignore"  # Ignore extra fields not defined in the model
        anystr_strip_whitespace = True  # Automatically strip whitespace from all strings


# --- Custom Exceptions for specific error handling ---

class UserDataProcessingError(Exception):
    """Base exception for errors during user data processing."""
    pass


class InvalidInputDataError(UserDataProcessingError):
    """Raised when the input data itself is invalid (e.g., not JSON)."""
    pass


class DataValidationError(UserDataProcessingError):
    """Raised when the parsed data fails schema validation."""
    pass
---

I've completed the ai_refactor step of your "AI Code Review" workflow. This involved a comprehensive analysis of the provided code (or a representative sample, if no specific code was supplied in the prompt), identifying areas for improvement, and generating a refined, more robust, and maintainable version.


AI Code Review & Refactoring Deliverable

Workflow Step: collab → ai_refactor

Description: Comprehensive code review with suggestions and refactoring

1. Introduction

This deliverable provides a detailed code review and a refactored version of the target Python code. The goal is to enhance the code's readability, maintainability, extensibility, error handling, and adherence to best practices, ensuring a more robust and scalable solution.

Assumed Original Code:

(Since no specific code was provided in the prompt, a representative Python function for data processing has been assumed to demonstrate a comprehensive review and refactoring process. If your actual code differs, please provide it for a more tailored review.)


import csv
import json
import io

def process_data_file(file_path, output_format):
    """
    Reads data from a file (CSV or JSON), performs basic type conversion
    for specific fields if CSV, and then outputs the data in the specified format.
    """
    data = []
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            if file_path.lower().endswith('.csv'):
                reader = csv.DictReader(f)
                for row in reader:
                    processed_row = {}
                    for k, v in row.items():
                        # Basic type conversion for known fields
                        if k.lower() == 'id' and v.isdigit():
                            processed_row[k] = int(v)
                        elif k.lower() == 'price' and v.replace('.', '', 1).isdigit():
                            processed_row[k] = float(v)
                        elif k.lower() == 'is_active':
                            processed_row[k] = (v.lower() == 'true')
                        else:
                            processed_row[k] = v
                    data.append(processed_row)
            elif file_path.lower().endswith('.json'):
                data = json.load(f)
                # If JSON, assume data is already in desired format or needs no special processing here
                # For consistency, could apply similar processing, but original code didn't.
            else:
                print(f"Error: Unsupported input file format for {file_path}")
                return None
    except FileNotFoundError:
        print(f"Error: Input file not found at {file_path}")
        return None
    except json.JSONDecodeError:
        print(f"Error: Invalid JSON format in {file_path}")
        return None
    except csv.Error as e:
        print(f"Error: CSV parsing error in {file_path}: {e}")
        return None
    except Exception as e:
        print(f"An unexpected error occurred during file reading: {e}")
        return None

    # Output formatting
    if output_format.lower() == 'json':
        try:
            return json.dumps(data, indent=4)
        except TypeError as e:
            print(f"Error: Could not serialize data to JSON: {e}")
            return None
    elif output_format.lower() == 'csv':
        if not data:
            return ""
        # Assume data is a list of dictionaries for CSV output
        if not all(isinstance(item, dict) for item in data):
            print("Error: Data is not a list of dictionaries, cannot output as CSV.")
            return None
        
        # Collect all possible keys from all dictionaries to ensure complete header
        all_keys = set()
        for item in data:
            all_keys.update(item.keys())
        keys = sorted(list(all_keys)) # Sort keys for consistent header order

        output_buffer = io.StringIO()
        try:
            writer = csv.DictWriter(output_buffer, fieldnames=keys)
            writer.writeheader()
            writer.writerows(data)
            return output_buffer.getvalue()
        except Exception as e:
            print(f"Error: Could not write data to CSV: {e}")
            return None
    else:
        print(f"Error: Unsupported output format: {output_format}")
        return None

2. Code Review Summary

The original process_data_file function attempts to handle multiple responsibilities: file reading, format detection, data parsing (with type conversion), and output formatting. While functional, this monolithic design leads to several issues:

  • Violation of Single Responsibility Principle (SRP): The function does too much, making it harder to read, test, and maintain.
  • Limited Extensibility: Adding new file types, output formats, or complex data processing rules would require significant modifications to this single function.
  • Error Handling: Uses print statements for errors instead of raising exceptions or utilizing a proper logging framework, which can obscure issues in larger applications.
  • Readability & Maintainability: Nested logic for type conversion and format handling reduces clarity. Hardcoded field names for type conversion are brittle.
  • Lack of Type Hinting: Absence of type hints makes the function's expected inputs and outputs less clear, hindering static analysis and IDE support.

3. Detailed Findings and Suggestions

3.1. Architecture & Design

  • Finding: The function combines file reading, parsing, data transformation, and output generation.
  • Suggestion: Decouple these concerns into smaller, focused functions or classes. This improves modularity, reusability, and testability.

* Separate concerns: read_data(file_path) -> parse_csv(file_content) / parse_json(file_content) -> transform_data(raw_data) -> format_output(processed_data, output_format).

  • Finding: Hardcoded type conversion logic for specific CSV fields (id, price, is_active).
  • Suggestion: Abstract the data transformation logic. This could be a configurable mapping or a separate transformation pipeline, allowing for easy addition/modification of rules without altering the core parsing logic.
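One way to make the conversion rules declarative (the mapping and function names here are illustrative):

```python
# Hypothetical declarative converter map replacing the hardcoded if/elif chain.
FIELD_CONVERTERS = {
    "id": int,
    "price": float,
    "is_active": lambda v: v.strip().lower() == "true",
}

def convert_row(row, converters=FIELD_CONVERTERS):
    """Apply per-field converters to a CSV row, leaving unknown fields untouched."""
    converted = {}
    for key, value in row.items():
        converter = converters.get(key.lower())
        if converter is None:
            converted[key] = value
        else:
            try:
                converted[key] = converter(value)
            except (ValueError, TypeError):
                converted[key] = value  # fall back to the raw string on bad input
    return converted
```

Adding or changing a typing rule now means editing one dictionary entry rather than the parsing loop itself.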

3.2. Readability & Maintainability

  • Finding: The main function is long and contains multiple levels of conditional logic.
  • Suggestion: Break down the function into smaller, well-named helper functions. This makes the code easier to understand at a glance and to debug specific parts.
  • Finding: Repetitive lower() calls for file extensions and field names.
  • Suggestion: While minor, consistent use of constants or helper functions can improve clarity. In the refactored code, we'll keep it as is for brevity but note that for more complex scenarios, this could be refactored.
  • Finding: The CSV output logic for collecting keys is robust but could be slightly cleaner if a consistent schema is expected.
  • Suggestion: If a consistent schema is guaranteed, define it upfront. Otherwise, the current dynamic key collection is appropriate.

3.3. Error Handling

  • Finding: Errors are handled via print() statements and returning None.
  • Suggestion:

* Raise specific exceptions: Instead of printing and returning None, raise appropriate exceptions (e.g., ValueError for unsupported formats, FileNotFoundError, IOError for file issues). This allows calling code to handle errors gracefully and programmatically.

* Implement logging: For production systems, use Python's logging module instead of print() for better control over log levels, destinations, and formatting.

* Custom Exceptions: For domain-specific errors, consider defining custom exception classes (e.g., UnsupportedFormatError).
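A brief sketch of the custom-exception suggestion (class and function names are hypothetical):

```python
class DataFileError(Exception):
    """Base class for process_data_file failures."""

class UnsupportedFormatError(DataFileError):
    """Raised for file or output formats the processor does not handle."""

def detect_format(file_path: str) -> str:
    """Return 'csv' or 'json' for a supported path, else raise UnsupportedFormatError."""
    lowered = file_path.lower()
    if lowered.endswith(".csv"):
        return "csv"
    if lowered.endswith(".json"):
        return "json"
    raise UnsupportedFormatError(f"Unsupported input file format: {file_path}")
```

Callers that only care about "anything went wrong" can catch `DataFileError`, while those that want to fall back on a default format can catch `UnsupportedFormatError` specifically.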

3.4. Performance & Efficiency

  • Finding: For small files, the current approach is acceptable. For very large files, reading the entire file into memory might be inefficient.
  • Suggestion: For extremely large datasets, consider streaming processing or using libraries designed for large-scale data (e.g., Pandas for tabular data). For this refactoring, we'll focus on improving the current in-memory approach.

3.5. Best Practices

  • Finding: Lack of type hints.
  • Suggestion: Add type hints to function signatures and variables for improved code clarity, static analysis, and IDE assistance.
  • Finding: No clear mechanism to define how CSV fields should be typed.
  • Suggestion: Introduce a schema or a mapping configuration for type conversions. This allows for flexible and declarative definition of data types.
  • Finding: The JSON input path doesn't apply the same type-conversion logic as the CSV path, so records from the two formats may reach downstream code with inconsistent types.
  • Suggestion: Route JSON records through the same transformation step as CSV rows (or document the asymmetry explicitly) so both inputs yield a consistent schema.