Data Migration Planner

This document outlines the comprehensive code and framework for a data migration plan, addressing field mapping, transformation rules, validation scripts, rollback procedures, and timeline estimates. This output serves as a detailed blueprint, providing actionable, production-ready code examples and explanations to guide your data migration project.


Data Migration Planner: Step 3 - Code Generation (Test Run)

This deliverable provides the foundational code and structural frameworks for your data migration, based on the "test run" request. The generated code is designed to be modular, well-commented, and adaptable to your specific source and target systems. It covers the critical aspects of data migration: defining mappings, applying transformations, ensuring data quality through validation, establishing recovery procedures, and structuring project timelines.

1. Data Migration Planning Code Overview

The following sections detail the code components for each phase of the data migration. We utilize Python for its robustness in data handling and scripting, leveraging common libraries like pandas for data manipulation and validation.

Key Principles:

  • Modularity: each phase (mapping, transformation, validation, rollback) is a separate, reusable component.
  • Readability: code is well-commented and adaptable to your specific source and target systems.
  • Data quality: validation is applied both before extraction and after loading.
  • Recoverability: every step is planned with a rollback path in mind.


2. Field Mapping Definition

Field mapping is the cornerstone of any data migration, explicitly defining how data moves from source to target. This section provides a Python-based structure to define these mappings, including data type considerations and links to transformation rules.

migration_config/field_mappings.py

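A minimal sketch of one possible mapping structure (the field names here are illustrative assumptions; the `transformation_rule_id` values match the rules defined in `data_transformations.py`):

```python
# Illustrative field-mapping structure. Each entry maps a source column to a
# target column, records the expected data types, and optionally names a
# transformation rule to apply during migration.
FIELD_MAPPINGS = [
    {
        "source_field": "cust_id",          # assumption: legacy integer key
        "target_field": "customer_id",
        "source_type": "int",
        "target_type": "str",
        "transformation_rule_id": "to_string_prefix_C",
    },
    {
        "source_field": "email_addr",
        "target_field": "email",
        "source_type": "str",
        "target_type": "str",
        "transformation_rule_id": "validate_email_format",
    },
    {
        "source_field": "status_id",
        "target_field": "customer_status",
        "source_type": "int",
        "target_type": "str",
        "transformation_rule_id": "map_status_id_to_code",
    },
]
```

A list of dictionaries keeps the mapping declarative: the migration driver can iterate over it, look up each rule by ID, and apply the corresponding transformation without hard-coding field logic.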
---

3. Data Transformation Rules

Data transformation involves converting data from its source format to the target system's required format. This section provides a Python module with various transformation functions, linked by `transformation_rule_id` from the field mappings.

migration_config/data_transformations.py


Data Migration Planner: Initial Analysis & Discovery (Step 1 of 4)

Project Context:

This document outlines the initial analysis for planning a comprehensive data migration. The goal is to establish a robust framework covering field mapping, transformation rules, validation scripts, rollback procedures, and timeline estimates. This "test run" serves as a foundational step to define the scope and methodology for a successful data migration initiative.


1. Executive Summary

This initial analysis, prompted by a "test run" request, confirms the readiness to proceed with a detailed data migration planning phase. While specific data points are yet to be gathered, this phase focuses on outlining the critical areas of investigation, identifying key stakeholders, and establishing a structured approach to ensure a thorough and successful migration. The primary output of this step is a clear roadmap for data discovery, system assessment, and stakeholder engagement, leading to the subsequent detailed design and execution phases.


2. Scope of Initial Analysis (Collab → Analyze)

Given the "test run" input, this analysis focuses on defining the framework and prerequisites for a detailed data migration plan, rather than presenting concrete data points. It encompasses:

  • Understanding the Macro Landscape: Identifying the types of information required for a comprehensive migration plan.
  • Methodology for Data Gathering: Proposing structured approaches for collecting necessary details.
  • Anticipating Key Challenges: Highlighting common hurdles in data migration projects.
  • Laying the Foundation for Collaboration: Defining roles and necessary inputs from various teams.

3. Key Areas for Detailed Data Migration Analysis

To develop a complete data migration plan, the following critical areas will require in-depth investigation and collaboration:

3.1. Source System Analysis

  • System Identification: Pinpoint all source systems involved (e.g., legacy databases, CRM, ERP, flat files, APIs).
  • Data Models & Schemas: Obtain complete schema definitions, entity-relationship diagrams, and data dictionaries.
  • Data Volume & Velocity:
      ◦ Estimate total data volume (GB/TB).
      ◦ Assess data growth rate and transaction velocity.
      ◦ Identify "hot" vs. "cold" data.

  • Data Types & Formats: Document all data types, character sets, and specific formatting rules.
  • Data Quality Assessment:
      ◦ Identify known data quality issues (duplicates, incompleteness, inconsistencies, invalid values).
      ◦ Assess existing data validation rules and constraints.

  • Access & Connectivity: Document available APIs, database connections, and security protocols for data extraction.
  • Dependencies: Identify interdependencies between data sets and source applications.
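The data quality assessment above can begin with a lightweight profiling pass. A minimal sketch, assuming source rows are available as a list of dicts (`records` and `key_field` are illustrative names):

```python
from collections import Counter

def profile_quality(records, key_field):
    """Quick source-data quality profile over a list of dict records:
    flags duplicate keys and counts missing values per field."""
    # Collect non-null keys and flag any that occur more than once.
    keys = [r.get(key_field) for r in records if r.get(key_field) is not None]
    duplicate_keys = sorted(k for k, n in Counter(keys).items() if n > 1)

    # Count None / empty-string values per field across all records.
    missing_by_field = {}
    for rec in records:
        for field, value in rec.items():
            if value in (None, ""):
                missing_by_field[field] = missing_by_field.get(field, 0) + 1

    return {"duplicate_keys": duplicate_keys, "missing_by_field": missing_by_field}
```

Running this against an extract early in the project gives concrete numbers for the "known data quality issues" discussion with source system owners.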

3.2. Target System Analysis

  • System Identification: Pinpoint the target system(s) (e.g., new database, cloud platform, data warehouse).
  • Target Data Model & Schema: Obtain complete schema definitions, entity-relationship diagrams, and data dictionaries for the desired state.
  • Data Requirements: Understand the specific data requirements, normalization rules, and performance expectations of the target system.
  • Integration Points: Identify other systems that will integrate with the new target system post-migration.
  • Access & Connectivity: Document available APIs, database connections, and security protocols for data loading.
  • Performance Benchmarks: Understand target system performance characteristics for data ingestion.

3.3. Data Mapping & Transformation Requirements

  • Field-Level Mapping: Define precise mapping from each source field to its corresponding target field.
  • Transformation Rules:
      ◦ Data Type Conversions: e.g., String to Integer, Date format changes.
      ◦ Data Aggregation/Splitting: Combining multiple source fields into one target field, or splitting one source field into multiple.
      ◦ Value Lookups/Translations: Mapping source codes to target codes (e.g., old status codes to new ones).
      ◦ Derivations: Calculating new values based on source data.
      ◦ Data Cleansing: Rules for handling nulls, duplicates, invalid characters, or out-of-range values.

  • Business Rules: Document all relevant business rules that govern data integrity and relationships in both source and target.
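As a concrete illustration of a splitting rule from the list above, a minimal sketch (the "First Last" format is an illustrative assumption; real name handling needs rules agreed with business users):

```python
def split_full_name(full_name):
    """Split a single 'full name' source field into first/last target fields.
    Assumes 'First Last' format: everything after the first space becomes
    the last name. Returns (None, None) for empty or missing input."""
    if not full_name or not full_name.strip():
        return None, None
    parts = full_name.strip().split(" ", 1)
    first = parts[0]
    last = parts[1] if len(parts) > 1 else ""
    return first, last
```

Each such rule should be documented alongside its field mapping so reviewers can validate the business logic, not just the code.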

3.4. Validation & Quality Assurance

  • Pre-Migration Validation: Scripts/processes to assess source data quality before extraction.
  • Post-Migration Validation:
      ◦ Record Count Verification: Ensure all records are migrated.
      ◦ Data Integrity Checks: Validate referential integrity, unique constraints, and business rules in the target.
      ◦ Data Accuracy Checks: Sample-based comparison of source vs. target data for critical fields.
      ◦ Performance Validation: Test query performance and application responsiveness on migrated data.

  • Error Handling Strategy: Define how migration errors will be logged, reported, and remediated.
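The record-count and completeness checks above can be sketched in a few lines. This is a minimal illustration over lists of dict records (function and field names are assumptions; production checks would add referential integrity and sampled value comparisons):

```python
def validate_migration(source_records, target_records, mandatory_fields):
    """Post-migration checks: record-count match and mandatory-field
    completeness in the target. Returns a small report dict."""
    report = {
        "record_count_match": len(source_records) == len(target_records),
        "missing_mandatory": {},
    }
    # Count target records where a mandatory field is None or empty.
    for rec in target_records:
        for field in mandatory_fields:
            if rec.get(field) in (None, ""):
                report["missing_mandatory"][field] = (
                    report["missing_mandatory"].get(field, 0) + 1
                )
    return report
```

Feeding the report into the error-handling strategy (log, alert, remediate) closes the validation loop.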

3.5. Rollback & Contingency Planning

  • Rollback Strategy:
      ◦ Define clear criteria for when a rollback is necessary.
      ◦ Outline procedures for restoring the target system to its pre-migration state.
      ◦ Plan for data archival or backup of the source system before migration.

  • Contingency Plans: Address potential issues like system outages, data corruption, or unexpected delays.
  • Communication Plan: Establish protocols for communicating status, issues, and decisions during migration.

3.6. Timeline & Resource Estimation

  • Phased Approach: Propose a phased migration strategy (e.g., pilot, incremental, big bang).
  • Resource Requirements: Identify required personnel (DBAs, developers, QA, business analysts) and their roles.
  • Tooling Assessment: Evaluate potential migration tools, ETL platforms, or custom scripting needs.
  • Risk Assessment: Identify potential risks (technical, operational, business) and mitigation strategies.

4. Initial Insights & Recommendations

Based on the "test run" and the standard data migration planning process:

  • Prioritize Stakeholder Engagement: The success of this migration hinges on early and continuous collaboration with business users, IT operations, development teams, and data owners. Their input is crucial for accurate mapping, defining transformation rules, and validating the migrated data.
  • Start with Data Discovery Workshops: Initiate focused workshops with source and target system experts to gather detailed schema information, data dictionaries, and known data quality issues.
  • Establish a Data Governance Framework: Even for a single migration, defining clear ownership, data standards, and validation processes will significantly reduce risks and improve data quality post-migration.
  • Consider a Phased Migration Strategy: For complex migrations, a "big bang" approach carries higher risk. A phased or iterative migration (e.g., by module, by data type, or starting with a pilot) allows for lessons learned and minimizes disruption.
  • Allocate Dedicated Resources: Data migration is a project in itself. Ensure dedicated resources with the necessary expertise are available throughout the planning, execution, and validation phases.
  • Emphasize Data Quality Early: Proactive identification and remediation of data quality issues in the source system before migration will save significant time and effort downstream.
  • Tooling Evaluation: Begin researching and evaluating suitable data migration tools (ETL, specialized migration platforms) that align with the complexity, volume, and technical stack of the migration. This could significantly impact timelines and resource needs.

5. Next Steps

To move from this initial analysis to a concrete data migration plan, the following actions are recommended:

  1. Kick-off Meeting & Stakeholder Identification (within 1 week):
      ◦ Convene a formal project kick-off meeting with key business and technical stakeholders.
      ◦ Identify and document all individuals and teams who will provide input, review, and approve various aspects of the migration.
      ◦ Define communication channels and cadence.
  2. Detailed Data Discovery Workshops (within 2-3 weeks):
      ◦ Schedule dedicated sessions with source system owners/DBAs to extract schemas, data dictionaries, and sample data.
      ◦ Conduct similar sessions with target system architects/developers to understand the desired data model and integration points.
      ◦ Begin documenting known data quality issues from source systems.
  3. Requirements Gathering for Field Mapping & Transformation (within 3-4 weeks):
      ◦ Collaborate with business users and subject matter experts to define precise field mappings and all necessary data transformation rules.
      ◦ Document business rules that must be enforced during migration.
  4. Initial Risk Assessment & Mitigation Strategy (within 4 weeks):
      ◦ Perform a preliminary risk assessment, identifying potential technical, operational, and business risks.
      ◦ Begin outlining high-level mitigation strategies for critical risks.
  5. Tooling & Infrastructure Assessment (within 4-5 weeks):
      ◦ Evaluate potential data migration tools and platforms based on the gathered requirements.
      ◦ Assess existing infrastructure capabilities and identify any necessary upgrades or new provisions.

This structured approach will ensure a thorough understanding of the migration landscape, leading to a robust and successful data migration strategy.


As part of the "Data Migration Planner" workflow, this deliverable outlines the comprehensive plan for your data migration, including detailed code structures for field mapping, transformation rules, validation scripts, and a structured rollback procedure, along with timeline estimates. This output is designed to be production-ready, well-commented, and directly actionable.


Data Migration Plan: Test Run for Customer Data Migration

This document provides a detailed plan and code structures for migrating customer data from a Legacy CRM system to a New ERP system. It covers essential components required for a robust and successful data migration.

1. Data Migration Overview

Project Name: Legacy CRM to New ERP Customer Data Migration

Objective: Migrate all active customer records, including contact details and status, from the Legacy CRM system to the New ERP system.

```python
import re
import logging
from datetime import datetime

import pandas as pd

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s - %(levelname)s - %(message)s')

# Dictionary of transformation functions.
# Each key corresponds to a 'transformation_rule_id' in FIELD_MAPPINGS.
# Each value is a function that takes the source value and returns the
# transformed value. Functions handle None/NaN inputs gracefully.
TRANSFORMATION_RULES = {
    'to_string_prefix_C': lambda val: f"C_{int(val)}" if pd.notna(val) else None,
    'capitalize_string': lambda val: str(val).strip().capitalize() if pd.notna(val) else None,
    'format_date_YYYY_MM_DD': lambda val: (
        pd.to_datetime(val).strftime('%Y-%m-%d')
        if pd.notna(val) and pd.notna(pd.to_datetime(val, errors='coerce'))
        else None
    ),
    'validate_email_format': lambda val: (
        str(val).lower() if pd.notna(val) and re.match(r"[^@]+@[^@]+\.[^@]+", str(val)) else None
    ),
    'to_boolean': lambda val: bool(val) if pd.notna(val) else False,
    'round_to_two_decimals': lambda val: round(float(val), 2) if pd.notna(val) else None,
    'map_status_id_to_code': lambda val: {
        1: 'ACTIVE', 2: 'INACTIVE', 3: 'PENDING', 4: 'ARCHIVED'
    }.get(int(val), 'UNKNOWN') if pd.notna(val) else 'UNKNOWN',
    # Add more transformation rules as needed.
    # Note: multi-argument rules such as 'concatenate_name' cannot be invoked
    # through apply_transformation(), which passes a single value; apply them
    # directly where both source fields are available.
    'concatenate_name': lambda first, last: (
        f"{str(first).strip()} {str(last).strip()}"
        if pd.notna(first) and pd.notna(last) else None
    ),
    # Approximate age: year difference only, ignoring month and day.
    'calculate_age': lambda dob: (datetime.now().year - pd.to_datetime(dob).year) if pd.notna(dob) else None,
    'to_uppercase': lambda val: str(val).upper() if pd.notna(val) else None,
}


def apply_transformation(rule_id: str, value):
    """
    Applies a specified transformation rule to a given value.

    Args:
        rule_id (str): The identifier for the transformation rule.
        value: The data value to transform.

    Returns:
        The transformed value, the original value if the rule_id is not found,
        or None if the transformation fails.
    """
    if rule_id in TRANSFORMATION_RULES:
        try:
            return TRANSFORMATION_RULES[rule_id](value)
        except Exception as e:
            logging.warning(f"Error applying transformation rule '{rule_id}' "
                            f"to value {value!r}: {e}")
            return None
    logging.warning(f"Unknown transformation rule '{rule_id}'; value returned unchanged.")
    return value
```


Comprehensive Data Migration Plan: Test Run Output


Project Title: PantheraHive Data Migration Initiative - Test Run

Prepared For: Valued Customer

Date: October 26, 2023

Version: 1.0 (Test Run)


Unlocking Seamless Transitions: Your Data Migration Blueprint

At PantheraHive, we understand that successful data migration is the backbone of any significant system transition. It's not just about moving data; it's about preserving integrity, ensuring continuity, and setting the stage for future growth. This document presents a comprehensive, detailed blueprint for a data migration, demonstrating our meticulous approach to planning, execution, and validation.

This "test run" output showcases the depth and precision you can expect from a full-scale PantheraHive data migration plan. It covers all critical aspects, from intricate field mapping and transformation rules to robust validation and contingency planning, ensuring a smooth and secure transition of your valuable data assets.


1. Introduction & Project Overview

This document outlines a high-level data migration plan, serving as a comprehensive "test run" demonstration. The objective of this plan is to illustrate the structured methodology PantheraHive employs for critical data migration projects. While specific source and target systems are not defined in this test run, the principles, templates, and considerations presented are universally applicable and adaptable to any real-world scenario.

Key Goals of a Typical Data Migration:

  • Data Integrity: Ensure 100% data accuracy and consistency post-migration.
  • Minimal Downtime: Execute the migration with the least possible disruption to business operations.
  • Data Security: Maintain strict security protocols throughout the migration process.
  • Auditability: Provide clear documentation and logs for all migration activities.
  • Scalability: Design a process that can handle varying data volumes and complexities.

2. Key Migration Phases

Our data migration strategy is typically broken down into distinct, manageable phases to ensure thoroughness and control:

  1. Planning & Analysis: Define scope, identify data sources/targets, analyze data quality, and create the migration strategy.
  2. Design & Development: Develop field mappings, transformation rules, validation scripts, and build migration tools/scripts.
  3. Testing & Refinement: Execute mock migrations, validate data, perform performance testing, and refine processes.
  4. Execution: Perform the actual data migration, closely monitoring progress and addressing issues.
  5. Post-Migration & Go-Live: Final data validation, system cutover, user acceptance testing, and post-migration support.

3. Detailed Migration Components

3.1. Field Mapping Document

The Field Mapping Document is the cornerstone of any data migration. It meticulously defines how each field from the source system corresponds to a field in the target system, including data types, constraints, and specific mapping logic.

Example: Sample Field Mapping (CRM Migration Scenario)

| Source System: Old CRM | Target System: New CRM | Mapping Rule/Notes | Data Type (Source) | Data Type (Target) | Mandatory (Target) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| Legacy_CustomerID | CustomerID | Direct Map | INT | UUID | YES |
| First_Name | FirstName | Direct Map | VARCHAR(50) | VARCHAR(100) | YES |
| Last_Name | LastName | Direct Map | VARCHAR(50) | VARCHAR(100) | YES |
| Email_Address | Email | Direct Map | VARCHAR(100) | VARCHAR(255) | YES |
| Contact_Phone | PhoneNumber | Direct Map, Format | VARCHAR(20) | VARCHAR(20) | NO |
| Address_Line1 | StreetAddress1 | Direct Map | VARCHAR(100) | VARCHAR(255) | YES |
| Address_Line2 | StreetAddress2 | Direct Map | VARCHAR(100) | VARCHAR(255) | NO |
| City | City | Direct Map | VARCHAR(50) | VARCHAR(100) | YES |
| State_Code | State | Lookup Table | CHAR(2) | VARCHAR(50) | YES |
| Zip_Code | PostalCode | Direct Map | VARCHAR(10) | VARCHAR(10) | YES |
| Account_Status | CustomerStatus | Transform (Lookup) | VARCHAR(10) | VARCHAR(20) | YES |
| Creation_Date | CreatedAt | Direct Map | DATETIME | DATETIME_UTC | YES |
| Last_Update | UpdatedAt | Direct Map | DATETIME | DATETIME_UTC | YES |
| Sales_Region_ID | Region | Lookup Table | INT | VARCHAR(50) | NO |
| Notes_Text | CustomerNotes | Direct Map | TEXT | TEXT | NO |

Key Considerations for Field Mapping:

  • Data Type Compatibility: Ensure source data types are compatible with target data types, or define explicit conversion rules.
  • Mandatory Fields: Identify and address how to handle missing data for mandatory target fields.
  • Default Values: Specify default values for target fields that have no direct source equivalent.
  • Primary/Foreign Keys: Map and ensure referential integrity for all related tables.
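The `State_Code` row in the table above calls for a lookup table. A minimal sketch of such a translation map (the codes and names below are illustrative assumptions, not a complete table):

```python
# Illustrative lookup for translating the 2-char State_Code source field
# into the full State name expected by the target system.
STATE_LOOKUP = {
    "CA": "California",
    "NY": "New York",
    "TX": "Texas",
}

def map_state(code):
    """Translate a source state code to the target state name.
    Unknown codes map to 'Unknown'; missing input stays None."""
    if code is None:
        return None
    return STATE_LOOKUP.get(code.strip().upper(), "Unknown")
```

Keeping lookups in data (a dict, CSV, or reference table) rather than code makes them reviewable by business stakeholders and easy to extend.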

3.2. Data Transformation Rules

Transformation rules define how data is modified, cleansed, or enriched during the migration process to meet the target system's requirements or business logic.

Example: Sample Data Transformation Rules (Building on CRM Scenario)

| Rule ID | Source Field(s) | Target Field | Transformation Logic
