
Data Migration Plan: AI Technology Data Consolidation (Test Run)

This document outlines a comprehensive plan for a test-run data migration focused on consolidating AI technology data. The goal is to migrate relevant AI-specific data from a legacy system to a new, modern platform, ensuring data integrity, accuracy, and usability for advanced analytics and AI model development.


1. Project Overview & Scope

Project Title: AI Technology Data Consolidation - Test Run

Description: This test run migration aims to transfer a representative subset of AI-related data (e.g., model metadata, training run logs, dataset references, performance metrics) from an existing legacy database (Source) to a new AI Data Lake / ML Platform (Target). The primary objective is to validate the migration process, field mappings, transformation rules, and rollback procedures before a full-scale migration.

Migration Goal: To establish a robust, repeatable, and validated process for migrating AI technology-related data, ensuring data quality and readiness for the new platform's functionalities.

Scope: Migration of selected tables/entities pertinent to AI models, training runs, and associated datasets. Excludes historical archival data not directly used by the new platform.


2. Data Migration Strategy

For this test run, a Phased Approach is recommended, focusing on a critical subset of data. This allows for iterative testing, validation, and refinement of the migration process with minimal impact.

  • Phase 1 (Pre-Migration): Discovery, planning, source data profiling, mapping, rule definition, environment setup.
  • Phase 2 (Development & Testing): ETL script development, unit testing, integration testing, performance testing on a staging environment.
  • Phase 3 (Test Run Execution): Execute migration for a defined subset of data in a dedicated test environment, followed by comprehensive validation.
  • Phase 4 (Post-Migration Review): Analyze test run results, refine processes, document lessons learned.

3. Source and Target Systems

Source System:

  • Type: Legacy Relational Database (e.g., MySQL, PostgreSQL, SQL Server)
  • Name (Hypothetical): AI_Research_DB_Legacy
  • Key Entities (Examples): models, training_runs, datasets, experiments, metrics

Target System:

  • Type: Modern AI Data Lake / ML Platform (e.g., Databricks, Snowflake, Azure ML, AWS SageMaker Data Catalog)
  • Name (Hypothetical): AI_Platform_DataLake
  • Key Entities (Examples): ml_models, training_jobs, data_sources, experiment_logs, model_performance

4. Field Mapping (Example Subset)

Below is an example mapping for a hypothetical models table from the source to ml_models in the target.

| Source Field (models) | Target Field (ml_models) |
| :---------------------- | :------------------------ |
| model_id (INT, PK) | model_uuid (UUID, PK) |
| model_name (VARCHAR) | model_name (VARCHAR) |
| algo_type (VARCHAR) | algorithm_type (VARCHAR) |
| version (VARCHAR) | model_version (VARCHAR) |
| owner_id (INT) | owner_user_id (UUID) |
| created_ts (DATETIME) | created_at (TIMESTAMP) |
| last_mod_ts (DATETIME) | last_modified_at (TIMESTAMP) |
| status (VARCHAR) | deployment_status (VARCHAR) |
| accuracy_score (FLOAT) | primary_metric_value (FLOAT) |
| dataset_id (INT, FK) | training_dataset_uuid (UUID, FK) |
| model_path (TEXT) | model_artifact_uri (TEXT) |
| notes (TEXT) | description (TEXT) |

Key Considerations:

  • Primary Keys: Often change from auto-increment integers to UUIDs in modern distributed systems.
  • Timestamps: Standardize to UTC and consistent formats (e.g., ISO 8601).
  • Foreign Keys: Ensure referential integrity is maintained or re-established using new target primary keys.
  • Data Types: Align with target system's best practices (e.g., TEXT in source might map to VARCHAR(MAX) or STRING in target).
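Applying the straight column renames from the mapping table can be sketched as a single pass in Python. This is illustrative only; it assumes rows arrive as dicts keyed by source column name, and deliberately leaves keys that need more than a rename (IDs, paths, timestamps) untouched for the transformation rules in section 5:

```python
# Source-to-target column renames taken from the mapping table above.
FIELD_MAP = {
    "algo_type": "algorithm_type",
    "version": "model_version",
    "created_ts": "created_at",
    "last_mod_ts": "last_modified_at",
    "status": "deployment_status",
    "accuracy_score": "primary_metric_value",
    "model_path": "model_artifact_uri",
    "notes": "description",
}

def rename_fields(row):
    """Rename mapped columns; columns not in FIELD_MAP (e.g. keys that
    require UUID generation) pass through unchanged for later rules."""
    return {FIELD_MAP.get(k, k): v for k, v in row.items()}
```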

5. Transformation Rules

Specific rules for data manipulation during the migration process:

  1. UUID Generation for Primary Keys:

* Rule: For models.model_id and datasets.dataset_id, generate a new UUID (model_uuid, training_dataset_uuid) in the target system. Store the original model_id / dataset_id as an original_source_id column for traceability.

* Impact: Ensures uniqueness across potentially merged sources and aligns with modern platform practices.
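A minimal sketch of this rule in Python, assuming rows arrive as dicts keyed by column name. The fixed namespace seed and function name are illustrative assumptions, not part of the plan; a deterministic uuid5 is used so repeated test runs assign the same UUID to the same source row (helpful for idempotent reloads):

```python
import uuid

# Hypothetical fixed namespace: re-running the test migration yields the
# same UUID for the same source row, keeping reloads idempotent.
MIGRATION_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "ai-platform-datalake")

def assign_target_key(row, source_table="models", id_field="model_id"):
    """Return a copy of the row with a deterministic model_uuid, keeping
    the original integer key in original_source_id for traceability."""
    out = dict(row)
    out["model_uuid"] = str(
        uuid.uuid5(MIGRATION_NAMESPACE, f"{source_table}:{row[id_field]}")
    )
    out["original_source_id"] = out.pop(id_field)
    return out
```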

  2. User ID Mapping:

* Rule: Map models.owner_id (INT) to ml_models.owner_user_id (UUID) using a lookup table from a central Identity Management system. If no mapping exists, default to a generic "unassigned" UUID.

* Impact: Standardizes user identification across systems.

  3. Algorithm Type Standardization:

* Rule: Standardize models.algo_type values (e.g., 'RF', 'GBM', 'XGBoost', 'CNN') to a predefined set of canonical names (e.g., 'Random Forest', 'Gradient Boosting Machine', 'XGBoost', 'Convolutional Neural Network'). Handle variations and potential misspellings.

* Impact: Improves data consistency and queryability in the target.
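One way to sketch this normalization in Python. The lookup table covers only the example values named in the rule; the "Unknown" fallback is an assumption, since the plan calls for handling variations but does not say how unmapped values are treated:

```python
# Canonical names from the rule; keys are lower-cased source variants.
CANONICAL_ALGORITHMS = {
    "rf": "Random Forest",
    "random forest": "Random Forest",
    "gbm": "Gradient Boosting Machine",
    "gradient boosting machine": "Gradient Boosting Machine",
    "xgboost": "XGBoost",
    "xgb": "XGBoost",
    "cnn": "Convolutional Neural Network",
}

def standardize_algo_type(raw):
    """Map a source algo_type to its canonical name; unmapped values fall
    back to 'Unknown' (an assumption -- flag these for SME review)."""
    key = (raw or "").strip().lower()
    return CANONICAL_ALGORITHMS.get(key, "Unknown")
```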

  4. Deployment Status Normalization:

* Rule: Map models.status values (e.g., 'D', 'P', 'R', 'A') to meaningful ml_models.deployment_status values (e.g., 'Draft', 'Pending Review', 'Ready for Deployment', 'Active').

* Impact: Provides clear, human-readable status indicators.

  5. Metric Aggregation/Renaming:

* Rule: Rename models.accuracy_score to ml_models.primary_metric_value. If multiple metrics exist in the source, select the primary one for this field, or create additional fields for other metrics.

* Impact: Aligns with target system's metric terminology.

  6. Path Conversion:

* Rule: Convert models.model_path (e.g., /legacy/storage/models/v1/model_A.pkl) to a new ml_models.model_artifact_uri (e.g., s3://ai-artifacts-bucket/models/model_A/v1/model.pkl) based on the new storage architecture. This may involve string manipulation and prefixing.

* Impact: Ensures artifacts are correctly referenced in the new environment.
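The string manipulation can be sketched as below, assuming the legacy layout is exactly /legacy/storage/models/&lt;version&gt;/&lt;name&gt;.pkl and the bucket name from the rule's example; both are assumptions to confirm against the real storage architecture:

```python
# Assumed legacy prefix, taken from the example path in the rule.
LEGACY_PREFIX = "/legacy/storage/models/"

def to_artifact_uri(legacy_path, bucket="ai-artifacts-bucket"):
    """Rewrite /legacy/storage/models/<version>/<name>.pkl to
    s3://<bucket>/models/<name>/<version>/model.pkl, matching the
    example in the rule. Rejects paths outside the expected prefix."""
    if not legacy_path.startswith(LEGACY_PREFIX):
        raise ValueError(f"unexpected legacy path: {legacy_path}")
    version, filename = legacy_path[len(LEGACY_PREFIX):].split("/", 1)
    name = filename.rsplit(".", 1)[0]
    return f"s3://{bucket}/models/{name}/{version}/model.pkl"
```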

  7. Timestamp Conversion:

* Rule: Convert all DATETIME fields (created_ts, last_mod_ts) to UTC TIMESTAMP format (YYYY-MM-DDTHH:MM:SSZ).

* Impact: Ensures time zone consistency and simplifies time-based queries.
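A sketch of the conversion, under the assumption that source DATETIMEs arrive as naive strings with a known, fixed UTC offset (the actual source time zone must be confirmed during profiling):

```python
from datetime import datetime, timedelta, timezone

def to_utc_timestamp(dt_str, utc_offset_hours=0, fmt="%Y-%m-%d %H:%M:%S"):
    """Parse a naive source DATETIME string, attach its assumed fixed
    UTC offset, and emit the target's YYYY-MM-DDTHH:MM:SSZ format."""
    naive = datetime.strptime(dt_str, fmt)
    aware = naive.replace(tzinfo=timezone(timedelta(hours=utc_offset_hours)))
    return aware.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
```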

  8. Description Enrichment:

* Rule: Concatenate models.notes with a standard prefix "Migrated from Legacy AI Research DB: " to form ml_models.description.

* Impact: Provides context for migrated data.


6. Validation Scripts & Procedures

Robust validation is critical to ensure data integrity.

A. Pre-Migration Validation (Source Data Profiling & Quality Checks):

  • Tool: SQL queries, Data Profiling tools (e.g., Great Expectations, custom scripts).
  • Checks:

* Completeness: Identify NULL values in mandatory fields (model_id, model_name).

* Uniqueness: Verify uniqueness of primary keys (model_id).

* Consistency: Check for consistent data formats (e.g., algo_type values).

* Referential Integrity: Verify foreign key relationships (dataset_id exists in datasets table).

* Volume: Record total row counts for each source table.
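The checks above can be sketched as a single profiling pass in Python, assuming rows have been fetched into dicts keyed by column name; field names follow the examples in the list, and every count except row_count should be zero:

```python
def profile_source_models(rows):
    """Pre-migration profiling of the legacy models table: completeness,
    key uniqueness, and volume in one pass over fetched rows."""
    seen_ids = set()
    findings = {
        "row_count": len(rows),        # volume baseline for count checks
        "null_model_id": 0,            # completeness of the primary key
        "null_model_name": 0,          # completeness of a mandatory field
        "duplicate_model_id": 0,       # primary-key uniqueness
    }
    for r in rows:
        mid = r.get("model_id")
        if mid is None:
            findings["null_model_id"] += 1
        elif mid in seen_ids:
            findings["duplicate_model_id"] += 1
        else:
            seen_ids.add(mid)
        if not r.get("model_name"):
            findings["null_model_name"] += 1
    return findings
```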

B. Post-Migration Validation (Target Data Integrity & Accuracy Checks):

  • Tool: SQL queries, Spark/Python scripts, data comparison tools.
  • Checks:

1. Record Count Verification:

* Script: SELECT COUNT(*) FROM source.models; vs. SELECT COUNT(*) FROM target.ml_models;

* Expected: Counts should match for the migrated subset.

2. Key Field Uniqueness:

* Script: SELECT model_uuid, COUNT(*) FROM target.ml_models GROUP BY model_uuid HAVING COUNT(*) > 1;

* Expected: Returns no rows.

3. Data Type Verification:

* Script: Sample data from target and verify data types match schema (e.g., primary_metric_value is FLOAT).

4. Referential Integrity Check:

* Script: Verify training_dataset_uuid in ml_models correctly links to data_sources.dataset_uuid.

* Expected: All foreign keys should resolve.

5. Sample Data Comparison:

* Script: Randomly select 5-10 records from the source, transform them manually (or using a test script), and compare with the corresponding migrated records in the target. Focus on complex transformations.

* Expected: Transformed data in target matches expected output.

6. Business Rule Validation:

* Script: Verify specific business rules (e.g., ml_models.deployment_status must be one of the canonical values; primary_metric_value must be between 0 and 1 for accuracy scores).

* Expected: All migrated data conforms to new business rules.
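These business rules can be sketched as a Python check over migrated rows; the status set comes from the deployment-status rule in section 5, and the [0, 1] range follows the accuracy-score example:

```python
# Canonical statuses from the deployment-status rule in section 5.
CANONICAL_STATUSES = {"Draft", "Pending Review", "Ready for Deployment", "Active"}

def validate_business_rules(rows):
    """Return (model_uuid, reason) pairs for rows violating the rules;
    an empty list means the migrated subset passes."""
    errors = []
    for r in rows:
        if r.get("deployment_status") not in CANONICAL_STATUSES:
            errors.append((r.get("model_uuid"), "invalid deployment_status"))
        metric = r.get("primary_metric_value")
        if metric is not None and not 0.0 <= metric <= 1.0:
            errors.append((r.get("model_uuid"), "primary_metric_value out of [0, 1]"))
    return errors
```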

7. Performance Check: Query target tables to ensure acceptable query performance.


7. Rollback Procedures

A robust rollback plan is essential for mitigating risks during a test run or actual migration.

  1. Pre-Migration Backups:

* Source System: Perform a full logical and physical backup of the AI_Research_DB_Legacy before initiating any migration activities.

* Target System: If the target system is not empty, ensure a snapshot or backup of the AI_Platform_DataLake state immediately prior to the migration attempt.

  2. Transaction Management:

* Strategy: For the test run, perform migrations within a transaction block if the target system supports it, allowing for an immediate ROLLBACK if issues are detected during the migration process itself. If not, use a "truncate and reload" strategy for the test subset.
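A truncate-and-reload inside a single transaction can be sketched with SQLite standing in for the target (the real platform's transaction semantics will differ, and the two-column schema here is a hypothetical subset). If any insert fails, the whole batch, including the delete, rolls back and the table is left exactly as it was:

```python
import sqlite3

def reload_test_subset(conn, rows):
    """Replace the test subset atomically: on any failure the transaction
    rolls back and the target table is left untouched."""
    with conn:  # sqlite3 commits on success, rolls back on exception
        conn.execute("DELETE FROM ml_models")
        conn.executemany(
            "INSERT INTO ml_models (model_uuid, model_name) VALUES (?, ?)",
            rows,
        )
```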

  3. Rollback Steps (if issues detected post-migration):

* Step 1: Halt Migration Process: Immediately stop any ongoing migration jobs.

* Step 2: Isolate Target Data: If the target system was not empty, identify and delete only the data inserted/updated by the failed migration batch using audit logs or the original_source_id field.

* Step 3: Restore Target State: If isolating is too complex or risky for the test run, restore the AI_Platform_DataLake tables/entities relevant to the migration from the pre-migration backup/snapshot.

* Step 4: Verify Rollback: Confirm that the target system has reverted to its pre-migration state.

* Step 5: Incident Communication: Inform all stakeholders about the rollback and the reasons for it.

* Step 6: Post-Mortem & Remediation: Analyze the root cause of the failure, update migration scripts/plan, and re-test.

  4. Contingency Plan:

* Manual Data Entry: In extreme cases where automated rollback fails for a small test subset, be prepared for manual data correction or re-entry for critical records, though this should be avoided.

* Temporary Data Freeze: Implement a temporary freeze on data updates in the source system if issues are critical and require extended investigation.


8. Timeline Estimates (Test Run Focus)

This timeline is a high-level estimate for a test run of a medium complexity migration, assuming dedicated resources.

| Phase | Estimated Duration (Test Run) | Key Activities |
| :-------------------------- | :---------------------------- | :------------------------------------------------------------------------------------------------------------------- |
| 1. Planning & Analysis | 3 Days | Project scope finalization, stakeholder alignment, source data profiling, field mapping, transformation rule definition, rollback strategy. |
| 2. Development | 5 Days | ETL script coding (extraction, transformation, loading), unit testing of individual components. |
| 3. Testing & QA | 4 Days | Integration testing, data validation script development, performance testing on test dataset, bug fixing. |
| 4. Test Run Execution | 1 Day | Execute migration scripts on a representative test subset, real-time monitoring. |
| 5. Post-Migration Review | 2 Days | Comprehensive validation, error analysis, performance review, documentation updates, lessons learned. |
| Total Estimated Time | 15 Working Days | |

Note: This timeline is for the test run only. A full-scale migration would require significantly more time for larger data volumes, complex transformations, and broader stakeholder coordination. Buffer days should be added for unforeseen issues.


9. Key Risks & Mitigation

| Risk | Mitigation Strategy |
| :--------------------------------- | :-------------------------------------------------------------------------------------------------------------- |
| Data Loss/Corruption | Comprehensive backups, transaction management, robust validation scripts, phased migration, small test subsets. |
| Inaccurate/Incomplete Mapping | Thorough data profiling, iterative review of mappings with SMEs, automated schema comparison tools. |
| Performance Bottlenecks | Load testing, optimize ETL scripts, incremental migration strategy, scale target system resources. |
| Downtime Impact | Schedule migration during off-peak hours, communicate downtime clearly, utilize near-zero downtime techniques. |
| Scope Creep | Clearly defined scope document, strict change control process, regular stakeholder communication. |
| Skill Gaps in Team | Provide training, engage experienced consultants, leverage automated migration tools. |
| Rollback Failure | Test rollback procedures in isolation, ensure multiple backup strategies, detailed rollback documentation. |


10. Recommendations & Next Steps

Recommendations:

  1. Start Small: Prioritize a small, non-critical but representative dataset for the initial test run to minimize risk and gather early feedback.
  2. Automate Everything: Automate ETL processes, validation scripts, and even rollback steps where possible to reduce manual error and increase repeatability.
  3. Involve Stakeholders Early: Engage data owners, AI engineers, and business users from the start to validate mappings and transformation rules.
  4. Document Thoroughly: Maintain detailed documentation of source system analysis, field mappings, transformation rules, and all migration scripts.
  5. Monitor Closely: Implement robust monitoring during the migration execution to detect issues in real-time.

Next Steps:

  1. Detailed Source System Analysis: Conduct a deep dive into AI_Research_DB_Legacy schema, data types, constraints, and data quality.
  2. Finalize Field Mapping: Complete the comprehensive field mapping for all in-scope tables, including data types and nullability constraints.
  3. Develop ETL & Validation Scripts: Begin development of the extraction, transformation, loading (ETL) scripts and corresponding validation checks.
  4. Environment Setup: Ensure the dedicated test migration environment (source and target) is provisioned and configured.
  5. Kick-off Meeting: Schedule a formal kick-off meeting with all relevant stakeholders to review and approve this plan.
