Data Migration Planner

This document is the output of the "Data Migration Planner" step. It provides the code and configuration structures needed to plan a robust data migration, covering field mapping, transformation rules, validation scripts, rollback procedures, and timeline estimation, with actionable Python code examples and explanations.


Data Migration Planner: Code Generation & Configuration

This deliverable provides the essential code structures and configuration templates required to plan and execute a complete data migration. The aim is to establish a clear, maintainable, and verifiable framework for migrating data from source to target systems, ensuring data integrity and minimizing risks.


1. Introduction and Overview

A successful data migration hinges on meticulous planning and robust execution. This document provides the foundational code and structural definitions for the key migration phases: field mapping, transformation rules, validation scripts, rollback procedures, and timeline estimation.

The provided examples leverage Python, a versatile language for data engineering, along with common data structures like dictionaries and lists, and conceptual frameworks for scalability.


2. Data Migration Configuration Framework

To manage the complexity of data migration, a centralized configuration approach is recommended. This allows for defining all migration parameters in a structured, human-readable format, such as YAML or JSON, which can then be parsed and executed by migration scripts.

2.1 Conceptual Configuration Structure (YAML Example)

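As a sketch only (all system names, entities, fields, and rule names below are illustrative placeholders, not a prescribed schema), such a configuration file might look like:

```yaml
# Illustrative migration configuration (placeholder names throughout)
migration:
  name: crm_to_unified_platform
  source:
    type: oracle
    connection: ${SRC_DB_URL}      # resolved from the environment, not stored in the file
  target:
    type: salesforce
    connection: ${TGT_API_URL}
  entities:
    - name: customers
      source_table: SRC_CUSTOMERS
      target_object: BP_BUSINESS_PARTNER
      field_mappings:
        - {source: CUST_ID, target: BP_ID, transform: prefix_bp}
        - {source: STATUS_CODE, target: BP_STATUS, transform: status_map}
  validation:
    reconcile_counts: true
    checksum_fields: [BP_ID]
  rollback:
    strategy: snapshot_restore
```

Keeping connection strings as environment-variable references keeps credentials out of version control while the rest of the migration plan stays reviewable as plain text.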
2.2 Python Code to Load Configuration

A loader for this configuration appears in the Python listing near the end of this document; it reads the YAML file and returns the parsed settings as a dictionary.


Data Migration Architecture Plan: Initial Draft

Project: Data Migration Planner

Workflow Step: 1 of 3 - Plan Architecture

Date: October 26, 2023

Prepared For: [Customer Name/Organization]


1. Introduction & Objectives

This document outlines the initial architectural plan for the upcoming data migration. The primary objective is to establish a robust, secure, and efficient framework for transferring data from identified source systems to the designated target environment. This foundational plan will guide subsequent detailed design, development, and execution phases, ensuring data integrity, minimal downtime, and adherence to business requirements.

Key Objectives for this Migration:

  • Data Consolidation: Consolidate data from [List specific source systems, e.g., Legacy CRM, ERP Module A] into [Target System, e.g., New Unified Platform].
  • Data Quality Improvement: Cleanse, standardize, and enrich data during migration to improve overall data quality in the target system.
  • System Modernization: Facilitate the deprecation of legacy systems and adoption of modern, scalable solutions.
  • Business Continuity: Ensure minimal disruption to ongoing business operations during the migration process.
  • Compliance & Security: Maintain data security and compliance with relevant regulations (e.g., GDPR, HIPAA) throughout the migration lifecycle.

2. Scope Definition

Source Systems:

  • [Source System 1 Name] (e.g., Oracle EBS 12.1.3 - Financials Module)
      * Key Data Domains: [e.g., General Ledger, Accounts Payable, Accounts Receivable]
      * Estimated Data Volume: [e.g., 5TB, 10 million records]
  • [Source System 2 Name] (e.g., Salesforce Classic - Sales Cloud)
      * Key Data Domains: [e.g., Accounts, Contacts, Opportunities, Leads]
      * Estimated Data Volume: [e.g., 2TB, 5 million records]
  • [Source System N Name] (e.g., Legacy Custom Database - SQL Server 2008)
      * Key Data Domains: [e.g., Customer Master, Product Catalog]
      * Estimated Data Volume: [e.g., 1TB, 2 million records]

Target System:

  • [Target System Name] (e.g., Salesforce Lightning - Sales & Service Cloud)
      * Key Modules/Objects: [e.g., Accounts, Contacts, Opportunities, Cases, Products, Pricebooks]
      * Expected Data Growth: [e.g., 10-15% annually]

In-Scope Data Types:

  • Structured Relational Data (e.g., tables, views)
  • Unstructured Data (e.g., attachments, documents - if applicable)
  • Master Data (e.g., Customers, Products, Vendors)
  • Transactional Data (e.g., Orders, Invoices, Service Requests)
  • Historical Data (e.g., archived transactions up to [Date])

Out-of-Scope Data/Systems:

  • [Specify any data or systems explicitly excluded, e.g., Archived data prior to 5 years ago, specific legacy reports, HR system data]

3. High-Level Migration Strategy & Approach

The proposed migration will employ a Phased Migration Strategy to minimize risk and allow for iterative testing and validation.

  • Phase 1: Foundation & Master Data: Migrate core master data (e.g., Accounts, Contacts, Products) that establish the foundational entities in the target system.
  • Phase 2: Core Transactional Data: Migrate primary transactional data linked to the master data (e.g., Opportunities, Orders, Cases).
  • Phase 3: Historical & Ancillary Data: Migrate remaining historical data, attachments, and less critical transactional records.
  • Phase 4: Cutover & Go-Live: Execute final delta migration, system switchover, and post-go-live support.

Key Considerations:

  • Downtime Strategy: Aim for minimal downtime, potentially utilizing a "big bang" approach for the final cutover of specific data sets, preceded by incremental data synchronization. Specific windows will be defined.
  • Data Volume Handling: Leverage parallel processing and optimized bulk loading techniques.
  • Data Integrity: Implement checksums, record counts, and reconciliation reports at each stage.
  • Security: All data in transit and at rest during the migration process will be encrypted and handled in compliance with [relevant security policies/standards].

4. Architectural Components & Flow

The migration architecture will follow an Extract, Transform, Load (ETL) pattern, potentially leveraging a staging area for complex transformations and data quality checks.

4.1. Extraction Layer

  • Purpose: Securely retrieve raw data from source systems.
  • Methodology:
      * Database Sources (Oracle EBS, SQL Server): Direct database connections (JDBC/ODBC) with read-only accounts. Utilize SQL queries, views, or stored procedures for efficient data extraction. Incremental extraction using change data capture (CDC) or timestamp-based filtering for subsequent phases.
      * API Sources (Salesforce Classic): Leverage the Salesforce Bulk API for high-volume data extraction, ensuring efficient handling of large datasets and respecting API limits.
  • Technology Considerations:
      * ETL Tool: [e.g., Talend, Informatica PowerCenter, Azure Data Factory, AWS Glue, Custom Python Scripts]
      * Connectivity: Secure VPN/Direct Connect for on-premise sources; OAuth/API keys for cloud sources.
      * Batching: Configure extraction jobs to run in batches to manage system load.
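The timestamp-based incremental extraction mentioned above can be sketched in Python; here an in-memory SQLite database stands in for the real source system, and all table and column names are illustrative:

```python
import sqlite3

def extract_incremental(conn, last_watermark):
    """Pull only rows modified since the previous run (timestamp-based filtering).

    Table and column names are illustrative placeholders.
    """
    cur = conn.execute(
        "SELECT cust_id, cust_name, updated_at FROM src_customers "
        "WHERE updated_at > ? ORDER BY updated_at",
        (last_watermark,),
    )
    rows = cur.fetchall()
    # Persist the highest timestamp seen so the next run starts after it
    new_watermark = rows[-1][2] if rows else last_watermark
    return rows, new_watermark

# Demo: an in-memory SQLite database standing in for the source system
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src_customers (cust_id INTEGER, cust_name TEXT, updated_at TEXT)")
conn.executemany(
    "INSERT INTO src_customers VALUES (?, ?, ?)",
    [(1, "Acme", "2023-10-01"), (2, "Globex", "2023-10-20"), (3, "Initech", "2023-10-25")],
)
rows, new_wm = extract_incremental(conn, "2023-10-15")  # only rows after the watermark
```

The watermark returned by each run would be persisted (e.g., in a control table) so each subsequent batch resumes exactly where the last one stopped.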

4.2. Staging Layer (Optional but Recommended for Complex Migrations)

  • Purpose: Provide an intermediate, temporary storage area for extracted data before transformation. This allows for:
      * Decoupling extraction from transformation.
      * Performing initial data profiling and cleansing.
      * Creating a recoverable snapshot of source data.
  • Technology Considerations:
      * Database: High-performance relational database (e.g., PostgreSQL, SQL Server, Snowflake, S3 with Athena) in a secure, isolated environment.
      * Storage: Scalable object storage (e.g., AWS S3, Azure Blob Storage) for large files and unstructured data.

4.3. Transformation Layer

  • Purpose: Cleanse, standardize, enrich, aggregate, and map data to the target system's schema and business rules.
  • Key Activities:
      * Data Cleansing: Remove duplicates, correct inconsistencies, handle missing values.
      * Data Standardization: Apply consistent formats (e.g., date formats, address formats).
      * Data Enrichment: Augment data with external sources if required (e.g., geocoding).
      * Data Mapping: Translate source fields to target fields based on defined mapping rules.
      * Data Aggregation/Derivation: Calculate new fields or aggregate data as per target system requirements.
      * Data Validation: Implement rules to identify invalid data before loading.
  • Technology Considerations:
      * ETL Tool: The chosen ETL tool will be central to this layer, leveraging its built-in transformation capabilities.
      * Custom Logic: Python/Java scripts within the ETL framework for highly complex, bespoke transformations.
      * Data Quality Tools: Integration with data quality platforms (e.g., Informatica Data Quality, Talend Data Quality) if advanced profiling and mastering are required.
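The pre-load validation activity above can be sketched as a small rule function; the target field names and the specific rules are illustrative assumptions, not a fixed schema:

```python
REQUIRED_FIELDS = ("BP_ID", "BP_NAME")     # illustrative target fields
VALID_STATUSES = {"Active", "Inactive"}    # illustrative target vocabulary

def validate_record(rec):
    """Return a list of rule violations for one transformed record (empty = valid)."""
    errors = []
    # Required-field rule: catch missing or empty values before loading
    for field in REQUIRED_FIELDS:
        if not rec.get(field):
            errors.append(f"{field} is required")
    # Domain rule: only values the target system accepts may pass
    if rec.get("BP_STATUS") not in VALID_STATUSES:
        errors.append(f"unexpected status: {rec.get('BP_STATUS')}")
    return errors

good = {"BP_ID": "BP-1", "BP_NAME": "Acme", "BP_STATUS": "Active"}
bad = {"BP_ID": "", "BP_NAME": "Globex", "BP_STATUS": "Archived"}
```

Returning a list of violations, rather than raising on the first failure, lets the pipeline report every problem with a record in a single pass.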

4.4. Loading Layer

  • Purpose: Ingest transformed and validated data into the target system.
  • Methodology:
      * API-Based (Salesforce Lightning): Utilize Salesforce Data Loader CLI or Salesforce Bulk API 2.0 for efficient, high-volume data uploads, ensuring adherence to target system API limits and best practices.
      * Database Targets: Direct database inserts/updates using bulk loading utilities or prepared statements for performance.
  • Error Handling: Implement robust error logging and retry mechanisms for failed records.
  • Technology Considerations:
      * ETL Tool: The chosen ETL tool's loading connectors.
      * Target System Utilities: Native bulk loading tools provided by the target system vendor.
      * Concurrency Control: Manage parallel loading processes to optimize throughput without overloading the target system.
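The batching-and-retry behavior described above can be sketched as follows; `load_batch` is a hypothetical stand-in for a real bulk-load client (e.g., an API wrapper), not a library call:

```python
import time

def load_in_batches(records, load_batch, batch_size=200, max_retries=3):
    """Load records in fixed-size batches, retrying transient failures.

    `load_batch` is a stand-in for a real bulk-load client.
    Returns (loaded_count, failed_batches); failed batches are kept for review.
    """
    loaded, failed = 0, []
    for i in range(0, len(records), batch_size):
        batch = records[i:i + batch_size]
        for attempt in range(1, max_retries + 1):
            try:
                load_batch(batch)
                loaded += len(batch)
                break
            except Exception:
                if attempt == max_retries:
                    failed.append(batch)  # quarantine the batch for manual review
                else:
                    time.sleep(0)  # placeholder for exponential backoff
    return loaded, failed

# Demo: a stand-in loader that fails once with a transient error, then succeeds
_calls = {"n": 0}
def _flaky_loader(batch):
    _calls["n"] += 1
    if _calls["n"] == 1:
        raise RuntimeError("transient failure")

loaded, failed = load_in_batches(list(range(5)), _flaky_loader, batch_size=2)
```

In production the backoff placeholder would become a real exponential delay, and failed batches would flow into the error-quarantine mechanism described in section 4.6.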

4.5. Data Validation & Reconciliation

  • Purpose: Ensure data integrity and accuracy throughout the migration process.
  • Key Activities:
      * Pre-Migration Profiling: Understand source data characteristics and quality.
      * Post-Extraction Validation: Verify extracted data against source counts and basic integrity rules.
      * Post-Transformation Validation: Validate transformed data against target schema and business rules.
      * Post-Load Reconciliation: Compare record counts, key field values, and checksums between source, staging (if used), and target systems.
      * Business User Validation: Enable business users to review sample data in the target system.
  • Technology Considerations:
      * ETL Tool: Reporting and logging features.
      * Custom Scripts: SQL queries, Python scripts for comparison and reporting.
      * Reporting Tools: BI tools (e.g., Power BI, Tableau) for dashboarding reconciliation results.
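The count-and-checksum reconciliation described above can be sketched as a pair of helper functions; row contents and key fields are illustrative:

```python
import hashlib

def checksum(rows, key_fields):
    """Order-independent checksum over selected fields of a row set."""
    # Hash each row's key fields, then hash the sorted digests so load order is irrelevant
    digests = sorted(
        hashlib.sha256("|".join(str(r[f]) for f in key_fields).encode()).hexdigest()
        for r in rows
    )
    return hashlib.sha256("".join(digests).encode()).hexdigest()

def reconcile(source_rows, target_rows, key_fields):
    """Compare record counts and checksums between source and target row sets."""
    return {
        "count_match": len(source_rows) == len(target_rows),
        "checksum_match": checksum(source_rows, key_fields)
        == checksum(target_rows, key_fields),
    }

# Illustrative rows: same content, different load order
src_rows = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]
tgt_rows = [{"id": 2, "name": "Globex"}, {"id": 1, "name": "Acme"}]
result = reconcile(src_rows, tgt_rows, ["id", "name"])
```

Sorting the per-row digests before the final hash makes the comparison insensitive to load order, which matters when parallel loading changes row sequence in the target.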

4.6. Error Handling, Logging, and Monitoring

  • Purpose: Provide visibility into the migration process, identify and track issues, and enable quick resolution.
  • Components:
      * Centralized Logging: Aggregate logs from all migration components (extraction, transformation, loading) into a central repository.
      * Alerting: Configure alerts for critical failures, data integrity issues, or performance bottlenecks.
      * Dashboards: Real-time dashboards to monitor migration progress, error rates, and data volumes.
      * Error Quarantine: Mechanism to isolate and manage erroneous records for manual review and reprocessing.
  • Technology Considerations:
      * Logging Framework: [e.g., ELK Stack, Splunk, CloudWatch Logs, Azure Monitor]
      * Monitoring Tools: [e.g., Grafana, Prometheus, native cloud monitoring services]
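The error-quarantine component can be sketched as follows; the in-memory list and the rejection rule are illustrative, and a real run would persist quarantined records to a durable store:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("migration")

quarantine = []  # in a real run, a durable store (error table or file)

def process_with_quarantine(records, handler):
    """Apply `handler` to each record; isolate failures instead of aborting the run."""
    ok = 0
    for rec in records:
        try:
            handler(rec)
            ok += 1
        except Exception as exc:
            # Log the failure and park the record for manual review / reprocessing
            log.warning("quarantined record %s: %s", rec.get("id"), exc)
            quarantine.append({"record": rec, "error": str(exc)})
    return ok

# Illustrative handler that rejects negative values
def handler(rec):
    if rec["value"] < 0:
        raise ValueError("negative value")

ok_count = process_with_quarantine(
    [{"id": 1, "value": 5}, {"id": 2, "value": -1}, {"id": 3, "value": 7}], handler
)
```

Keeping the original record alongside the error message lets reviewers correct and resubmit quarantined rows without re-running the whole batch.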

4.7. Security & Compliance

  • Purpose: Protect sensitive data throughout the migration lifecycle and ensure regulatory adherence.
  • Key Measures:
      * Data Encryption: Encrypt data at rest (staging, temporary files) and in transit (network communication).
      * Access Control: Implement strict role-based access control (RBAC) for all migration tools, databases, and environments, following the least-privilege principle.
      * Auditing: Log all data access and modification activities related to the migration.
      * Data Masking/Anonymization: Apply masking or anonymization for non-production environments, especially for PII/PHI.
      * Compliance Review: Regular review of migration processes against relevant data privacy regulations (e.g., GDPR, CCPA, HIPAA).
  • Technology Considerations:
      * Network Security: Firewalls, VPCs, private endpoints.
      * Identity & Access Management (IAM): [e.g., AWS IAM, Azure AD, Okta].
      * Encryption Tools: TLS/SSL for transit, KMS/HSM for key management.


5. Rollback Procedures (High-Level)

A comprehensive rollback strategy is critical to mitigate risks associated with potential migration failures.

  • Pre-Migration Snapshots: Take full backups or snapshots of both source and target systems immediately before the final cutover phase.
  • Incremental Backups: Maintain incremental backups of the target system during phased loads.
  • Database Point-in-Time Restore: Utilize database capabilities for point-in-time recovery for the target system.
  • Source System Data Preservation: Ensure source data is not modified or deleted until the migration is fully validated and signed off.
  • Rollback Plan Documentation: Detailed, step-by-step procedures for reverting to a pre-migration state will be developed and tested.
  • Application-Specific Rollback: For SaaS targets like Salesforce, a rollback might involve mass deletion of newly migrated records, which requires careful planning and sequencing.
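For a mass-deletion rollback of the kind noted above, one approach is to track every created target-system ID in a manifest during loading; a minimal sketch, where `delete_fn` is a hypothetical stand-in for the target system's delete API:

```python
def record_manifest(loaded_ids, manifest):
    """Append target-system IDs created by a load step to the rollback manifest."""
    manifest.extend(loaded_ids)

def rollback(manifest, delete_fn):
    """Delete migrated records in reverse load order (children before parents).

    `delete_fn` is a hypothetical stand-in for the target system's delete API.
    """
    deleted = 0
    for rec_id in reversed(manifest):
        delete_fn(rec_id)
        deleted += 1
    manifest.clear()
    return deleted

manifest = []
record_manifest(["BP-1", "BP-2"], manifest)  # e.g., master data load
record_manifest(["SO-9"], manifest)          # e.g., transactional load
deleted_order = []
count = rollback(manifest, deleted_order.append)
```

Deleting in reverse load order respects referential dependencies: transactional records loaded after their master data are removed first.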

6. High-Level Timeline Estimates

This is an initial, high-level estimate. A more detailed project plan with specific milestones will be developed in Step 2.

  • Phase 1: Architecture & Planning (Current Phase)
      * Duration: [e.g., 2-4 weeks]
      * Deliverables: Architecture Plan (this document), Detailed Data Mapping & Transformation Rules (next step), Resource Plan.
  • Phase 2: Environment Setup & Tooling
      * Duration: [e.g., 2-3 weeks]
      * Activities: Provisioning servers, installing ETL tools, configuring connectivity.
  • Phase 3: Development & Unit Testing
      * Duration: [e.g., 8-12 weeks]
      * Activities: Develop extraction scripts, transformation logic, loading jobs, unit testing of individual components.
  • Phase 4: System Integration Testing (SIT) & User Acceptance Testing (UAT)
      * Duration: [e.g., 6-8 weeks]
      * Activities: End-to-end testing, data validation, performance testing, business user review.
  • Phase 5: Dress Rehearsals & Go-Live
      * Duration: [e.g., 2-4 weeks]
      * Activities: Multiple dry runs, final data loads, cutover, post-go-live support.

Estimated Total Project Duration: [e.g., 20-30 weeks]


7. Initial Resource Requirements

  • Project Management: 1x Project Manager
  • Data Architects: 1x Senior Data Architect (for overall design & strategy)
  • ETL Developers: 2-3x ETL Developers (for script/job development)
  • Data Quality Analysts: 1x Data Quality Analyst (for profiling, cleansing, validation)
  • Source System SMEs: As needed from respective business/IT teams.
  • Target System SMEs: As needed from target system administration/development teams.
  • Testing & QA: 1x QA Engineer
  • Security & Compliance: Consultation as needed.
  • Infrastructure: Cloud resources (VMs, databases, storage) or on-premise hardware.

8. Initial Risk Identification

  • Data Quality Issues: Unforeseen complexities in source data leading to extensive cleansing efforts.
      * Mitigation: Thorough data profiling and early involvement of business SMEs.
  • Performance Bottlenecks: Slow extraction/transformation/loading due to large data volumes or system limitations.
      * Mitigation: Performance testing, optimized queries, bulk loading, scalable infrastructure.
  • Scope Creep: Changes in requirements or additional data sources identified late in the project.
      * Mitigation: Strict change control process, clear scope definition.
  • System Downtime: Extended downtime during cutover impacting business operations.
      * Mitigation: Phased approach, delta loads, detailed cutover plan with contingency.
  • Security & Compliance Breaches: Unauthorized access or data leakage.
      * Mitigation: Robust security controls, encryption, regular audits, compliance reviews.
  • Resource Availability: Key SME or technical resource unavailability.
      * Mitigation: Early resource planning, cross-training, clear communication.


9. Next Steps

The next phase of the project (Workflow Step 2 of 3) will translate this architecture into detailed field-level data mappings and transformation rules, followed by development of the migration scripts and validation procedures outlined above.

import os

import yaml

def load_migration_config(config_path: str) -> dict:
    """
    Loads the migration configuration from a YAML file.

    Args:
        config_path (str): The path to the YAML configuration file.

    Returns:
        dict: A dictionary containing the migration configuration.

    Raises:
        FileNotFoundError: If the configuration file does not exist.
        yaml.YAMLError: If there is an error parsing the YAML file.
    """
    if not os.path.exists(config_path):
        raise FileNotFoundError(f"Configuration file not found at: {config_path}")
    with open(config_path, 'r') as f:
        config = yaml.safe_load(f)
    return config

# Example usage:
config_file = "migration_config.yaml"
try:
    migration_settings = load_migration_config(config_file)
    print("Migration configuration loaded successfully.")
    # print(json.dumps(migration_settings, indent=2))  # Uncomment to see full config
except (FileNotFoundError, yaml.YAMLError) as e:
    print(f"Error loading migration configuration: {e}")

Data Migration Planner: Comprehensive Strategy Document

Document Version: 1.0

Date: October 26, 2023

Prepared For: [Customer Name]

Prepared By: PantheraHive Solutions Team


1. Executive Summary

This document outlines the comprehensive plan for the data migration initiative, detailing the strategy, methodology, and key procedures required to successfully transition data from the [Source System Name] to the [Target System Name]. It covers critical aspects including data field mapping, transformation rules, validation scripts, rollback procedures, and a projected timeline. The goal is to ensure a secure, accurate, and efficient migration with minimal disruption to business operations.

2. Migration Scope and Objectives

Source System: [e.g., Legacy CRM System, On-premise ERP Database, existing flat files]

Target System: [e.g., Salesforce CRM, SAP S/4HANA, Azure SQL Database, new custom application]

Data Entities to be Migrated: [e.g., Customers, Products, Orders, Invoices, Employees, Historical Transactions]

Key Objectives:

  • Accuracy: Ensure 100% data integrity and consistency post-migration.
  • Completeness: Migrate all in-scope data records without loss.
  • Timeliness: Execute the migration within the agreed-upon timeframe and minimize downtime.
  • Validation: Implement robust validation to confirm successful data transfer and transformation.
  • Reversibility: Establish clear rollback procedures to mitigate risks.
  • Performance: Optimize the migration process for efficiency and speed.

3. Data Migration Strategy

Our strategy employs a phased approach to reduce risk and allow for iterative testing and validation.

  1. Discovery & Planning: Deep dive into source data, target schema, and business requirements. (Completed with this document)
  2. Design & Development: Create mapping specifications, transformation rules, and develop migration scripts/ETL jobs.
  3. Testing & Refinement: Conduct unit testing, integration testing, and user acceptance testing (UAT) with sample data. Refine scripts based on feedback.
  4. Pre-Migration Activities: Data cleansing, source data freeze, pre-migration backups.
  5. Cutover & Execution: Perform the actual data migration in a controlled environment.
  6. Post-Migration Validation & Support: Comprehensive validation, issue resolution, and hypercare support.

4. Source and Target System Details

| Aspect | Source System | Target System |
| :--- | :--- | :--- |
| System Name | [e.g., Oracle EBS 11i] | [e.g., SAP S/4HANA Cloud] |
| Version | [e.g., 11.5.10.2] | [e.g., 2023 Q3 Release] |
| Database Type | [e.g., Oracle Database 12c] | [e.g., SAP HANA Database] |
| Connectivity | [e.g., JDBC, ODBC, REST API] | [e.g., OData API, SAP BAPI, JDBC] |
| Data Volume (Est.) | [e.g., 500 GB, 10 million records across tables] | [e.g., 600 GB expected post-migration] |
| Key Entities | [e.g., Customers, Products, Orders, Accounts] | [e.g., Business Partners, Materials, Sales Orders] |

5. Data Field Mapping

This section provides a detailed field-level mapping between the source and target systems. This mapping serves as the blueprint for all data extraction and loading processes.

Example Mapping Table (for a specific entity, e.g., "Customer"):

| Source Table/Object | Source Field Name | Source Data Type | Source Nullable | Target Table/Object | Target Field Name | Target Data Type | Target Nullable | Transformation Rule ID(s) | Notes/Comments |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| SRC_CUSTOMERS | CUST_ID | NUMBER(10) | NO | BP_BUSINESS_PARTNER | BP_ID | VARCHAR(20) | NO | T101 | Map directly, apply prefix "BP-" (T101) |
| SRC_CUSTOMERS | CUST_NAME | VARCHAR2(100) | NO | BP_BUSINESS_PARTNER | BP_NAME | NVARCHAR(200) | NO | T102 | Concatenate first and last name (T102) |
| SRC_CUSTOMERS | FIRST_NAME | VARCHAR2(50) | YES | N/A | N/A | N/A | N/A | T102 | Used for T102, not directly mapped |
| SRC_CUSTOMERS | LAST_NAME | VARCHAR2(50) | YES | N/A | N/A | N/A | N/A | T102 | Used for T102, not directly mapped |
| SRC_CUSTOMERS | ADDRESS_LINE1 | VARCHAR2(100) | YES | BP_ADDRESS | STREET_NAME | NVARCHAR(100) | YES | T103 | Split for street number (T103) |
| SRC_CUSTOMERS | CITY | VARCHAR2(50) | YES | BP_ADDRESS | CITY_NAME | NVARCHAR(50) | YES | None | Direct map |
| SRC_CUSTOMERS | STATUS_CODE | CHAR(1) | NO | BP_BUSINESS_PARTNER | BP_STATUS | VARCHAR(10) | NO | T104 | Map 'A'->'Active', 'I'->'Inactive' (T104) |
| SRC_ORDERS | ORDER_DATE | DATE | NO | SALES_ORDER | ORDER_CREATED_AT | TIMESTAMP | NO | T201 | Convert to UTC (T201) |

(Note: A full mapping document will be provided as a separate appendix, detailing all in-scope entities and fields.)
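As a sketch only, the rule IDs referenced in the mapping table above (T101, T102, T104) could be implemented as a small registry of transformation functions; the sample record fields are illustrative:

```python
# Illustrative implementations of the rule IDs from the mapping table
RULES = {
    "T101": lambda rec: f"BP-{rec['CUST_ID']}",  # direct map with "BP-" prefix
    "T102": lambda rec: f"{rec['FIRST_NAME']} {rec['LAST_NAME']}".strip(),  # name concat
    "T104": lambda rec: {"A": "Active", "I": "Inactive"}[rec["STATUS_CODE"]],  # code map
}

def apply_rule(rule_id, record):
    """Look up a transformation rule by ID and apply it to one source record."""
    return RULES[rule_id](record)

src = {"CUST_ID": 42, "FIRST_NAME": "Ada", "LAST_NAME": "Lovelace", "STATUS_CODE": "A"}
```

Keying rules by the same IDs used in the mapping document keeps the specification and the code traceable to each other during review and testing.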

6. Data Transformation Rules

This section defines the specific logic applied to source data before it is loaded into the target system. These rules ensure data conforms to the target system's requirements and business logic.

| Rule ID | Transformation Category | Source Field(s) | Target Field | Transformation Logic |
| :--- | :--- | :--- | :--- | :--- |

if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}