Data Migration Planner

This document outlines a structured approach to planning a complete data migration, covering field mapping, transformation rules, validation scripts, rollback procedures, and timeline estimates. It includes clean, well-commented Python code examples designed to serve as a foundation for your migration project.


Data Migration Planner: Detailed Implementation Plan

This deliverable provides the core code and structural definitions required for meticulous data migration planning. It focuses on modular, maintainable, and verifiable components essential for a successful migration.

1. Introduction to the Data Migration Framework

A successful data migration requires a structured approach, breaking the complex process into manageable, testable, and auditable steps. This framework provides a blueprint for defining field mappings, transformation rules, validation checks, rollback procedures, and timeline estimates.

The code examples are written in Python, a language widely used for data engineering and scripting, making them adaptable to various data sources and targets (databases, APIs, files).

2. Core Components and Code Implementation Details

We will structure our migration plan using several Python modules, each responsible for a specific aspect of the migration.

2.1. Configuration Management (migration_config.py)

Centralized configuration is crucial for managing parameters, connection strings, and overall migration settings. This module will store all static and dynamic configurations.

**Explanation:**
*   **Environment Variables:** Uses `os.getenv` for sensitive information (like passwords) and for easy environment-specific configuration.
*   **Database Config:** Standard dictionary format for various database types.
*   **File Paths:** Defines locations for logs and temporary data.
*   **Migration Settings:** Parameters like batch size and retry logic.
*   **Table Order:** Crucial for managing dependencies (e.g., `orders` depends on `users`).
*   **`setup_directories`:** A utility to ensure necessary file paths exist before the migration starts.
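A minimal sketch of what such a `migration_config.py` might contain, based on the explanation above; the connection defaults, directory paths, and table order are illustrative assumptions, not recovered values:

```python
# migration_project/migration_config.py
import os

# Source/target connection settings; passwords come from the environment
# so they never live in version control.
DATABASES = {
    "source": {
        "host": os.getenv("SRC_DB_HOST", "localhost"),
        "port": int(os.getenv("SRC_DB_PORT", "1433")),
        "name": os.getenv("SRC_DB_NAME", "legacy_crm"),
        "user": os.getenv("SRC_DB_USER", "migrator"),
        "password": os.getenv("SRC_DB_PASSWORD", ""),
    },
    "target": {
        "host": os.getenv("TGT_DB_HOST", "localhost"),
        "port": int(os.getenv("TGT_DB_PORT", "5432")),
        "name": os.getenv("TGT_DB_NAME", "new_crm"),
        "user": os.getenv("TGT_DB_USER", "migrator"),
        "password": os.getenv("TGT_DB_PASSWORD", ""),
    },
}

# File locations for logs and intermediate extracts.
LOG_DIR = "logs"
TEMP_DATA_DIR = "tmp/extracts"

# Batch size and retry behaviour for the load step.
MIGRATION_SETTINGS = {
    "batch_size": 1000,
    "max_retries": 3,
    "retry_delay_seconds": 30,
}

# Parents before children so foreign keys resolve on load.
TABLE_MIGRATION_ORDER = ["users", "products", "orders", "order_items"]


def setup_directories() -> None:
    """Create log and temp directories if they do not already exist."""
    for path in (LOG_DIR, TEMP_DATA_DIR):
        os.makedirs(path, exist_ok=True)
```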

2.2. Field Mapping Definition (part of `migration_config.py`)

Field mapping defines the relationship between source and target table columns, including data types and any specific notes for transformation. This will be integrated into our `migration_config.py` for easy access.
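One way the mapping could be represented inside `migration_config.py`; the table names, columns, and rule IDs below are illustrative placeholders:

```python
# migration_project/migration_config.py (continued)
# Each entry maps a source column to a target column, records both data
# types, and names the transformation rule to apply (None = direct copy).
FIELD_MAPPINGS = {
    "users": [
        {"source": "Customer_ID", "source_type": "VARCHAR(50)",
         "target": "external_id", "target_type": "TEXT", "rule": None},
        {"source": "FirstName", "source_type": "VARCHAR(100)",
         "target": "first_name", "target_type": "TEXT", "rule": "trim"},
        {"source": "Date_Created", "source_type": "VARCHAR(10)",
         "target": "created_at", "target_type": "DATE", "rule": "parse_date"},
    ],
}


def mapping_for(table: str) -> list:
    """Return the column mappings for one table (empty list if unmapped)."""
    return FIELD_MAPPINGS.get(table, [])
```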


Data Migration Architecture Plan

This document outlines the architectural plan for the upcoming data migration, detailing the core components, strategies, and procedures required for a successful and robust transition. This plan serves as a foundational blueprint, ensuring clarity, consistency, and a structured approach throughout the migration lifecycle.


1. Data Migration Strategy & High-Level Architecture

Objective: Define the overarching approach and the key architectural components involved in the data migration.

  • Migration Approach:

* Phased Migration: Data will be migrated in logical stages or modules to minimize risk, allow for incremental testing, and reduce the impact on business operations. Specific phases (e.g., foundational data, transactional data, historical archives) will be defined during the detailed planning phase.

* Cutover Strategy: A "Big Bang" cutover is planned for each phase/module, where the old system is taken offline for a defined period while the migrated data is brought online in the new system. Downtime will be meticulously planned and communicated.

  • Source Systems:

* [List specific source systems, e.g., "Legacy CRM (SQL Server 2012)", "ERP System (Oracle 12c)", "Flat Files (CSV, XML)"]

  • Target Systems:

* [List specific target systems, e.g., "New Cloud CRM (Salesforce)", "Modern ERP (SAP S/4HANA)", "Data Lake (Azure Data Lake Storage Gen2)"]

  • Migration Tools & Technologies:

* ETL Tool: [Suggest specific tool, e.g., "Microsoft SQL Server Integration Services (SSIS)", "Informatica PowerCenter", "Talend Open Studio for Data Integration", "Azure Data Factory", "AWS Glue"] for orchestration, transformation, and loading.

* Scripting Languages: Python/PowerShell for custom data manipulation, API interactions, and automation of tasks.

* Database Tools: SQL Developer, SSMS for direct data manipulation, validation, and schema management.

* Version Control: Git for managing all scripts, mapping documents, and configuration files.


2. Detailed Field Mapping Strategy

Objective: Establish a comprehensive and accurate mapping between source and target data fields, including data types, constraints, and relationships.

  • Mapping Methodology:

* Discovery & Analysis: Initial automated schema comparison tools will be used, followed by detailed manual review and business user interviews to identify all relevant fields.

* Iterative Refinement: Mapping documents will be created iteratively, reviewed by data owners, subject matter experts (SMEs), and technical teams, and updated based on feedback.

  • Documentation Format:

* A centralized Data Mapping Document (DMD) will be maintained, typically in an Excel or dedicated data governance tool. Each entry will include:

* Source System, Table, Field Name

* Source Data Type, Length, Nullability

* Target System, Table, Field Name

* Target Data Type, Length, Nullability

* Transformation Rule ID/Description (link to Transformation Rules)

* Default Value (if applicable)

* Comments/Notes (e.g., business context, potential issues)

* Primary Key / Foreign Key Relationship Indicator

  • Handling of Data Types & Constraints:

* Explicit conversion rules will be defined for incompatible data types (e.g., VARCHAR to DATE).

* Target system constraints (e.g., unique keys, foreign keys, check constraints) will be identified and considered during mapping and transformation to prevent data integrity errors.

  • Primary and Foreign Key Management:

* Strategy for handling surrogate keys vs. natural keys.

* If target systems generate new primary keys, a mechanism for mapping old primary keys to new ones will be established for historical reference and relationship integrity.

* Referential integrity will be maintained by migrating parent data before child data.
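The key-management strategy above can be sketched as follows; the record shapes and the `insert_fn` callback are assumptions for illustration, standing in for the actual load step:

```python
# Sketch: capture old->new primary keys as parent rows load, then use
# the map to rewrite foreign keys on child rows before they are loaded.
def load_parents(rows, insert_fn):
    """Insert parent rows; return {old_pk: new_pk} for FK rewriting."""
    key_map = {}
    for row in rows:
        new_pk = insert_fn(row)  # target system assigns the new key
        key_map[row["old_pk"]] = new_pk
    return key_map


def rewrite_children(children, key_map, fk_field):
    """Point each child row's foreign key at the new parent key."""
    for child in children:
        child[fk_field] = key_map[child[fk_field]]
    return children
```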


3. Data Transformation Rules Definition

Objective: Define precise rules for manipulating and enriching source data to meet the target system's requirements and business logic.

  • Transformation Categories:

* Data Cleansing: Standardizing formats (e.g., dates, addresses), removing duplicates, correcting erroneous values.

* Data Standardization: Applying consistent values (e.g., "CA" for California, "M" for Male).

* Data Enrichment: Augmenting source data with additional information (e.g., lookups from reference tables).

* Data Aggregation: Summarizing data (e.g., rolling up monthly sales into quarterly figures).

* Data Splitting/Joining: Breaking single source fields into multiple target fields or combining multiple source fields.

* Format Conversion: Changing data formats (e.g., string to numeric, specific date formats).

* Derivation: Calculating new values based on existing source data (e.g., age from date of birth).
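Two of the categories above (format conversion and derivation) sketched minimally; the DD-MM-YYYY input format is an illustrative assumption:

```python
from datetime import date

def parse_dmy(value: str) -> date:
    """Format conversion: turn a 'DD-MM-YYYY' string into a date."""
    day, month, year = (int(p) for p in value.split("-"))
    return date(year, month, day)


def derive_age(dob: date, today: date) -> int:
    """Derivation: whole years elapsed between dob and today."""
    had_birthday = (today.month, today.day) >= (dob.month, dob.day)
    return today.year - dob.year - (0 if had_birthday else 1)
```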

  • Rule Specification:

* Each transformation rule will be documented in a Transformation Rules Document (TRD), linked from the DMD.

* Rules will be described using clear, unambiguous language, pseudo-code, or SQL examples.

* Example Rule Format:

* Rule ID: TRN-001

* Source Field(s): LegacyCRM.Customer.FirstName, LegacyCRM.Customer.LastName

* Target Field: NewCRM.Contact.FullName

* Transformation Logic: Concatenate FirstName and LastName with a space. Handle nulls by returning only the non-null part if one is null.

* Error Handling: If both are null, FullName will be null.

  • Handling Defaults & Nulls:

* Explicit rules will be defined for fields that allow or disallow nulls in the target system.

* Default values for target fields will be specified where source data is missing or inappropriate.

  • Complex Business Logic:

* Any complex business rules requiring multi-field evaluation or external lookups will be meticulously documented and developed as modular functions within the ETL process.


4. Data Validation Strategy & Scripting

Objective: Ensure the accuracy, completeness, and integrity of data before, during, and after migration.

  • Validation Phases:

* Pre-Migration (Source Data Quality):

* Objective: Identify and flag data quality issues in the source system before migration.

* Scripts: SQL queries to check for nulls in mandatory fields, duplicate records, data type mismatches, referential integrity violations, and out-of-range values.

* Action: Data cleansing efforts will be prioritized based on these findings.

* In-Migration (Transformation Validation):

* Objective: Verify that transformation rules are applied correctly.

* Scripts: Unit tests within the ETL tool, sample data comparisons, and intermediate data checks.

* Action: Error rows will be logged and quarantined for review and reprocessing.

* Post-Migration (Target Data Integrity & Reconciliation):

* Objective: Confirm that all data has been accurately and completely migrated to the target system.

* Scripts:

* Row Count Validation: Compare record counts between source and target tables.

* Checksum/Hash Validation: Compare checksums of critical columns or entire rows for a representative sample.

* Data Sample Validation: Random sampling of records to manually verify field-level accuracy.

* Key Field Validation: Verify uniqueness of primary keys and integrity of foreign keys in the target.

* Business Rule Validation: Execute queries to ensure business rules (e.g., "total orders must equal sum of line items") are met in the target system.

* Financial Reconciliation: For financial data, reconcile totals and balances between source and target.
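The row-count and checksum checks above can be sketched in Python; in practice the row sets would come from source and target queries rather than in-memory lists:

```python
import hashlib

def row_count_matches(source_rows, target_rows) -> bool:
    """Row Count Validation: source and target must agree exactly."""
    return len(source_rows) == len(target_rows)


def row_checksum(row: dict) -> str:
    """Stable hash of a row's critical columns for sample comparison.

    Sorting the keys makes the checksum independent of column order,
    so source and target rows hash identically when values match.
    """
    canonical = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```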

  • Error Logging & Reporting:

* A robust error logging mechanism will be implemented within the ETL process to capture all transformation failures, data quality issues, and validation discrepancies.

* Daily/weekly validation reports will be generated for review by technical and business stakeholders.

  • Data Reconciliation Process:

* A formal reconciliation process will be established, involving business users, to sign off on the migrated data's accuracy and completeness, especially for critical data sets.


5. Rollback Procedures

Objective: Develop clear, tested procedures to revert the migration in case of critical failure or unforeseen issues.

  • Contingency Planning:

* Trigger Conditions: Clearly define criteria that would necessitate a rollback (e.g., major data corruption, critical business process failure, inability to meet recovery time objectives).

* Rollback Team: Identify key personnel responsible for initiating and executing rollback procedures.

  • Rollback Strategies:

* Database Snapshot/Restore (Primary):

* Prior to any major data load, a full backup or snapshot of the target database will be taken.

* In case of critical failure, the target database can be restored to its pre-migration state.

* This is the preferred method for its speed and reliability.

* Reverse ETL (Secondary/Partial):

* For specific data sets or minor issues, a reverse ETL process might be designed to delete or revert specific migrated data. This is more complex and less preferred for full rollbacks.

* Application-Level Rollback:

* If the target application itself has rollback capabilities (e.g., Salesforce data loader undo functionality for recent imports), these will be explored.

  • Rollback Point Definition:

* Each migration phase/module will have clearly defined rollback points, typically immediately before the data load commences.

  • Communication Plan:

* A communication protocol will be established to inform stakeholders immediately upon a rollback decision, providing status updates throughout the process.

  • Testing of Rollback Procedures:

* Rollback procedures will be thoroughly tested in a non-production environment prior to the actual migration. This includes simulating failure scenarios and executing the restore process to validate its effectiveness and timing.


6. Timeline Estimates (High-Level)

Objective: Provide a preliminary estimation of the time required for each major phase of the data migration, recognizing that detailed planning will refine these estimates.

  • Phase 1: Discovery & Planning (Current Phase)

* Duration: [e.g., 4-6 Weeks]

* Activities: Requirements gathering, source/target analysis, high-level architecture design, tool selection, initial risk assessment, detailed project plan.

  • Phase 2: Design & Mapping

* Duration: [e.g., 6-8 Weeks]

* Activities: Detailed field mapping, transformation rule definition, validation script design, rollback procedure design, security considerations.

  • Phase 3: Development & Unit Testing

* Duration: [e.g., 10-14 Weeks]

* Activities: ETL script development, transformation logic implementation, validation script coding, initial data loading tests with small datasets.

  • Phase 4: System Integration Testing (SIT) & User Acceptance Testing (UAT)

* Duration: [e.g., 8-10 Weeks]

* Activities: End-to-end migration cycles with full datasets in a test environment, performance testing, business user validation of migrated data, defect resolution.

  • Phase 5: Dress Rehearsals & Go/No-Go Decision

* Duration: [e.g., 2-4 Weeks]

* Activities: Multiple full migration dry runs, cutover planning refinement, rollback procedure testing, final performance tuning, stakeholder sign-off.

  • Phase 6: Production Cutover & Post-Migration Support

* Duration: [e.g., 1-2 Weeks (Cutover) + 4-6 Weeks (Support)]

* Activities: Execution of the final migration, go-live support, monitoring, issue resolution, data reconciliation sign-off.

  • Contingency Buffer: An additional [e.g., 15-20%] contingency buffer will be added to the overall timeline to account for unforeseen complexities, scope changes, or technical challenges.
  • Resource Allocation: This timeline assumes dedicated resources from technical, business, and project management teams. Specific resource requirements will be detailed in subsequent planning phases.
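Taking the example durations above at their midpoints, the buffered total works out as follows; every figure is a bracketed placeholder, not a commitment:

```python
# Midpoints of the example phase durations, in weeks (placeholders).
phase_weeks = {
    "discovery_planning": 5,      # 4-6
    "design_mapping": 7,          # 6-8
    "development_unit_test": 12,  # 10-14
    "sit_uat": 9,                 # 8-10
    "rehearsals": 3,              # 2-4
    "cutover_support": 6.5,       # 1-2 cutover + 4-6 support
}
base_weeks = sum(phase_weeks.values())  # 42.5
buffered_weeks = base_weeks * 1.15      # with 15% contingency
```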

This architectural plan provides a solid foundation for the data migration project. The next steps will involve diving deeper into each section, producing detailed documentation, and securing stakeholder alignment.

```python
# migration_project/scripts/transformations.py
import logging
import re
import uuid
from datetime import datetime, timezone

# Configure logging for transformations
logger = logging.getLogger(__name__)


class TransformationRules:
    """Reusable field-level transformation rules for the ETL step."""

    @staticmethod
    def full_name(first_name, last_name):
        """Rule TRN-001: concatenate first and last name with a space.

        If one part is null, return the other alone; if both are null,
        return None so the target field stays null.
        """
        parts = [p for p in (first_name, last_name) if p]
        return " ".join(parts) if parts else None
```


Data Migration Planner: Comprehensive Migration Plan

Document Version: 1.0

Date: October 26, 2023

Prepared For: [Customer Name]

Prepared By: PantheraHive Solutions Team


1. Executive Summary

This document outlines a comprehensive plan for the data migration project from [Source System Name, e.g., Legacy CRM] to [Target System Name, e.g., Salesforce Cloud]. It details the strategy, scope, technical execution steps including field mapping, data transformation rules, validation procedures, and robust rollback plans. Furthermore, it provides initial timeline estimates and identifies key risks and mitigation strategies to ensure a smooth and successful transition. The objective is to migrate critical business data accurately and efficiently, minimizing disruption and maximizing data integrity in the new system.

2. Introduction

The successful migration of data is a critical undertaking for the [Customer Name] organization, enabling the transition to a more modern, efficient, and scalable [Target System Name]. This plan serves as a foundational blueprint, detailing the structured approach we will take to ensure all essential data is transferred accurately, securely, and with minimal operational impact. Adherence to this plan will facilitate a seamless transition, empowering users with reliable data in the new environment from day one.

3. Source and Target Systems Overview

  • Source System:

* Name: [e.g., Legacy Microsoft Dynamics CRM 2011, Oracle EBS 11i]

* Database/Technology: [e.g., SQL Server 2008 R2, Oracle 11g]

* Key Modules/Data Areas for Migration: [e.g., Accounts, Contacts, Opportunities, Products, Orders, Historical Sales Data]

* Access Methods: [e.g., ODBC, API, Direct Database Access]

  • Target System:

* Name: [e.g., Salesforce Sales Cloud Enterprise Edition, SAP S/4HANA]

* Database/Technology: [e.g., Salesforce Platform, SAP HANA Database]

* Key Modules/Data Areas for Migration: [e.g., Accounts, Contacts, Leads, Opportunities, Products, Quotes, Orders]

* Access Methods: [e.g., Salesforce API (SOAP/REST), SAP IDoc/BAPI]

4. Data Scope and Volume

  • Data Entities to be Migrated:

* Customer Accounts

* Contact Persons

* Sales Opportunities (Open and Closed for the last 3 years)

* Products and Price Books

* Historical Orders (last 5 years)

* Support Cases (last 2 years)

  • Estimated Data Volume:

* Accounts: ~50,000 records

* Contacts: ~150,000 records

* Opportunities: ~75,000 records

* Products: ~5,000 records

* Orders: ~200,000 records

* Support Cases: ~100,000 records

* Total Data Size: Approximately 20GB (excluding attachments)

  • In-Scope Data: All active and relevant historical data required for ongoing business operations in the target system.
  • Out-of-Scope Data: Archived data older than specified retention periods, temporary records, system logs, and data deemed irrelevant for the target system's functionality.

5. Data Migration Strategy

Our proposed data migration strategy is a Phased Incremental Approach, designed to minimize risk and allow for thorough testing and validation at each stage.

  • Phases:

1. Pilot Migration: Migrate a small subset of critical data to validate the end-to-end process, mappings, and transformations.

2. Staged Migrations: Migrate data entities in logical groups (e.g., master data first, then transactional data). This allows for focused testing.

3. Delta Migrations (if applicable): For longer migration windows, a mechanism to capture and migrate changes made in the source system after initial data extraction.

4. Final Cutover Migration: The ultimate migration of all remaining data, followed by a switch to the target system.

  • Tools & Technologies:

* ETL Tool: [e.g., Talend, Informatica, custom scripts using Python/SQL, Salesforce Data Loader]

* Data Quality Tool: [e.g., Trillium, Ataccama, internal scripts]

* Version Control: Git for all scripts, mappings, and documentation.

  • Data Quality Focus: A strong emphasis will be placed on data cleansing and de-duplication before migration to ensure high data quality in the target system.

6. Detailed Migration Plan

6.1. Field Mapping (Source to Target)

A comprehensive field mapping document will be developed and maintained in a centralized repository (e.g., Confluence, Excel workbook, dedicated mapping tool). Below is an illustrative example of the structure:

| Source System (Legacy CRM) | Source Field Name | Source Data Type | Target System (Salesforce) | Target Field Name | Target Data Type | Transformation Rule ID | Notes/Comments |
| :------------------------- | :---------------- | :--------------- | :------------------------- | :---------------- | :--------------- | :--------------------- | :------------- |
| Account | Customer_ID | VARCHAR(50) | Account | External_ID__c | TEXT(50) | TR-001 | Unique external ID |
| Account | Company_Name | VARCHAR(255) | Account | Name | TEXT(255) | TR-002 | Trim whitespace |
| Account | Address_Line1 | VARCHAR(255) | Account | BillingStreet | TEXT(255) | TR-003 | Concatenate with Line2 |
| Contact | FirstName | VARCHAR(100) | Contact | FirstName | TEXT(100) | TR-004 | Direct Map |
| Contact | Contact_Status | VARCHAR(20) | Contact | Status__c | PICKLIST | TR-005 | Map 'Active'->'Engaged' |
| Opportunity | Opp_Value | DECIMAL(18,2) | Opportunity | Amount | CURRENCY(18,2) | TR-006 | Direct Map |

6.2. Data Transformation Rules

Each transformation rule will be documented with a unique ID, description, logic, and example. Key transformation categories include:

  • Direct Mapping: Field values are copied directly from source to target. (e.g., TR-004: Contact.FirstName -> Contact.FirstName)
  • Format Conversion: Changing data types or formats (e.g., date formats, currency symbols).

* Example (TR-007): Source: Date_Created (DD-MM-YYYY) -> Target: CreatedDate (YYYY-MM-DD)

  • Concatenation/Splitting: Combining or separating multiple source fields into one target field, or vice versa.

* Example (TR-003): Source: Address_Line1 + ', ' + Address_Line2 -> Target: BillingStreet

  • Lookup/Reference Data Mapping: Translating source system codes/values to target system equivalents using a lookup table.

* Example (TR-005): Source: Contact_Status ('Active'='Engaged', 'Inactive'='Archived', 'Lead'='Prospect')

  • Default Values: Assigning a default value to a target field if the source field is null or empty.

* Example (TR-008): If Source.Industry is NULL, then Target.Industry = 'Unspecified'

  • Conditional Logic: Applying transformations based on specific conditions.

* Example (TR-009): If Source.Customer_Type = 'Premium', then Target.SLA_Tier = 'Tier 1', else Target.SLA_Tier = 'Tier 2'

  • Data Cleansing/Standardization: Removing special characters, standardizing addresses, de-duplication.

* Example (TR-010): Remove all non-alphanumeric characters from Source.Phone_Number before mapping to Target.Phone.
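A few of the rules above as Python sketches; the lookup values and defaults are taken from the illustrative examples, not confirmed configuration:

```python
import re

# TR-005: lookup mapping from source status codes to target picklist values.
STATUS_MAP = {"Active": "Engaged", "Inactive": "Archived", "Lead": "Prospect"}


def tr_005_status(value):
    """Lookup mapping; unknown codes pass through for manual review."""
    return STATUS_MAP.get(value, value)


def tr_008_industry(value):
    """TR-008: default to 'Unspecified' when the source field is null/empty."""
    return value if value else "Unspecified"


def tr_010_phone(value: str) -> str:
    """TR-010: strip all non-alphanumeric characters from a phone number."""
    return re.sub(r"[^0-9A-Za-z]", "", value)
```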

6.3. Data Validation Scripts and Procedures

Robust validation is crucial to ensure data integrity. Validation will occur at multiple stages:

  • Pre-Migration Validation (Source Data Profiling):

* Scripts: SQL queries or ETL tool functions to profile source data for completeness, uniqueness, consistency, and validity (e.g., check for mandatory fields, duplicate primary keys, referential integrity issues).

* Procedure: Run profiling scripts, generate reports, identify and address data quality issues in the source system or define specific transformation rules to handle them.

  • During Migration Validation (ETL Process Checks):

* Scripts: Built-in ETL tool validation rules (e.g., data type checks, length constraints).

* Procedure: Monitor ETL logs for errors, reject invalid records, and log details for review and remediation.

  • Post-Migration Validation (Target System Verification):

* Count Validation: Compare record counts for each entity between source and target.

* Script Example: SELECT COUNT(*) FROM Source.Accounts; vs SELECT COUNT(*) FROM Target.Account;

* Sum Validation: Verify aggregate values for financial fields or quantities.

* Script Example: SELECT SUM(Amount) FROM Source.Opportunities; vs SELECT SUM(Amount) FROM Target.Opportunity;

* Random Sample Data Validation: Manually verify a statistically significant sample of records (e.g., 5% of each entity) for accuracy of all mapped fields.

* Key Field Validation: Verify uniqueness constraints, mandatory fields, and referential integrity (e.g., ensuring all contacts are linked to a valid account).

* Business Rule Validation: Execute reports or queries in the target system to ensure migrated data adheres to new system's business rules (e.g., "All opportunities over $1M must have an assigned 'VP Sponsor'").

* Procedure: Execute validation scripts, generate discrepancy reports, escalate critical errors for immediate remediation. User Acceptance Testing (UAT) will be a key part of post-migration validation.

6.4. Error Handling and Logging

  • Error Logging: All migration processes will include comprehensive logging of successful records, warnings, and errors.

* Details Captured: Timestamp, source record ID, target object/field, error type, error message, and original data value.

  • Error Reporting: Automated reports will be generated for failed records, categorized by error type.
  • Remediation Process: A defined process for reviewing error logs, correcting source data, adjusting transformation rules, or manually updating target data for critical exceptions. Retries for transient errors will be implemented where appropriate.

6.5. Rollback Procedures

A robust rollback plan is essential for mitigating risk in the event of unforeseen issues during or immediately after the migration.

  • Rollback Triggers:

* Significant data corruption detected in the target system.

* Critical business functionality is impaired post-migration.

* Performance degradation in the target system due to migrated data.

* Failure to meet agreed-upon validation criteria during UAT.

  • Rollback Steps (Example Scenario: Database-level Rollback):

1. Halt Target System Access: Immediately restrict user access to the target system.

2. Backup Target System: Perform an immediate full backup of the target system's database before any rollback actions, if not already done.

3. Restore Target Database: Restore the target system's database to its pre-migration state using the most recent clean backup.

4. Re-enable Source System: Ensure the legacy source system is fully operational and users are directed back to it.

5. Communicate: Inform all stakeholders about the rollback and the plan for recovery/re-migration.

6. Post-Rollback Analysis: Conduct a thorough root cause analysis of the migration failure to refine the plan before any re-attempt.

  • Alternative Rollback (if full database restore is not feasible/desired):

* Delete Migrated Data: For systems with soft-delete or easily identifiable migrated records, a script to delete all data imported during the migration window. This requires careful planning to avoid deleting legitimate data.

* Switch to Read-Only: Set target system to read-only while investigation and remediation occur, allowing users to temporarily revert to the source system.
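The "Delete Migrated Data" option above presumes migrated rows are easily identifiable. One common design, assumed here rather than confirmed tooling, is to stamp every loaded row with a migration batch ID so a reverse pass can remove exactly that batch and nothing else:

```python
# Sketch: each loaded row carries the batch ID it arrived in (assumed
# to be written to an audit column at load time).
def rollback_batch(rows, batch_id):
    """Return the rows that would survive deleting one migration batch."""
    return [r for r in rows if r.get("migration_batch") != batch_id]
```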

6.6. Testing Strategy

  • Unit Testing: Individual ETL scripts and transformation rules will be tested with sample data.
  • Integration Testing: End-to-end testing of the migration process for a small subset of data, including extraction, transformation, loading, and initial validation.
  • Performance Testing: Assess the migration tool's performance and identify bottlenecks, especially for large data volumes.
  • User Acceptance Testing (UAT): Key business users will validate migrated data in the target system against source data, ensuring it meets business requirements and is functional. This includes verifying reports, dashboards, and key workflows.

6.7. Cutover Strategy

The cutover will be performed during a planned maintenance window to minimize business disruption.

  1. Pre-Cutover: Final delta migration preparation, communication to users.
  2. Source System Freeze: Suspend all data entry and modifications in the source system.
  3. Final Data Extraction: Extract the last batch of delta data from the source.
  4. Final Migration & Validation: Execute the final migration processes and perform critical post-migration validation checks.
  5. Target System Go-Live: Enable user access to the target system and monitor closely during the initial support period.
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}