Data Migration Planner

Plan a complete data migration with field mapping, transformation rules, validation scripts, rollback procedures, and timeline estimates.

Data Migration Planner: Comprehensive Study Plan

This document outlines a detailed study plan designed to equip individuals with the knowledge and skills necessary to effectively plan and execute complex data migration projects. This plan covers foundational concepts, architectural design, practical implementation strategies, and crucial risk management, preparing you to become a proficient Data Migration Planner.


1. Overall Goal & Learning Philosophy

The primary goal of this study plan is to enable you to design, document, and oversee a complete data migration lifecycle, from initial discovery to post-migration validation and rollback planning. The learning philosophy emphasizes a blend of theoretical understanding, practical application through case studies, and hands-on exposure to relevant tools and methodologies.


2. Weekly Schedule (4 Weeks Intensive)

This schedule assumes approximately 10-15 hours of dedicated study per week, including reading, exercises, and project work.


Week 1: Foundations & Discovery

  • Focus: Understanding data migration types, methodologies, and initial project phases.
  • Daily Breakdown:

* Day 1-2: Introduction to Data Migration
    * Types of migrations (on-prem to cloud, system upgrades, mergers/acquisitions).
    * Migration strategies (Big Bang vs. Phased, Coexistence, Trickle).
    * Key roles and responsibilities in a migration project.

* Day 3-4: Source & Target System Analysis
    * Data profiling techniques and tools (a small profiling sketch follows this breakdown).
    * Schema analysis and comparison.
    * Understanding data quality dimensions and issues.

* Day 5-6: Data Scoping & Requirements Gathering
    * Identifying critical data elements.
    * Defining business rules and data dependencies.
    * Stakeholder interviews and documentation.

* Day 7: Review & Planning
    * Consolidate Week 1 learning.
    * Outline a basic data migration scope document for a hypothetical scenario.
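To make the Day 3-4 profiling topics concrete, here is a minimal profiling sketch in Python with pandas. The file name (customers.csv) and the email column are hypothetical placeholders for whatever extract you profile.

```python
# Minimal data-profiling sketch (pandas); "customers.csv" and its columns are hypothetical.
import pandas as pd

df = pd.read_csv("customers.csv")

profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),              # inferred data type per column
    "null_count": df.isna().sum(),               # completeness: missing values per column
    "null_pct": (df.isna().mean() * 100).round(2),
    "distinct": df.nunique(),                    # uniqueness: candidate keys and code lists
})
print(profile)

# Simple duplicate check on a presumed business key.
print("duplicate emails:", df["email"].duplicated().sum())
```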


Week 2: Design & Mapping

  • Focus: Detailed design of data structures, mappings, and transformation logic.
  • Daily Breakdown:

* Day 1-2: Data Model Understanding & Harmonization
    * Reviewing source and target data models.
    * Identifying discrepancies and opportunities for data harmonization.
    * Conceptual vs. Logical vs. Physical data models.

* Day 3-4: Field Mapping & Data Dictionary Creation
    * Developing comprehensive field-to-field mapping documents (a mapping-skeleton sketch follows this breakdown).
    * Handling data types, lengths, and constraints.
    * Populating a data dictionary with definitions and business rules.

* Day 5-6: Transformation Rules & Logic
    * Defining complex transformation rules (e.g., aggregations, lookups, derivations).
    * Pseudo-code development for transformation logic.
    * Handling NULLs, defaults, and error conditions.

* Day 7: Review & Project Work
    * Complete detailed field mapping and transformation rules for a sample dataset.
    * Peer review of mapping documents (if possible).
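As referenced in Day 3-4, a first draft of a field-mapping document can be generated programmatically and then refined by analysts. The sketch below is one possible approach using only the standard library; the source and target column lists are invented placeholders, not a real system's layout.

```python
# Sketch: generate a field-mapping skeleton by comparing source and target column lists.
import csv

source_columns = {"customer_id": "INT", "first_name": "VARCHAR(50)", "email_addr": "VARCHAR(100)"}
target_columns = {"id": "UUID", "first_name": "VARCHAR(100)", "email": "VARCHAR(255)"}

rows = []
for src, src_type in source_columns.items():
    # Naive auto-match on identical names; everything else is left for the analyst to map.
    tgt = src if src in target_columns else ""
    rows.append({
        "source_field": src,
        "source_type": src_type,
        "target_field": tgt,
        "target_type": target_columns.get(tgt, ""),
        "transformation_rule": "",   # to be filled in during mapping workshops
        "notes": "auto-matched" if tgt else "needs manual mapping",
    })

with open("field_mapping_skeleton.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```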


Week 3: Execution & Validation Planning

  • Focus: Preparing for the migration execution, including testing, validation, and error handling.
  • Daily Breakdown:

* Day 1-2: ETL/ELT Process Design
    * Overview of ETL/ELT tools and architectures.
    * Designing data extraction, loading, and transformation pipelines.
    * Batch vs. real-time migration considerations.

* Day 3-4: Data Quality & Validation Strategies
    * Pre-migration data cleansing and enrichment.
    * Post-migration data validation techniques (row counts, checksums, reconciliation reports).
    * Developing validation scripts and test cases.

* Day 5-6: Error Handling & Logging
    * Designing robust error logging and reporting mechanisms.
    * Strategies for data rejection and remediation.
    * Exception handling frameworks.

* Day 7: Review & Practical Exercise
    * Develop a detailed validation plan and sample validation script (pseudo-code) for a hypothetical migration (a minimal validation sketch follows this breakdown).
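For the Day 7 exercise, one possible shape for a validation script is to compare row counts and an order-independent checksum between source and target. In the sketch below, the SQLite files, table names, and column choices are stand-ins for real systems; checksum comparisons only make sense for columns that are migrated without transformation.

```python
# Minimal validation sketch: row counts plus an order-independent checksum of key columns.
import hashlib
import sqlite3

def table_fingerprint(db_path, query):
    """Return (row_count, checksum) for the rows produced by `query`."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(query).fetchall()
    digest = hashlib.sha256()
    for row in sorted(rows):                 # sort so row order does not affect the checksum
        digest.update(repr(row).encode())
    return len(rows), digest.hexdigest()

src = table_fingerprint("source.db", "SELECT customer_id, last_name FROM customers")
tgt = table_fingerprint("target.db", "SELECT source_system_id, last_name FROM customers")

assert src[0] == tgt[0], f"row count mismatch: {src[0]} vs {tgt[0]}"
assert src[1] == tgt[1], "checksum mismatch: reconcile transformed columns separately"
print("row counts and checksums match")
```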


Week 4: Risk Management, Rollback & Project Management

  • Focus: Mitigating risks, planning for contingencies, and overall project management aspects.
  • Daily Breakdown:

* Day 1-2: Risk Assessment & Mitigation
    * Identifying common data migration risks (data loss, corruption, downtime).
    * Developing mitigation strategies and contingency plans.
    * Security and compliance considerations.

* Day 3-4: Rollback Procedures & Business Continuity
    * Designing comprehensive rollback plans (a rollback sketch follows this breakdown).
    * Strategies for minimal business disruption during migration.
    * Backup and recovery protocols.

* Day 5-6: Project Management & Timeline Estimation
    * Estimating effort, resources, and timelines.
    * Stakeholder communication and change management.
    * Post-migration support and cutover activities.

* Day 7: Final Project Synthesis & Presentation
    * Consolidate all learned concepts into a comprehensive mock data migration plan.
    * Prepare a high-level presentation of your plan.
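As a starting point for the Day 3-4 rollback topic, the sketch below snapshots target tables before the load and restores them if validation fails. It is illustrative only: the SQLite target, table names, and the two placeholder functions are assumptions, and a production rollback plan would rely on native backup/restore tooling and a rehearsed runbook.

```python
# Rollback sketch: snapshot target tables before loading, restore them if validation fails.
import sqlite3

TABLES = ["customers", "orders"]   # hypothetical in-scope target tables

def snapshot(conn):
    """Copy each target table to a *_premigration table before the load."""
    for t in TABLES:
        conn.execute(f"DROP TABLE IF EXISTS {t}_premigration")
        conn.execute(f"CREATE TABLE {t}_premigration AS SELECT * FROM {t}")

def rollback(conn):
    """Restore each target table from its pre-migration snapshot."""
    for t in TABLES:
        conn.execute(f"DELETE FROM {t}")
        conn.execute(f"INSERT INTO {t} SELECT * FROM {t}_premigration")

def load_migrated_data(conn):
    """Placeholder for the real ETL load step."""
    ...

def run_validation_checks(conn):
    """Placeholder: raise an exception if any post-load check fails."""
    ...

with sqlite3.connect("target.db") as conn:
    snapshot(conn)
    try:
        load_migrated_data(conn)
        run_validation_checks(conn)
    except Exception:
        rollback(conn)
        raise
```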


3. Learning Objectives

Upon completion of this study plan, you will be able to:

  • Comprehend Data Migration Fundamentals: Articulate different data migration types, strategies, and their suitability for various business scenarios.
  • Conduct Thorough Data Analysis: Perform detailed source and target system data profiling, schema comparison, and data quality assessments.
  • Design Robust Data Mappings: Create comprehensive field-to-field mapping documents and define complex data transformation rules with associated business logic.
  • Plan for Data Quality & Validation: Develop effective pre-migration data cleansing strategies and post-migration validation test cases and scripts.
  • Outline Error Handling & Rollback: Design robust error logging mechanisms and formulate detailed rollback procedures to ensure business continuity.
  • Estimate & Manage Migration Projects: Develop realistic timeline estimates, identify key resources, and understand critical project management aspects for data migrations.
  • Identify & Mitigate Risks: Proactively identify potential data migration risks and design effective mitigation and contingency plans.
  • Communicate Effectively: Articulate complex data migration plans and strategies to technical and non-technical stakeholders.

4. Recommended Resources

  • Books:

* "The Data Warehouse Toolkit" by Ralph Kimball (for foundational data modeling and ETL concepts).

* "Designing Data-Intensive Applications" by Martin Kleppmann (for understanding distributed systems and data processing).

* "Data Migration: The Complete Guide" by Adrian O'Connell (specific to data migration practices).

  • Online Courses (Self-Paced):

* Coursera/Udemy/edX: Courses on Data Warehousing, ETL Processes, SQL for Data Analysis, Cloud Data Migrations (e.g., AWS Database Migration Service, Azure Data Migration).

* LinkedIn Learning: "Data Migration Best Practices," "Learning ETL Tools."

* Vendor-Specific Training: AWS, Azure, Google Cloud offer detailed documentation and courses on their migration services.

  • Articles & Blogs:

* Gartner, Forrester research on data migration trends and best practices.

* Blogs from leading cloud providers (AWS, Azure, GCP) on their migration services.

* Industry-specific forums and communities (e.g., DBA Stack Exchange, specific ETL tool forums).

  • Tools & Software (Hands-on Practice):

* SQL Databases: PostgreSQL, MySQL (for schema analysis, querying, data manipulation).

* ETL Tools (Trial Versions/Community Editions): Talend Open Studio, Apache NiFi, Pentaho Data Integration (Kettle).

* Scripting Languages: Python (for data profiling, scripting transformations, validation checks).

* Spreadsheets/Documentation Tools: Microsoft Excel/Google Sheets (for mapping documents), Confluence/Jira (for project tracking).

  • Case Studies:

* Research real-world data migration success stories and failures to learn from practical experiences.

* Focus on lessons learned, challenges faced, and solutions implemented.


5. Milestones

  • End of Week 1: Completion of a High-Level Data Migration Scope Document for a hypothetical scenario, including identified source/target systems and initial data quality observations.
  • End of Week 2: Development of a Detailed Field Mapping Document and Transformation Rule Specifications (pseudo-code) for a key dataset within your hypothetical scenario.
  • End of Week 3: Creation of a Comprehensive Data Validation Plan (including pre- and post-migration checks) and Error Handling Strategy document.
  • End of Week 4: Submission of a Complete Mock Data Migration Plan, encompassing all elements from discovery to rollback, and a Summary Presentation of the plan.

6. Assessment Strategies

  • Weekly Self-Assessments/Quizzes: Short quizzes or checklists at the end of each week to test understanding of key concepts.
  • Practical Exercises:

* Data Profiling Exercise: Analyze a small sample dataset (provided or self-sourced) to identify data quality issues and propose cleansing strategies.

* Mapping & Transformation Exercise: Given a source and target schema, create detailed field mappings and define transformation rules for specific business requirements.

* Validation Scripting: Write pseudo-code or actual scripts (e.g., Python, SQL) to validate data integrity post-migration.

  • Project-Based Assessment (Final Deliverable): The primary assessment will be the Complete Mock Data Migration Plan developed over the four weeks. This plan will be evaluated based on its completeness, accuracy, practicality, and adherence to best practices.
  • Peer Review (Optional but Recommended): Exchange and review mapping documents or validation plans with a study partner to gain different perspectives and identify potential gaps.
  • Scenario-Based Problem Solving: Tackle complex hypothetical migration challenges, proposing solutions for difficult data types, performance bottlenecks, or unexpected errors.

This comprehensive study plan provides a structured pathway to mastering data migration planning. Consistent effort, practical application, and leveraging the recommended resources will be key to your success.


This document outlines a comprehensive plan for your data migration, encompassing detailed field mapping, transformation rules, validation scripts, rollback procedures, and a projected timeline. This plan is designed to ensure a smooth, secure, and accurate transfer of your valuable data from the source to the target system.


Data Migration Planner: Comprehensive Plan

1. Introduction

This document serves as the deliverable for Step 2 of the "Data Migration Planner" workflow. It provides a detailed blueprint for the upcoming data migration, covering all critical aspects from data mapping and transformation to validation and contingency planning. The goal is to establish a clear, actionable strategy to ensure data integrity, minimize downtime, and achieve a successful transition to the new system.

2. Data Migration Scope & Objectives

Source System: Legacy CRM (e.g., on-premise application with MySQL database)

Target System: New Cloud-based CRM (e.g., Salesforce, or a custom application with PostgreSQL database)

Primary Data Entities to be Migrated:

  • Customers
  • Orders
  • Products
  • Contacts (associated with Customers)

Key Objectives:

  • Accuracy: Ensure 100% data fidelity post-migration.
  • Completeness: Migrate all in-scope data records without loss.
  • Consistency: Maintain referential integrity and data relationships.
  • Efficiency: Execute the migration within defined timeframes and resource constraints.
  • Business Continuity: Minimize impact on ongoing business operations during the migration window.

3. Data Migration Strategy Overview

Our strategy will follow a phased approach:

  1. Discovery & Planning: Detailed analysis of source data, target schema, and business requirements (completed with this document).
  2. Extraction, Transformation, and Load (ETL) Development: Building scripts and tools for data extraction, applying transformation rules, and loading into the target system.
  3. Testing: Rigorous unit, integration, and user acceptance testing (UAT) with sample and full datasets in a staging environment.
  4. Migration Execution: Performing the actual data migration in a controlled production environment.
  5. Post-Migration Validation: Thorough verification of migrated data and system functionality.
  6. Cutover & Go-Live: Transitioning to the new system and decommissioning the old one (if applicable).

4. Detailed Migration Plan Components

4.1. Field Mapping Document

This section details the mapping of individual fields from the source system to the target system, including data types, constraints, and specific transformation rules.

Example: Customers Data Entity Mapping

| Source Table | Source Field Name | Source Data Type | Target Table | Target Field Name | Target Data Type | Nullable (Y/N) | Key (PK/FK/Index) | Transformation Rule ID | Notes/Comments |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| legacy_crm.customers | customer_id | INT (PK) | new_crm.customers | id | UUID | N | PK | T001 | Auto-generate UUID in target. Map for audit trail. |
| legacy_crm.customers | first_name | VARCHAR(50) | new_crm.customers | first_name | VARCHAR(100) | N | - | - | Direct map. |
| legacy_crm.customers | last_name | VARCHAR(50) | new_crm.customers | last_name | VARCHAR(100) | N | - | - | Direct map. |
| legacy_crm.customers | email | VARCHAR(100) | new_crm.customers | email | VARCHAR(255) | N | UNIQUE INDEX | T002 | Normalize to lowercase. Validate email format. |
| legacy_crm.customers | phone_number | VARCHAR(20) | new_crm.customers | phone | VARCHAR(30) | Y | - | T003 | Cleanse and standardize format (e.g., E.164). |
| legacy_crm.customers | street_address | VARCHAR(255) | new_crm.addresses | street | VARCHAR(255) | Y | - | T004 | Split into street, city, state, zip. |
| legacy_crm.customers | city | VARCHAR(100) | new_crm.addresses | city | VARCHAR(100) | Y | - | T004 | Part of address splitting. |
| legacy_crm.customers | state | VARCHAR(50) | new_crm.addresses | state | VARCHAR(50) | Y | - | T004 | Part of address splitting. Map to ISO 3166-2 if applicable. |
| legacy_crm.customers | zip_code | VARCHAR(10) | new_crm.addresses | zip_code | VARCHAR(10) | Y | - | T004 | Part of address splitting. |
| legacy_crm.customers | created_at | DATETIME | new_crm.customers | created_at | TIMESTAMP WITH TZ | N | - | - | Direct map. Convert to UTC. |
| legacy_crm.customers | last_updated | DATETIME | new_crm.customers | updated_at | TIMESTAMP WITH TZ | N | - | - | Direct map. Convert to UTC. |
| legacy_crm.customers | status | ENUM('A','I') | new_crm.customers | is_active | BOOLEAN | N | - | T005 | Map 'A' to TRUE, 'I' to FALSE. |
| legacy_crm.customers | segment | VARCHAR(50) | new_crm.customers | customer_segment | VARCHAR(50) | Y | INDEX | T006 | Standardize values (e.g., 'SMB' -> 'Small Business'). |
| legacy_crm.customers | notes | TEXT | new_crm.customers | internal_notes | TEXT | Y | - | - | Direct map. |
| legacy_crm.customers | customer_id | INT | new_crm.customers | source_system_id | INT | Y | - | T007 | Store original customer_id for traceability. |

Note: This table is an example. A complete migration will require similar detailed mapping for all in-scope tables and fields.
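One way to keep the mapping table above authoritative is to let it drive the rename step of the ETL job directly, so the mapping document remains the single source of truth. The sketch below is a minimal illustration assuming a pandas-based pipeline; the extract file name is hypothetical and only three mapped fields are shown.

```python
# Sketch: drive column selection and renaming from the field-mapping document itself.
import pandas as pd

# A few rows lifted from the mapping table above (source_field -> target_field).
mapping = pd.DataFrame([
    {"source_field": "first_name",   "target_field": "first_name"},
    {"source_field": "phone_number", "target_field": "phone"},
    {"source_field": "last_updated", "target_field": "updated_at"},
])

source_df = pd.read_csv("legacy_crm_customers_extract.csv")   # hypothetical extract
rename_map = dict(zip(mapping["source_field"], mapping["target_field"]))

# Keep only mapped columns and rename them to their target names.
target_df = source_df[list(rename_map)].rename(columns=rename_map)
target_df.to_csv("new_crm_customers_load.csv", index=False)
```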

4.2. Data Transformation Rules

This section describes the specific logic applied to data during the migration process to ensure compatibility and correctness in the target system.

| Rule ID | Description | Source Fields | Target Fields | Transformation Logic (Pseudocode/Explanation) |
| :--- | :--- | :--- | :--- | :--- |


Data Migration Planner: Comprehensive Plan & Deliverable

This document outlines the complete plan for the upcoming data migration, encompassing detailed strategies for field mapping, transformation rules, validation, rollback, and a comprehensive timeline. This plan serves as a foundational deliverable, guiding the execution of a successful and reliable data transition.


1. Executive Summary

This Data Migration Plan details the strategy for migrating data from [Source System Name] to [Destination System Name]. The primary objective is to ensure a secure, accurate, and efficient transfer of critical business data, minimizing downtime and preserving data integrity. This document provides a framework for all migration activities, from initial data analysis and cleansing to post-migration validation and support. By adhering to this plan, we aim to achieve a seamless transition that supports the successful adoption of the new [Destination System Name] platform.


2. Project Scope

2.1. Migration Objectives

  • Migrate all relevant [Specify Data Type, e.g., customer, product, order] data from [Source System Name] to [Destination System Name].
  • Ensure data accuracy, completeness, and consistency in the destination system.
  • Minimize business disruption and system downtime during the migration window.
  • Implement robust data validation and error handling mechanisms.
  • Provide a clear rollback strategy in case of unforeseen issues.
  • Retire [Source System Name] post-migration.

2.2. Systems Involved

  • Source System: [Source System Name] (e.g., Legacy CRM, On-premise ERP)

* Database Type: [e.g., SQL Server, Oracle, flat files]

* Key Modules/Data Sets: [e.g., Customer Master, Sales Orders, Product Catalog]

  • Destination System: [Destination System Name] (e.g., Salesforce CRM, SAP S/4HANA, Custom Cloud Application)

* Database Type: [e.g., PostgreSQL, NoSQL, SaaS API]

* Key Modules/Data Sets: [e.g., Accounts, Opportunities, Products]

2.3. Data In-Scope

  • Core Data Sets: [List specific data entities, e.g., Customers, Contacts, Products, Orders, Invoices].
  • Historical Data: [Specify retention period, e.g., last 5 years of transactions, all active customer records].
  • Related Data: [e.g., Attachments, Notes, Audit Trails – if applicable].

2.4. Data Out-of-Scope

  • [List specific data entities or types not being migrated, e.g., archived inactive customers older than 10 years, marketing campaign data, temporary logs].
  • User preferences or system configurations not directly related to core business data.

3. Assumptions & Prerequisites

3.1. Assumptions

  • Source system data is accessible for extraction.
  • Destination system schema is finalized and stable.
  • Required hardware, software, and network infrastructure are in place and adequately provisioned.
  • Key business stakeholders are available for data validation and UAT.
  • Adequate system downtime can be scheduled for the cutover.
  • Data cleansing activities will be completed prior to the migration execution phase.

3.2. Prerequisites

  • Approved data mapping and transformation rules.
  • Dedicated migration environment (staging area) set up.
  • Secure access credentials for both source and destination systems.
  • Backup of both source and destination systems completed prior to migration execution.
  • Trained personnel available for migration execution, monitoring, and support.

4. Data Migration Strategy

4.1. Overall Approach

  • Phased Migration: Data will be migrated in logical batches (e.g., master data first, then transactional data, or by business unit) to allow for incremental validation and reduce overall risk.
  • Migration Type: ETL (Extract, Transform, Load) approach utilizing [Specify ETL Tool, e.g., custom scripts, Talend, SSIS, Informatica].
  • Downtime Strategy: A planned downtime window will be required for the final cutover of transactional data to ensure data consistency. Master data may be migrated with minimal disruption.

4.2. Migration Phases

  1. Analysis & Planning: Data profiling, mapping, strategy definition.
  2. Design & Development: ETL script development, transformation logic, validation script creation.
  3. Testing: Unit testing, integration testing, performance testing, UAT.
  4. Pre-Migration Activities: Data cleansing, source system freeze (if applicable), final backups.
  5. Execution (Cutover): Data extraction, transformation, loading into the destination system.
  6. Post-Migration Activities: Validation, reconciliation, hypercare support, source system decommissioning.

5. Data Analysis & Cleansing

5.1. Data Profiling

  • Activity: Analyze source data for completeness, accuracy, uniqueness, and consistency. Identify data types, formats, and potential anomalies.
  • Tools: [e.g., SQL queries, Excel, Data Profiling tools like Informatica Data Quality].
  • Deliverable: Data Profiling Report highlighting data quality issues.

5.2. Data Cleansing Strategy

  • Activity: Address identified data quality issues (e.g., duplicate records, missing values, inconsistent formats) in the source system before migration or as part of the transformation process.
  • Methods:

* Automated Cleansing: Using scripts to standardize formats, remove special characters, or fill in default values.

* Manual Cleansing: Business users reviewing and correcting critical data elements.

* Deduplication: Identifying and merging duplicate records based on defined rules (a minimal deduplication sketch follows this section).

  • Responsibility: [Data Owner/Business Analyst] for defining rules, [Data Migration Team] for execution.
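As noted under Deduplication above, duplicates are collapsed according to rules defined by the data owner. The sketch below illustrates one such rule as a pandas example: match on a normalized email and keep the most recently updated record. The file names, matching key, and survivorship rule are assumptions for illustration only.

```python
# Deduplication sketch: flag and collapse duplicate customers on a defined business key.
import pandas as pd

df = pd.read_csv("customers_extract.csv")            # hypothetical source extract
df["email_norm"] = df["email"].str.strip().str.lower()

dupes = df[df.duplicated(subset="email_norm", keep=False)]
print(f"{len(dupes)} rows share an email with at least one other row")

# Survivorship rule: keep the most recently updated record per normalized email.
deduped = (
    df.sort_values("last_updated", ascending=False)
      .drop_duplicates(subset="email_norm", keep="first")
)
deduped.to_csv("customers_deduped.csv", index=False)
```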

6. Data Mapping & Transformation

This section details how data fields from the source system will correspond to the destination system, including any necessary modifications during the migration process.

6.1. Source and Destination Systems Overview

  • Source System: [e.g., Legacy CRM v5.0] contains customer, contact, and sales order information.
  • Destination System: [e.g., Salesforce Sales Cloud] will house Account, Contact, and Opportunity objects.

6.2. Field Mapping Matrix

The following table provides a template for the detailed field mapping. This matrix will be populated for each in-scope data entity (e.g., Customer, Product, Order).

Entity: Customer/Account

| Source System (Field Name) | Source Data Type | Source Sample Data | Destination System (Field Name) | Destination Data Type | Transformation Rule | Notes/Comments |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| CustID | INT | 100123 | External_ID__c | Text(255) | Direct Map | Unique external identifier for Salesforce. |
| CompanyName | VARCHAR(255) | Acme Corp. | Name | Text(255) | Direct Map | Primary account name. |
| AddressLine1 | VARCHAR(255) | 123 Main St | BillingStreet | Text(255) | Concatenate | Combine with AddressLine2 if needed. |
| AddressLine2 | VARCHAR(255) | Suite 200 | BillingStreet | Text(255) | Concatenate | Appended to AddressLine1 with a newline. |
| City | VARCHAR(100) | Anytown | BillingCity | Text(100) | Direct Map | |
| StateCode | CHAR(2) | CA | BillingState | Text(100) | Lookup/Map | Map source 2-char code to full state name (e.g., CA -> California). |
| ZipCode | VARCHAR(10) | 90210 | BillingPostalCode | Text(20) | Direct Map | Format to NNNNN-NNNN if needed. |
| CustType | VARCHAR(50) | Premium | Type | Picklist | Lookup/Map | Map source values to destination picklist values (e.g., 'Premium' -> 'Enterprise', 'Standard' -> 'SMB'). |
| Status | VARCHAR(20) | Active | IsActive__c | Checkbox | Conditional | If Status = 'Active', set IsActive__c to TRUE, else FALSE. |
| CreationDate | DATETIME | 2020-01-15 10:30 | CreatedDate | DateTime | Direct Map | |
| LastUpdatedBy | VARCHAR(100) | jsmith | LastModifiedBy | Lookup | Lookup User ID | Map source user ID to destination system user ID. |
| LegacyNotes | TEXT | Called 01/01 | Description | Long Text Area | Concatenate | Combine multiple legacy note fields, prefix with "Legacy Notes:". |

(This table will be replicated for each major data entity: e.g., Contacts, Products, Orders, etc.)

6.3. Transformation Rules Catalogue

This section details the specific logic applied to data during the migration; a short Python sketch of the first three rules follows the catalogue below.

  • Rule 1: Concatenation (Address Lines)

* Description: Combine AddressLine1 and AddressLine2 from the source system into a single BillingStreet field in the destination system.

* Logic: Destination.BillingStreet = Source.AddressLine1 + NEWLINE + Source.AddressLine2 (if AddressLine2 is not null).

* Affected Fields: AddressLine1, AddressLine2 -> BillingStreet.

  • Rule 2: Lookup/Mapping (State Codes)

* Description: Convert 2-character state codes from the source system (e.g., 'CA') to full state names (e.g., 'California') for the destination system.

* Logic: Use a predefined lookup table: MAP(Source.StateCode, {'CA':'California', 'NY':'New York', ...}).

* Affected Fields: StateCode -> BillingState.

  • Rule 3: Conditional Mapping (Customer Status)

* Description: Map a string-based status field from the source to a boolean checkbox in the destination.

* Logic: IF (Source.Status = 'Active', TRUE, FALSE).

* Affected Fields: Status -> IsActive__c.

  • Rule 4: Default Value Assignment

* Description: If a source field is null or empty, assign a predefined default value in the destination system.

* Logic: IF (Source.Industry IS NULL OR Source.Industry = '', 'Unspecified', Source.Industry).

* Affected Fields: Industry -> Industry.

  • Rule 5: Data Type Conversion

* Description: Convert source data types (e.g., Numeric) to destination data types (e.g., Decimal) with appropriate precision.

* Logic: CAST(Source.Price AS DECIMAL(10,2)).

* Affected Fields: Price -> Price__c.

  • Rule 6: Date Format Standardization

* Description: Ensure all date fields are in a consistent YYYY-MM-DD format.

* Logic: FORMAT_DATE(Source.OrderDate, 'YYYY-MM-DD').

* Affected Fields: OrderDate -> Order_Date__c.
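To show how the catalogue translates into code, the sketch below implements Rules 1-3 for a single record. The record layout is illustrative, the state lookup table is deliberately partial, and this is a minimal sketch of the stated logic rather than the project's actual transformation code.

```python
# Sketch of Rules 1-3 from the catalogue above, applied to one source record.

STATE_NAMES = {"CA": "California", "NY": "New York"}   # Rule 2 lookup table (partial)

def transform_account(src):
    # Rule 1: concatenate address lines into BillingStreet (newline only if line 2 exists).
    street = src["AddressLine1"]
    if src.get("AddressLine2"):
        street += "\n" + src["AddressLine2"]

    return {
        "BillingStreet": street,
        # Rule 2: map the 2-character state code to a full state name.
        "BillingState": STATE_NAMES.get(src["StateCode"], src["StateCode"]),
        # Rule 3: conditional mapping of the Status string to a boolean checkbox.
        "IsActive__c": src["Status"] == "Active",
    }

print(transform_account({
    "AddressLine1": "123 Main St",
    "AddressLine2": "Suite 200",
    "StateCode": "CA",
    "Status": "Active",
}))
```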


7. Data Validation Plan

A robust validation plan is critical to ensure data integrity throughout the migration process.

7.1. Pre-Migration Validation

  • Source Data Profiling: As detailed in Section 5.1, analyze source data for quality issues.
  • Schema Validation: Verify that the source system schema matches the expected structure for extraction.
  • Data Volume Check: Record initial counts for all in-scope tables/entities in the source system.

7.2. During Migration Validation

  • Record Count Verification: Compare the number of records extracted from the source with the number of records loaded into the staging area and then into the destination.
  • Error Logging: Implement detailed logging for any failed transformations or load errors.
  • Checksum Verification: For critical large files or databases, use checksums to ensure data has not been corrupted during transfer.
  • Data Type and Format Checks: Automated checks during loading to ensure data conforms to destination field types and formats.

7.3. Post-Migration Validation

  • Record Count Reconciliation: Final comparison of record counts between source and destination for each entity.
  • Data Sample Verification: Manual review of a statistically significant sample of migrated records by business users to confirm accuracy and completeness (e.g., 5-10% of critical records).
  • Referential Integrity Checks: Verify relationships between migrated entities (e.g., all contacts are linked to an existing account); a reconciliation sketch follows.
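Below is a minimal reconciliation sketch for the post-migration checks above, assuming both systems can be queried as SQLite databases (real systems would use their native drivers or APIs); the entity, table, and column names are placeholders.

```python
# Post-migration reconciliation sketch: per-entity record counts plus an orphaned-contact check.
import sqlite3

src = sqlite3.connect("source.db")
dst = sqlite3.connect("destination.db")

# Hypothetical source-to-destination entity pairs.
entities = {"customers": "accounts", "contacts": "contacts", "orders": "opportunities"}
for src_table, dst_table in entities.items():
    src_count = src.execute(f"SELECT COUNT(*) FROM {src_table}").fetchone()[0]
    dst_count = dst.execute(f"SELECT COUNT(*) FROM {dst_table}").fetchone()[0]
    status = "OK" if src_count == dst_count else "MISMATCH"
    print(f"{src_table} -> {dst_table}: {src_count} vs {dst_count} [{status}]")

# Referential integrity: contacts whose account_id has no matching account.
orphans = dst.execute(
    "SELECT COUNT(*) FROM contacts c "
    "LEFT JOIN accounts a ON c.account_id = a.id "
    "WHERE a.id IS NULL"
).fetchone()[0]
print(f"orphaned contacts: {orphans}")
```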