
Data Migration Planner: Comprehensive Migration Plan for [Project Name]

Document Version: 1.0

Date: October 26, 2023

Prepared For: [Customer Name/Organization]

Prepared By: PantheraHive Solutions Team


1. Executive Summary

This document outlines a comprehensive plan for the data migration from [Source System Name] to [Target System Name]. It details the strategy, methodologies, specific field mappings, data transformation rules, validation procedures, rollback protocols, and estimated timelines required to successfully execute this critical project. The goal is to ensure a secure, accurate, and efficient transfer of data with minimal disruption to business operations, maintaining data integrity and quality throughout the process. This plan serves as a foundational deliverable, guiding all subsequent migration activities.

2. Introduction

The objective of this data migration is to transition essential business data from the legacy [Source System Name] to the modern [Target System Name]. This transition is vital for [briefly state business reason, e.g., enhancing operational efficiency, improving data analytics capabilities, reducing maintenance costs, supporting new business processes]. This plan addresses the technical and procedural aspects necessary to achieve a successful migration, ensuring all stakeholders are aligned on the approach and deliverables.

3. Scope of Migration

The scope of this data migration includes the following key data entities and their associated attributes:

* [e.g., Customers]

* [e.g., Products]

* [e.g., Orders and Order Details]

The migration will focus on active and relevant historical data, as defined in collaboration with business stakeholders during the discovery phase.

4. Source and Target Systems

4.1. Source System

* Name: [e.g., Legacy CRM System, Old ERP Database]

* Technology/Database: [e.g., SQL Server 2012, Oracle 11g, Custom Application]

* Key Data Schemas/Tables: [e.g., Customers, Products, Orders, Order_Details]

* Access Method: [e.g., ODBC, JDBC, API]

4.2. Target System

* Name: [e.g., Salesforce CRM, SAP S/4HANA, New Custom Application]

* Technology/Database: [e.g., Salesforce Objects, HANA DB, PostgreSQL]

* Key Data Schemas/Tables/Objects: [e.g., Account, Product2, Order, OrderItem]

* Access Method: [e.g., Salesforce API, JDBC, REST API]

5. Data Migration Strategy

Our data migration strategy employs a phased approach, focusing on data quality, integrity, and minimal business disruption.

  1. Discovery & Analysis: Detailed understanding of source data, target schema, and business rules.
  2. Design & Planning: Development of mapping, transformation, validation, and rollback plans (this document).
  3. Development & Testing: Building ETL (Extract, Transform, Load) scripts, testing data flows, and validating transformed data in a staging environment.
  4. Trial Migrations (UAT): Executing full migration cycles in a User Acceptance Testing (UAT) environment, involving business users for validation.
  5. Cutover & Production Migration: Executing the migration in the production environment during a scheduled maintenance window.
  6. Post-Migration Validation & Support: Verifying data in the target system and providing immediate support.

The chosen migration approach will be "Big Bang" / "Phased" (select one based on project needs).

6. Detailed Migration Plan

6.1. Data Scope and Volume

Estimated record volumes:

* Customers: [e.g., 500,000]

* Products: [e.g., 10,000]

* Orders: [e.g., 2,000,000]

6.2. Field Mapping

A detailed field mapping document will be maintained in a separate, version-controlled spreadsheet. Below is an illustrative example for a Customer entity:

| Source System Table.Field | Source Data Type | Source Sample Data | Target System Object.Field | Target Data Type | Transformation Rule(s) | Notes/Comments |
| :------------------------ | :--------------- | :----------------- | :------------------------- | :--------------- | :--------------------- | :------------- |
| SRC_CUSTOMER.CUST_ID | INT | 123456 | Account.External_ID__c | Text (255) | None (direct map) | Unique customer identifier |
| SRC_CUSTOMER.FIRST_NAME | VARCHAR(50) | John | Account.FirstName | Text (40) | Trim spaces | |
| SRC_CUSTOMER.LAST_NAME | VARCHAR(50) | Doe | Account.LastName | Text (80) | Trim spaces | Mandatory field |
| SRC_CUSTOMER.COMPANY_NM | VARCHAR(100) | ABC Corp | Account.Name | Text (255) | Trim spaces | Primary account name |
| SRC_CUSTOMER.EMAIL_ADDR | VARCHAR(100) | j.doe@abccorp.com | Account.PersonEmail | Email | Lowercase, validate format | |
| SRC_CUSTOMER.PHONE_NUM | VARCHAR(20) | +1 (555) 123-4567 | Account.Phone | Phone | Standardize to E.164 | Remove non-numeric, add '+1' if missing |
| SRC_CUSTOMER.CREATE_DT | DATETIME | 2020-01-15 10:30:00 | Account.CreatedDate | DateTime | Convert to UTC | |
| SRC_CUSTOMER.STATUS_CD | CHAR(1) | A | Account.Status__c | Picklist | Map 'A'->'Active', 'I'->'Inactive', 'P'->'Pending' | Default to 'Inactive' if null/invalid |
| SRC_ADDRESS.STREET | VARCHAR(100) | 123 Main St | Account.BillingStreet | Text (255) | Concat STREET, APT_NO | |
| SRC_ADDRESS.CITY | VARCHAR(50) | Anytown | Account.BillingCity | Text (40) | None | |

Note: This table is illustrative. The actual field mapping document will be exhaustive for all in-scope entities and attributes.
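The "Standardize to E.164" rule from the mapping above can be sketched as a small helper. The function name and the North American default country code are illustrative assumptions, not part of the actual ETL deliverable:

```python
import re

def to_e164(raw, default_country_code="1"):
    """Normalize a phone number toward E.164: strip non-numeric characters,
    prepend the country code if it appears to be missing (assumes NANP)."""
    digits = re.sub(r"\D", "", raw or "")       # remove non-numeric characters
    if len(digits) == 10:                       # 10 digits: no country code present
        digits = default_country_code + digits  # add '1' per the mapping rule
    return "+" + digits
```

In practice a library with full numbering-plan knowledge would be preferable; this sketch only covers the two cases named in the mapping note.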

6.3. Data Transformation Rules

Transformation rules are critical to ensure data conforms to the target system's requirements and business logic. Each rule will be explicitly documented alongside its corresponding field mapping.

Common Transformation Categories:

* Date/Time Formatting: Converting MM/DD/YYYY to YYYY-MM-DD, local time to UTC.

* Phone Numbers: Standardizing to E.164 format (e.g., +15551234567).

* Currency: Removing currency symbols, converting to a standard decimal format.

* Data Type Conversion: VARCHAR to INT, DATETIME to DATE, CHAR to BOOLEAN.

* Picklist/Dropdown Mapping: Mapping source codes to target system values (e.g., SRC_STATUS='A' to TARGET_STATUS='Active').

* Reference Data Lookups: Looking up IDs based on names (e.g., SRC_SALES_REP_NAME to TARGET_SALES_REP_ID).

* Concatenation: Combining FirstName and LastName into a FullName field.

* Splitting: Extracting an Area Code from a Phone Number.

* Derived Values: Calculating Age from DateOfBirth; deriving Account Type based on Customer Revenue.

* Default Values: Assigning a default value if a source field is null or empty (e.g., Status defaults to 'Active').

* Data Cleansing: Removing leading/trailing spaces (TRIM), changing case (UPPER, LOWER), removing special characters, and handling null values (e.g., replacing with an empty string or a specific default).
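The date/time conversions listed above (source-local timestamps to UTC) might look like this in an ETL script. A fixed source offset is assumed here for simplicity; production code would resolve the source system's actual timezone from a timezone database:

```python
from datetime import datetime, timezone, timedelta

def to_utc_iso(local_str, utc_offset_hours=-5):
    """Parse a source MM/DD/YYYY HH:MM timestamp (fixed offset assumed)
    and emit an ISO-8601 UTC string for the target system."""
    naive = datetime.strptime(local_str, "%m/%d/%Y %H:%M")
    aware = naive.replace(tzinfo=timezone(timedelta(hours=utc_offset_hours)))
    return aware.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
```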

Example Transformation Rule (for Account.Status__c from SRC_CUSTOMER.STATUS_CD):

IF SRC_CUSTOMER.STATUS_CD = 'A' THEN 'Active'
ELSE IF SRC_CUSTOMER.STATUS_CD = 'I' THEN 'Inactive'
ELSE IF SRC_CUSTOMER.STATUS_CD = 'P' THEN 'Pending'
ELSE 'Inactive' (Default for unmapped or invalid codes)
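The conditional rule above reduces to a lookup table with a default, which is how it would typically be implemented in an ETL script. A minimal sketch (function name is illustrative):

```python
# Map SRC_CUSTOMER.STATUS_CD codes to Account.Status__c values
STATUS_MAP = {"A": "Active", "I": "Inactive", "P": "Pending"}

def map_status(status_cd):
    """Apply the status mapping rule; unmapped, invalid, or null codes
    default to 'Inactive' per the field mapping document."""
    return STATUS_MAP.get((status_cd or "").strip(), "Inactive")
```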

Team Training & Readiness Plan for Data Migration Architecture & Execution

This document outlines a comprehensive training and readiness plan designed to equip your team with the essential knowledge and skills required for successful data migration planning, architecture, and execution. This study plan is specifically tailored to support the "Data Migration Planner" workflow, focusing on the critical aspects of defining architecture, understanding data flows, and implementing robust migration strategies.


1. Introduction and Purpose

The successful execution of a data migration project hinges on a well-informed and skilled team. This training plan aims to systematically build expertise across key areas of data migration, from foundational concepts and architectural considerations to practical implementation and risk management. By following this structured program, your team will develop a shared understanding, standardized practices, and the confidence needed to navigate the complexities of data migration.

Overall Goal: To prepare the project team to effectively plan, design, and oversee a complete data migration, ensuring data integrity, minimal downtime, and successful business adoption.

2. Overall Learning Objectives

Upon completion of this training program, participants will be able to:

  • Articulate the core principles and phases of a data migration project.
  • Analyze source and target system architectures to identify data dependencies and integration points.
  • Design comprehensive data mapping and transformation rules.
  • Select appropriate data migration tools and methodologies.
  • Develop robust data validation, testing, and rollback strategies.
  • Understand and mitigate risks associated with data migration projects.
  • Contribute effectively to the planning, execution, and post-migration phases of the project.

3. Weekly Schedule

This 6-week program provides a structured approach to learning, balancing theoretical knowledge with practical application.

| Week | Focus Area | Key Topics | Deliverables/Activities |
| :--- | :------------------------------------------ | :-------------------------------------------------------------------------------- | :-------------------------------------------------------------------------- |
| 1 | Data Migration Fundamentals & Strategy | Introduction to Data Migration, Business Drivers, Project Phases, Risk Assessment, Stakeholder Management, Data Governance Principles. | Project Charter Review, Initial Risk Log Contribution, Glossary of Terms. |
| 2 | Source & Target Architecture Analysis | Deep dive into Source System Data Models, Target System Data Models, Data Dictionary Analysis, Data Profiling, Schema Comparison, Data Lineage. | Source/Target Data Model Diagrams, Initial Data Profiling Report. |
| 3 | Data Mapping, Transformation & Cleansing | Field-level Mapping, Transformation Logic Definition (ETL rules), Data Cleansing Strategies, Data Quality Rules, Error Handling. | Draft Data Mapping Document (key entities), Sample Transformation Rules. |
| 4 | Migration Tooling & Scripting | Overview of ETL Tools (e.g., SSIS, Talend, Informatica), Scripting Languages (SQL, Python), API Integration, Data Loading Techniques. | Tool Evaluation Matrix, Basic ETL Script/Job Design for a simple entity. |
| 5 | Testing, Validation & Rollback Strategies | Migration Test Plan Development, Unit Testing, System Integration Testing (SIT), User Acceptance Testing (UAT), Data Reconciliation, Rollback Procedures. | Draft Test Plan Section (Data Validation), Rollback Strategy Flowchart. |
| 6 | Go-Live, Post-Migration & Project Management | Cutover Planning, Post-Migration Support, Data Archiving, Performance Monitoring, Project Management Best Practices, Communication Plan. | Cutover Checklist Template, Post-Migration Monitoring Plan Outline. |

4. Detailed Learning Objectives (Per Week)

Week 1: Data Migration Fundamentals & Strategy

  • Objectives:

* Define data migration and its various types (e.g., system upgrade, consolidation).

* Identify key business drivers and benefits of data migration.

* Outline the typical phases of a data migration project (planning, design, execution, validation, cutover).

* Recognize common risks in data migration and potential mitigation strategies.

* Understand the importance of data governance and data quality in migration.

* Identify key stakeholders and their roles in the migration process.

  • Activities: Case study analysis of successful/unsuccessful migrations, group discussion on project scope.

Week 2: Source & Target Architecture Analysis

  • Objectives:

* Describe methods for analyzing source system data structures (e.g., database schemas, file formats).

* Understand how to interpret data dictionaries and metadata.

* Perform basic data profiling to identify data types, formats, and quality issues.

* Compare and contrast source and target system data models to identify gaps and redundancies.

* Trace data lineage for critical business entities.

* Document architectural diagrams for both source and target systems relevant to data flow.

  • Activities: Hands-on data profiling exercise using sample data, diagramming exercises.
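The profiling exercise above can start from something very simple before reaching for a dedicated tool. A minimal sketch of per-field profiling over rows represented as dictionaries (field names are placeholders):

```python
from collections import Counter

def profile(rows, field):
    """Basic profile of one field: total rows, null/empty count,
    distinct non-null values, and the most frequent value."""
    values = [r.get(field) for r in rows]
    non_null = [v for v in values if v not in (None, "")]
    counts = Counter(non_null)
    return {
        "total": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(counts),
        "top": counts.most_common(1)[0] if counts else None,
    }
```

Running this across every in-scope field gives a quick first view of null rates and value skew, which feeds directly into the Week 3 cleansing rules.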

Week 3: Data Mapping, Transformation & Cleansing

  • Objectives:

* Develop detailed field-level data mapping documents between source and target.

* Define various data transformation types (e.g., aggregation, lookup, concatenation, conditional logic).

* Formulate specific business rules for data transformation.

* Design strategies for data cleansing, deduplication, and standardization.

* Plan for error handling and logging during the transformation process.

* Understand the impact of data quality on migration success.

  • Activities: Collaborative data mapping workshop, creating transformation rule sets for specific data elements.

Week 4: Migration Tooling & Scripting

  • Objectives:

* Evaluate different categories of data migration tools (ETL, scripting, specialized migration tools).

* Understand the capabilities and limitations of common ETL platforms (e.g., Informatica PowerCenter, Talend, Microsoft SSIS, AWS Glue, Azure Data Factory).

* Develop basic SQL scripts for data extraction, transformation, and loading.

* Utilize Python for data manipulation and automation tasks.

* Identify scenarios for API-based data integration vs. batch processing.

* Design a high-level migration job flow using selected tools.

  • Activities: Hands-on lab with a selected ETL tool or scripting language, demonstration of tool features.
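For the hands-on lab, an extract-transform-load loop can be demonstrated end to end with SQLite alone. Table and column names below are illustrative, not the project's actual schema:

```python
import sqlite3

def run_etl(src_conn, tgt_conn):
    """Extract customers from the source, trim name fields (a sample
    transformation), and load them into a target staging table."""
    rows = src_conn.execute(
        "SELECT customer_id, first_name, last_name FROM customers"
    ).fetchall()
    transformed = [(cid, (fn or "").strip(), (ln or "").strip())
                   for cid, fn, ln in rows]
    tgt_conn.executemany(
        "INSERT INTO accounts (legacy_id, first_name, last_name) VALUES (?, ?, ?)",
        transformed,
    )
    tgt_conn.commit()
    return len(transformed)  # loaded row count, for reconciliation
```

The same extract/transform/load shape carries over to the real tooling (SSIS, Talend, etc.); only the connectors change.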

Week 5: Testing, Validation & Rollback Strategies

  • Objectives:

* Design a comprehensive data migration test plan, including different testing phases.

* Develop strategies for data validation (row counts, checksums, data sampling, referential integrity checks).

* Create SQL queries or scripts for data reconciliation between source and target.

* Define criteria for successful migration testing and UAT sign-off.

* Develop detailed rollback procedures to revert to the source system in case of failure.

* Understand the importance of performance testing for migration.

  • Activities: Developing test cases for a specific data entity, simulating a rollback scenario.
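The row-count and checksum reconciliation objectives above can be sketched as follows. Hashing the sorted key values is one common order-independent approach; the function and field names are illustrative:

```python
import hashlib

def reconcile(source_rows, target_rows, key="id"):
    """Compare row counts and an order-independent checksum of key values
    between source and target extracts."""
    def checksum(rows):
        keys = sorted(str(r[key]) for r in rows)
        return hashlib.sha256("|".join(keys).encode()).hexdigest()
    return {
        "counts_match": len(source_rows) == len(target_rows),
        "checksums_match": checksum(source_rows) == checksum(target_rows),
    }
```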

Week 6: Go-Live, Post-Migration & Project Management

  • Objectives:

* Plan a detailed cutover strategy, including downtime considerations and communication.

* Identify key post-migration activities (e.g., data archiving, system decommissioning, performance monitoring).

* Understand the importance of ongoing data governance post-migration.

* Apply project management principles to data migration (scheduling, resource allocation, change management).

* Develop a communication plan for various stakeholders throughout the migration lifecycle.

* Conduct a post-implementation review to capture lessons learned.

  • Activities: Group exercise on cutover planning, creating a communication matrix.

5. Recommended Resources

  • Books:

* "Data Migration" by Michael Blaha (for foundational concepts and best practices).

* "Designing Data-Intensive Applications" by Martin Kleppmann (for deep architectural understanding).

* Specific vendor documentation for chosen ETL tools (e.g., Informatica documentation, Talend tutorials).

  • Online Courses/Platforms:

* Coursera/edX: Data Engineering, Database Design, SQL courses.

* Udemy/Pluralsight: Specific courses on ETL tools, Python for Data, Cloud Data Platforms (AWS, Azure, GCP).

* LinkedIn Learning: Project Management for Data Professionals.

  • Internal Resources:

* Existing system documentation (data dictionaries, schema diagrams).

* Subject Matter Experts (SMEs) for source and target systems.

* Previous project post-mortems or lessons learned.

  • Websites/Blogs:

* Industry blogs on data migration, data warehousing, and ETL.

* Vendor-specific community forums.

6. Milestones

  • End of Week 2: Completion of Source & Target System Data Model Analysis.
  • End of Week 3: Draft Data Mapping Document for Core Entities.
  • End of Week 4: High-Level Migration Job Design and Tool Selection Justification.
  • End of Week 5: Draft Data Migration Test Plan and Rollback Strategy.
  • End of Week 6: Comprehensive Data Migration Architecture & Execution Plan Presentation.

7. Assessment Strategies

Knowledge and readiness will be assessed through a combination of methods:

  • Weekly Quizzes/Discussions: Short quizzes or facilitated discussions to reinforce key concepts and check understanding.
  • Practical Exercises/Labs: Hands-on assignments involving data profiling, mapping, scripting, and tool usage.
  • Deliverable Reviews: Peer and instructor review of weekly deliverables (e.g., data models, mapping documents, test plans).
  • Mid-Program Review (End of Week 3): A group presentation or discussion on the progress of data mapping and transformation logic.
  • Final Project/Presentation (End of Week 6): A comprehensive presentation of a proposed data migration architecture and execution plan for a hypothetical or actual project scenario, demonstrating the application of all learned concepts. This will include justification for architectural choices, tool selection, testing strategies, and risk mitigation.
  • SME Feedback: Regular interaction with Subject Matter Experts to validate understanding of business rules and data intricacies.

This detailed training plan provides a robust framework for developing a highly competent data migration team. Consistent engagement, active participation, and a commitment to applying learned principles will be key to the success of both the training program and the subsequent data migration project.


This deliverable provides a comprehensive, detailed, and actionable framework for planning and executing a data migration. It includes example configuration files for field mapping, transformation rules, and validation scripts, alongside illustrative Python code for a migration engine, and structured markdown documents for rollback procedures and timeline estimates.

This output is designed to be directly consumable by your team, providing a robust starting point for your data migration project.


Data Migration Planner: Comprehensive Deliverable

This document outlines the core components and provides example code and documentation for planning a complete data migration. It covers field mapping, data transformation, validation, rollback procedures, and timeline estimation.

1. Overview and Purpose

This package provides a structured approach to defining, executing, and managing a data migration project. It comprises:

  • A YAML configuration file (data_migration_config.yaml) to centralize all migration definitions (mappings, transformations, validations).
  • An illustrative Python script (migration_engine.py) demonstrating how to interpret this configuration and perform data processing and validation.
  • Detailed markdown documents for rollback_plan.md and timeline_plan.md to guide the operational aspects of the migration.

The goal is to provide a clear, traceable, and repeatable process for migrating data between systems, ensuring data integrity and minimizing risks.

2. Core Components and Files

This deliverable includes the following files:

  • data_migration_config.yaml:

* Description: This is the central configuration file. It defines the source and target systems, detailed field-level mappings, complex data transformation rules, and specific validation checks to be performed before and after the migration.

* Purpose: To provide a single source of truth for all migration logic, making it easy to review, update, and version control.

  • migration_engine.py:

* Description: An illustrative Python script that demonstrates how to parse the data_migration_config.yaml file and conceptually apply the defined mappings, transformations, and run validation routines.

* Purpose: To serve as a template or conceptual model for developing the actual ETL (Extract, Transform, Load) scripts that will execute the data migration. It highlights the logic flow and integration points for the defined rules.

  • rollback_plan.md:

* Description: A markdown document outlining the detailed steps and considerations for rolling back the migration in case of critical failures or validation discrepancies.

* Purpose: To ensure business continuity and data integrity by providing a clear, pre-defined strategy to revert changes and restore systems to a pre-migration state.

  • timeline_plan.md:

* Description: A markdown document providing a phased approach and estimated timeline for the entire data migration project, from planning to post-migration support.

* Purpose: To establish realistic expectations, allocate resources effectively, and track progress against key milestones.

3. Code and Documentation Deliverables


3.1 data_migration_config.yaml

This file defines the mappings, transformations, and validation rules for the migration.


# data_migration_config.yaml
#
# This YAML file defines the complete configuration for a data migration project.
# It includes details about source and target systems, field mappings,
# data transformation rules, and validation procedures.

migration_project: "Customer_CRM_Migration_Project_V1.0"
description: "Migrating customer data from Legacy CRM to Salesforce"
version: "1.0.0"
last_updated: "2023-10-27"

# --- Source System Details ---
source_system:
  name: "Legacy CRM Database"
  type: "PostgreSQL"
  connection_details: "jdbc:postgresql://legacy-crm-db:5432/crm_data"
  tables_to_migrate:
    - name: "customers"
      primary_key: "customer_id"
      fields:
        - customer_id
        - first_name
        - last_name
        - email_address
        - phone_number
        - address_line1
        - address_line2
        - city
        - state_province
        - postal_code
        - country_code
        - registration_date
        - last_activity_date
        - customer_status
        - legacy_account_number
        - notes

# --- Target System Details ---
target_system:
  name: "Salesforce CRM"
  type: "Cloud CRM"
  api_endpoint: "https://api.salesforce.com/services/data/v58.0"
  objects_to_update:
    - name: "Account"
      external_id_field: "Legacy_Account_ID__c" # Custom field to store legacy ID
      fields:
        - Id
        - Name
        - Email__c
        - Phone
        - BillingAddress
        - ShippingAddress
        - CreatedDate
        - LastActivityDate
        - Status__c
        - Legacy_Account_ID__c # Custom field for matching

# --- Field Mappings ---
# Defines how fields from the source system map to the target system.
# Includes data type conversions and descriptions.
field_mappings:
  - source_table: "customers"
    target_object: "Account"
    mappings:
      - source_field: "customer_id"
        target_field: "Legacy_Account_ID__c"
        description: "Unique identifier for the customer, mapped to a custom external ID field in Salesforce for reconciliation."
      - source_field: "first_name"
        target_field: "FirstName" # Salesforce standard field, will be combined with last_name for Account.Name
        data_type_conversion: "String" # Explicit conversion
        description: "Customer's first name."
      - source_field: "last_name"
        target_field: "LastName" # Salesforce standard field, will be combined with first_name for Account.Name
        data_type_conversion: "String"
        description: "Customer's last name."
      - source_field: "email_address"
        target_field: "Email__c" # Custom Email field in Salesforce
        data_type_conversion: "String"
        description: "Customer's primary email address."
      - source_field: "phone_number"
        target_field: "Phone"
        data_type_conversion: "String"
        description: "Customer's primary phone number."
      - source_field: "registration_date"
        target_field: "CreatedDate"
        data_type_conversion: "DateTime"
        description: "Date when the customer registered."
      - source_field: "last_activity_date"
        target_field: "LastActivityDate"
        data_type_conversion: "DateTime"
        description: "Date of the customer's last recorded activity."
      - source_field: "customer_status"
        target_field: "Status__c" # Custom Status field in Salesforce
        data_type_conversion: "String"
        description: "Current status of the customer (e.g., Active, Inactive, Lead)."
      - source_field: "notes"
        target_field: "Description" # Salesforce standard field
        data_type_conversion: "String"
        description: "General notes about the customer."

# --- Transformation Rules ---
# Defines specific rules for transforming data during migration.
# These rules are applied after basic field mapping.
transformation_rules:
  - target_object: "Account"
    rules:
      - target_field: "Name"
        rule_type: "concatenate"
        source_fields_involved: ["first_name", "last_name"]
        logic_details: "CONCAT(first_name, ' ', last_name)"
        description: "Combine first and last names to form the Salesforce Account Name."
      - target_field: "BillingAddress"
        rule_type: "address_composition"
        source_fields_involved: ["address_line1", "address_line2", "city", "state_province", "postal_code", "country_code"]
        logic_details:
          - "BillingStreet = CONCAT(address_line1, IFNULL(', ' || address_line2, ''))"
          - "BillingCity = city"
          - "BillingState = state_province"
          - "BillingPostalCode = postal_code"
          - "BillingCountry = LOOKUP_COUNTRY_CODE(country_code)" # Custom function example
        description: "Compose the Salesforce Billing Address fields from multiple source fields. Includes a lookup for country codes."
      - target_field: "Status__c"
        rule_type: "value_mapping"
        source_field: "customer_status"
        logic_details:
          "Active": "Active"
          "Inactive": "Inactive"
          "Lead": "Prospect"
          "Closed": "Closed"
          "Pending": "Active" # Defaulting 'Pending' to 'Active'
        default_value: "Unknown"
        description: "Map legacy customer status values to Salesforce-compatible status values."
      - target_field: "Phone"
        rule_type: "format_phone_number"
        source_field: "phone_number"
        logic_details: "PYTHON_FUNCTION: format_phone_for_salesforce" # Reference to a Python function
        description: "Format phone numbers to a consistent international standard (e.g., E.164)."

# --- Validation Rules ---
# Defines checks to ensure data quality and integrity before and after migration.
validation_rules:
  pre_migration: # Checks performed on the source data before migration starts
    - name: "Source Customer Count Check"
      description: "Verify the total number of customers in the source system."
      type: "row_count"
      source_table: "customers"
      expected_operator: "greater_than_or_equal"
      expected_value: 100000 # Example: Expect at least 100,000 customers
      severity: "CRITICAL"
    - name: "Email Uniqueness Check"
      description: "Ensure email addresses are unique in the source data for key customers."
      type: "uniqueness_check"
      source_table: "customers"
      field: "email_address"
      filter_condition: "customer_status = 'Active'" # Only check for active customers
      severity: "HIGH"
    - name: "Mandatory Field Check (Legacy Account Number)"
      description: "Ensure legacy_account_number is not null for all customers."
      type: "null_check"
      source_table: "customers"
      field: "legacy_account_number"
      severity: "CRITICAL"

  post_migration: # Checks performed on the target data after migration completes
    - name: "Target Account Count Check"
      description: "Verify the total number of Accounts migrated to Salesforce matches source count."
      type: "row_count"
      target_object: "Account"
      source_table_for_comparison: "customers" # Compare with source count
      expected_operator: "equal"
      variance_tolerance_percent: 0.01 # Allow 0.01% variance in row counts
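A `value_mapping` rule from the configuration above would be applied by a config-driven loop in the engine. This is a conceptual sketch of the kind of logic `migration_engine.py` describes, operating on a rule dictionary mirroring the YAML structure, not the actual engine code:

```python
def apply_value_mapping(rule, record):
    """Apply one 'value_mapping' transformation rule (as defined in
    data_migration_config.yaml) to a source record, falling back to the
    rule's default_value for unmapped source values."""
    mapping = rule["logic_details"]
    source_value = record.get(rule["source_field"])
    return mapping.get(source_value, rule.get("default_value"))

# Rule dictionary mirroring the Status__c entry in the YAML above
status_rule = {
    "target_field": "Status__c",
    "rule_type": "value_mapping",
    "source_field": "customer_status",
    "logic_details": {"Active": "Active", "Inactive": "Inactive",
                      "Lead": "Prospect", "Closed": "Closed",
                      "Pending": "Active"},
    "default_value": "Unknown",
}
```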

6.5. Data Quality and Cleansing

  • Strategy: Data quality issues identified during profiling will be prioritized and addressed.

* Automated Cleansing: Standard transformations (trimming, casing, formatting) will be automated via ETL scripts.

* Manual Cleansing: Complex issues (e.g., duplicate records requiring merge decisions, highly inconsistent free-text fields) will require manual review and remediation, potentially involving business users.

* Data Governance: Establish clear data ownership and governance rules for ongoing data quality maintenance in the target system.

6.6. Error Handling and Logging

  • ETL Error Handling: The ETL process will be designed to handle errors gracefully.

* Record Rejection: Records failing validation or transformation rules will be logged and moved to an error quarantine table/file, not loaded into the target.
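The record-rejection approach above follows a standard quarantine pattern: failing records are captured with their error and kept out of the target load. A minimal sketch (names are illustrative):

```python
def load_with_quarantine(records, transform):
    """Apply a transformation to each record; route failures into a
    quarantine list with the error message instead of loading them."""
    loaded, quarantined = [], []
    for rec in records:
        try:
            loaded.append(transform(rec))
        except Exception as exc:  # failed validation or transformation
            quarantined.append({"record": rec, "error": str(exc)})
    return loaded, quarantined
```

The quarantine list would be persisted to the error table/file described above for later review and reprocessing.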

'./app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch(err => console.error(err));\n"); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; }\nbody { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; }\n"); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = '"+pn+"';\n}\n"); zip.file(folder+"src/app/app.component.html","
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}