
Data Migration Planner: Detailed Implementation Plan

This document outlines a comprehensive data migration plan, including detailed field mapping, transformation rules, validation scripts, rollback procedures, and timeline estimates. The provided code examples are designed to be production-ready, well-commented, and serve as a robust foundation for your migration project.


1. Data Migration Strategy Overview

The goal of this migration is to reliably transfer data from a Source System to a Target System, ensuring data integrity, accuracy, and completeness. This plan emphasizes a structured, phased approach with robust validation and contingency measures.

Key Principles:

  • Data integrity first: migrated data must remain accurate, complete, and consistent.
  • Phased execution: migrate in controlled stages rather than a single unverified cutover.
  • Validate continuously: check data before extraction, during transformation, and after loading.
  • Plan for rollback: every phase has a tested path back to a known-good state.


2. Field Mapping & Schema Definition

Field mapping defines the relationship between source and target system attributes, including data types and nullability. This section provides a structured way to define these mappings.

Description:

The mapping specifies which source field corresponds to which target field. It also notes the target data type and whether the target field can be null. This mapping is crucial for both data extraction and loading, as well as for identifying necessary transformations.

Code Example: Python Dictionary Mapping (In-Application Configuration)

This Python dictionary provides a direct, programmatic way to define field mappings.

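A minimal sketch of such a mapping dictionary, assuming a simple customer entity (all source and target field names here are illustrative placeholders):

```python
# Illustrative field mapping: source field -> (target field, target type, nullable).
# All field names are hypothetical placeholders for this sketch.
FIELD_MAPPING = {
    "cust_id":    ("customer_id",  "INTEGER",   False),
    "full_name":  ("display_name", "VARCHAR",   False),
    "email_addr": ("email",        "VARCHAR",   True),
    "phone":      ("phone_e164",   "VARCHAR",   True),
    "created_on": ("created_at",   "TIMESTAMP", False),
}

def target_columns():
    """Returns the target column names, in mapping order."""
    return [target for target, _, _ in FIELD_MAPPING.values()]
```

Keeping the mapping in one structure lets the extraction, transformation, and loading steps all read from a single source of truth.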
---

3. Data Transformation Rules

Transformation rules define how source data is manipulated to fit the target system's requirements. These rules are applied during the extraction and loading process.

Description:

Each transformation rule is implemented as a Python function, ensuring modularity and reusability. These functions handle data type conversions, formatting, cleaning, and business logic application. A dictionary maps rule names to their corresponding functions.

Code Example: Python Transformation Functions


Comprehensive Study Plan for Data Migration Planning Professional

Project: Data Migration Planner Workflow - Step 1 of 3: plan_architecture

Deliverable: Detailed Study Plan for Data Migration Planning


1. Introduction & Goal

This study plan is designed to guide an aspiring or current IT professional through the comprehensive knowledge and skill development required to excel as a Data Migration Planner. The goal is to build a robust understanding of data migration methodologies, best practices, tools, and critical considerations, enabling the individual to effectively plan, strategize, and oversee successful data migration projects. By the end of this program, the learner will be equipped to define scope, map data, establish transformation rules, design validation strategies, plan for contingencies, and estimate project timelines.

2. Overall Learning Objectives

Upon completion of this study plan, the learner will be able to:

  • Understand Data Migration Fundamentals: Grasp the core concepts, types, phases, and common challenges of data migration projects.
  • Conduct Comprehensive Discovery & Assessment: Identify data sources, targets, stakeholders, and business requirements.
  • Perform Data Profiling & Analysis: Utilize tools and techniques to understand data quality, structure, and relationships.
  • Design Field Mapping & Transformation Rules: Develop detailed specifications for mapping source to target fields, including complex data transformations and cleansing rules.
  • Develop Data Validation Strategies: Create robust validation scripts and procedures to ensure data integrity and accuracy post-migration.
  • Formulate Rollback Procedures & Risk Mitigation: Design comprehensive plans for reverting to the original state and identifying/mitigating potential project risks.
  • Estimate Project Timelines & Resources: Develop realistic timeline estimates, resource allocation plans, and budget considerations for data migration projects.
  • Select Appropriate Migration Tools & Technologies: Evaluate and recommend suitable ETL tools, scripting languages, and database utilities.
  • Understand Governance & Compliance: Integrate data governance, security, and regulatory compliance into migration planning.
  • Communicate & Document Effectively: Prepare professional documentation, reports, and presentations for various stakeholders.

3. Weekly Schedule

This 8-week schedule provides a structured approach to learning, balancing theoretical knowledge with practical application. Each week includes core topics, specific learning objectives, and suggested activities.

Week 1: Data Migration Fundamentals & Project Initiation

  • Learning Objectives:

* Define data migration, its purpose, and common drivers (e.g., system upgrades, cloud adoption, mergers).

* Identify different types of data migration (e.g., storage, database, application, cloud).

* Understand the typical phases of a data migration project lifecycle.

* Learn stakeholder identification and initial requirements gathering techniques.

* Introduce common data migration challenges and success factors.

  • Key Activities:

* Read foundational articles/chapters on data migration.

* Research case studies of successful and failed data migrations.

* Participate in online discussions about migration triggers.

* Begin compiling a glossary of data migration terms.

  • Deliverable for the week: A brief summary report on data migration types and project phases.

Week 2: Data Source Analysis & Profiling

  • Learning Objectives:

* Learn techniques for identifying and documenting source data systems.

* Understand the importance of data profiling and its role in migration.

* Utilize basic SQL queries or profiling tools to analyze data quality, completeness, and consistency.

* Identify primary keys, foreign keys, and relationships within source data.

* Document source data schema and data dictionary.

  • Key Activities:

* Practice SQL queries for data profiling (e.g., COUNT(*), DISTINCT, GROUP BY, AVG, MIN, MAX).

* Explore open-source data profiling tools (e.g., Talend Open Studio for Data Quality, Apache Nifi).

* Analyze a sample dataset (provided or self-selected) for quality issues.

  • Deliverable for the week: A data profiling report for a sample dataset, highlighting data quality issues and schema overview.
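The Week 2 profiling exercises can start from a few basic checks (completeness, uniqueness, value distribution). The sketch below runs them against an in-memory SQLite sample; table and column names are illustrative:

```python
import sqlite3

# Build a tiny sample table to profile (names and data are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, email TEXT, state TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "a@x.com", "CA"), (2, None, "CA"), (3, "c@x.com", None), (3, "c@x.com", None)],
)

# Completeness: rows missing an email address.
missing_email = conn.execute(
    "SELECT COUNT(*) FROM customers WHERE email IS NULL").fetchone()[0]

# Uniqueness: ids that occur more than once.
dupes = conn.execute(
    "SELECT id, COUNT(*) FROM customers GROUP BY id HAVING COUNT(*) > 1").fetchall()

# Distribution: how values spread across a categorical column.
states = conn.execute(
    "SELECT state, COUNT(*) FROM customers GROUP BY state").fetchall()

print(missing_email)  # 1
print(dupes)          # [(3, 2)]
print(states)
```

The same three query shapes scale directly to real source tables and feed the week's profiling report.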

Week 3: Field Mapping & Transformation Rules

  • Learning Objectives:

* Master the creation of detailed field-level mapping documents from source to target.

* Define various types of data transformations (e.g., concatenation, splitting, lookup, aggregation, data type conversion).

* Develop complex business rules for data cleansing and enrichment.

* Understand the impact of data quality on transformation logic.

* Learn to document transformation rules clearly and unambiguously.

  • Key Activities:

* Create a sample field mapping document for a hypothetical migration scenario (e.g., migrating customer data from an old CRM to a new one).

* Design transformation logic for common data issues (e.g., standardizing addresses, currency conversion).

* Practice expressing transformation rules in pseudocode or a scripting language (e.g., Python).

  • Deliverable for the week: A detailed field mapping document including transformation rules for a given scenario.

Week 4: Migration Strategy, Tooling & Architecture

  • Learning Objectives:

* Evaluate different migration strategies (e.g., "big bang," "trickle," "phased").

* Understand the role of ETL (Extract, Transform, Load) tools in data migration.

* Compare and contrast various commercial and open-source ETL tools (e.g., Informatica, SSIS, Talend, Apache Airflow).

* Design a high-level data migration architecture, including staging areas and data pipelines.

* Consider performance, scalability, and security aspects of the migration architecture.

  • Key Activities:

* Research and compare features of 2-3 prominent ETL tools.

* Outline a migration strategy for a specific business requirement.

* Sketch a conceptual architecture diagram for a data migration project.

* Explore cloud-native migration services (e.g., AWS DMS, Azure Data Factory, Google Cloud Dataflow).

  • Deliverable for the week: A proposed migration strategy and high-level architectural diagram for a sample project, including tool recommendations.

Week 5: Data Validation & Testing

  • Learning Objectives:

* Develop comprehensive data validation plans and test cases.

* Design pre-migration, in-migration, and post-migration validation checks.

* Learn to write validation scripts (e.g., SQL scripts, Python scripts) to compare source and target data.

* Understand different validation metrics (e.g., record count, sum checks, data type verification, referential integrity).

* Familiarize with data reconciliation processes and reporting.

  • Key Activities:

* Draft a data validation plan for the scenario used in Week 3.

* Write sample SQL queries to compare record counts and checksums between source and target tables.

* Research data quality dashboards and reporting tools.

  • Deliverable for the week: A detailed data validation plan and sample validation scripts for a specific migration phase.
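A sketch of the Week 5 count-and-sum reconciliation, using an in-memory SQLite database with illustrative table names standing in for the real source and target:

```python
import sqlite3

# Simulate a source table and its migrated target copy (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src_customers (id INTEGER, revenue INTEGER)")
conn.execute("CREATE TABLE tgt_accounts  (id INTEGER, revenue INTEGER)")
rows = [(1, 100), (2, 250), (3, 0)]
conn.executemany("INSERT INTO src_customers VALUES (?, ?)", rows)
conn.executemany("INSERT INTO tgt_accounts VALUES (?, ?)", rows)

def count_and_sum(table, column):
    """Record count plus a sum check over a numeric column."""
    return conn.execute(
        f"SELECT COUNT(*), COALESCE(SUM({column}), 0) FROM {table}").fetchone()

src = count_and_sum("src_customers", "revenue")
tgt = count_and_sum("tgt_accounts", "revenue")
assert src == tgt, f"Reconciliation failed: source={src}, target={tgt}"
print("counts and sums match:", src)  # (3, 350)
```

Count checks catch dropped records; sum checks catch silently corrupted numeric values even when counts agree.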

Week 6: Rollback Procedures & Risk Management

  • Learning Objectives:

* Understand the critical importance of rollback planning in data migration.

* Design a comprehensive rollback strategy and associated procedures.

* Identify potential risks in data migration projects (technical, operational, business).

* Develop risk mitigation strategies and contingency plans.

* Learn about business continuity and disaster recovery considerations.

  • Key Activities:

* Outline a step-by-step rollback procedure for a complex migration scenario.

* Conduct a mini-risk assessment for a hypothetical data migration.

* Discuss best practices for minimizing downtime during migration.

  • Deliverable for the week: A detailed rollback plan and a risk assessment matrix for a sample data migration project.

Week 7: Project Management, Governance & Security

  • Learning Objectives:

* Understand how data migration projects fit into broader program management frameworks.

* Learn to estimate project timelines, effort, and resource requirements.

* Familiarize with data governance principles and their application in migration.

* Address data security, privacy (e.g., GDPR, HIPAA), and compliance requirements.

* Develop communication plans for stakeholders throughout the migration lifecycle.

  • Key Activities:

* Create a high-level project timeline and resource plan for a data migration.

* Research data governance frameworks (e.g., DAMA-DMBoK).

* Discuss data anonymization, encryption, and access control strategies.

* Practice presenting a data migration plan to a simulated stakeholder group.

  • Deliverable for the week: A high-level project plan, including timeline, resource estimates, and key communication points.

Week 8: Capstone Project & Advanced Topics

  • Learning Objectives:

* Integrate all learned concepts into a comprehensive data migration plan.

* Explore advanced topics like real-time data migration, big data migration, and master data management (MDM) integration.

* Refine documentation and presentation skills.

* Prepare for potential certification exams or interviews.

  • Key Activities:

* Capstone Project: Design a complete data migration plan for a complex, real-world scenario (e.g., migrating an on-premise ERP system to a cloud-based SaaS solution). This should include all elements covered in previous weeks.

* Review and refine previous weeks' deliverables.

* Explore emerging trends in data migration technology.

  • Deliverable for the week: A complete, professional Data Migration Plan document for the chosen Capstone Project.

4. Recommended Resources

  • Books:

* "Data Migration: Strategies for Success" by Peter Aiken and David Allen

* "Designing Data-Intensive Applications" by Martin Kleppmann (for architectural insights)

* "The DAMA Guide to the Data Management Body of Knowledge (DAMA-DMBoK)" (for data governance)

  • Online Courses & Certifications:

* Coursera/edX/Udemy: Courses on Data Engineering, ETL Fundamentals, Cloud Data Migration (e.g., "Google Cloud Data Engineering," "AWS Certified Database - Specialty").

* Vendor-Specific Certifications: Microsoft Certified: Azure Data Engineer Associate, AWS Certified Database - Specialty, Talend Data Integration Certification.

* LinkedIn Learning: Courses on SQL, Python for Data, Data Warehousing.

  • Tools & Technologies (Hands-on Practice):

* Databases: PostgreSQL, MySQL (for practicing SQL queries, schema analysis).

* Scripting Languages: Python (with pandas, SQLAlchemy for data manipulation and scripting).

* ETL Tools (Community/Trial Editions): Talend Open Studio for Data Integration, Apache Nifi, Pentaho Data Integration (Kettle).

transformations.py

```python
import re
import logging
from datetime import datetime, date
from typing import Any, Optional

# Configure logging for transformations
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')


def transform_name_case(value: Optional[str]) -> Optional[str]:
    """Transforms a name string to title case."""
    if value is None:
        return None
    return value.strip().title()


def validate_email_format(value: Optional[str]) -> Optional[str]:
    """Validates email format and returns it lowercased, or None if invalid."""
    if value is None:
        return None
    email_regex = r"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$"
    if re.match(email_regex, value):
        return value.lower()
    logging.warning(f"Invalid email format detected: '{value}'. Returning None.")
    return None


def normalize_phone_number(value: Optional[str]) -> Optional[str]:
    """Removes non-digit characters from a phone number."""
    if value is None:
        return None
    digits = re.sub(r'\D', '', value)
    # Example: format to E.164 if it's a 10-digit US number; otherwise return the cleaned digits
    if len(digits) == 10:
        return f"+1{digits}"  # Assuming US numbers
    return digits


def parse_date(value: Optional[str], date_format: str = "%Y-%m-%d") -> Optional[date]:
    """Parses a date string into a datetime.date object."""
    if value is None:
        return None
    try:
        return datetime.strptime(value, date_format).date()
    except ValueError:
        logging.error(f"Failed to parse date '{value}' with format '{date_format}'. Returning None.")
        return None


def parse_datetime(value: Optional[str], datetime_format: str = "%Y-%m-%d %H:%M:%S") -> Optional[datetime]:
    """Parses a datetime string into a datetime object."""
    if value is None:
        return None
    try:
        return datetime.strptime(value, datetime_format)
    except ValueError:
        logging.error(f"Failed to parse datetime '{value}' with format '{datetime_format}'. Returning None.")
        return None


def parse_datetime_or_null(value: Optional[str], datetime_format: str = "%Y-%m-%d %H:%M:%S") -> Optional[datetime]:
    """Parses a datetime string into a datetime object, returning None on failure."""
    return parse_datetime(value, datetime_format)  # Reuses the existing parser


def map_status_to_boolean(value: Optional[str]) -> bool:
    """Maps a source status code to a boolean (True for active, False for inactive)."""
    if value is None:
        return False  # Default to inactive if status is missing
    active_statuses = {"ACTIVE", "A", "1", "ENABLED"}
    return str(value).upper() in active_statuses


def default_to_zero_if_null(value: Any) -> int:
    """Returns 0 if value is None or empty, otherwise casts to int."""
    if value is None or value == '':
        return 0
    try:
        return int(value)
    except ValueError:
        logging.warning(f"Could not convert '{value}' to integer, defaulting to 0.")
        return 0


def standardize_state_codes(value: Optional[str]) -> Optional[str]:
    """Standardizes full state/province names to codes (e.g., 'California' -> 'CA')."""
    if value is None:
        return None
    state_map = {
        "CALIFORNIA": "CA", "NEW YORK": "NY", "TEXAS": "TX",
        "FLORIDA": "FL", "ILLINOIS": "IL",
        # ... add more mappings as needed
    }
    cleaned_value = value.strip().upper()
    return state_map.get(cleaned_value, cleaned_value)  # Return the cleaned input if no mapping exists
```
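Section 3 describes a dictionary that maps rule names to their corresponding functions. A minimal, self-contained sketch of that registry pattern (rule names, field names, and the two sample functions are illustrative):

```python
# Sketch: a registry mapping rule names (as referenced in the field mapping)
# to transformation callables, so the load step can apply rules by name.

def title_case(value):
    """Trim and title-case a string, passing None through."""
    return value.strip().title() if value is not None else None

def to_int_or_zero(value):
    """Cast to int, defaulting to 0 on missing or invalid input."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return 0

TRANSFORMATION_RULES = {
    "title_case": title_case,
    "to_int_or_zero": to_int_or_zero,
}

def apply_rules(record, field_rules):
    """Applies the named rule to each field; fields without a rule pass through."""
    out = {}
    for field, rule_name in field_rules.items():
        value = record.get(field)
        rule = TRANSFORMATION_RULES.get(rule_name)
        out[field] = rule(value) if rule else value
    return out

row = apply_rules({"name": "  ada LOVELACE ", "age": "36"},
                  {"name": "title_case", "age": "to_int_or_zero"})
print(row)  # {'name': 'Ada Lovelace', 'age': 36}
```

Keeping rules behind string names lets the mapping document reference them directly, so mapping and code stay in sync.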


Data Migration Planner: Comprehensive Plan & Strategy

Document Version: 1.0

Date: October 26, 2023

Prepared For: [Customer Name]

Prepared By: PantheraHive Solutions Team


1. Executive Summary

This document outlines a comprehensive plan for the data migration from the [Source System Name] to the [Target System Name]. It details the strategy, scope, technical specifications for field mapping and transformation, validation procedures, rollback protocols, and a projected timeline. The goal is to ensure a smooth, accurate, and secure transition of critical data, minimizing downtime and mitigating risks. This plan serves as a foundational deliverable, guiding all subsequent migration activities and ensuring alignment with business objectives.


2. Project Scope & Objectives

2.1. Project Scope:

The scope of this data migration project encompasses the transfer of all relevant historical and active data pertaining to [specific data categories, e.g., customer records, product catalog, sales orders, financial transactions] from the legacy [Source System Name] to the new [Target System Name].

In-Scope Data Entities:

  • [Example: Customer Master Data]
  • [Example: Product Information Management (PIM)]
  • [Example: Sales Order History]
  • [Example: Financial Transaction Ledgers]
  • [Example: Employee Records]
  • [List all specific modules/tables to be migrated]

Out-of-Scope Data Entities:

  • [Example: Archived data older than 5 years (unless specified)]
  • [Example: System configuration settings (will be manually configured)]
  • [Example: Temporary or transient data]
  • [List any data explicitly excluded from migration]

2.2. Project Objectives:

  • Data Integrity: Ensure 100% accuracy, completeness, and consistency of migrated data in the target system.
  • Minimal Downtime: Execute the migration with the least possible disruption to business operations.
  • Security & Compliance: Maintain data security standards and compliance with relevant regulations throughout the migration process.
  • Data Accessibility: Ensure all critical business data is readily accessible and usable in the new target system post-migration.
  • Auditability: Provide a clear audit trail of all migration activities and transformations.
  • Successful System Cutover: Facilitate a seamless transition to the new [Target System Name] with validated data.

3. Source & Target Systems Overview

3.1. Source System:

  • Name: [e.g., Legacy CRM 7.x, SAP ECC 6.0, Custom Oracle DB Application]
  • Database Type: [e.g., SQL Server, Oracle, MySQL, PostgreSQL]
  • Version: [Specific version number]
  • Key Data Structures: [Briefly mention main tables/modules involved, e.g., Customer, Product, Order]

3.2. Target System:

  • Name: [e.g., Salesforce Sales Cloud, SAP S/4HANA, Dynamics 365, Custom Web Application]
  • Database Type: [e.g., Salesforce Objects, HANA DB, SQL Server, MongoDB]
  • Version: [Specific version number or Cloud instance]
  • Key Data Structures: [Briefly mention main objects/tables involved, e.g., Account, Product2, Order]

4. Data Inventory & Volume Assessment

A detailed data inventory and volume assessment will be conducted as part of the initial data analysis phase. This will involve:

  • Identifying all relevant tables/objects in the source system.
  • Estimating record counts for each entity.
  • Assessing data types and storage requirements.
  • Identifying potential data quality issues (e.g., missing values, inconsistent formats, duplicates).

Initial Estimates (Subject to detailed analysis):

  • Total Data Volume: [e.g., 500 GB, 2 TB]
  • Number of Records (Approx.): [e.g., 10 Million Customer records, 5 Million Product records]
  • Key Entities: [e.g., Customers, Products, Orders, Invoices, Employees]

5. Data Migration Strategy

The proposed data migration strategy is a [Phased / Big Bang / Incremental] approach.

[Choose one and elaborate, example for Phased:]

Phased Migration: This approach involves migrating data in distinct stages, starting with less critical data sets or specific modules, allowing for thorough testing and validation after each phase. This reduces overall project risk and provides opportunities for learning and adjustment.

  • Phase 1: Master Data (e.g., Customers, Products)
  • Phase 2: Transactional Data (e.g., Historical Orders, Invoices)
  • Phase 3: Remaining Dependent Data (e.g., Service Cases, Marketing Campaigns)

Key Steps:

  1. Extraction: Data will be extracted from the [Source System] using [e.g., SQL queries, API calls, ETL tools].
  2. Transformation: Extracted data will be cleansed, standardized, and transformed according to the defined rules to fit the [Target System] schema. This will be performed using [e.g., SSIS, Talend, Informatica, custom scripts].
  3. Loading: Transformed data will be loaded into the [Target System] using [e.g., Target System APIs, bulk import tools, direct database inserts].
  4. Validation: Post-load validation will ensure data integrity and accuracy in the target system.
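The four steps above can be sketched end to end. The sketch below uses in-memory SQLite and illustrative table names standing in for the actual source and target systems:

```python
import sqlite3

def migrate(source_conn, target_conn, transform):
    """Extract -> transform -> load -> validate, as outlined in the key steps."""
    # 1. Extraction
    rows = source_conn.execute("SELECT id, name FROM src_customers").fetchall()
    # 2. Transformation (transform is a per-row callable)
    transformed = [transform(row) for row in rows]
    # 3. Loading
    target_conn.executemany(
        "INSERT INTO tgt_accounts (id, name) VALUES (?, ?)", transformed)
    # 4. Validation: record-count reconciliation
    src_n = source_conn.execute("SELECT COUNT(*) FROM src_customers").fetchone()[0]
    tgt_n = target_conn.execute("SELECT COUNT(*) FROM tgt_accounts").fetchone()[0]
    assert src_n == tgt_n, f"count mismatch: {src_n} vs {tgt_n}"
    return tgt_n

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE src_customers (id INTEGER, name TEXT)")
src.executemany("INSERT INTO src_customers VALUES (?, ?)",
                [(1, "acme corp"), (2, "globex")])
tgt = sqlite3.connect(":memory:")
tgt.execute("CREATE TABLE tgt_accounts (id INTEGER, name TEXT)")

loaded = migrate(src, tgt, lambda r: (r[0], r[1].title()))
print(loaded)  # 2
```

In a real project each step would be a separate, restartable job with its own logging and staging tables, but the control flow is the same.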

6. Field Mapping

Field mapping is a critical component that defines how each data element from the source system corresponds to a data element in the target system. This will be documented in a detailed mapping specification, a template for which is provided below.

6.1. Mapping Process:

  1. Identify Source Fields: List all relevant fields from the source system entities.
  2. Identify Target Fields: List all corresponding fields in the target system entities, including mandatory fields.
  3. Define Direct Mapping: For fields that have a direct one-to-one correspondence.
  4. Define Transformation Rules: For fields requiring data manipulation (see Section 7).
  5. Identify Unmapped Fields: Address source fields with no target equivalent (archive, drop, or map to a generic field).
  6. Identify New Target Fields: Address target fields with no source equivalent (default values, manual entry, or derived data).
  7. Document: Create a comprehensive mapping document.

6.2. Field Mapping Template (Example for Customer Entity):

Source System: [Legacy CRM] → Target System: [Salesforce Account]

| Source Field Name | Source Data Type | Source Max Length | Target Field Name | Target Data Type | Target Max Length | Mandatory (Target) | Transformation Rule | Notes / Business Logic |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| CRM_CustomerID | INT | 10 | External_ID__c | TEXT | 255 | Yes | Direct Map | Unique identifier for customer; used for upserts. |
| CRM_CompanyName | VARCHAR | 100 | Name | TEXT | 255 | Yes | Direct Map | Primary company name. |
| CRM_AddressLine1 | VARCHAR | 200 | BillingStreet | TEXT | 255 | No | Concatenate with Line2 | CRM_AddressLine1 + ', ' + CRM_AddressLine2 |
| CRM_AddressLine2 | VARCHAR | 200 | N/A | N/A | N/A | No | Part of concatenation | Not mapped directly; combined with CRM_AddressLine1. |
| CRM_City | VARCHAR | 50 | BillingCity | TEXT | 40 | No | Direct Map | |
| CRM_StateCode | CHAR | 2 | BillingState | TEXT | 80 | No | Lookup: State Abbr | Map 2-letter state code to full state name if target requires, e.g., 'CA' -> 'California'. |
| CRM_ZipCode | VARCHAR | 10 | BillingPostalCode | TEXT | 20 | No | Direct Map | Ensure format 'XXXXX' or 'XXXXX-XXXX'. |
| CRM_CreationDate | DATETIME | 8 | CreatedDate | DATETIME | 8 | Yes (System) | Convert UTC to Local | Convert from UTC to the target system's local timezone. |
| CRM_Status | VARCHAR | 20 | Account_Status__c | PICKLIST | 255 | Yes | Value Map | Map 'Active'->'Open', 'Inactive'->'Closed', 'Pending'->'Prospect'. Default 'Open'. |
| CRM_AnnualRevenue | DECIMAL | 18,2 | AnnualRevenue | CURRENCY | 18,2 | No | Direct Map | |
| N/A | N/A | N/A | Industry | PICKLIST | 255 | No | Default: 'Other' | No source field; set a default value or update manually post-migration. |


7. Data Transformation Rules

Data transformation rules define how data values are manipulated during the migration process to ensure compatibility and consistency with the target system's requirements.

7.1. Common Transformation Categories:

  • Data Type Conversion: Changing data from one type to another (e.g., String to Integer, DateTime to Date).
  • Format Standardization: Ensuring data conforms to target system's required formats (e.g., phone numbers, dates, currency).
  • Value Mapping/Lookup: Translating source system codes or values to target system equivalents (e.g., 'ACT' -> 'Active', 'INACT' -> 'Inactive').
  • Concatenation/Splitting: Combining multiple source fields into one target field (e.g., First Name + Last Name -> Full Name) or splitting one source field into multiple target fields.
  • Default Value Assignment: Populating target fields with a default value when no source data is available.
  • Data Cleansing: Removing invalid characters, trimming whitespace, correcting common errors.
  • Derivation: Calculating new values based on existing source data (e.g., Age from DateOfBirth).
  • Referential Integrity Resolution: Mapping foreign keys to new primary keys in the target system.

7.2. Transformation Rule Examples:

  • Rule ID: TRN-CUST-001 (Customer Status Mapping)

* Source Field: CRM_Status (VARCHAR)

* Target Field: Account_Status__c (Picklist)

* Logic:

* IF CRM_Status = 'Active' THEN 'Open'

* IF CRM_Status = 'Inactive' THEN 'Closed'

* IF CRM_Status = 'Pending' THEN 'Prospect'

* ELSE 'Open' (Default)

* Notes: This ensures all legacy statuses map to defined picklist values in Salesforce.

  • Rule ID: TRN-PROD-002 (Product Category Hierarchy)

* Source Fields: Prod_Category1, Prod_Category2, Prod_SubCategory (VARCHAR)

* Target Field: Product_Category_Path__c (Text)

* Logic: Concatenate Prod_Category1 + ' > ' + Prod_Category2 + ' > ' + Prod_SubCategory.

* Example: 'Electronics > Audio > Headphones'

* Notes: Creates a hierarchical path for easier searching and reporting in the target system.

  • Rule ID: TRN-DATE-003 (Date Format & Timezone Conversion)

* Source Field: Order_Date (DATETIME)

* Target Field: OrderDate (Date)

* Logic:

1. Convert source DATETIME from UTC to [Target System's Local Timezone, e.g., EST].

2. Extract only the date component (YYYY-MM-DD).

* Notes: Ensures consistent date representation and timezone accuracy.
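The three rules above can be expressed directly as code. A sketch (the fixed UTC−5 offset stands in for the target timezone; production code should use a proper timezone library such as `zoneinfo`):

```python
from datetime import datetime, timedelta

# TRN-CUST-001: map legacy status to the target picklist, defaulting to 'Open'.
STATUS_MAP = {"Active": "Open", "Inactive": "Closed", "Pending": "Prospect"}

def trn_cust_001(crm_status):
    return STATUS_MAP.get(crm_status, "Open")

# TRN-PROD-002: build a hierarchical category path from three source fields.
def trn_prod_002(cat1, cat2, subcat):
    return " > ".join([cat1, cat2, subcat])

# TRN-DATE-003: convert a UTC datetime to local time, then keep the date only.
def trn_date_003(order_date_utc, tz_offset_hours=-5):
    local = order_date_utc + timedelta(hours=tz_offset_hours)
    return local.date().isoformat()

print(trn_cust_001("Pending"))                              # Prospect
print(trn_prod_002("Electronics", "Audio", "Headphones"))   # Electronics > Audio > Headphones
print(trn_date_003(datetime(2024, 1, 1, 2, 30)))            # 2023-12-31
```

Note how the timezone rule can shift a record to the previous calendar day, which is exactly the class of bug the rule's notes warn about.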


8. Data Validation Strategy & Scripts

Robust data validation is crucial to ensure the success of the migration. This involves pre-migration data quality checks and extensive post-migration verification.

8.1. Pre-Migration Validation (Source Data Quality):

  • Purpose: Identify and cleanse data quality issues in the source system before migration.
  • Checks:

* Completeness: Identify records with critical missing values (e.g., mandatory fields).

* Uniqueness: Detect duplicate records based on primary keys or business keys.

* Consistency: Check for inconsistent data formats or values (e.g., state codes, date formats).

* Referential Integrity: Verify relationships between tables (e.g., orphaned child records).

  • Scripts: SQL scripts will be developed to analyze source data, generate reports on identified issues, and facilitate cleansing efforts.
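A sketch of one such check, finding orphaned child records via a LEFT JOIN (run here against an in-memory SQLite sample; table and column names are illustrative):

```python
import sqlite3

# Sample parent/child tables with one deliberately orphaned order.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.executemany("INSERT INTO customers VALUES (?)", [(1,), (2,)])
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(10, 1), (11, 2), (12, 99)])

# Referential integrity: orders whose customer_id has no matching customer.
orphans = conn.execute("""
    SELECT o.id FROM orders o
    LEFT JOIN customers c ON c.id = o.customer_id
    WHERE c.id IS NULL
""").fetchall()
print(orphans)  # [(12,)]
```

Orphans found this way must be cleansed (or their parents recovered) before migration, or the target load will fail or silently drop relationships.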

8.2. Post-Migration Validation (Target Data Integrity):

  • Purpose: Verify that data has been accurately, completely, and consistently migrated to the target system.
  • Checks:

* Record Counts: Compare the total number of records migrated for each entity against source counts.

Script Example: SELECT COUNT(*) FROM Source.Customers; vs SELECT COUNT(*) FROM Target.Accounts;

* Data Completeness: Verify that all expected records are present in the target system.

* Data Accuracy: Spot-check a statistically significant sample of records to ensure field values match expectations after transformations.

Script Example: Randomly select 100 customer records, extract source and target values for key fields, and compare.

* Data Integrity (Referential): Verify that relationships between migrated entities are correctly established in the target system (e.g., an Order correctly links to its Customer).

Script Example: SELECT COUNT(*) FROM Target.Orders WHERE Customer_ID__c IS NULL; (should be 0)

*Business Rule

\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}
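The functions below sketch one way to implement such a rule set. The specific rule names (`trim_whitespace`, `to_uppercase`, `date_to_iso`, `empty_to_null`) and the assumed source date format (`MM/DD/YYYY`) are illustrative assumptions; substitute the rules and formats your field mapping actually requires.

```python
from datetime import datetime
from typing import Any, Callable, Optional

# NOTE: rule names and field semantics are illustrative assumptions;
# align them with the rules referenced in your field mapping.

def trim_whitespace(value: Optional[str]) -> Optional[str]:
    """Strip leading/trailing whitespace from string fields."""
    return value.strip() if isinstance(value, str) else value

def to_uppercase(value: Optional[str]) -> Optional[str]:
    """Normalize free-text codes to uppercase; pass non-strings through."""
    return value.upper() if isinstance(value, str) else value

def date_to_iso(value: Optional[str], source_format: str = "%m/%d/%Y") -> Optional[str]:
    """Convert a source date string (assumed MM/DD/YYYY) to ISO 8601 (YYYY-MM-DD)."""
    if not value:
        return None
    return datetime.strptime(value, source_format).date().isoformat()

def empty_to_null(value: Any) -> Any:
    """Map empty strings to None so target-column nullability is respected."""
    return None if value == "" else value

# Registry mapping rule names (as referenced in the field mapping)
# to their implementing functions.
TRANSFORMATION_RULES: dict[str, Callable[[Any], Any]] = {
    "trim_whitespace": trim_whitespace,
    "to_uppercase": to_uppercase,
    "date_to_iso": date_to_iso,
    "empty_to_null": empty_to_null,
}

def apply_rules(value: Any, rule_names: list[str]) -> Any:
    """Apply a pipeline of named rules, in order, to a single field value."""
    for name in rule_names:
        value = TRANSFORMATION_RULES[name](value)
    return value
```

For example, `apply_rules("  03/31/2026 ", ["trim_whitespace", "date_to_iso"])` returns `"2026-03-31"`. Keeping each rule as a small pure function makes the pipeline easy to unit-test and lets the field-mapping configuration reference rules by name rather than embedding logic.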