Data Migration Planner

This document outlines the architectural plan for the complete data migration, serving as Step 1 of 3 in the "Data Migration Planner" workflow. This plan details the strategic approach to move data from source systems to the target environment, ensuring data integrity, minimal downtime, and robust recovery mechanisms.


Data Migration Architectural Plan

Project: Data Migration Initiative

Workflow Step: 1 of 3 - Plan Architecture

Deliverable Date: October 26, 2023

Prepared For: [Customer Name/Stakeholder Group]


1. Executive Summary

This architectural plan defines the foundational strategy for migrating data from identified source systems to the new target environment. It encompasses the high-level architecture, data flow, key components, and strategic considerations for field mapping, data transformation, validation, and rollback procedures. The objective is to ensure a secure, efficient, and accurate transfer of data, minimizing business disruption and maximizing data quality in the target system.

2. High-Level Migration Architecture

The migration architecture will follow an Extract, Transform, Load (ETL) pattern, leveraging a dedicated staging area to ensure data quality and integrity before loading into the target system.

2.1. Architectural Diagram (Conceptual)

+-------------------+      +-----------------+      +-----------------+      +-------------------+
|  Source Systems   |      |  Data Extraction|      |  Data Staging   |      |  Target System    |
| (e.g., Legacy DBs,|----->|  & Profiling    |----->|  Area (DWH/DB)  |----->|  (e.g., New CRM/ERP)|
|  Flat Files, APIs)|      |  (ETL Tool)     |      |                 |      |                   |
+-------------------+      +-----------------+      +-----------------+      +-------------------+
        ^                               ^                     ^                      ^
        |                               |                     |                      |
        +-----------------------------------------------------------------------------+
                                      |
                                      V
                             +-------------------+
                             |  Migration Control|
                             |  & Monitoring     |
                             |  (Orchestration)  |
                             +-------------------+

2.2. Key Components & Data Flow

  • Source Systems: Original data repositories (e.g., relational databases, NoSQL databases, flat files, APIs, SaaS applications).
  • Data Extraction & Profiling:

* Extraction Layer: Responsible for connecting to source systems and extracting raw data. This will involve appropriate connectors (JDBC, ODBC, API clients, file readers).

* Data Profiling: Initial analysis of source data to understand its structure, quality, completeness, and identify potential issues (e.g., nulls, duplicates, inconsistencies).

  • ETL Tooling: A robust ETL platform (e.g., Apache NiFi, Talend, Informatica, Azure Data Factory, AWS Glue, Google Cloud Dataflow) will orchestrate extraction, transformation, and loading.
  • Data Staging Area: A temporary, secure database or data warehouse environment.

* Landing Zone: Raw extracted data is first loaded here.

* Transformation Zone: Data is cleansed, standardized, de-duplicated, and transformed according to defined rules.

* Validation Zone: Pre-load validation and reconciliation occur here.

  • Target System: The new application or database where migrated data will reside.
  • Migration Control & Monitoring: A centralized system for scheduling, monitoring progress, managing errors, and auditing the migration process.

3. Scope Definition

3.1. In-Scope Data Entities:

  • [List specific entities, e.g., Customers, Orders, Products, Employees, Accounts, Transactions]
  • Example: Customer Master Data (Name, Address, Contact Info, Account Status)
  • Example: Order History (Order ID, Date, Items, Quantity, Price, Status)
  • Example: Product Catalog (SKU, Name, Description, Price, Inventory Levels)

3.2. Out-of-Scope Data Entities:

  • [List any data that will NOT be migrated, e.g., historical data older than X years, specific log data]
  • Example: Archived customer interaction logs older than 5 years.
  • Example: Non-transactional website analytics data.

3.3. In-Scope Systems:

  • [List specific source systems, e.g., Legacy CRM (SQL Server), ERP v1 (Oracle DB), Excel Spreadsheets]
  • Example: Legacy_CRM_DB (Microsoft SQL Server 2012)
  • Example: Old_Product_Catalog (CSV files on Network Share)
  • Example: SAP_ECC_6.0 (Specific modules: SD, MM)

3.4. Target System:

  • [Specify the target application/database, e.g., Salesforce Sales Cloud, SAP S/4HANA, Custom .NET Application DB]
  • Example: New_CRM_Application_DB (PostgreSQL 14)
  • Example: Salesforce Service Cloud (via API)

4. Data Discovery & Profiling Strategy

A comprehensive data discovery and profiling phase is critical to understand the source data landscape.

  • Tools: Automated data profiling tools (built-in ETL features, dedicated data quality tools) combined with manual review.
  • Activities:

* Schema Analysis: Documenting table structures, data types, primary/foreign keys.

* Data Type Analysis: Identifying data type mismatches between source and target.

* Completeness Check: Quantifying null values, empty strings, missing records.

* Uniqueness Check: Identifying duplicate records based on natural keys.

* Consistency Check: Verifying data format, referential integrity, and business rule adherence.

* Data Volume Assessment: Estimating record counts and overall data size per entity.

* Data Lineage Mapping: Understanding how data flows within source systems.

  • Output: Detailed Data Profiling Report, identifying data quality issues and informing transformation rules.
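As an illustration, the completeness and uniqueness checks described above could be sketched in Python as follows. Field names and sample records are placeholders, not actual source entities:

```python
from collections import Counter

def profile_records(records, key_field):
    """Compute simple profiling metrics: null/empty counts per field and
    duplicate counts on a natural key. `records` is a list of dicts."""
    fields = set().union(*(r.keys() for r in records)) if records else set()
    null_counts = {
        f: sum(1 for r in records if r.get(f) in (None, "")) for f in fields
    }
    key_counts = Counter(r.get(key_field) for r in records)
    duplicates = {k: c for k, c in key_counts.items() if c > 1}
    return {"total": len(records), "null_counts": null_counts, "duplicates": duplicates}

# Example: two records share the same natural key (email).
sample = [
    {"id": 1, "email": "a@x.com", "phone": None},
    {"id": 2, "email": "a@x.com", "phone": "555-0100"},
    {"id": 3, "email": "b@x.com", "phone": ""},
]
report = profile_records(sample, key_field="email")
```

In practice these metrics would feed directly into the Data Profiling Report and inform transformation rules.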

5. Field Mapping Strategy

Field mapping will be meticulously documented to ensure every relevant source field is correctly mapped to its corresponding target field.

  • Methodology:

1. Direct Mapping: 1:1 mapping where source field directly corresponds to a target field with no transformation.

2. Derived Mapping: Target field is populated by combining or transforming multiple source fields.

3. Default Value Mapping: Target field is populated with a default value if no source data exists or is applicable.

4. Lookup Mapping: Source values are mapped to target values using reference tables (e.g., old status codes to new status codes).

  • Documentation: A comprehensive "Data Mapping Document" will be created, detailing:

* Source System & Table/File

* Source Field Name

* Source Data Type

* Source Field Description

* Target System & Table

* Target Field Name

* Target Data Type

* Target Field Description

* Transformation Rule (if any)

* Validation Rule (if any)

* Remarks/Dependencies
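A lookup mapping (methodology type 4 above) can be illustrated with a small reference table; the status codes here are hypothetical examples, not actual source values:

```python
# Hypothetical reference table: legacy status codes -> target values.
STATUS_LOOKUP = {"ACT": "Active", "INA": "Inactive", "SUS": "Suspended"}

def map_status(source_code, default="Unknown"):
    """Lookup mapping: translate a source code via a reference table,
    falling back to a default for null or unmapped values."""
    if source_code is None:
        return default
    return STATUS_LOOKUP.get(source_code.strip().upper(), default)
```

The same pattern generalizes to any code translation documented in the Data Mapping Document.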

6. Transformation Rules Strategy

Data transformation is essential to align source data with target system requirements and business rules.

  • Common Transformation Types:

* Data Type Conversion: e.g., VARCHAR to INTEGER, DATE string to DATE object.

* Format Standardization: e.g., "MM/DD/YYYY" to "YYYY-MM-DD", phone number formatting.

* Data Cleansing: Handling nulls (default values, imputation), removing special characters, trimming whitespace.

* De-duplication: Identifying and merging duplicate records based on defined keys.

* Value Mapping/Lookup: Translating old codes to new codes (e.g., "Active" -> "A", "Inactive" -> "I").

* Aggregation/Splitting: Combining multiple source fields into one target field, or splitting one source field into multiple target fields.

* Derivation: Calculating new values based on existing source fields (e.g., total_amount = quantity * unit_price).

* Referential Integrity: Ensuring foreign key relationships are maintained or established in the target.

  • Rule Definition: Each transformation rule will be explicitly defined in the Data Mapping Document and implemented within the ETL tool. Complex rules may require custom scripting.
  • Pre-Transformation Validation: Validation will occur before complex transformations to ensure input data quality.
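As an example of a derivation rule, the total-amount calculation mentioned above could be sketched using exact decimal arithmetic (the field names are illustrative):

```python
from decimal import Decimal, ROUND_HALF_UP

def derive_total_amount(row):
    """Derivation rule sketch: total_amount = quantity * unit_price,
    rounded to 2 decimal places. Decimal avoids float rounding drift
    in monetary values. Field names are illustrative."""
    qty = Decimal(str(row["quantity"]))
    price = Decimal(str(row["unit_price"]))
    return (qty * price).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

Using `Decimal` rather than `float` is a deliberate choice for financial fields, matching the pre-transformation validation requirement that inputs be numeric before the rule is applied.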

7. Validation Strategy

A multi-stage validation approach will be implemented to ensure data accuracy and completeness throughout the migration lifecycle.

  • 7.1. Pre-Migration Validation:

* Source Data Profiling: As described in Section 4.

* Schema Comparison: Verify source and target schema compatibility.

* Business Rule Review: Ensure source data adheres to target business rules where possible.

  • 7.2. During-Migration Validation (Staging Area):

* Record Count Verification: Compare extracted record counts with source system counts.

* Data Type & Format Validation: Ensure data conforms to defined types and formats after initial load to staging.

* Constraint Checks: Verify uniqueness, nullability, and referential integrity within the staging area before loading to target.

* Transformation Rule Validation: Spot checks and automated tests to ensure transformation logic is correctly applied.

  • 7.3. Post-Migration Validation (Target System):

* Record Count Reconciliation: Compare total records loaded in target with expected counts from source.

* Random Sample Data Verification: Manual review of a statistically significant sample of records in the target system against source data.

* Key Field Comparison: Verify primary keys and critical business fields match between source and target for selected records.

* Data Aggregation Checks: Run aggregate queries (SUM, COUNT, AVG) on key numeric fields in both source and target to ensure totals match.

* Application-Level Testing: User Acceptance Testing (UAT) by business users to validate data functionality within the target application.

* Report Verification: Validate key reports generated from the target system produce expected results.
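The record-count reconciliation and aggregation checks above can be sketched as a single comparison routine. This is a minimal illustration using SQLite in place of the real source and target databases; table and column names are placeholders:

```python
import sqlite3

def reconcile(source_conn, target_conn, table, amount_col):
    """Post-migration reconciliation sketch: compare row counts and a
    SUM aggregate between source and target for one entity."""
    def stats(conn):
        cur = conn.execute(
            f"SELECT COUNT(*), COALESCE(SUM({amount_col}), 0) FROM {table}"
        )
        return cur.fetchone()
    src, tgt = stats(source_conn), stats(target_conn)
    return {"source": src, "target": tgt, "match": src == tgt}

# Demo with two in-memory databases standing in for source and target.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
for conn in (src, tgt):
    conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.5), (2, 20.0)])
result = reconcile(src, tgt, "orders", "total")
```

In the real migration the same query pair would run against the source system and the target system, with any mismatch flagged for investigation before sign-off.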

8. Rollback Strategy

A robust rollback plan is essential to mitigate risks and allow for recovery in case of critical failures during or immediately after migration.

  • 8.1. Pre-Migration Snapshots/Backups:

* Source System: Ensure comprehensive backups of all source systems are taken immediately prior to migration.

* Target System: If the target system is pre-existing, a full backup/snapshot should be taken before any migration data is loaded. For new systems, this may be less critical but still advised.

  • 8.2. Phased Loading with Transaction Control:

* Data will be loaded in batches or phases, allowing for easier identification and isolation of issues.

* Leverage database transaction capabilities for atomic operations, enabling rollbacks of individual batches.

  • 8.3. Rollback Procedures:

* Data Deletion: Clear migrated data from the target system tables (for the affected batch/entity). This requires carefully designed delete scripts.

* Restore from Backup: In severe cases, restore the target system to its pre-migration state using the snapshot/backup.

* Re-run Migration: After rectifying the identified issues, the migration process can be re-initiated from a clean state.

  • 8.4. Communication Plan: Clear communication protocols will be established for notifying stakeholders in the event of a rollback.
  • 8.5. Rollback Testing: The rollback procedure will be tested in a non-production environment prior to the actual migration.
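The phased-loading-with-transaction-control approach (8.2) can be sketched as follows. This is an illustrative SQLite example: each batch commits or rolls back atomically, so a failure poisons only its own batch. Table and column names are placeholders:

```python
import sqlite3

def load_in_batches(conn, rows, batch_size=2):
    """Phased-loading sketch: insert rows batch by batch, each batch in
    its own transaction; only the failing batch is rolled back."""
    loaded, failed = 0, []
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        try:
            with conn:  # commits on success, rolls back on exception
                conn.executemany(
                    "INSERT INTO customers (id, name) VALUES (?, ?)", batch
                )
            loaded += len(batch)
        except sqlite3.Error:
            failed.append(i // batch_size)  # record batch index for re-run
    return loaded, failed

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
rows = [(1, "Ada"), (2, "Bo"), (2, "Dup"), (3, "Cy")]  # batch 1 has a PK conflict
loaded, failed = load_in_batches(conn, rows, batch_size=2)
```

Recording the failed batch indices supports the "re-run migration from a clean state" procedure in 8.3: only the isolated batches need to be corrected and replayed.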

9. Security & Compliance Considerations

Data security and compliance will be paramount throughout the migration process.

  • Data Encryption: Data at rest (in staging, backups) and in transit (between systems) will be encrypted using industry-standard protocols (e.g., TLS/SSL, AES-256).
  • Access Control: Strict role-based access control (RBAC) will be implemented for all migration tools, staging environments, and source/target systems, limiting access to authorized personnel only.
  • Data Masking/Anonymization: For sensitive non-production environments (e.g., UAT), personal identifiable information (PII) or other sensitive data will be masked or anonymized.
  • Audit Trails: Comprehensive logging and auditing will be enabled for all migration activities, detailing who accessed what data, when, and what operations were performed.
  • Compliance: Adherence to relevant regulatory requirements (e.g., GDPR, HIPAA, CCPA) will be ensured, especially concerning PII handling.
  • Secure Infrastructure: All migration infrastructure (servers, network) will be hardened and regularly patched.

10. Performance & Scalability Considerations

The architecture will be designed to handle the anticipated data volumes efficiently.

  • Parallel Processing: The ETL tool will be configured to leverage parallel processing for extraction, transformation, and loading tasks to minimize overall migration time.
  • Batch Sizing: Optimal batch sizes will be determined for data loads to balance performance with error handling and rollback efficiency.
  • Indexing: Strategic indexing on target tables will be considered for faster data loading where applicable, balanced against the overhead of index maintenance during large inserts.
  • Resource Allocation: Adequate CPU, memory, and I/O resources will be allocated to the ETL server, staging database, and target system during the migration window.
  • Network Bandwidth: Sufficient network bandwidth between source, ETL, staging, and target systems will be ensured.
  • Database Tuning: Source and target databases will be tuned for optimal read/write performance during the migration.
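The batch-sizing consideration above implies a chunking step in the load pipeline; a minimal sketch of such a helper (the batch size itself would be tuned empirically, trading load throughput against rollback granularity):

```python
def chunked(iterable, batch_size):
    """Yield successive fixed-size batches from any iterable, so the
    loader can size transactions without materializing the full dataset."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # emit the final partial batch
        yield batch

batches = list(chunked(range(7), 3))
```

Because the helper consumes a generic iterable, it works equally well over a streaming extract cursor as over an in-memory list.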

11. High-Level Timeline Estimates (Phased Approach)

This timeline is a high-level estimate and will be refined during subsequent planning phases.

  • Phase 1: Discovery & Planning (Current Phase)

* Data Discovery & Profiling: 2-4 weeks

* Architecture & Strategy Definition: 1-2 weeks

* Tool Selection & Setup: 1-2 weeks

Estimated Duration: 4-8 weeks

  • Phase 2: Design & Development

* Detailed Data Mapping & Transformation Rule Design: 3-5 weeks

* ETL Script/Job Development: 4-8 weeks

* Validation Script Development: 2-3 weeks

* Rollback Procedure Development: 1-2 weeks

Estimated Duration: 6-10 weeks

  • Phase 3: Testing & Refinement

* Unit Testing of ETL Jobs: 2-3 weeks

* System Integration Testing (SIT): 2-4 weeks

* User Acceptance Testing (UAT): 3-5 weeks

* Performance Testing: 1-2 weeks

Estimated Duration: 5-9 weeks

  • Phase 4: Execution & Cutover

* Pre-Migration Activities (Backups, System Freeze): 1-3 days

* Pilot Migration / Dry Runs: 1-2 weeks

* Final Data Migration & Cutover: 3-7 days (depending on data volume and downtime tolerance)

* Post-Migration Validation & Reconciliation: 1-3 days

Estimated Duration: 2-4 weeks

Total Estimated Project Duration: 17-31 weeks (approx. 4-7 months)


Data Migration Planner: Code Generation for a Robust Migration Framework

This document provides a comprehensive set of code templates and structural guidelines to facilitate a robust and well-planned data migration. These components are designed to define, execute, and validate your data migration process, ensuring data integrity, traceability, and recoverability.

The generated code focuses on Python due to its versatility, extensive libraries for data manipulation, and readability, making it an excellent choice for scripting data migration tasks.


1. Introduction to the Migration Framework Components

The following sections provide code for the core components of your data migration framework:

  • Field Mapping Definitions: Clearly define how source fields map to target fields.
  • Data Transformation Rules: Implement the logic required to convert source data into the target format.
  • Data Validation Scripts: Ensure data quality and integrity both before and after the migration.
  • Rollback Procedure Definition: Outline the steps to revert the migration in case of issues.
  • Migration Orchestration: A high-level script to manage the execution flow.

These components are modular, allowing for independent development, testing, and maintenance.


2. Core Migration Planning Components (Code Examples)

2.1. Field Mapping Definition

This section provides a Python dictionary structure to define the mapping between source and target fields. This structure is designed to be easily extensible and human-readable.


# data_migration/config/field_mappings.py

"""
Defines the field mappings from source to target systems.
Each entry in the MAPPINGS dictionary represents a target table,
containing a list of field mapping rules.
"""

# Define common data types for reference or validation
DATA_TYPES = {
    "STRING": "VARCHAR",
    "INTEGER": "INT",
    "DECIMAL": "NUMERIC(18, 4)",
    "BOOLEAN": "BOOLEAN",
    "DATETIME": "TIMESTAMP",
    "DATE": "DATE",
    "UUID": "UUID"
}

# --- FIELD MAPPINGS ---
# Structure:
# {
#     "target_table_name": [
#         {
#             "source_field": "source_table.source_column_name",
#             "target_field": "target_column_name",
#             "target_data_type": DATA_TYPES["STRING"], # Expected target data type
#             "transformation_rule": "transform_string_to_uppercase", # Optional: Name of a transformation function
#             "default_value": None, # Optional: Value to use if source is null and no transformation
#             "is_nullable": True, # Optional: Whether the target field can be null
#             "description": "Maps the customer's name from source to target."
#         },
#         {
#             "source_field": "source_table.creation_date",
#             "target_field": "created_at",
#             "target_data_type": DATA_TYPES["DATETIME"],
#             "transformation_rule": "transform_date_format",
#             "default_value": "CURRENT_TIMESTAMP",
#             "is_nullable": False,
#             "description": "Maps the record creation date, applying a date format transformation."
#         },
#         # Add more field mappings as needed
#     ],
#     "target_table_products": [
#         {
#             "source_field": "source_products.product_id",
#             "target_field": "product_uuid",
#             "target_data_type": DATA_TYPES["UUID"],
#             "transformation_rule": "generate_uuid_or_map_existing", # Example for complex logic
#             "is_nullable": False,
#             "description": "Unique identifier for the product, potentially generating UUIDs for new records."
#         },
#         {
#             "source_field": "source_products.price_usd",
#             "target_field": "unit_price",
#             "target_data_type": DATA_TYPES["DECIMAL"],
#             "transformation_rule": "convert_currency_to_usd", # Example: if source has multiple currencies
#             "default_value": 0.00,
#             "is_nullable": False,
#             "description": "Product unit price in USD."
#         }
#     ]
# }

MAPPINGS = {
    "customer": [
        {
            "source_field": "CRM_Customers.customer_id",
            "target_field": "customer_uuid",
            "target_data_type": DATA_TYPES["UUID"],
            "transformation_rule": "generate_uuid",
            "is_nullable": False,
            "description": "Unique identifier for the customer, generated if not present."
        },
        {
            "source_field": "CRM_Customers.first_name",
            "target_field": "first_name",
            "target_data_type": DATA_TYPES["STRING"],
            "transformation_rule": "trim_and_capitalize",
            "is_nullable": False,
            "description": "Customer's first name, trimmed and capitalized."
        },
        {
            "source_field": "CRM_Customers.last_name",
            "target_field": "last_name",
            "target_data_type": DATA_TYPES["STRING"],
            "transformation_rule": "trim_and_capitalize",
            "is_nullable": False,
            "description": "Customer's last name, trimmed and capitalized."
        },
        {
            "source_field": "CRM_Customers.email",
            "target_field": "email_address",
            "target_data_type": DATA_TYPES["STRING"],
            "transformation_rule": "validate_email_format",
            "is_nullable": True,
            "description": "Customer's email address, validated for format."
        },
        {
            "source_field": "CRM_Customers.status",
            "target_field": "customer_status",
            "target_data_type": DATA_TYPES["STRING"],
            "transformation_rule": "map_status_codes",
            "default_value": "ACTIVE",
            "is_nullable": False,
            "description": "Customer's status, mapped from source codes to target standard."
        },
        {
            "source_field": "CRM_Customers.registration_date",
            "target_field": "registered_on",
            "target_data_type": DATA_TYPES["DATETIME"],
            "transformation_rule": "to_iso_datetime",
            "is_nullable": False,
            "description": "Date and time of customer registration."
        },
        {
            "source_field": "CRM_Customers.last_activity",
            "target_field": "last_active_at",
            "target_data_type": DATA_TYPES["DATETIME"],
            "transformation_rule": "to_iso_datetime",
            "is_nullable": True,
            "description": "Last known activity timestamp."
        }
    ],
    "order": [
        {
            "source_field": "ERP_Orders.order_id",
            "target_field": "order_uuid",
            "target_data_type": DATA_TYPES["UUID"],
            "transformation_rule": "generate_uuid",
            "is_nullable": False,
            "description": "Unique identifier for the order."
        },
        {
            "source_field": "ERP_Orders.customer_ref_id",
            "target_field": "customer_uuid",
            "target_data_type": DATA_TYPES["UUID"],
            "transformation_rule": "lookup_customer_uuid", # Requires a lookup mechanism
            "is_nullable": False,
            "description": "Foreign key to the customer table, mapped via lookup."
        },
        {
            "source_field": "ERP_Orders.order_date",
            "target_field": "order_date",
            "target_data_type": DATA_TYPES["DATE"],
            "transformation_rule": "to_iso_date",
            "is_nullable": False,
            "description": "Date when the order was placed."
        },
        {
            "source_field": "ERP_Orders.total_amount",
            "target_field": "total_amount",
            "target_data_type": DATA_TYPES["DECIMAL"],
            "transformation_rule": "to_decimal_2dp",
            "is_nullable": False,
            "description": "Total amount of the order."
        },
        {
            "source_field": "ERP_Orders.currency",
            "target_field": "currency_code",
            "target_data_type": DATA_TYPES["STRING"],
            "transformation_rule": "standardize_currency",
            "default_value": "USD",
            "is_nullable": False,
            "description": "ISO 4217 currency code for the order."
        }
    ]
}

# Example of how to access a mapping:
# customer_mappings = MAPPINGS["customer"]
# for mapping in customer_mappings:
#     print(f"Source: {mapping['source_field']} -> Target: {mapping['target_field']}")

2.2. Data Transformation Rules

This section provides a set of Python functions to apply common data transformations. These functions are designed to be generic and reusable.


# data_migration/transforms/rules.py

import datetime
import uuid
import re
from typing import Any, Optional

"""
Defines a collection of reusable data transformation functions.
Each function takes a source value and returns the transformed value.
"""

def transform_string_to_uppercase(value: Optional[str]) -> Optional[str]:
    """Converts a string to uppercase."""
    if value is None:
        return None
    return str(value).upper()

def transform_string_to_lowercase(value: Optional[str]) -> Optional[str]:
    """Converts a string to lowercase."""
    if value is None:
        return None
    return str(value).lower()

def trim_whitespace(value: Optional[str]) -> Optional[str]:
    """Removes leading/trailing whitespace from a string."""
    if value is None:
        return None
    return str(value).strip()

def trim_and_capitalize(value: Optional[str]) -> Optional[str]:
    """Trims whitespace and capitalizes the first letter of each word."""
    if value is None:
        return None
    return " ".join([word.capitalize() for word in str(value).strip().split()])

def to_integer(value: Any) -> Optional[int]:
    """Converts a value to an integer."""
    if value is None or value == '':
        return None
    try:
        return int(float(value)) # Handle potential string representations of floats
    except (ValueError, TypeError):
        return None

def to_decimal_2dp(value: Any) -> Optional[float]:
    """Converts a value to a float, rounded to 2 decimal places."""
    if value is None or value == '':
        return None
    try:
        return round(float(value), 2)
    except (ValueError, TypeError):
        return None

def to_iso_date(value: Any) -> Optional[str]:
    """Converts a date-like value to ISO 8601 date string (YYYY-MM-DD)."""
    if value is None or value == '':
        return None
    try:
        if isinstance(value, datetime.date):
            return value.isoformat()
        elif isinstance(value, datetime.datetime):
            return value.date().isoformat()
        else:
            # Attempt to parse common date formats
            for fmt in ["%Y-%m-%d", "%m/%d/%Y", "%d-%m-%Y", "%Y%m%d"]:
                try:
                    return datetime.datetime.strptime(str(value), fmt).date().isoformat()
                except ValueError:
                    continue
            return None # Could not parse
    except Exception:
        return None

def to_iso_datetime(value: Any) -> Optional[str]:
    """Converts a datetime-like value to ISO 8601 datetime string (YYYY-MM-DDTHH:MM:SS)."""
    if value is None or value == '':
        return None
    try:
        if isinstance(value, datetime.datetime):
            return value.isoformat()
        elif isinstance(value, datetime.date):
            return datetime.datetime(value.year, value.month, value.day).isoformat()
        else:
            # Attempt to parse common datetime formats
            for fmt in ["%Y-%m-%d %H:%M:%S", "%m/%d/%Y %H:%M:%S", "%Y-%m-%dT%H:%M:%S", "%Y%m%d%H%M%S"]:
                try:
                    return datetime.datetime.strptime(str(value), fmt).isoformat()
                except ValueError:
                    continue
            return None # Could not parse
    except Exception:
        return None

def generate_uuid(value: Any = None) -> str:
    """Generates a new UUID. If an existing value is provided and valid, it's returned."""
    if value and isinstance(value, str) and len(value) == 36 and re.match(r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$", value, re.IGNORECASE):
        return value
    return str(uuid.uuid4())

def validate_email_format(value: Optional[str]) -> Optional[str]:
    """Validates email format using a basic regex. Returns None if invalid."""
    if value is None:
        return None
    email_regex = r"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$"
    cleaned = str(value).strip()
    if re.match(email_regex, cleaned):
        return cleaned
    return None  # Invalid format
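A minimal driver can tie a field-mapping entry to its transformation function via a rule registry. The `RULES` registry and `apply_mapping` helper below are illustrative glue code, not part of the generated modules; for self-containment this sketch inlines one rule rather than importing from `rules.py`:

```python
# Illustrative: one rule inlined here; in the framework it would be
# imported from data_migration/transforms/rules.py.
def trim_and_capitalize(value):
    if value is None:
        return None
    return " ".join(w.capitalize() for w in str(value).strip().split())

RULES = {"trim_and_capitalize": trim_and_capitalize}

def apply_mapping(source_row, field_mappings):
    """Apply each field-mapping entry to a source row, returning a target row."""
    target = {}
    for m in field_mappings:
        # Source fields are qualified as "table.column"; keep the column part.
        col = m["source_field"].split(".")[-1]
        value = source_row.get(col)
        rule = RULES.get(m.get("transformation_rule"))
        if rule:
            value = rule(value)
        if value is None and m.get("default_value") is not None:
            value = m["default_value"]
        target[m["target_field"]] = value
    return target

mappings = [{
    "source_field": "CRM_Customers.first_name",
    "target_field": "first_name",
    "transformation_rule": "trim_and_capitalize",
}]
row = apply_mapping({"first_name": "  ada  "}, mappings)
```

Keeping rule names as strings in the mapping configuration and resolving them through a registry is what makes the mappings declarative and independently reviewable by non-developers.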

Data Migration Planner: Comprehensive Plan & Strategy

Project: [Insert Project Name]

Date: October 26, 2023

Version: 1.0

Prepared For: [Customer Name]

Prepared By: PantheraHive Solutions


1. Executive Summary

This document outlines a comprehensive plan for the data migration project, detailing the strategy, processes, and procedures required to successfully transfer data from the designated source system(s) to the target system(s). It covers critical aspects including field mapping, data transformation rules, validation scripts, robust rollback procedures, and a high-level timeline estimate. The objective is to ensure a secure, accurate, and efficient migration with minimal disruption to business operations, maintaining data integrity and quality throughout the process.

2. Introduction and Project Scope

This data migration project aims to consolidate and transfer critical business data from [Specify Source System(s) e.g., "Legacy CRM (Salesforce Classic instance)", "On-premise SQL Database"] to [Specify Target System(s) e.g., "New Cloud-based ERP (SAP S/4HANA)", "Upgraded CRM (Salesforce Lightning instance)"].

Objectives:

  • Ensure 100% data integrity and accuracy post-migration.
  • Minimize business downtime during the migration window.
  • Retire legacy systems efficiently.
  • Improve data accessibility and reporting capabilities in the new system.
  • Comply with all relevant data governance and regulatory requirements.

In-Scope Data Entities:

  • [Example: Customer Master Data]
  • [Example: Product Catalog]
  • [Example: Sales Orders (historical 3 years)]
  • [Example: Employee Records]
  • [Example: Financial Transactions (summary data, detail to be archived)]
  • [Add/remove as per project specifics]

Out-of-Scope Data Entities:

  • [Example: Archived historical data older than 5 years]
  • [Example: Temporary operational logs]
  • [Add/remove as per project specifics]

3. Source and Target Systems Overview

Source System(s):

  • System Name: [e.g., Legacy CRM - Salesforce Classic]
  • Database Type/Version: [e.g., Salesforce Cloud Database]
  • Key Modules/APIs: [e.g., Accounts, Contacts, Opportunities, Custom Objects, Salesforce API]
  • Data Volume (Estimated): [e.g., 500 GB, 10 million records]
  • Current Data Quality Assessment: [e.g., Moderate, known duplicates in Accounts, inconsistent address formats]

Target System(s):

  • System Name: [e.g., New ERP - SAP S/4HANA]
  • Database Type/Version: [e.g., SAP HANA Database]
  • Key Modules/APIs: [e.g., Customer Master, Material Master, Sales & Distribution, SAP OData APIs, IDOCs]
  • Expected Data Volume (Post-migration): [e.g., 600 GB, 12 million records (due to new fields/relationships)]
  • Target Data Model/Schema: [e.g., New standard SAP S/4HANA Customer Master data model, with specific custom fields]

4. Data Migration Strategy

The migration will follow a Phased Approach to minimize risk and allow for iterative testing and validation.

  1. Discovery & Planning: Deep dive into source data, target schema, define mappings, transformations, and overall strategy.
  2. Development & Unit Testing: Build ETL scripts, transformation logic, and validation routines. Test components individually.
  3. Pilot Migration (UAT/Staging): Migrate a subset of data to a non-production environment. Business users validate data and functionality.
  4. Full Migration (Production Dry Run): Execute a complete migration in a production-like staging environment to simulate the actual cutover. This includes full data volume and downtime simulation.
  5. Production Cutover: Execute the final migration to the live production environment during a defined maintenance window.
  6. Post-Migration Validation & Support: Final checks, issue resolution, and hypercare support.

5. Detailed Field Mapping

Field mapping is a critical component that defines how each data element from the source system will correspond to a data element in the target system. This will be documented in a comprehensive "Data Mapping Document" spreadsheet, reviewed and signed off by relevant business and technical stakeholders.

Example Structure for Data Mapping Document:

| Source System | Source Object/Table | Source Field Name | Source Data Type | Source Max Length | Source Description | Transformation Rule ID | Target System | Target Object/Table | Target Field Name | Target Data Type | Target Max Length | Target Required? | Target Description | Notes/Comments |
| :------------ | :------------------ | :---------------- | :--------------- | :---------------- | :----------------- | :--------------------- | :------------ | :------------------ | :---------------- | :--------------- | :---------------- | :--------------- | :----------------- | :------------- |
| Legacy CRM | Account | Name | String | 255 | Company Name | TR-001 | New ERP | KNA1 | NAME1 | CHAR | 35 | Yes | Customer Name | Truncate if > 35 |
| Legacy CRM | Account | BillingAddress | String | 500 | Billing Address | TR-002 | New ERP | ADRC | STREET, HOUSE_NUM | CHAR | 60, 10 | Yes | Street & House No. | Parse into components |
| Legacy CRM | Contact | IsPrimary | Boolean | 1 | Primary Contact | TR-003 | New ERP | ZCONTACT | PRIMARY_FLAG | BOOLEAN | 1 | No | Primary Contact | Map true/false to X/ ' ' |
| Legacy CRM | Opportunity | Amount | Currency | 18,2 | Opportunity Value | TR-004 | New ERP | VBAK | NETWR | CURR | 13,2 | Yes | Net Value of Sales | Convert USD to EUR |

Key Considerations:

  • Data Type Conversion: Ensure compatibility between source and target data types.
  • Field Length Constraints: Handle potential truncation or expansion.
  • Mandatory Fields: Identify and ensure all target mandatory fields are populated.
  • Default Values: Define default values for target fields where source data is missing or out of scope.
  • New Fields: Identify any new fields in the target system that require new data input or derivation.
  • Deprecated Fields: Note source fields that will not be migrated.
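The considerations above can be sketched as a small mapping-driven transform. This is an illustrative Python sketch, not part of any ETL tool: the `apply_mapping` helper and the dictionary-shaped mapping rows are hypothetical stand-ins for rows loaded from the Data Mapping Document spreadsheet.

```python
# Hypothetical sketch: apply Data Mapping Document rows to one source record.
# Handles defaults, mandatory-field enforcement, and length truncation as
# described above. Field names mirror the example table; helper names are ours.

def apply_mapping(record: dict, mapping: list[dict]) -> dict:
    """Map a source record to target fields per a list of mapping rows."""
    target = {}
    for m in mapping:
        value = record.get(m["source_field"])
        # Default values where source data is missing or out of scope
        if value is None or value == "":
            value = m.get("default")
        # Mandatory target fields must end up populated
        if m.get("required") and (value is None or value == ""):
            raise ValueError(f"Missing mandatory target field {m['target_field']}")
        # Field length constraints: truncate rather than fail the load
        max_len = m.get("max_length")
        if max_len and isinstance(value, str) and len(value) > max_len:
            value = value[:max_len]  # in practice, also log the truncation
        target[m["target_field"]] = value
    return target

mapping = [
    {"source_field": "Name", "target_field": "NAME1", "max_length": 35, "required": True},
    {"source_field": "Phone", "target_field": "PHONE", "default": "N/A"},
]
row = apply_mapping({"Name": "A" * 50, "Phone": ""}, mapping)
```

A real mapping engine would also carry the Transformation Rule ID per row and dispatch to the rule implementations defined in Section 6.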

6. Data Transformation Rules

Data transformation rules define how source data will be manipulated to fit the target system's requirements, data model, and business logic. Each rule will be documented with a unique ID and detailed specification.

Common Transformation Categories:

  1. Data Type Conversion:

    * TR-001 (Name Truncation): Source Account.Name (String, 255) to Target KNA1.NAME1 (CHAR, 35). If source length > 35, truncate to 35 characters. Log truncated records.
    * TR-004 (Currency Conversion): Source Opportunity.Amount (Currency, USD) to Target VBAK.NETWR (CURR, EUR). Apply the daily exchange rate on the opportunity's CloseDate.

  2. Data Parsing/Splitting:

    * TR-002 (Address Parsing): Source Account.BillingAddress (single String) to Target ADRC.STREET, ADRC.HOUSE_NUM, ADRC.POST_CODE, ADRC.CITY. Use regex and custom logic to parse address components.

  3. Data Concatenation/Combination:

    * TR-005 (Full Name): Source Contact.FirstName and Contact.LastName to Target ZCONTACT.FULL_NAME. Concatenate with a space: FirstName + ' ' + LastName.

  4. Lookup/Reference Data Mapping:

    * TR-006 (Country Code Mapping): Source Account.Country (e.g., "United States", "Germany") to Target ADRC.COUNTRY (ISO 3166-1 alpha-2 codes: "US", "DE"). Use a predefined lookup table.

  5. Conditional Logic/Business Rules:

    * TR-003 (Primary Contact Flag): Source Contact.IsPrimary (Boolean TRUE/FALSE) to Target ZCONTACT.PRIMARY_FLAG (CHAR(1): 'X' for true, ' ' for false).
    * TR-007 (Customer Group Assignment): If Account.AnnualRevenue > $1M, assign CustomerGroup = 'Large Enterprise'. Else if AnnualRevenue > $100K, assign 'Mid-Market'. Else 'SMB'.

  6. Aggregation/Derivation:

    * TR-008 (Total Sales YTD): Calculate the sum of Opportunity.Amount for the current year from source Opportunity records for each Account and map to Target KNA1.ZTOTAL_SALES_YTD.

  7. Default Value Assignment:

    * TR-009 (Missing Phone Number): If Contact.Phone is null/empty, set Target ZCONTACT.PHONE to 'N/A'.
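Three of the rules above are simple enough to sketch directly. The following Python sketch implements TR-003, TR-006, and TR-007 using only the thresholds and lookup values given in this section; the function names are illustrative, not part of any tool.

```python
# Illustrative implementations of TR-003, TR-006, and TR-007 from above.
# The lookup table entries and revenue thresholds are the examples in this plan.

COUNTRY_CODES = {"United States": "US", "Germany": "DE"}  # TR-006 lookup table

def primary_flag(is_primary: bool) -> str:
    """TR-003: Boolean TRUE/FALSE -> CHAR(1) 'X' / ' '."""
    return "X" if is_primary else " "

def country_code(name: str) -> str:
    """TR-006: free-text country name -> ISO 3166-1 alpha-2 code."""
    return COUNTRY_CODES[name]  # raises KeyError for unmapped values -> error queue

def customer_group(annual_revenue: float) -> str:
    """TR-007: revenue-tiered customer group assignment."""
    if annual_revenue > 1_000_000:
        return "Large Enterprise"
    if annual_revenue > 100_000:
        return "Mid-Market"
    return "SMB"
```

In the ETL tool these would typically be registered against their rule IDs so that the mapping document drives which transformation runs for each field.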

7. Data Validation Strategy and Scripts

A robust validation strategy is paramount to ensuring data quality and integrity throughout the migration lifecycle.

A. Pre-Migration Source Data Validation (Data Quality Assessment)

  • Purpose: Identify data quality issues in the source system before migration to minimize transformation errors and ensure clean data enters the target system.
  • Scripts/Checks:
    * Completeness Checks:
      * Identify records with missing mandatory fields (e.g., Account.Name is empty).
      * Count of null values for critical fields.
    * Uniqueness Checks:
      * Identify duplicate records based on key identifiers (e.g., Account.EIN, Contact.Email).
      * Identify duplicate values within fields that should be unique.
    * Format/Pattern Checks:
      * Validate email addresses (Contact.Email) against regex patterns.
      * Validate phone numbers and postal codes against expected formats.
    * Referential Integrity Checks:
      * Ensure child records have valid parent references (e.g., Opportunity.AccountId points to an existing Account).
    * Range/Value Checks:
      * Ensure numeric fields are within acceptable ranges (e.g., Opportunity.Amount > 0).
      * Validate picklist values against allowed sets.

  • Action: Generate data quality reports. Work with business stakeholders to cleanse, enrich, or decide on handling strategies for identified issues.
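Two of the checks above (completeness and uniqueness) can be expressed as plain SQL. The sketch below runs them against an in-memory SQLite copy of a source extract; the `contact` table and its columns are hypothetical stand-ins for the real source schema.

```python
# Sketch: pre-migration completeness and uniqueness checks as SQL, run here
# against an in-memory SQLite copy of the source extract. The table, columns,
# and sample rows are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contact (id INTEGER, email TEXT, name TEXT);
INSERT INTO contact VALUES
  (1, 'a@x.com', 'Ann'), (2, 'a@x.com', 'Anne'), (3, NULL, 'Bob');
""")

# Completeness: records with a missing mandatory field
missing = conn.execute(
    "SELECT COUNT(*) FROM contact WHERE email IS NULL OR email = ''"
).fetchone()[0]

# Uniqueness: duplicate values in a field that should be unique
dupes = conn.execute("""
    SELECT email, COUNT(*) AS n FROM contact
    WHERE email IS NOT NULL GROUP BY email HAVING n > 1
""").fetchall()
```

The same queries run unchanged against the real source database (adjusting dialect as needed); their output feeds the data quality report described in the Action step.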

B. Post-Migration Target Data Validation (Integrity, Completeness, Accuracy)

  • Purpose: Verify that data has been migrated correctly, completely, and accurately into the target system.
  • Scripts/Checks (to be run on target system after migration):
    * Record Count Validation:
      * Compare total record counts for each migrated entity between source and target (e.g., COUNT(Source.Accounts) == COUNT(Target.KNA1)).
      * Compare counts after applying filters/transformations (e.g., only active accounts migrated).
    * Data Completeness Validation:
      * Check for null values in mandatory target fields.
      * Verify that all expected records from the source are present in the target.
    * Data Accuracy Validation (Spot Checks & Aggregate Checks):
      * Random Sample Verification: Select a random sample of records (e.g., 5-10% for high-value entities) and manually verify field-by-field accuracy against source.
      * Aggregate Sums/Averages: Compare sums of critical numeric fields (e.g., SUM(Opportunity.Amount) in source vs. SUM(VBAK.NETWR) in target, considering currency conversion).
    * Unique Constraint Verification: Ensure no duplicate primary keys or unique identifiers exist in the target.
    * Referential Integrity Validation: Verify relationships between migrated entities in the target system (e.g., Sales Order correctly linked to Customer Master).
    * Transformation Rule Verification: Select records that underwent specific transformations (e.g., address parsing, currency conversion) and verify the output.
    * System Functionality Checks:
      * Ensure basic CRUD (Create, Read, Update, Delete) operations work correctly with migrated data.
      * Verify reports and dashboards using migrated data.
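The record-count and aggregate checks above reduce to a pair of comparisons. A minimal sketch, with hypothetical helper names: the count check is exact, while the sum check carries a small tolerance because currency conversion (e.g., TR-004) can introduce rounding differences.

```python
# Sketch of post-migration record-count and aggregate-sum validation.
# Helper names are illustrative; inputs would come from source/target queries.

def validate_counts(source_count: int, target_count: int) -> None:
    """Record Count Validation: totals must match exactly."""
    assert source_count == target_count, (
        f"Record count mismatch: source={source_count}, target={target_count}")

def validate_sum(source_total: float, target_total: float,
                 tolerance: float = 0.01) -> None:
    """Aggregate check: allow a small tolerance for conversion rounding.
    source_total must already be converted to the target currency."""
    assert abs(source_total - target_total) <= tolerance, (
        f"Aggregate mismatch: source={source_total}, target={target_total}")

validate_counts(1000, 1000)
validate_sum(125_430.55, 125_430.55)
```

Failures here are exactly the "major discrepancies in record counts or financial sums" listed later as rollback triggers, so these checks belong in the automated cutover gate.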

C. Validation Tools & Methods:

  • ETL Tool Built-in Capabilities: Utilize logging, error handling, and reporting features of the chosen ETL tool ([e.g., Informatica, Talend, Azure Data Factory, AWS Glue, custom Python scripts]).
  • SQL Scripts: Develop custom SQL queries for detailed data comparison and aggregate checks.
  • Reporting Tools: Leverage BI tools or target system reporting to visualize data counts and summaries.
  • Automated Testing Frameworks: Integrate validation scripts into an automated testing pipeline where feasible.

8. Data Quality and Cleansing Plan

Identified data quality issues during pre-migration validation will be addressed systematically:

  • Categorization: Issues will be categorized (e.g., duplicates, missing data, invalid format).
  • Prioritization: High-impact issues (e.g., affecting critical business processes, mandatory fields) will be prioritized.
  • Resolution Strategy:
    * Automated Cleansing: For common, rule-based issues (e.g., standardizing phone numbers, correcting common spelling mistakes).
    * Manual Cleansing: For complex issues requiring human judgment (e.g., merging duplicate customer records, enriching missing data).
    * Business Review & Decision: For ambiguous data, business owners will decide on the correct values or whether the data should be excluded.

  • Documentation: All cleansing rules and exceptions will be documented.

9. Rollback Procedures

A comprehensive rollback plan is essential to mitigate risks in case of unforeseen issues during or immediately after the production migration.

A. Trigger for Rollback:

A rollback will be initiated if any of the following critical criteria are not met or if severe issues are detected:

  • Significant data loss or corruption in the target system.
  • Failure of core business processes in the target system involving migrated data.
  • Critical validation checks fail (e.g., major discrepancies in record counts or financial sums).