Data Migration Planner

Plan a complete data migration with field mapping, transformation rules, validation scripts, rollback procedures, and timeline estimates.

The following is a detailed study plan focused on mastering data migration best practices and technologies. It is designed to equip individuals or teams with the foundational knowledge and practical skills required to plan and execute complex data migration projects, aligning with the broader "Data Migration Planner" workflow.


Data Migration Mastery Study Plan

1. Introduction and Purpose

This study plan is meticulously designed to guide participants through a comprehensive learning journey in data migration. It aims to develop a deep understanding of data migration methodologies, tools, and best practices, enabling the planning and execution of successful data transitions. By the end of this program, learners will be proficient in designing robust migration architectures, managing data integrity, and mitigating risks associated with data movement.

2. Learning Objectives

Upon successful completion of this study plan, participants will be able to:

  • Understand Foundational Concepts: Articulate the various types of data migrations, their challenges, and the key phases of a data migration project lifecycle.
  • Perform Data Discovery & Profiling: Conduct thorough data source analysis, profiling, and quality assessment to identify data characteristics, inconsistencies, and dependencies.
  • Design Data Mapping & Transformation Rules: Develop detailed field mapping documents and define complex data transformation rules to ensure data compatibility and integrity between source and target systems.
  • Select Appropriate Tools & Technologies: Evaluate and select suitable ETL/ELT tools, cloud migration services, and scripting languages for different migration scenarios.
  • Develop Validation & Testing Strategies: Create comprehensive data validation plans, test cases, and quality assurance procedures to verify data accuracy and completeness post-migration.
  • Formulate Rollback & Contingency Plans: Design robust rollback procedures and contingency plans to minimize downtime and mitigate risks during critical migration phases.
  • Estimate Timelines & Resources: Accurately estimate project timelines, resource requirements, and budget for various data migration complexities.
  • Implement Cloud Migration Strategies: Understand and apply specific strategies for migrating data to and within cloud environments (e.g., AWS, Azure, GCP).
  • Manage Security & Compliance: Incorporate data security, privacy, and regulatory compliance considerations into the migration planning and execution.

3. Weekly Schedule

This 6-week intensive study plan is structured to provide a progressive learning experience, building knowledge and skills week by week.

Week 1: Data Migration Fundamentals & Discovery

  • Focus Areas:

* Introduction to Data Migration: Definition, types (on-premise to on-premise, on-premise to cloud, cloud to cloud), drivers, and common challenges.

* Data Migration Lifecycle: Phases (Assessment, Planning, Design, Execution, Validation, Post-Migration).

* Stakeholder Identification & Requirements Gathering.

* Data Source Analysis & System Landscape Mapping.

* Introduction to Data Profiling tools and techniques.

  • Activities:

* Read foundational articles on data migration.

* Identify a hypothetical migration scenario (e.g., CRM upgrade, database consolidation).

* Practice using a data profiling tool (e.g., OpenRefine, basic SQL queries for profiling).
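The Week 1 profiling activity can also be sketched in plain pandas rather than a dedicated tool. The `customers` DataFrame below is invented sample data; the per-column metrics (dtype, null count, distinct count) are the same basics OpenRefine or profiling SQL would give you.

```python
# Minimal data-profiling sketch using pandas; sample data is hypothetical.
import pandas as pd

def profile_dataframe(df: pd.DataFrame) -> pd.DataFrame:
    """Return per-column dtypes, null counts, and distinct counts."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "null_count": df.isna().sum(),
        "distinct_count": df.nunique(dropna=True),
    })

customers = pd.DataFrame({
    "CustomerID": [1, 2, 2, 4],          # note the duplicate key
    "Email": ["a@x.com", None, "b@x.com", "b@x.com"],
})
report = profile_dataframe(customers)
print(report)
```

A report like this quickly surfaces duplicate keys and null-heavy columns before any mapping work begins.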

Week 2: Data Mapping, Transformation & Quality

  • Focus Areas:

* Detailed Data Mapping: Field-level mapping, data type conversions, primary/foreign key relationships.

* Data Transformation Rules: Business rules, aggregations, derivations, standardization, de-duplication.

* Data Quality Management: Identifying and resolving data quality issues before migration.

* Introduction to Data Governance in migration contexts.

  • Activities:

* Create a detailed data mapping document for your hypothetical scenario.

* Define 5-7 complex transformation rules.

* Research common data quality issues and their solutions.

Week 3: Migration Architecture & Tooling

  • Focus Areas:

* Migration Strategies: Big Bang vs. Phased, Coexistence, Replication.

* ETL/ELT Concepts: Extract, Transform, Load (ETL) vs. Extract, Load, Transform (ELT).

* Data Migration Tools: Overview of commercial (e.g., Informatica, Talend, IBM DataStage) and open-source (e.g., Apache Nifi, Pentaho Data Integration) tools.

* Cloud-Specific Migration Services: AWS DMS, Azure Data Factory, Google Cloud Dataflow/Migration Service.

* Scripting for Migration: Python, SQL, Shell scripting basics.

  • Activities:

* Design a high-level migration architecture for your scenario, justifying your chosen strategy.

* Research 2-3 specific ETL/Cloud migration tools relevant to your scenario.

* Write a simple Python script to perform a basic data transformation (e.g., format a date, concatenate strings).
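A minimal version of that Week 3 scripting activity might look like the sketch below; the date format and field names are assumptions, not requirements of any particular system.

```python
# Simple transformation sketch: reformat a date string and concatenate names.
from datetime import datetime

def reformat_date(value: str) -> str:
    """Convert 'MM/DD/YYYY' to ISO 'YYYY-MM-DD'."""
    return datetime.strptime(value, "%m/%d/%Y").strftime("%Y-%m-%d")

def full_name(first: str, last: str) -> str:
    """Concatenate name parts, trimming stray whitespace."""
    return f"{first.strip()} {last.strip()}"

print(reformat_date("10/26/2023"))  # 2023-10-26
print(full_name("  John", "Doe "))  # John Doe
```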

Week 4: Data Validation, Testing & Cutover Planning

  • Focus Areas:

* Pre-Migration Validation: Source data validation, schema validation.

* Post-Migration Validation: Row count validation, checksum validation, sample data verification, business rule validation.

* Testing Strategies: Unit testing, integration testing, user acceptance testing (UAT), performance testing.

* Defect Management & Resolution.

* Cutover Planning: Downtime considerations, communication plan, go/no-go criteria.

  • Activities:

* Develop a comprehensive data validation plan for your scenario.

* Design 5-10 test cases covering different data types and transformation rules.

* Draft a cutover checklist for a critical system.
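Two of the post-migration checks above (row counts and checksums) can be sketched as follows. The row data is invented, and hashing a pipe-joined string with SHA-256 is just one possible checksum scheme.

```python
# Sketch: row-count check plus per-row checksum comparison on toy data.
import hashlib

def row_checksum(row: tuple) -> str:
    """Stable checksum of a row: hash the pipe-joined string form."""
    joined = "|".join("" if v is None else str(v) for v in row)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

source_rows = [(1001, "John", "Doe"), (1002, "Jane", "Smith")]
target_rows = [(1001, "John", "Doe"), (1002, "Jane", "Smyth")]  # injected defect

assert len(source_rows) == len(target_rows), "Row count mismatch"

# Collect primary keys whose checksums differ between source and target.
mismatches = [
    s[0] for s, t in zip(source_rows, target_rows)
    if row_checksum(s) != row_checksum(t)
]
print(mismatches)
```

On real volumes the same idea is usually applied per batch or per partition rather than per row.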

Week 5: Rollback Procedures, Performance & Security

  • Focus Areas:

* Rollback Strategy: Defining triggers for rollback, data restoration methods, communication.

* Performance Optimization: Tuning ETL processes, network considerations, database indexing.

* Data Security & Compliance: Data encryption (in transit, at rest), access controls, GDPR, HIPAA, PCI DSS considerations.

* Error Handling & Logging.

  • Activities:

* Outline a detailed rollback procedure for your hypothetical migration.

* Research security best practices for data in transit and at rest in a cloud environment.

* Consider how to implement logging and error handling in your migration scripts/processes.
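The logging and error-handling activity could start from a per-record pattern like this sketch; the record shape and field names are hypothetical.

```python
# Sketch: per-record error handling that logs and quarantines bad rows
# instead of aborting the whole batch.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("migration")

def migrate_records(records):
    """Process records one by one; collect failures rather than raising."""
    migrated, failed = [], []
    for rec in records:
        try:
            migrated.append({"customer_id": int(rec["CustomerID"])})
        except (KeyError, ValueError) as exc:
            log.error("Record rejected: %r (%s)", rec, exc)
            failed.append(rec)
    log.info("Migrated %d records, %d failures", len(migrated), len(failed))
    return migrated, failed

ok, bad = migrate_records([{"CustomerID": "1001"}, {"CustomerID": "n/a"}])
```

Quarantining failures into a reject file or table keeps the run auditable and re-runnable.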

Week 6: Post-Migration, Monitoring & Advanced Topics

  • Focus Areas:

* Post-Migration Activities: Decommissioning old systems, archiving, performance monitoring.

* Continuous Improvement & Lessons Learned.

* Advanced Topics: Data virtualization, real-time data migration, streaming data.

* Case Studies & Industry Best Practices.

* Project Simulation: Bringing all learned concepts together.

  • Activities:

* Develop a post-migration checklist.

* Analyze 2-3 real-world data migration case studies.

* Final Project: Present a complete data migration plan for your hypothetical scenario, incorporating all elements learned.

4. Recommended Resources

  • Books:

* "The Data Warehouse Toolkit" by Ralph Kimball (for foundational ETL concepts).

* "Data Migration: Strategies and Best Practices" by various authors (look for recent editions).

* "Designing Data-Intensive Applications" by Martin Kleppmann (for distributed systems and data processing).

  • Online Courses & Certifications:

* Coursera/edX/Udemy/Pluralsight: Search for courses on "Data Migration," "ETL Development," "Cloud Data Engineering" (AWS, Azure, GCP specific).

* Vendor Certifications:

* AWS Certified Data Analytics – Specialty

* Microsoft Certified: Azure Data Engineer Associate

* Google Cloud Professional Data Engineer

* Certifications from specific ETL tool vendors (e.g., Informatica, Talend).

  • Documentation & Blogs:

* Official documentation for AWS Database Migration Service (DMS), Azure Data Factory, Google Cloud Dataflow, etc.

* Blogs from major cloud providers (AWS, Azure, GCP) on data migration case studies and best practices.

* Data architecture and engineering blogs (e.g., Martin Fowler, Medium articles by data professionals).

  • Tools (for hands-on practice):

* Data Profiling/Quality: OpenRefine, SQL queries.

* ETL/Scripting: Python (Pandas library), SQL, Apache NiFi (open-source ETL), Talend Open Studio (open-source ETL).

* Cloud Services (Free Tiers/Labs): AWS Free Tier (DMS, S3, RDS), Azure Free Account (Data Factory, Blob Storage), Google Cloud Free Tier (Cloud Storage, Dataflow).

  • Communities:

* Stack Overflow (for specific technical questions).

* LinkedIn groups for data engineers and architects.

5. Milestones

  • End of Week 1: Completion of Data Migration Fundamentals Quiz; Draft of Hypothetical Migration Scenario Overview.
  • End of Week 2: Submission of Detailed Data Mapping Document and Transformation Rules Definition.
  • End of Week 3: Presentation of High-Level Migration Architecture and Tool Selection Rationale. Completion of a Basic Transformation Script.
  • End of Week 4: Submission of Comprehensive Data Validation Plan and Key Test Cases.
  • End of Week 5: Outline of Detailed Rollback Procedure and Security/Compliance Checklist.
  • End of Week 6: Final Project Presentation: A complete, detailed Data Migration Plan for the chosen scenario.

6. Assessment Strategies

  • Weekly Quizzes/Self-Assessments: Short, multiple-choice or short-answer quizzes to check understanding of weekly concepts.
  • Practical Exercises/Mini-Projects: Hands-on tasks like creating mapping documents, writing transformation scripts, or designing validation plans.
  • Case Study Analysis: Reviewing and critically analyzing real-world data migration scenarios to apply learned principles.
  • Peer Reviews: Providing and receiving constructive feedback on practical exercises and project components.
  • Final Project Presentation: The culminating assessment, where participants present their comprehensive data migration plan, demonstrating their ability to integrate all learned concepts into a coherent and actionable strategy. This will be evaluated based on completeness, technical accuracy, strategic thinking, and presentation clarity.
  • Code Reviews (Optional): For those focusing on scripting, peer or instructor review of migration scripts.

Data Migration Planner: Comprehensive Strategy and Implementation Guide

This document outlines a detailed plan for a complete data migration, encompassing all critical stages from initial planning and design to execution, validation, and rollback procedures. The goal is to ensure a smooth, secure, and accurate transfer of data from the source system(s) to the target system(s) with minimal disruption.


1. Executive Summary

This Data Migration Planner provides a robust framework for migrating data, addressing key aspects such as data assessment, field mapping, transformation logic, validation, error handling, and contingency planning. It includes actionable steps, detailed technical specifications (including code examples for transformations and validations), and a realistic timeline to guide the migration project to a successful conclusion. The plan emphasizes data integrity, business continuity, and comprehensive testing to mitigate risks.


2. Scope of Migration

  • Data Scope: All transactional data (e.g., orders, invoices, payments, shipments), master data (e.g., customers, products, vendors), and historical records within the defined retention period. Specific modules/entities to be migrated will be listed in detail in Appendix A (e.g., Customer Master, Product Catalog, Sales Orders, Purchase Orders, GL Entries).
  • System Scope: Migration from [Source System Name(s)] to [Target System Name].
  • Migration Type: Typically a one-time "big bang" or phased migration, followed by potential delta migrations if required during a cutover period.
  • Exclusions: Specific archival data beyond the retention period, non-critical log files, or data explicitly identified as irrelevant for the target system.

3. Source and Target Systems

  • Source System(s):

* Name: [e.g., Legacy ERP System, CRM Database]

* Database Type/Version: [e.g., SQL Server 2012, Oracle 11g, Salesforce]

* Key Data Entities/Schemas: [e.g., dbo.Customers, Sales.Orders]

* Access Method: [e.g., ODBC, JDBC, API, flat file exports]

  • Target System:

* Name: [e.g., New Cloud ERP, Custom Application]

* Database Type/Version: [e.g., PostgreSQL 14, Oracle 19c, MongoDB, Salesforce]

* Key Data Entities/Schemas: [e.g., public.customer, sales.order_header]

* Ingestion Method: [e.g., Bulk API, SQL INSERT statements, ORM]


4. Data Migration Strategy

The migration will follow an Extract, Transform, Load (ETL) approach, leveraging a dedicated migration environment.

  1. Extract: Data will be extracted from the source system(s) using [e.g., SQL queries, API calls, data export tools] into an intermediate staging area (e.g., flat files, temporary database).
  2. Transform: Data in the staging area will be cleansed, standardized, de-duplicated, and formatted according to the target system's schema and business rules.
  3. Load: Transformed data will be loaded into the target system using [e.g., bulk import utilities, API calls, direct database inserts].
  4. Validate: Comprehensive validation will be performed at each stage (source, staging, target) and post-migration to ensure data accuracy and completeness.
  5. Iterative Approach: The migration will proceed iteratively, starting with smaller, less complex datasets, followed by larger, more complex ones, allowing for continuous refinement of scripts and procedures.
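The steps above can be sketched end-to-end with in-memory stand-ins for the staging area and target; every function here is an illustrative placeholder, not the project's actual connectors.

```python
# Sketch of the extract -> transform -> load -> validate sequence,
# with Python lists standing in for the staging area and target table.
def extract():
    """Pull rows from the source into a staging list (stand-in for SQL/API)."""
    return [{"CustomerID": 1, "Name": "  John Doe "}]

def transform(staged):
    """Cleanse staged rows into the target schema."""
    return [{"customer_id": r["CustomerID"], "name": r["Name"].strip()}
            for r in staged]

def load(rows, target):
    """Append transformed rows to the target store (stand-in for bulk load)."""
    target.extend(rows)

def validate(staged, target):
    """Minimal post-load check: staging and target row counts must match."""
    assert len(staged) == len(target), "Row count mismatch after load"

target_table = []
staged = extract()
load(transform(staged), target_table)
validate(staged, target_table)
```

Keeping each stage as a separate function mirrors the iterative approach: individual stages can be re-run and refined without repeating the whole pipeline.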

5. Phase 1: Planning & Analysis

  • Detailed Data Assessment:

* Data Profiling: Analyze source data for quality, completeness, consistency, uniqueness, and distribution. Identify anomalies, missing values, and data types.

* Data Volume Estimation: Quantify data volumes per entity to plan for performance and storage.

* Data Dependencies: Map relationships between data entities to ensure correct migration order.

* Data Ownership: Identify data owners and subject matter experts (SMEs) for each data domain.

  • Stakeholder Identification: Core migration team, business users (SMEs), IT operations, system administrators, security teams.

6. Phase 2: Design & Development

6.1. Field Mapping (Source to Target)

A comprehensive field mapping document will be created for each entity. Below is an illustrative example for a Customer entity.

| Source Table/Field (e.g., LegacyDB.dbo.Customers) | Source Data Type | Source Max Length | Sample Source Data | Transformation Rule ID / Description | Target Table/Field (e.g., NewERP.public.customer) | Target Data Type | Target Max Length | Required in Target | Notes / Constraints |
| :-------------------------------------------------- | :--------------- | :---------------- | :----------------- | :----------------------------------- | :-------------------------------------------------- | :--------------- | :---------------- | :----------------- | :------------------ |
| CustomerID | INT | N/A | 1001 | TRF-001 (Direct Map) | customer_id | BIGINT | N/A | YES | Primary Key |
| FirstName | VARCHAR(50) | 50 | "John" | TRF-001 (Direct Map) | first_name | VARCHAR(100) | 100 | YES | |
| LastName | VARCHAR(50) | 50 | "Doe" | TRF-001 (Direct Map) | last_name | VARCHAR(100) | 100 | YES | |
| Address1, Address2 | VARCHAR(100) | 100 | "123 Main St", "" | TRF-002 (Concatenate) | street_address | VARCHAR(250) | 250 | YES | |
| City | VARCHAR(50) | 50 | "Anytown" | TRF-001 (Direct Map) | city | VARCHAR(100) | 100 | YES | |
| State | CHAR(2) | 2 | "NY" | TRF-003 (Lookup/Normalize) | state_code | CHAR(2) | 2 | YES | Map 'New York' to 'NY' |
| ZipCode | VARCHAR(10) | 10 | "12345-6789" | TRF-004 (Format) | postal_code | VARCHAR(10) | 10 | YES | Remove hyphens if not needed |
| Email | VARCHAR(255) | 255 | "john@example.com" | TRF-005 (Validate Format) | email_address | VARCHAR(255) | 255 | NO | Must be valid email format |
| AccountStatus | VARCHAR(20) | 20 | "Active" | TRF-006 (Map Status) | customer_status | VARCHAR(50) | 50 | YES | Map 'Active'->'OPEN', 'Inactive'->'CLOSED' |
| LastPurchaseDate | DATETIME | N/A | "2023-10-26" | TRF-001 (Direct Map) | last_purchase_at | TIMESTAMP | N/A | NO | Convert to UTC |

6.2. Transformation Rules

Transformation rules define how source data will be modified to fit the target system's requirements. Each rule will have a unique ID (e.g., TRF-XXX).

Common Transformation Categories:

  • Data Type Conversion: Changing a field's data type (e.g., VARCHAR to INT, DATETIME to TIMESTAMP).
  • Concatenation/Splitting: Combining multiple source fields into one target field or splitting one source field into multiple target fields.
  • Lookup/Mapping: Translating source codes or values to corresponding target codes/values using reference tables (e.g., State codes, AccountStatus mappings).
  • Defaulting: Assigning a default value if a source field is null or empty.
  • Cleansing/Standardization: Removing extra spaces, standardizing case, correcting common misspellings.
  • Derivation: Calculating a new value based on one or more source fields.
  • Filtering: Excluding specific records based on criteria.

Code Examples for Transformation Rules (Python & SQL):

These examples illustrate how common transformation rules can be implemented.


# Python Script for Data Transformation (Example using Pandas for a CSV/DataFrame)

import pandas as pd
import re

def apply_customer_transformations(df_source):
    """
    Applies defined transformation rules to a DataFrame of customer data.
    
    Args:
        df_source (pd.DataFrame): DataFrame containing source customer data.
        
    Returns:
        pd.DataFrame: DataFrame with transformed customer data.
    """
    df_target = pd.DataFrame()

    # TRF-001: Direct Mapping (CustomerID, FirstName, LastName)
    df_target['customer_id'] = df_source['CustomerID'].astype(int)
    df_target['first_name'] = df_source['FirstName'].fillna('').str.strip()
    df_target['last_name'] = df_source['LastName'].fillna('').str.strip()
    
    # TRF-002: Concatenate Address1 and Address2
    # Handle cases where Address2 might be empty
    df_target['street_address'] = df_source['Address1'].fillna('').str.strip() + \
                                  df_source['Address2'].apply(lambda x: f" {x.strip()}" if pd.notna(x) and x.strip() else "")
    
    # TRF-001: Direct Map City
    df_target['city'] = df_source['City'].fillna('').str.strip()

    # TRF-003: Lookup/Normalize State Code
    state_mapping = {
        'New York': 'NY', 'California': 'CA', 'Texas': 'TX', 'NY': 'NY', 'CA': 'CA', 'TX': 'TX',
        # ... add more mappings as needed
    }
    df_target['state_code'] = df_source['State'].fillna('').apply(lambda x: state_mapping.get(x.strip(), 'XX')) # Default 'XX' for unknown

    # TRF-004: Format Postal Code (remove hyphens if not needed, assume target wants 5-digit or 9-digit without hyphen)
    df_target['postal_code'] = df_source['ZipCode'].fillna('').str.replace('-', '', regex=False).str.strip()
    
    # TRF-005: Validate Email Format and optionally default/nullify invalid ones
    def validate_email(email):
        if pd.isna(email) or not email.strip():
            return None # Or a default invalid email
        # Basic regex for email validation
        if re.match(r"[^@]+@[^@]+\.[^@]+", email):
            return email.strip().lower()
        return None # Return None for invalid emails
        
    df_target['email_address'] = df_source['Email'].apply(validate_email)

    # TRF-006: Map Account Status
    status_mapping = {
        'Active': 'OPEN',
        'Inactive': 'CLOSED',
        'Pending': 'PENDING_APPROVAL',
        # ... add more mappings
    }
    df_target['customer_status'] = df_source['AccountStatus'].fillna('UNKNOWN').apply(lambda x: status_mapping.get(x.strip(), 'UNKNOWN'))

    # TRF-001: Direct Map Last Purchase Date
    # Assuming source 'LastPurchaseDate' strings are in a recognized format;
    # unparseable values become NaT. Note: .dt.tz_localize(None) raises on
    # tz-naive data; if source values are tz-aware, use .dt.tz_convert('UTC').
    df_target['last_purchase_at'] = pd.to_datetime(df_source['LastPurchaseDate'], errors='coerce')
    
    # Add any default values for target fields not present in source
    df_target['created_by'] = 'DataMigration'
    df_target['created_at'] = pd.Timestamp.now()

    return df_target

# Example Usage (assuming you have source data loaded into a DataFrame)
# source_data = {
#     'CustomerID': [1001, 1002, 1003],
#     'FirstName': ['John', 'Jane', 'Peter'],
#     'LastName': ['Doe', 'Smith', 'Jones'],
#     'Address1': ['123 Main St', '456 Oak Ave', '789 Pine Ln'],
#     'Address2': ['', 'Apt 101', None],
#     'City': ['Anytown', 'Otherville', 'Smalltown'],
#     'State': ['NY', 'California', 'TX'],
#     'ZipCode': ['12345', '90210-1234', '78701'],
#     'Email': ['john@example.com', 'jane@invalid', None],
#     'AccountStatus': ['Active', 'Inactive', 'Pending'],
#     'LastPurchaseDate': ['2023-10-26 10:00:00', '2023-09-15', None]
# }
# df_source_customers = pd.DataFrame(source_data)
# df_transformed_customers = apply_customer_transformations(df_source_customers)


Data Migration Planner: Comprehensive Migration Strategy Document

Document Version: 1.0

Date: October 26, 2023

Prepared For: [Customer Name]

Prepared By: PantheraHive Solutions Team


1. Executive Summary

This document outlines the comprehensive plan for migrating data from [Source System Name] to [Target System Name]. It details the critical components of the migration process, including field mapping, data transformation rules, validation procedures, rollback strategies, and estimated timelines. The objective is to ensure a smooth, accurate, and secure transition of data, minimizing downtime and mitigating risks. This plan serves as a foundational guide for the technical execution and stakeholder communication throughout the migration lifecycle.

2. Introduction: Migration Overview

This data migration initiative aims to transfer all relevant historical and operational data from the legacy [Source System Name] to the new, modernized [Target System Name]. The successful completion of this migration is crucial for [mention key business objective, e.g., "enabling enhanced operational efficiency," "supporting new business functionalities," or "achieving compliance requirements"].

Key Objectives:

  • Ensure complete and accurate transfer of all in-scope data.
  • Preserve data integrity and consistency.
  • Minimize business disruption during the migration window.
  • Establish robust validation and rollback mechanisms.
  • Provide a clear, actionable roadmap for all involved teams.

Source System: [e.g., Legacy CRM System, SAP ECC, Custom Database]

Target System: [e.g., Salesforce, SAP S/4HANA, New Custom Application Database]

Primary Data Subject Areas: [e.g., Customers, Orders, Products, Accounts, Transactions]

3. Data Migration Scope

The scope of this migration includes the following primary data entities and their associated attributes:

  • [Entity 1 Name]: [e.g., Customer Master Data]

* Scope: All active and historical customer records from [Date] onwards.

* Key Attributes: Customer ID, Name, Address, Contact Info, Account Status, Credit Limit.

  • [Entity 2 Name]: [e.g., Product Catalog]

* Scope: All current and historical product definitions, including SKUs, descriptions, pricing tiers.

* Key Attributes: Product ID, Name, Description, Category, Unit Price, Stock Level.

  • [Entity 3 Name]: [e.g., Sales Orders]

* Scope: All completed and open sales orders from [Date] onwards.

* Key Attributes: Order ID, Customer ID, Order Date, Total Amount, Line Items, Status.

  • [Entity 4 Name]: [e.g., Employee Records]

* Scope: All active employee records.

* Key Attributes: Employee ID, Name, Department, Hire Date, Salary.

Out of Scope:

  • Archived data older than [Specific Date].
  • Temporary or transient data not required for operational continuity.
  • Specific logs or audit trails not relevant to the target system's functionality.
  • Binary Large Objects (BLOBs) not explicitly identified as critical for migration.

4. Detailed Migration Plan

4.1. Field Mapping Document

The field mapping document serves as the definitive guide for how each source field corresponds to a target field. It identifies data types, primary/foreign key relationships, and initial transformation needs.

Structure of Field Mapping (Example Table - to be expanded for all entities):

| Source System (e.g., Legacy CRM) | Target System (e.g., Salesforce) | Mapping Type | Notes / Comments |
| :------------------------------- | :------------------------------- | :----------- | :--------------- |
| Entity: Customer | Object: Account | | |
| Legacy_Customer.CustomerID (INT) | Account.External_ID__c (TEXT) | Direct Map | Source Primary Key mapped to External ID field for future reference. |
| Legacy_Customer.FirstName (TEXT) | Account.FirstName (TEXT) | Direct Map | |
| Legacy_Customer.LastName (TEXT) | Account.LastName (TEXT) | Direct Map | |
| Legacy_Customer.AddrLine1 (TEXT) | Account.BillingStreet (TEXT) | Direct Map | Part of address concatenation. |
| Legacy_Customer.City (TEXT) | Account.BillingCity (TEXT) | Direct Map | |
| Legacy_Customer.StateCode (CHAR(2)) | Account.BillingState (TEXT) | Transformation | Requires lookup from StateCode to StateName (e.g., 'CA' -> 'California'). |
| Legacy_Customer.Zip (TEXT) | Account.BillingPostalCode (TEXT) | Direct Map | |
| Legacy_Customer.AccountStatusID (INT) | Account.Status__c (Picklist) | Transformation | Map AccountStatusID (1='Active', 2='Inactive', 3='Pending') to Picklist values ('Active', 'Inactive', 'Pending'). Default to 'Active' if null. |
| Legacy_Customer.CreatedDate (DATETIME) | Account.CreatedDate (DATETIME) | Direct Map | System field, but ensuring accurate historical date. |
| Legacy_Customer.LastOrderDate (DATETIME) | Account.Last_Order_Date__c (DATE) | Transformation | Date only, time component truncated. |
| Legacy_Customer.CreditLimit (DECIMAL) | Account.Credit_Limit__c (Currency) | Direct Map | |
| (New Field) | Account.OwnerId (Lookup) | Default Value | Set to specific default owner for migrated records. |

Key Considerations for Field Mapping:

  • Primary/Foreign Keys: Identify and map all primary and foreign key relationships to maintain referential integrity in the target system. External IDs will be used to link migrated records.
  • Data Type Conversion: Ensure compatibility between source and target data types.
  • New Fields: Identify target system fields that do not have a direct source equivalent and determine how they will be populated (e.g., default values, derived values, left null).
  • Deprecated Fields: Note source fields that are no longer relevant in the target system and will not be migrated.
  • Mandatory Fields: Ensure all mandatory target fields are populated, either directly mapped or via transformation rules.

4.2. Transformation Rules

Data transformation rules define how data will be manipulated during the migration process to fit the target system's requirements, data models, and business logic.

Common Transformation Categories & Examples:

  1. Data Type Conversion:

* Rule: Convert Legacy_Customer.CreditLimit (DECIMAL) to Account.Credit_Limit__c (Currency).

* Rule: Convert Legacy_Product.IsActive (BIT/BOOLEAN) to Product2.Active__c (Checkbox).

  2. Concatenation/Splitting:

* Rule: Concatenate Legacy_Customer.FirstName and Legacy_Customer.LastName into Account.Name if the target system requires a single name field.

* Rule: Split Legacy_Product.FullDescription into Product2.Short_Description__c (first 255 chars) and Product2.Long_Description__c (remainder).

  3. Lookup/Mapping Tables:

* Rule: Map Legacy_Customer.StateCode ('CA', 'NY') to Account.BillingState ('California', 'New York') using a predefined lookup table.

* Rule: Map Legacy_Order.StatusCode (1, 2, 3) to Order.Status ('New', 'Processing', 'Completed') using a cross-reference table.

  4. Default Value Assignment:

* Rule: If Legacy_Customer.AccountType is null, default Account.Type to 'Standard'.

* Rule: Assign a default Account.OwnerId to a specific user for all migrated accounts.

  5. Null Handling:

* Rule: If Legacy_Product.Weight is null, set Product2.Weight__c to 0.00.

* Rule: If Legacy_Customer.Email is null, leave Account.Email as null (if allowed by target system and business rules).

  6. Date/Time Manipulation:

* Rule: Truncate time component from Legacy_Order.OrderDate (DATETIME) to Order.Order_Date__c (DATE).

* Rule: Convert Legacy_Employee.HireDate (YYYYMMDD string) to Employee__c.Hire_Date__c (DATE).

  7. Aggregation/Derivation:

* Rule: Calculate Account.Total_Orders__c by counting related orders from Legacy_Order for each Legacy_Customer.

* Rule: Derive Product2.Full_SKU__c by combining Legacy_Product.BaseSKU and Legacy_Product.ColorCode.

  8. De-duplication Logic:

* Rule: Identify potential duplicate Legacy_Customer records based on FirstName, LastName, and Email combination. Implement a merge strategy (e.g., keep the most recently updated record, or combine specific fields).

  9. Data Cleansing:

* Rule: Remove leading/trailing spaces from all text fields.

* Rule: Standardize phone number formats (e.g., from '(XXX) YYY-ZZZZ' to 'XXXXXXXXXX').
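The de-duplication and cleansing rules above can be sketched in pandas; the column names mirror the rules as written, the data is invented, and "keep the most recently updated record" is the merge strategy chosen for illustration.

```python
# Sketch: trim whitespace, normalize phone numbers to digits only, then
# keep the most recently updated record per (FirstName, LastName, Email).
import pandas as pd

customers = pd.DataFrame({
    "FirstName": [" John", "John"],
    "LastName": ["Doe ", "Doe"],
    "Email": ["john@example.com", "john@example.com"],
    "Phone": ["(555) 123-4567", "5551234567"],
    "UpdatedAt": pd.to_datetime(["2023-01-01", "2023-06-01"]),
})

# Cleansing: strip leading/trailing spaces from text fields.
for col in ["FirstName", "LastName", "Email"]:
    customers[col] = customers[col].str.strip()

# Cleansing: keep digits only, e.g. '(555) 123-4567' -> '5551234567'.
customers["Phone"] = customers["Phone"].str.replace(r"\D", "", regex=True)

# De-duplication: sort so the newest record survives keep="last".
deduped = (customers.sort_values("UpdatedAt")
           .drop_duplicates(["FirstName", "LastName", "Email"], keep="last"))
print(len(deduped))
```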

Transformation Rule Documentation (Example):

| Source Field(s) | Target Field | Transformation Rule | Example |
| :---------------- | :----------- | :------------------ | :------ |
| Legacy_Customer.StateCode | Account.BillingState | Lookup: Use State_Lookup_Table to map 2-char code to full state name. If no match, default to 'Unknown'. | 'CA' -> 'California' |
| Legacy_Order.OrderAmount | Order.TotalAmount | Calculation: OrderAmount × ExchangeRate (if multi-currency, else direct map). Ensure precision to 2 decimal places. | 100.50 × 1.05 = 105.53 |
| Legacy_Product.Description | Product2.Description | Cleansing: Remove HTML tags. Truncate to 1000 characters if longer. | `<p>Item</p>` -> Item |

4.3. Validation Scripts

Validation scripts are crucial for ensuring the integrity, completeness, and accuracy of the migrated data. They will be executed at various stages: pre-migration (source data profiling), during migration (intermediate checks), and post-migration (target data verification).

Validation Phases:

  1. Pre-Migration Validation (Source Data Profiling):

* Purpose: Understand source data quality, identify anomalies, and inform transformation rules.

* Scripts:

* Record Counts: SELECT COUNT(*) FROM Legacy_Customer;

* Uniqueness Checks: SELECT CustomerID, COUNT(*) FROM Legacy_Customer GROUP BY CustomerID HAVING COUNT(*) > 1;

* Null Value Checks: SELECT COUNT(*) FROM Legacy_Customer WHERE FirstName IS NULL;

* Data Type Conformity: SELECT DISTINCT StateCode FROM Legacy_Customer WHERE LENGTH(StateCode) != 2;

* Referential Integrity: SELECT COUNT(*) FROM Legacy_Order LO LEFT JOIN Legacy_Customer LC ON LO.CustomerID = LC.CustomerID WHERE LC.CustomerID IS NULL;

* Range Checks: SELECT COUNT(*) FROM Legacy_Product WHERE UnitPrice < 0;

  2. Post-Migration Validation (Target Data Verification):

* Purpose: Confirm that data has been migrated correctly and adheres to target system rules.

* Scripts:

* Record Count Verification: Compare SELECT COUNT(*) FROM Account; in the target with the expected source count.

* Data Completeness: SELECT COUNT(*) FROM Account WHERE FirstName IS NULL; (should align with expectations after transformation).

* Data Accuracy (Sample-based): Select a random sample of 1-5% of records and manually verify mapped fields and transformed values against the source.

* Referential Integrity: SELECT COUNT(*) FROM Order WHERE AccountId IS NULL; (after linking).

* Business Rule Compliance:

SELECT COUNT(*) FROM Account WHERE Credit_Limit__c < 0;

SELECT COUNT(*) FROM Product2 WHERE Unit_Price__c <= 0;

SELECT COUNT(*) FROM Account WHERE Status__c NOT IN ('Active', 'Inactive', 'Pending');

* Uniqueness: SELECT External_ID__c, COUNT(*) FROM Account GROUP BY External_ID__c HAVING COUNT(*) > 1;

* Data Aggregation Checks: Compare aggregated sums/averages for key financial fields (e.g., total sales amount) between source and target for a subset of data.
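The aggregation check reduces to comparing two sets of pre-computed metrics. A minimal helper, assuming the sums and counts have already been collected from each side:

```python
def reconcile(source_stats, target_stats, tolerance=0.0):
    """Compare pre-computed aggregates (counts, sums, averages) between
    source and target; return (metric, source_value, target_value) tuples
    for every metric that disagrees beyond the tolerance."""
    mismatches = []
    for metric, src_val in source_stats.items():
        tgt_val = target_stats.get(metric)
        # A metric missing from the target counts as a mismatch.
        if tgt_val is None or abs(src_val - tgt_val) > tolerance:
            mismatches.append((metric, src_val, tgt_val))
    return mismatches
```

A small nonzero tolerance is useful for financial sums when currency conversion introduces expected rounding drift.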

Validation Reporting:

  • All validation script results will be logged and reviewed.
  • Discrepancies will be categorized (e.g., Critical, Major, Minor) and reported.
  • A formal validation report will be generated post-migration, detailing success rates and any unresolved issues.
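The Critical/Major/Minor categorization can be automated so the report is produced the same way on every run. The thresholds and check names below are assumptions for illustration, not values from the plan:

```python
def categorize(check_name, failure_count, total_records):
    """Assign a severity to one validation check's result."""
    if failure_count == 0:
        return "Pass"
    rate = failure_count / total_records
    # Broken keys or duplicates are never acceptable, regardless of rate.
    if check_name in ("referential_integrity", "uniqueness") or rate > 0.05:
        return "Critical"
    if rate > 0.01:
        return "Major"
    return "Minor"
```

Agreeing on these thresholds with stakeholders before the migration window is what makes the Go/No-Go decision mechanical rather than ad hoc.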

4.4. Rollback Procedures

A robust rollback plan is essential to mitigate risks and ensure business continuity in the event of unforeseen issues or failures during migration.

Rollback Trigger Conditions:

  • Significant data corruption or loss detected post-migration.
  • Critical system performance degradation in the target system directly attributable to migrated data.
  • Failure to meet predefined Go/No-Go criteria during the migration window.
  • Unacceptable number of errors or discrepancies in post-migration validation.
  • Unforeseen critical business process failures in the target system.

Rollback Strategy:

  1. Pre-Migration Backups:

* Source System Backup: A full, verified backup of the entire [Source System Name] database will be taken immediately before the migration window opens and retained until final sign-off.
