Data Migration Planner

Plan a complete data migration with field mapping, transformation rules, validation scripts, rollback procedures, and timeline estimates.

Study Plan: Mastering Data Migration Planning and Execution

This document outlines a comprehensive six-week study plan designed to equip professionals with the knowledge and practical skills required to plan, manage, and execute successful data migration projects. This plan integrates theoretical understanding with practical application, covering all critical phases from initial strategy to post-migration validation.


1. Purpose and Overview

This study plan serves as a structured guide for individuals or teams aiming to develop expertise in data migration. It breaks down the complex process of data migration into manageable learning modules, ensuring a holistic understanding of the methodologies, tools, and best practices involved. The goal is to enable participants to confidently architect, plan, and oversee data migration initiatives, minimizing risks and maximizing data integrity.

2. Overall Learning Objective

Upon successful completion of this study plan, participants will be able to:

  • Formulate a robust data migration strategy, including scope definition, risk assessment, and stakeholder management.
  • Perform comprehensive source and target system analysis, data profiling, and field mapping.
  • Design and implement data transformation and cleansing rules, ensuring data quality and consistency.
  • Develop effective data validation scripts and establish testing protocols.
  • Plan and execute data migration cutover strategies, including robust rollback procedures.
  • Select appropriate data migration tools and technologies.
  • Estimate project timelines and resources for data migration initiatives.
  • Identify and mitigate common data migration challenges and risks.

3. Target Audience

This study plan is ideal for:

  • IT Project Managers
  • Data Architects and Engineers
  • Business Analysts involved in system transitions
  • Database Administrators
  • System Integrators
  • Anyone responsible for planning or executing data-intensive system changes.

4. Weekly Schedule and Detailed Learning Objectives

Each week builds upon the previous, progressing through the data migration lifecycle. An estimated 8-12 hours of dedicated study per week is recommended.

Week 1: Foundations, Strategy & Scope Definition

  • Learning Objectives:

* Understand the different types and drivers of data migration (e.g., system upgrade, cloud migration, M&A).

* Define the key phases of a data migration project lifecycle.

* Learn to establish clear project scope, objectives, and success criteria.

* Identify and engage key stakeholders (business, IT, security, legal).

* Conduct initial risk assessment and develop mitigation strategies.

* Understand the importance of data governance and compliance in migration.

  • Key Activities:

* Review case studies of successful and failed migrations.

* Practice drafting a project charter for a hypothetical migration scenario.

* Participate in a discussion on stakeholder analysis.

Week 2: Source & Target System Analysis, Data Profiling & Mapping

  • Learning Objectives:

* Master techniques for analyzing source system data models, schemas, and data types.

* Understand how to analyze target system requirements and data models.

* Perform comprehensive data profiling to identify data quality issues (duplicates, missing values, inconsistencies).

* Develop detailed field-level data mapping documents between source and target systems.

* Identify primary keys, foreign keys, and unique constraints in both systems.

* Understand the impact of data volume and velocity on migration planning.

  • Key Activities:

* Hands-on exercise: Data profiling using a sample dataset (e.g., Excel, SQL queries); a minimal profiling sketch follows this list.

* Create a data mapping document for a small set of tables.

* Discuss challenges in mapping complex data structures.
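
For the hands-on profiling exercise, a minimal sketch in Python with pandas; the file name and column names are illustrative assumptions rather than part of any specific dataset:

```python
# profile_sample.py - quick profiling of a sample extract (illustrative)
import pandas as pd

df = pd.read_csv("customers_sample.csv")  # hypothetical sample extract

print("Row count:", len(df))

print("\nMissing values per column:")
print(df.isna().sum())

print("\nFully duplicated rows:", df.duplicated().sum())
if "email" in df.columns:
    print("Duplicate email values:", df["email"].duplicated().sum())

print("\nColumn types and distinct counts:")
for col in df.columns:
    print(f"{col}: dtype={df[col].dtype}, distinct={df[col].nunique()}")
```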

Week 3: Data Transformation, Cleansing & Validation Rules

  • Learning Objectives:

* Design data transformation rules (e.g., data type conversion, aggregation, concatenation, splitting).

* Develop strategies for data cleansing and enrichment.

* Define business rules and logic for data transformation.

* Understand the importance of referential integrity and how to maintain it during migration.

* Learn to write data validation rules and criteria for post-transformation checks.

* Explore techniques for handling historical data and archiving.

  • Key Activities:

* Practice defining transformation rules for specific data issues identified in Week 2.

* Develop a set of data validation checks in pseudo-code or SQL (a sample set of checks follows this list).

* Case study review: Complex data transformation scenarios.
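
To make the validation-check activity concrete, a minimal sketch in Python with pandas; the column names are assumptions, and equivalent checks can be written as SQL against a staging table:

```python
# validation_checks.py - example post-transformation checks (illustrative)
import re
import pandas as pd

def run_validation(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable validation failures (empty list means pass)."""
    failures = []

    # Completeness: mandatory fields must not be null
    for col in ("id", "email", "last_name"):
        nulls = int(df[col].isna().sum())
        if nulls:
            failures.append(f"{nulls} rows have a NULL {col}")

    # Uniqueness: the primary key must be unique
    dupes = int(df["id"].duplicated().sum())
    if dupes:
        failures.append(f"{dupes} duplicate id values")

    # Format: basic email pattern check
    bad = int(df["email"].dropna().apply(
        lambda e: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", str(e)) is None
    ).sum())
    if bad:
        failures.append(f"{bad} malformed email addresses")

    return failures
```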

Week 4: Migration Techniques, Tools & Testing Strategy

  • Learning Objectives:

* Evaluate different data migration approaches (e.g., big bang, phased, trickle migration).

* Understand the capabilities of various ETL (Extract, Transform, Load) tools and scripting languages (e.g., Python, SQL).

* Develop a comprehensive data migration testing strategy (unit testing, integration testing, user acceptance testing).

* Design test cases and create sample test data.

* Learn to perform mock migrations and dry runs.

* Understand performance considerations and optimization techniques for migration.

  • Key Activities:

* Research and compare 2-3 popular ETL tools.

* Outline a detailed testing plan for a specific data migration scenario.

* Simulate a small data migration using a scripting language or a simple ETL tool, as sketched below.
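
A minimal end-to-end simulation using SQLite and pandas; the database files, table names, and column names are assumptions made up for the exercise:

```python
# mini_migration.py - simulate a tiny extract-transform-load run (illustrative)
import sqlite3
import pandas as pd

source = sqlite3.connect("source.db")  # hypothetical legacy database
target = sqlite3.connect("target.db")  # hypothetical new database

# Extract
df = pd.read_sql_query("SELECT CustID, FName, LName, Email FROM tblCustomers", source)

# Transform: rename columns and normalise email addresses
df = df.rename(columns={"CustID": "id", "FName": "first_name",
                        "LName": "last_name", "Email": "email"})
df["email"] = df["email"].str.strip().str.lower()

# Load into the target schema
df.to_sql("customers", target, if_exists="append", index=False)
print(f"Loaded {len(df)} rows into target table 'customers'")
```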

Week 5: Cutover Planning, Rollback & Post-Migration

  • Learning Objectives:

* Plan the data migration cutover strategy, including downtime considerations and communication plans.

* Develop robust rollback procedures and contingency plans.

* Define post-migration data verification and reconciliation processes.

* Establish monitoring strategies for data integrity post-migration.

* Understand the importance of user training and change management.

* Learn about data archiving and decommissioning of legacy systems.

  • Key Activities:

* Draft a cutover checklist and a rollback plan for a critical system.

* Discuss best practices for post-migration data reconciliation (a reconciliation sketch follows this list).

* Review communication templates for migration events.
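
A minimal reconciliation sketch comparing record counts and financial totals between source and target; the connection details, table names, and column names are assumptions:

```python
# reconcile.py - post-migration reconciliation checks (illustrative)
import sqlite3
import pandas as pd

source = sqlite3.connect("source.db")  # hypothetical
target = sqlite3.connect("target.db")  # hypothetical

def scalar(conn, sql):
    """Run a single-value query and return the result."""
    return pd.read_sql_query(sql, conn).iloc[0, 0]

# 1. Row counts must match for each migrated entity
src_rows = scalar(source, "SELECT COUNT(*) FROM tblOrders")
tgt_rows = scalar(target, "SELECT COUNT(*) FROM orders")
print("Order counts:", src_rows, "vs", tgt_rows, "OK" if src_rows == tgt_rows else "MISMATCH")

# 2. Financial totals must match to the cent
src_total = scalar(source, "SELECT ROUND(SUM(OrderTotal), 2) FROM tblOrders")
tgt_total = scalar(target, "SELECT ROUND(SUM(total_amount), 2) FROM orders")
print("Order totals:", src_total, "vs", tgt_total, "OK" if src_total == tgt_total else "MISMATCH")
```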

Week 6: Project Management, Best Practices & Review

  • Learning Objectives:

* Integrate data migration planning into overall project management methodologies (Agile, Waterfall).

* Develop realistic timeline estimates and resource allocation plans.

* Understand the role of documentation throughout the migration process.

* Identify common pitfalls and learn how to avoid them.

* Review industry best practices and emerging trends in data migration.

* Consolidate knowledge and prepare for practical application.

  • Key Activities:

* Develop a high-level project plan and timeline for a full data migration (a rough window estimate is sketched after this list).

* Participate in a comprehensive Q&A and knowledge sharing session.

* Reflect on personal learning and identify areas for further development.
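
A rough cutover-window estimate can be derived from measured throughput; all numbers below are illustrative assumptions chosen to show the arithmetic, not project figures:

```python
# estimate_window.py - back-of-the-envelope cutover window (illustrative numbers)
data_gb = 500                  # total in-scope volume (assumed)
throughput_gb_per_hour = 25    # measured during dry runs (assumed)
validation_hours = 4           # reconciliation and spot checks (assumed)
contingency = 1.5              # buffer for retries and surprises

load_hours = data_gb / throughput_gb_per_hour              # 500 / 25 = 20 hours
window_hours = (load_hours + validation_hours) * contingency
print(f"Estimated cutover window: {window_hours:.1f} hours")  # (20 + 4) * 1.5 = 36.0 hours
```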


5. Recommended Resources

  • Books:

* "The DAMA Guide to the Data Management Body of Knowledge (DMBOK2)" by DAMA International (Chapters on Data Migration, Data Quality, Data Architecture).

* "Data Migration Illustrated: Achieving Agility and Minimizing Risk" by Christian L. Thaler.

* "Star Schema The Complete Reference" by Christopher Adamson (for understanding target data models).

  • Online Courses/Platforms:

* Coursera/edX: Courses on Data Engineering, ETL Fundamentals, Database Design.

* LinkedIn Learning: Courses on SQL, Python for Data, Data Warehousing.

* Specific vendor training (e.g., AWS Database Migration Service, Azure Data Factory, Google Cloud Dataflow, Informatica, Talend).

  • Articles & Whitepapers:

* Gartner, Forrester, and industry analyst reports on data migration trends and tools.

* Vendor documentation and best practice guides for specific migration tools.

* Blogs and articles from reputable data management consultancies.

  • Tools (for practical exercises):

* Databases: PostgreSQL, MySQL, SQL Server Express (free versions available).

* ETL Tools: Apache NiFi, Talend Open Studio, Pentaho Data Integration (Kettle), Python with Pandas/SQLAlchemy.

* Data Profiling: SQL queries, Excel, specialized data quality tools (trial versions).

* Documentation: Confluence, Microsoft Word/Excel, Lucidchart (for data flow diagrams).


6. Milestones

  • End of Week 2: Completed draft of Data Mapping Document for a sample scenario.
  • End of Week 3: Defined set of Data Transformation and Validation Rules for a sample dataset.
  • End of Week 4: High-level Data Migration Testing Strategy document.
  • End of Week 5: Draft Cutover Plan and Rollback Procedure outline.
  • End of Week 6: Comprehensive Data Migration Plan (incorporating all elements learned) for a hypothetical project.

7. Assessment Strategies

  • Weekly Quizzes/Self-Assessments: Short quizzes to reinforce concepts learned each week.
  • Practical Exercises & Deliverables: Submission of mapping documents, transformation rules, and test plans.
  • Case Study Analysis: Applying learned principles to analyze real-world data migration scenarios and propose solutions.
  • Final Project/Presentation: A comprehensive data migration plan for a hypothetical business case, presented to peers or a mentor, demonstrating mastery of all key areas.
  • Peer Review: Constructive feedback on deliverables from fellow learners to foster collaborative learning.
  • Mentor/Expert Review: Regular check-ins with an experienced data migration professional for guidance and feedback.

8. Next Steps

Upon completion of this study plan, participants are encouraged to:

  • Seek opportunities to apply their knowledge in real-world data migration projects.
  • Explore advanced topics such as cloud-native data migration patterns, real-time data integration, and specific industry compliance requirements.
  • Consider professional certifications in data management or specific cloud platforms.
  • Continuously stay updated with new tools, technologies, and best practices in the rapidly evolving field of data management.

Data Migration Planner: Comprehensive Strategy & Execution Plan

Project: [Insert Project Name]

Date: [Current Date]

Version: 1.0

Prepared For: [Customer Name]


1. Introduction & Scope

This document outlines a comprehensive plan for the data migration from the [Source System Name] to the [Target System Name]. The primary objective is to ensure a secure, accurate, and efficient transfer of critical business data with minimal downtime and no loss of data integrity. This plan covers all phases of the migration, from initial planning and data mapping to execution, validation, and rollback procedures.

Key Objectives:

  • Migrate all in-scope data accurately and completely.
  • Ensure data integrity and consistency in the target system.
  • Minimize business disruption during the migration window.
  • Provide robust validation and rollback mechanisms.
  • Establish clear communication and responsibilities throughout the process.

Out of Scope (Example):

  • Migration of historical archived data not actively used.
  • Development of new features in the target system.
  • User acceptance testing (UAT) planning (assumed to be a separate activity, though data readiness for UAT is in scope).

2. Source & Target Systems Overview

2.1. Source System Details

  • System Name: [e.g., Legacy CRM System, Old ERP Database]
  • Database Type: [e.g., MySQL 5.7, Oracle 12c, SQL Server 2016]
  • Key Tables/Entities: Customers, Orders, Products, Users, etc.
  • Estimated Data Volume: [e.g., 500 GB, 10 Million Records]
  • Connectivity: ODBC/JDBC, API access, direct database access.

2.2. Target System Details

  • System Name: [e.g., New SaaS CRM, PostgreSQL Data Warehouse]
  • Database Type: [e.g., PostgreSQL 14, SQL Server 2022, MongoDB 5.0]
  • Key Tables/Entities: customers, orders, products, users. (Note: Target names may differ, hence mapping is crucial).
  • Connectivity: Direct database access, ORM, API.

3. Data Inventory & Scope

The following data sets/entities are in scope for migration:

  • Customer Information: Names, addresses, contact details, account history.
  • Order History: Order details, line items, pricing, status.
  • Product Catalog: Product IDs, descriptions, pricing, inventory levels.
  • User Accounts: Usernames, roles, permissions (excluding sensitive passwords, which will be reset or re-created).
  • [Add other relevant data entities]

Data Volume Estimates (Example):

| Data Entity | Source Table Name | Target Table Name | Estimated Records (Source) | Estimated Size (Source) |
| :---------- | :---------------- | :---------------- | :------------------------- | :---------------------- |
| Customers   | tblCustomers      | customers         | 5,000,000                  | 10 GB                    |
| Orders      | tblOrders         | orders            | 15,000,000                 | 30 GB                    |
| Order Items | tblOrderItems     | order_items       | 50,000,000                 | 80 GB                    |
| Products    | tblProducts       | products          | 100,000                    | 1 GB                     |
| Total       |                   |                   | ~70,100,000                | ~121 GB                  |


4. Data Migration Strategy

The migration will follow an Extract, Transform, Load (ETL) approach.

  1. Extract: Data will be extracted from the source system using [e.g., SQL queries, custom scripts, an ETL tool like Apache NiFi/Talend].
  2. Transform: Extracted data will be processed according to defined transformation rules, including data type conversions, data cleansing, normalization, and enrichment. This will occur in a staging area.
  3. Load: Transformed data will be loaded into the target system using [e.g., bulk insert commands, ORM, target system APIs].

Migration Approach:

  • Phased Migration: Critical data will be migrated first, followed by less critical or historical data.
  • Incremental Migration (Optional for long-running projects): If the cutover window is tight, a bulk initial load followed by delta synchronization might be considered. For this plan, we assume a single full-data cutover.
  • Staging Area: A temporary database or file system will be used as a staging area to hold extracted and transformed data before loading, facilitating easier debugging and validation.
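
As an illustration of the extract step into the staging area, a minimal sketch using pandas and SQLAlchemy; the connection strings, driver choices, and table list are placeholders, not confirmed project details:

```python
# extract_to_staging.py - pull source tables into the staging database (illustrative)
import pandas as pd
from sqlalchemy import create_engine

source_engine = create_engine("mysql+pymysql://user:pass@source-host/legacy_db")        # placeholder DSN
staging_engine = create_engine("postgresql+psycopg2://user:pass@staging-host/staging")  # placeholder DSN

TABLES = ["tblCustomers", "tblOrders", "tblOrderItems", "tblProducts"]

for table in TABLES:
    # Chunked reads keep memory bounded for large tables; staging tables are assumed empty
    for chunk in pd.read_sql_table(table, source_engine, chunksize=50_000):
        chunk.to_sql(table.lower(), staging_engine, if_exists="append", index=False)
    print(f"Staged {table}")
```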

5. Detailed Migration Plan Components

5.1. Field Mapping Document (Example)

A comprehensive field mapping document will be maintained, detailing every source field to its corresponding target field, including data types and any specific notes.

Example: Customers Table Mapping

| Source Table.Field | Source Data Type | Target Table.Field | Target Data Type | Transformation Rule | Notes |
| :----------------- | :--------------- | :----------------- | :--------------- | :------------------ | :---- |
| tblCustomers.CustID | INT | customers.id | UUID | Generate UUID from CustID (see transform rule 5.2.1) | Primary Key. Ensures uniqueness across systems. |
| tblCustomers.FName | VARCHAR(50) | customers.first_name | VARCHAR(100) | Direct Map | Target allows longer string. |
| tblCustomers.LName | VARCHAR(50) | customers.last_name | VARCHAR(100) | Direct Map | |
| tblCustomers.Email | VARCHAR(100) | customers.email | VARCHAR(255) | Direct Map, Validate Format (see validation rule 5.3.1) | |
| tblCustomers.Address1 | VARCHAR(100) | customers.address_line1 | VARCHAR(255) | Direct Map | |
| tblCustomers.City | VARCHAR(50) | customers.city | VARCHAR(100) | Direct Map | |
| tblCustomers.ZIP | VARCHAR(10) | customers.postal_code | VARCHAR(20) | Cleanse (remove hyphens, spaces) (see transform rule 5.2.2) | Standardize postal code format. |
| tblCustomers.Active | BIT | customers.is_active | BOOLEAN | Convert BIT to BOOLEAN (0 -> false, 1 -> true) | |
| tblCustomers.RegDate | DATETIME | customers.created_at | TIMESTAMP WITH TIME ZONE | Convert to UTC, default to NOW() if NULL | Ensure consistent timezone. |
| tblCustomers.LastUpd | DATETIME | customers.updated_at | TIMESTAMP WITH TIME ZONE | Convert to UTC, default to NOW() if NULL | |
| tblCustomers.Status | INT | customers.status | VARCHAR(50) | Map Status INT to ENUM string (see transform rule 5.2.3) | 1 -> 'Active', 2 -> 'Inactive', 3 -> 'Pending' |
| tblCustomers.Notes | TEXT | (DROP) | - | Not Migrated | Data deemed irrelevant or to be manually re-entered post-migration. |

5.2. Data Transformation Rules (Code Examples)

Transformation rules will be implemented in a scripting language (e.g., Python) or directly via SQL in the staging environment. These scripts will be modular, testable, and well-commented.

Environment: Python 3.x with pandas for data manipulation, psycopg2 or pymysql for database interaction.


```python
# data_migration/transformation_rules.py

import uuid
import re
from datetime import datetime
import pandas as pd

def transform_customer_data(df: pd.DataFrame) -> pd.DataFrame:
    """
    Applies a set of transformation rules to a DataFrame of customer data.

    Args:
        df (pd.DataFrame): DataFrame containing raw customer data from the source.

    Returns:
        pd.DataFrame: Transformed DataFrame ready for loading into the target.
    """
    print("Applying customer data transformations...")

    # Rule 5.2.1: Generate UUID for 'id' from 'CustID'
    # For simplicity, we'll generate new UUIDs. If preserving a link to original ID is needed,
    # a deterministic UUID generation (e.g., using UUID5 with a namespace) could be used.
    df['id'] = [str(uuid.uuid4()) for _ in range(len(df))]
    # Alternatively, if we need to map source ID to a UUID:
    # df['id'] = df['CustID'].apply(lambda x: str(uuid.uuid5(uuid.NAMESPACE_DNS, str(x))))

    # Rule: Direct Mapping (renaming columns)
    df = df.rename(columns={
        'FName': 'first_name',
        'LName': 'last_name',
        'Email': 'email',
        'Address1': 'address_line1',
        'City': 'city',
        'Active': 'is_active',
        'RegDate': 'created_at',
        'LastUpd': 'updated_at'
    })

    # Rule 5.2.2: Cleanse 'ZIP' to 'postal_code' (remove hyphens, spaces)
    if 'ZIP' in df.columns:
        # Check for NULL before string conversion, otherwise NULL ZIPs become the literal string 'None'
        df['postal_code'] = df['ZIP'].apply(lambda x: re.sub(r'[^a-zA-Z0-9]', '', str(x)).upper() if pd.notna(x) else None)
        df = df.drop(columns=['ZIP']) # Drop original ZIP column

    # Rule 5.2.3: Map 'Status' INT to 'status' VARCHAR
    status_mapping = {
        1: 'Active',
        2: 'Inactive',
        3: 'Pending',
        4: 'Blocked'
    }
    if 'Status' in df.columns:
        df['status'] = df['Status'].map(status_mapping).fillna('Unknown') # Handle unmapped statuses
        df = df.drop(columns=['Status']) # Drop original Status column

    # Rule: Convert BIT to BOOLEAN for 'is_active'
    if 'is_active' in df.columns:
        df['is_active'] = df['is_active'].astype(bool)

    # Rule: Convert DATETIME to TIMESTAMP WITH TIME ZONE (UTC)
    # Handle potential NaT values from source
    for col in ['created_at', 'updated_at']:
        if col in df.columns:
            df[col] = pd.to_datetime(df[col], errors='coerce').dt.tz_localize('UTC') # Assume source is UTC or needs conversion
            # Default to current UTC time if original is NULL/NaT
            df[col] = df[col].fillna(pd.Timestamp.now(tz='UTC'))  # avoids an extra pytz dependency

    # Rule: Drop 'Notes' column as per mapping
    if 'Notes' in df.columns:
        df = df.drop(columns=['Notes'])

    # Ensure all target columns exist, adding None if missing and expected
    expected_target_columns = [
        'id', 'first_name', 'last_name', 'email', 'address_line1', 'city',
        'postal_code', 'is_active', 'created_at', 'updated_at', 'status'
    ]
    for col in expected_target_columns:
        if col not in df.columns:
            df[col] = None

    # Reorder columns to match target table schema
    df = df[expected_target_columns]

    print(f"Transformed {len(df)} customer records.")
    return df

# Example of how to use this function (in a main migration script)
if __name__ == "__main__":
    # Simulate loading data from source
    source_data = {
        'CustID': [101, 102, 103, 104],
        'FName': ['John', 'Jane', 'Peter', 'Alice'],
        'LName': ['Doe', 'Smith', 'Jones', 'Brown'],
        'Email': ['john.doe@example.com', 'jane.smith@example.com', 'peter.jones@example.com', 'alice.brown@example.com'],
        'Address1': ['123 Main St', '456 Oak Ave', '789 Pine Ln', '101 Elm Rd'],
        'City': ['Anytown', 'Otherville', 'Anytown', 'Otherville'],
        'ZIP': ['12345-6789', '98765', '12345', None],
        'Active': [1, 0, 1, 1],
        'RegDate': ['2020-01-15 10:00:00', '2021-03-20 11:30:00', None, '2019-07-01 09:00:00'],
        'LastUpd': ['2023-10-26 14:00:00', '2023-10-25 10:00:00', '2023-10-27 08:00:00', '2023-10-24 16:00:00'],
        'Status': [1, 2, 1, 5],  # 5 has no mapping and will fall back to 'Unknown'
        'Notes': ['VIP customer', None, 'Prefers email contact', None]
    }
    source_df = pd.DataFrame(source_data)
    transformed_df = transform_customer_data(source_df)
    print(transformed_df.to_string(index=False))
```
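
To complete the ETL flow from Section 4, the load step can push the transformed DataFrame into the target database. A minimal sketch, assuming a PostgreSQL target reachable through SQLAlchemy; the DSN and table name are placeholders:

```python
# data_migration/load_customers.py - load transformed data into the target (illustrative)
import pandas as pd
from sqlalchemy import create_engine

from data_migration.transformation_rules import transform_customer_data  # module sketched above

target_engine = create_engine("postgresql+psycopg2://user:pass@target-host/target_db")  # placeholder DSN

def load_customers(raw_df: pd.DataFrame) -> int:
    """Transform and bulk-load customer records; returns the number of rows loaded."""
    df = transform_customer_data(raw_df)
    # method="multi" batches rows into multi-value INSERT statements
    df.to_sql("customers", target_engine, if_exists="append", index=False,
              method="multi", chunksize=10_000)
    return len(df)
```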

Data Migration Plan: From [Source System Name] to [Target System Name]

Document Version: 1.0

Date: October 26, 2023

Prepared For: [Customer Name]

Prepared By: PantheraHive Solutions


1. Executive Summary

This document outlines a comprehensive plan for the data migration from the existing [Source System Name] to the new [Target System Name]. The primary objective of this migration is to ensure a secure, accurate, and complete transfer of critical business data, enabling the successful launch and operation of the [Target System Name] while minimizing business disruption. This plan details the strategy, field mapping, data transformation rules, validation procedures, rollback mechanisms, and a high-level timeline to guide the migration process.


2. Introduction

2.1. Project Goals

  • To successfully migrate all in-scope data from [Source System Name] to [Target System Name].
  • To ensure data integrity, accuracy, and completeness throughout the migration process.
  • To minimize downtime and impact on business operations during the cutover.
  • To establish a robust, repeatable, and auditable migration process.
  • To provide a foundation for future data management and analytics within the [Target System Name].

2.2. Project Scope

This migration plan specifically covers the transfer of the following data entities:

  • [Specify Data Entity 1, e.g., Customer Master Data]
  • [Specify Data Entity 2, e.g., Product Catalog]
  • [Specify Data Entity 3, e.g., Historical Order Data (last 5 years)]
  • [Specify Data Entity 4, e.g., Employee Records (active only)]
  • Further entities to be detailed in subsequent planning phases.

Out-of-scope data includes [e.g., archived data older than 5 years, transactional logs, specific legacy reports not required in the new system].

2.3. Source and Target Systems

  • Source System:

* Name: [e.g., Legacy CRM System, SAP ECC, Custom Database Application]

* Version: [e.g., v3.2, SAP ECC 6.0]

* Key Data Stores: [e.g., SQL Server Database, Oracle Database, Flat Files]

* Connectivity: [e.g., ODBC, JDBC, REST API]

  • Target System:

* Name: [e.g., Salesforce Sales Cloud, SAP S/4HANA, Custom Cloud Application]

* Version: [e.g., Spring '24 Release, S/4HANA 2023]

* Key Data Stores: [e.g., Cloud Database (e.g., Snowflake, AWS RDS), Internal APIs]

* Connectivity: [e.g., REST API, Bulk API, JDBC]


3. Data Migration Strategy

We propose a Phased Migration Strategy to mitigate risks, allow for thorough testing, and minimize business disruption. This approach involves migrating data in logical batches rather than a single "big bang" event.

3.1. Key Phases

  1. Planning & Discovery: Detailed analysis of source data, target schema, business rules, and mapping requirements. Development of migration strategy and documentation.
  2. Development & Testing: Design and implementation of ETL (Extract, Transform, Load) scripts, data validation routines, and rollback procedures. Comprehensive unit, integration, and user acceptance testing (UAT) with test data.
  3. Pre-Migration Activities: Data cleansing in the source system, finalization of source system freeze procedures, and preparation of production environments.
  4. Migration Execution (Dry Runs & Production Cutover): Multiple dry runs to refine the process, measure performance, and identify issues. Final production cutover during a scheduled maintenance window.
  5. Post-Migration Validation & Go-Live: Intensive validation of migrated data in the target system, user acceptance, and official system go-live.

4. Data Analysis & Profiling Summary

Prior to this plan, comprehensive data profiling was conducted on the [Source System Name] data. This analysis revealed:

  • Data Quality Issues: [e.g., Inconsistent address formats, duplicate customer records, missing mandatory fields in ~5% of records].
  • Data Volume: [e.g., ~1.5 million customer records, 500k product SKUs, 10 million historical orders].
  • Referential Integrity: [e.g., Some orphan records identified where child records exist without parent records].
  • Key Findings: [e.g., The Customer_ID field in the source system is not consistently unique and requires a new unique identifier to be generated during migration].

These findings have been incorporated into the transformation rules and validation scripts outlined below.
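
For reference, the checks behind these findings can be reproduced with a short script. A minimal sketch in Python with pandas, assuming the customer and order extracts are available as flat files; the file and column names are placeholders:

```python
# profiling_checks.py - reproduce the key profiling findings (illustrative)
import pandas as pd

customers = pd.read_csv("customers_extract.csv")  # hypothetical extract
orders = pd.read_csv("orders_extract.csv")        # hypothetical extract

# Non-unique business keys: Customer_ID values that appear more than once
dupe_keys = customers.loc[customers["Customer_ID"].duplicated(keep=False), "Customer_ID"]
print(f"{dupe_keys.nunique()} Customer_ID values are not unique")

# Orphan child records: orders whose Customer_ID has no matching customer
merged = orders.merge(customers[["Customer_ID"]].drop_duplicates(),
                      on="Customer_ID", how="left", indicator=True)
orphans = merged[merged["_merge"] == "left_only"]
print(f"{len(orphans)} orders reference a missing customer")
```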


5. Detailed Migration Plan Components

5.1. Field Mapping

The Field Mapping document serves as the definitive guide for how each piece of data from the source system will correspond to the target system. This will be maintained in a detailed spreadsheet format. Below is an illustrative example for a subset of 'Customer' data:

| Source System Table | Source Field Name | Source Data Type | Source Nullable | Target System Table | Target Field Name | Target Data Type | Target Nullable | Transformation Rule ID | Notes/Comments |
| :------------------ | :---------------- | :--------------- | :-------------- | :------------------ | :---------------- | :--------------- | :-------------- | :--------------------- | :------------- |
| Customers | CustomerID | INT | No | Contact | ExternalID__c | VARCHAR(50) | No | TRN-001 | Map directly. Will be used as an external lookup key in the target system. |
| Customers | FirstName | VARCHAR(50) | No | Contact | FirstName | VARCHAR(80) | No | N/A | Direct map. |
| Customers | LastName | VARCHAR(50) | No | Contact | LastName | VARCHAR(80) | No | N/A | Direct map. |
| Customers | AddrLine1 | VARCHAR(100) | Yes | Contact | Street | VARCHAR(255) | Yes | TRN-002 | Concatenate AddrLine1, AddrLine2 (if present) to Street. |
| Customers | City | VARCHAR(50) | Yes | Contact | City | VARCHAR(40) | Yes | N/A | Direct map. |
| Customers | StateCode | CHAR(2) | Yes | Contact | State | VARCHAR(80) | Yes | TRN-003 | Lookup StateCode to full State Name using an external mapping table. Default to 'N/A' if not found. |
| Customers | Zip | VARCHAR(10) | Yes | Contact | PostalCode | VARCHAR(20) | Yes | N/A | Direct map. |
| Customers | CreationDate | DATETIME | No | Contact | CreatedDate | DATETIME | No | TRN-004 | Convert to UTC timezone. |
| Customers | Status | VARCHAR(20) | No | Contact | AccountStatus | PICKLIST | No | TRN-005 | Map source values ('Active', 'Inactive', 'Pending') to target picklist values ('Active', 'Inactive', 'Provisioning'). Default to 'Inactive' for any unmapped value. |
| Orders | OrderTotal | DECIMAL(10,2) | No | Order__c | TotalAmount__c | CURRENCY(18,2) | No | N/A | Direct map. |

5.2. Transformation Rules

Transformation rules define how data is manipulated during the migration process to meet the target system's requirements and improve data quality.

  • TRN-001: Unique External ID Generation
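
A minimal sketch of how TRN-001 might be implemented, assuming a deterministic identifier derived from the source CustomerID plus a per-key sequence so that re-runs, and duplicate source keys, still yield stable and unique values; the namespace and ID format are assumptions:

```python
# trn_001_external_id.py - illustrative sketch of TRN-001 (assumptions noted in comments)
import uuid
import pandas as pd

# Hypothetical namespace for this migration; any fixed UUID works, but it must not change between runs
MIGRATION_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "migration.example.com")

def add_external_id(customers: pd.DataFrame) -> pd.DataFrame:
    """Derive a stable ExternalID__c from CustomerID plus a per-key sequence number.

    Because CustomerID is not consistently unique in the source (see Section 4),
    a cumulative counter per key is appended before hashing, so duplicate keys
    still receive distinct, reproducible identifiers.
    """
    out = customers.copy()
    seq = out.groupby("CustomerID").cumcount()
    out["ExternalID__c"] = [
        str(uuid.uuid5(MIGRATION_NAMESPACE, f"{cid}-{n}"))
        for cid, n in zip(out["CustomerID"], seq)
    ]
    return out
```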
