Data Migration Planner
Run ID: 69caea3cc8ebe3066ba6f69a (2026-03-30, Development)

As a professional AI assistant within PantheraHive, I am executing Step 1 of 3 for the "Data Migration Planner" workflow. This output outlines the architectural and high-level planning required for a comprehensive data migration, setting the foundation for subsequent detailed design and implementation steps.


Data Migration Planner - Step 1: Architecture Planning

1. Executive Summary

This document outlines the architectural plan and high-level strategy for the upcoming data migration project. It defines the scope, objectives, proposed approach, key architectural components, and critical considerations for ensuring a successful, secure, and compliant data transfer from source system(s) to target system(s). This foundational plan will guide subsequent detailed design, development, and execution phases.

2. Project Context and Objectives

2.1. Business Drivers

2.2. High-Level Migration Goals

2.3. Scope Definition

3. Migration Strategy and Approach

3.1. Migration Methodology

Option A: Phased Migration

* Rationale: Reduces risk, allows for iterative testing and validation, minimizes impact on business operations, and provides opportunities for lessons learned.

* Approach: Data will be migrated in logical batches (e.g., by business unit, data domain, or historical period). Each phase will involve its own extraction, transformation, loading, and validation cycle.

* Considerations: Requires robust synchronization strategies if source and target systems run concurrently during phases.

Option B: Big Bang Migration

* Rationale: Simpler to manage data consistency if downtime can be fully accommodated.

* Approach: All in-scope data is migrated simultaneously over a single, extended downtime window.

* Considerations: Higher risk due to a single point of failure; requires extensive pre-migration testing and a highly coordinated cutover.

3.2. Tooling Strategy

* ETL/ELT Platform: Provides robust capabilities for data extraction, transformation, loading, error handling, and scheduling.

* Data Profiling Tools: Essential for understanding source data quality and defining cleansing rules.

* Version Control System: For managing all migration scripts, mappings, and configuration files.

* Monitoring and Dashboards: For real-time visibility into migration progress and immediate error detection.

3.3. High-Level Data Flow Architecture

+----------------+       +-------------------+       +---------------------+       +-----------------+
| Source Systems | ----> | Data Extraction   | ----> | Data Staging Area   | ----> | Data Cleansing  |
| (DBs, APIs,    |       | (APIs, SQL, File  |       | (Temporary DB/Data  |       | & Transformation|
| Files)         |       | Transfers)        |       | Lake)               |       | Engine          |
+----------------+       +-------------------+       +---------------------+       +-----------------+
        ^                                                                                   |
        |                                                                                   V
        |                                                                         +-----------------+
        |                                                                         | Data Validation |
        |                                                                         | (Pre-Load)      |
        |                                                                         +-----------------+
        |                                                                                   |
        |                                                                                   V
        |                                                                         +-----------------+
        |                                                                         | Target Data Load|
        |                                                                         | (APIs, Bulk     |
        |                                                                         | Inserts)        |
        +-------------------------------------------------------------------------------------------+
                                                                                              |
                                                                                              V
                                                                                    +-----------------+
                                                                                    | Target Systems  |
                                                                                    | (New CRM, ERP,  |
                                                                                    | DWH)            |
                                                                                    +-----------------+
                                                                                              |
                                                                                              V
                                                                                    +-----------------+
                                                                                    | Data Validation |
                                                                                    | (Post-Load)     |
                                                                                    +-----------------+

4. Key Architectural Components

4.1. Data Extraction Layer

  • Methods:

* Database Direct Connect: For relational databases (e.g., JDBC/ODBC connections).

* API Integration: For SaaS platforms or systems with robust APIs.

* File Exports: For legacy systems or specific flat file sources (CSV, XML, JSON).

* Database Snapshots/Dumps: For large datasets or complex legacy structures.

  • Considerations:

* Impact on source system performance during extraction.

* Network bandwidth and security for data transfer.

* Change Data Capture (CDC) mechanisms for phased migrations.

4.2. Data Staging Area

  • Purpose: A temporary, intermediate storage location for extracted data before transformation.
  • Technology: [e.g., "Dedicated relational database (PostgreSQL/SQL Server)", "Cloud storage bucket (S3/Azure Blob) with a compute layer (Spark/Databricks)", "Data Lake (HDFS/ADLS)"].
  • Benefits:

* Decouples extraction from transformation and loading.

* Provides a safe environment for data profiling and initial cleansing.

* Allows for replayability of transformation steps without re-extracting.

* Acts as a recovery point.

4.3. Data Transformation Engine

  • Purpose: Apply business rules, data cleansing, standardization, enrichment, and field mapping.
  • Technology: The chosen ETL/ELT platform.
  • Key Activities:

* Data Profiling: In-depth analysis of source data characteristics (data types, distributions, nulls, uniqueness).

* Data Cleansing: Handling missing values, correcting inaccuracies, removing duplicates.

* Data Standardization: Enforcing consistent formats (e.g., date formats, address formats).

* Data Enrichment: Adding value to data from external sources if required.

* Data Aggregation/De-normalization: Preparing data for the target system's schema.

* Key Generation/Mapping: Generating new primary keys and mapping legacy keys.

* Referential Integrity: Ensuring relationships between entities are maintained.

4.4. Target Data Loading Layer

  • Methods:

* API Integration: For SaaS platforms or systems that prefer API-driven inserts.

* Bulk Loaders: For high-volume inserts into relational databases or data warehouses (e.g., COPY INTO for Snowflake, bcp for SQL Server).

* Direct Database Inserts/Updates: For smaller datasets or specific update scenarios.

  • Considerations:

* Target system performance and capacity during loading.

* Transaction management and error handling during inserts.

* Validation of data after loading into the target.

4.5. Error Handling and Logging

  • Strategy: Implement comprehensive error logging at each stage (extraction, transformation, loading).
  • Components:

* Centralized logging system for all migration events.

* Mechanisms to identify, quarantine, and report bad records.

* Defined thresholds for acceptable error rates.

* Automated alerts for critical errors or failures.
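The quarantine-and-threshold behaviour described above can be sketched in Python. This is illustrative only; the 5% threshold, record shape, and logger name are assumptions, not project decisions.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("migration")

ERROR_RATE_THRESHOLD = 0.05  # assumed acceptable error rate: abort above 5%

def process_batch(records, transform):
    """Apply `transform` to each record, quarantining failures.

    Returns (loaded, quarantined). Raises RuntimeError if the batch's
    error rate exceeds ERROR_RATE_THRESHOLD.
    """
    loaded, quarantined = [], []
    for rec in records:
        try:
            loaded.append(transform(rec))
        except Exception as exc:
            # Bad records are quarantined with their error, not silently dropped.
            quarantined.append({"record": rec, "error": str(exc)})
            log.warning("quarantined record %s: %s", rec.get("id"), exc)
    rate = len(quarantined) / max(len(records), 1)
    if rate > ERROR_RATE_THRESHOLD:
        raise RuntimeError(f"error rate {rate:.1%} exceeds threshold")
    return loaded, quarantined
```

In a real pipeline the quarantined records would be written to a dedicated error table or file for review and re-processing, as described above.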

5. Data Governance & Quality

5.1. Data Profiling Strategy

  • Utilize data profiling tools to analyze source data for:

* Data types and format consistency.

* Completeness (null rates).

* Uniqueness and cardinality.

* Value distributions and outliers.

* Referential integrity violations.

  • Output will inform detailed field mapping and transformation rules.

5.2. Data Cleansing Strategy

  • Define specific rules for addressing identified data quality issues (e.g., standardizing addresses, resolving duplicate customer records, handling missing mandatory fields).
  • Prioritize cleansing efforts based on business impact and feasibility.
  • Establish a feedback loop with business users for data quality issue resolution.

5.3. Data Validation Strategy

  • Pre-Migration Validation (Source System):

* Audits of source data counts, checksums, and key aggregates.

* Verification of data integrity constraints.

  • During-Migration Validation (Staging & Transformation):

* Record counts at each stage of the ETL pipeline.

* Data type validation, format validation.

* Referential integrity checks within the staging area.

  • Post-Migration Validation (Target System):

* Record Count Verification: Compare total record counts for each entity between source and target.

* Data Summation/Aggregation Checks: Verify financial totals, quantity sums, etc.

* Random Sample Data Verification: Manual spot checks of individual records.

* Business Rule Validation: Ensure transformed data adheres to new business rules.

* Data Reconciliation Reports: Automated reports comparing key metrics.
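The count and aggregation checks above lend themselves to automation. A minimal sketch, using in-memory SQLite stand-ins for the source and target databases (table and column names are purely illustrative):

```python
import sqlite3

def reconcile(source_conn, target_conn, checks):
    """Run paired source/target queries and report mismatches.

    `checks` maps a metric name to (source_sql, target_sql); each query
    must return a single scalar. Returns a list of discrepancy dicts.
    """
    discrepancies = []
    for name, (src_sql, tgt_sql) in checks.items():
        src_val = source_conn.execute(src_sql).fetchone()[0]
        tgt_val = target_conn.execute(tgt_sql).fetchone()[0]
        if src_val != tgt_val:
            discrepancies.append({"check": name, "source": src_val, "target": tgt_val})
    return discrepancies
```

The same pattern extends to any metric expressible as a scalar query, which is how automated reconciliation reports are typically built.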

6. Security and Compliance

  • Data Encryption:

* Data at rest: Encrypt data in the staging area and target database.

* Data in transit: Use secure protocols (TLS/SSL) for all data transfers.

  • Access Controls:

* Strict role-based access control (RBAC) to migration tools, staging environments, and target systems.

* Principle of least privilege applied to all accounts involved in the migration.

  • Auditing:

* Maintain detailed audit logs of all data access, modifications, and migration activities.

  • Compliance:

* Ensure adherence to relevant regulations (e.g., GDPR, HIPAA, CCPA, PCI-DSS) for data handling, storage, and privacy throughout the migration lifecycle.

* Mask or anonymize sensitive data in non-production environments.
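One way to mask sensitive fields in non-production copies is to replace them with a stable hash, so records stay joinable without exposing real values. A minimal sketch (the `example.invalid` domain and 12-character digest length are arbitrary choices, not requirements):

```python
import hashlib

def mask_email(email, domain="example.invalid"):
    """Replace the local part of an email with a stable hash.

    The same input always yields the same output, so masked data can
    still be joined and de-duplicated, but the real address is hidden.
    """
    if not email or "@" not in email:
        return email  # pass through empty or malformed values unchanged
    local, _ = email.split("@", 1)
    digest = hashlib.sha256(local.encode("utf-8")).hexdigest()[:12]
    return f"user_{digest}@{domain}"
```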

7. Rollback and Contingency Planning

  • Rollback Procedures (High-Level):

* Define clear "point of no return" for each migration phase.

* Establish a strategy for reverting the target system to its pre-migration state if critical issues arise. This may involve restoring from backups or executing reverse migration scripts.

* Communicate rollback triggers and responsibilities.

  • Backup Strategy:

* Comprehensive backups of source systems prior to extraction.

* Backups of the staging area and target systems at critical junctures.

  • Disaster Recovery:

* Consider the impact of infrastructure failures during migration and plan for recovery.

  • Communication Plan:

* Clear communication protocols for stakeholders in case of migration delays or failures.

8. Team and Roles (High-Level)

gemini Output

Data Migration Planner: Comprehensive Plan & Deliverables

Project Title: [Customer Name] - Data Migration Project

Document Version: 1.0

Date: October 26, 2023


1. Executive Summary

This document outlines a comprehensive plan for the data migration from [Source System Name] to [Target System Name]. The objective is to ensure a secure, accurate, and efficient transfer of critical business data, minimizing downtime and data integrity risks. This plan details the scope, strategy, field mappings, transformation rules, validation procedures, rollback mechanisms, and a projected timeline to guide the successful execution of the migration.

2. Scope & Objectives

2.1 Scope:

  • In-Scope Data Entities: [List specific entities, e.g., Customer Data, Product Catalog, Order History, Employee Records, etc.]
  • In-Scope Data Volume: Approximately [X] GB / [Y] million records.
  • In-Scope Systems: Data migration from [Source System Name] (e.g., Legacy CRM, SAP ECC, Custom DB) to [Target System Name] (e.g., Salesforce, Dynamics 365, New ERP).
  • Out-of-Scope: [List anything explicitly NOT covered, e.g., historical archived data older than X years, specific non-critical lookup tables, integration layer development post-migration].

2.2 Objectives:

  • Migrate all in-scope data with 100% accuracy and completeness.
  • Ensure data integrity and consistency in the target system.
  • Minimize downtime for critical business operations during the migration window.
  • Establish robust validation and rollback procedures to mitigate risks.
  • Provide a clear audit trail for all migrated data.
  • Complete the migration within the agreed timeline and budget.

3. Source & Target Systems

  • Source System:

* Name: [e.g., Legacy CRM System v3.2]

* Database: [e.g., SQL Server 2012]

* Key Tables/Schemas: [e.g., Customers, Products, Orders, Order_Items]

* Access Method: [e.g., ODBC, JDBC, API]

  • Target System:

* Name: [e.g., Salesforce Sales Cloud]

* Database/Platform: [e.g., Salesforce Platform, PostgreSQL]

* Key Objects/Tables: [e.g., Account, Contact, Product2, Order, OrderItem]

* Access Method: [e.g., Salesforce Data Loader API, Custom API, JDBC]

4. Data Inventory & Analysis

A detailed data inventory has been performed, identifying key entities, attributes, relationships, data types, volumes, and potential data quality issues in the source system. This analysis forms the basis for mapping and transformation rules.

5. Data Migration Strategy

5.1 Migration Approach:

  • Recommended Approach: Phased Migration

* Rationale: Reduces risk, allows for iterative testing and validation, minimizes impact on business operations, and provides opportunities for course correction.

* Alternative: Big Bang (Not recommended for complex migrations due to high risk).

  • Phases:

1. Pilot Migration: A small subset of non-critical data to validate the entire migration process (ETL, mapping, validation, loading).

2. Entity-by-Entity Migration: Migrating data entities in a logical sequence (e.g., Lookup data -> Customers -> Products -> Orders).

3. Delta Migration: For long migration windows, a final delta migration will capture changes made in the source system since the initial bulk load.

5.2 Downtime Strategy:

  • Planned Downtime: A specific window will be scheduled for the final data cutover and load. During this period, the source system will be read-only or taken offline to prevent new data from being created or modified.
  • Minimization: The phased approach and pre-migration data cleansing will help minimize the duration of the critical downtime window.

5.3 Data Cleansing Plan:

  • Pre-Migration Cleansing: Data quality issues (duplicates, incomplete records, inconsistent formats) will be identified and remediated in the source system before extraction, wherever possible.
  • During Transformation: Automated cleansing rules will be applied during the ETL process (e.g., standardizing addresses, fixing common typos).
  • Post-Migration Review: A final review of migrated data will be conducted to identify any remaining anomalies.
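The de-duplication step in the plan could, in its simplest form, key records on a normalized email with a name-plus-zip fallback. This sketch is a stand-in for the fuzzy matching a real project would likely need; the field names are hypothetical:

```python
import re

def normalize_key(record):
    """Build a de-duplication key: normalized email, else name + zip."""
    email = (record.get("email") or "").strip().lower()
    if email:
        return ("email", email)
    name = re.sub(r"\W+", "", (record.get("name") or "").lower())
    zipcode = (record.get("zip") or "").strip()[:5]
    return ("name_zip", name + "|" + zipcode)

def deduplicate(records):
    """Keep the first record seen for each key; return (unique, duplicates)."""
    seen, unique, dupes = {}, [], []
    for rec in records:
        key = normalize_key(rec)
        if key in seen:
            dupes.append(rec)
        else:
            seen[key] = rec
            unique.append(rec)
    return unique, dupes
```

Duplicates flagged this way would feed the manual review of high-confidence matches mentioned above, rather than being discarded automatically.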

6. Detailed Data Mapping & Transformation

This section provides a detailed example of data mapping and transformation rules for a hypothetical Customers entity.

6.1 Key Data Entities (Example: Customer Data)

Source Table: Legacy_CRM.Customers

| Source Field Name | Data Type | Nullable | Description | Sample Data |
| :---------------- | :-------- | :------- | :---------- | :---------- |
| CustomerID | INT | NO | Unique ID | 1001 |
| FirstName | VARCHAR(50) | NO | First Name | John |
| LastName | VARCHAR(50) | NO | Last Name | Doe |
| AddressLine1 | VARCHAR(100) | NO | Primary Address | 123 Main St |
| AddressLine2 | VARCHAR(100) | YES | Apt/Suite | Apt 4B |
| City | VARCHAR(50) | NO | City | Anytown |
| StateCode | CHAR(2) | NO | State Abbrev. | NY |
| ZipCode | VARCHAR(10) | NO | Postal Code | 10001-1234 |
| Country | VARCHAR(50) | NO | Country Name | USA |
| Email | VARCHAR(100) | YES | Email Address | john.doe@example.com |
| PhoneNum | VARCHAR(20) | YES | Phone Number | (555) 123-4567 |
| CreationDate | DATETIME | NO | Record Creation | 2010-01-15 10:30:00 |
| StatusFlag | CHAR(1) | NO | A=Active, I=Inactive | A |

Target Object: New_CRM.Account and New_CRM.Contact

(Assuming Account for Company, Contact for Person)

Target New_CRM.Account (for Company/Organization, if applicable):

| Target Field Name | Data Type | Nullable | Description |
| :---------------- | :-------- | :------- | :---------- |
| AccountID | UUID | NO | Unique ID |
| AccountName | VARCHAR(255) | NO | Company Name |
| BillingStreet | VARCHAR(255) | YES | Billing Street |
| BillingCity | VARCHAR(100) | YES | Billing City |
| BillingState | VARCHAR(100) | YES | Billing State |
| BillingPostalCode | VARCHAR(20) | YES | Billing Postal Code |
| BillingCountry | VARCHAR(100) | YES | Billing Country |
| Status | PICKLIST | NO | Account Status |
| LegacyID__c | VARCHAR(50) | YES | Custom field for Source CustomerID |

Target New_CRM.Contact:

| Target Field Name | Data Type | Nullable | Description |
| :---------------- | :-------- | :------- | :---------- |
| ContactID | UUID | NO | Unique ID |
| FirstName | VARCHAR(50) | NO | First Name |
| LastName | VARCHAR(50) | NO | Last Name |
| Email | VARCHAR(100) | YES | Email Address |
| Phone | VARCHAR(40) | YES | Phone Number |
| MailingStreet | VARCHAR(255) | YES | Mailing Street |
| MailingCity | VARCHAR(100) | YES | Mailing City |
| MailingState | VARCHAR(100) | YES | Mailing State |
| MailingPostalCode | VARCHAR(20) | YES | Mailing Postal Code |
| MailingCountry | VARCHAR(100) | YES | Mailing Country |
| CreatedDate | DATETIME | NO | Record Creation |
| Status | PICKLIST | NO | Contact Status |
| LegacyCustomerID__c | VARCHAR(50) | YES | Custom field for Source CustomerID |
| AccountID | UUID | NO | Lookup to Account |

6.2 Field Mapping (Example: Customer Data to New_CRM.Contact)

| Source Field (Legacy_CRM.Customers) | Target Field (New_CRM.Contact) | Transformation Rule(s) | Notes |
| :---------------------------------- | :----------------------------- | :--------------------- | :---- |
| CustomerID | LegacyCustomerID__c | Direct Map | Stored in custom field for traceability |
| FirstName | FirstName | Direct Map | |
| LastName | LastName | Direct Map | |
| AddressLine1 | MailingStreet | Concatenate with AddressLine2 if present | Rule 1 |
| AddressLine2 | (Part of MailingStreet) | See above | |
| City | MailingCity | Direct Map | |
| StateCode | MailingState | Map to full state name (e.g., NY -> New York) | Rule 2 |
| ZipCode | MailingPostalCode | Extract first 5 digits if ZipCode contains hyphen | Rule 3 |
| Country | MailingCountry | Standardize (e.g., 'USA' -> 'United States') | Rule 4 |
| Email | Email | Direct Map | Validate format (Rule 5) |
| PhoneNum | Phone | Format to E.164 (e.g., +15551234567) | Rule 6 |
| CreationDate | CreatedDate | Direct Map | |
| StatusFlag | Status | Map 'A' -> 'Active', 'I' -> 'Inactive' | Rule 7 |
| (Derived) | AccountID | Lookup/Create based on CustomerID or CompanyName (if available in source) | Rule 8: If source has company name, create/link to Account. Otherwise, create a default 'Individual' account. |

6.3 Transformation Rules (Detailed Examples)

  • Rule 1: Concatenate Address Lines

* Logic: MailingStreet = Source.AddressLine1 + (if Source.AddressLine2 is not null and not empty, then ", " + Source.AddressLine2 else "")

* Example: "123 Main St", "Apt 4B" -> "123 Main St, Apt 4B"

  • Rule 2: State Code to Full State Name

* Logic: Use a lookup table (or dictionary) to convert 2-character state codes to full state names. If no match, use the original code and flag for review.

* Example: 'NY' -> 'New York', 'CA' -> 'California'

  • Rule 3: Zip Code Formatting

* Logic: If Source.ZipCode contains a hyphen, take only the first 5 characters. Otherwise, use the entire Source.ZipCode. Trim whitespace.

* Example: '10001-1234' -> '10001', '90210' -> '90210'

  • Rule 4: Country Standardization

* Logic: Map common variations to a standardized list (e.g., 'USA', 'US', 'United States' -> 'United States'). Use ISO 3166-1 alpha-2 codes if target system supports it.

* Example: 'USA' -> 'United States'

  • Rule 5: Email Format Validation

* Logic: Ensure Source.Email conforms to a standard email regex pattern. If invalid, log the record and set Target.Email to NULL or a default placeholder, or flag for manual correction.

  • Rule 6: Phone Number Formatting

* Logic: Remove all non-numeric characters, then prepend country code. Store in E.164 format.

* Example: '(555) 123-4567' -> '+15551234567'

  • Rule 7: Status Flag Mapping

* Logic: Conditional mapping: IF Source.StatusFlag = 'A' THEN 'Active' ELSE IF Source.StatusFlag = 'I' THEN 'Inactive' ELSE 'Unknown'

  • Rule 8: Account/Contact Relationship

* Logic: If Source.CompanyName exists, check whether an Account with that name exists in New_CRM. If yes, link the Contact to that Account; otherwise, create the Account first and then link the Contact. If no company name is available, link the Contact to a default 'Individual' account.
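A few of the rules above are simple enough to sketch directly. The following is illustrative, not the project's actual ETL code; Rule 6 assumes US numbers with country code 1:

```python
import re

def concat_address(line1, line2):
    """Rule 1: join address lines with ', ' when line 2 is present."""
    return f"{line1}, {line2}" if line2 and line2.strip() else line1

def format_zip(zipcode):
    """Rule 3: keep only the first five characters of ZIP+4 codes."""
    z = zipcode.strip()
    return z[:5] if "-" in z else z

def format_phone_e164(phone, country_code="1"):
    """Rule 6: strip non-digits, prepend the country code (assumes US numbers)."""
    digits = re.sub(r"\D", "", phone)
    return f"+{country_code}{digits}"

def map_status(flag):
    """Rule 7: conditional mapping of the legacy one-character flag."""
    return {"A": "Active", "I": "Inactive"}.get(flag, "Unknown")
```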

gemini Output

Data Migration Planner: Comprehensive Plan Document

Date: October 26, 2023

Document Version: 1.0

Prepared For: [Customer Name]

Prepared By: PantheraHive Solutions


1. Executive Summary

This document outlines a comprehensive plan for the upcoming data migration project. The objective is to seamlessly transfer critical business data from [Source System Name/Description] to [Target System Name/Description], ensuring data integrity, minimal downtime, and adherence to all business requirements. This plan details the scope, strategy, field mappings, transformation rules, validation procedures, rollback mechanisms, and a high-level timeline, providing a robust framework for a successful migration.

2. Project Scope and Objectives

2.1. Scope

The scope of this data migration project includes:

  • Source System: [Specify Source System, e.g., Legacy CRM (Salesforce Classic), On-premise SQL Database v2012]
  • Target System: [Specify Target System, e.g., Cloud-based CRM (Salesforce Lightning), AWS RDS PostgreSQL]
  • Data Entities: [List specific tables/objects/modules to be migrated, e.g., Accounts, Contacts, Opportunities, Products, Historical Orders (last 5 years)]
  • Data Volume: Estimated [X GB / Y Million Records] across specified entities.
  • Excluded Data: [Specify any data not in scope, e.g., Archived data older than 5 years, transient log files, test data from source system].

2.2. Objectives

The primary objectives of this data migration are to:

  • Ensure Data Integrity: Migrate 100% of in-scope data accurately and without corruption.
  • Achieve Data Consistency: Standardize data formats and values to meet target system requirements and business rules.
  • Minimize Downtime: Execute the migration with minimal disruption to business operations, targeting a cutover window of [X hours/days].
  • Validate Data Accuracy: Implement robust validation procedures to confirm successful and accurate data transfer.
  • Provide Rollback Capability: Establish clear procedures to revert to the source system state in case of unforeseen critical issues.
  • Improve Data Quality: Leverage the migration as an opportunity to cleanse and de-duplicate data where applicable.
  • Support Business Continuity: Enable the target system to be fully operational with complete and accurate data post-migration.

3. Source and Target Systems Overview

3.1. Source System

  • System Name/Type: [e.g., Custom-built ERP, Oracle E-Business Suite]
  • Database Type: [e.g., Microsoft SQL Server 2012, Oracle Database 11g]
  • Key Data Structures: [Briefly mention relevant schemas/tables, e.g., dbo.Customers, dbo.Orders_Header, dbo.Orders_LineItems]
  • Current State: [e.g., Production system, actively used, known data quality issues in addresses]

3.2. Target System

  • System Name/Type: [e.g., SAP S/4HANA, HubSpot CRM]
  • Database Type: [e.g., PostgreSQL, Salesforce Objects]
  • Key Data Structures: [Briefly mention relevant schemas/objects, e.g., Account__c, Contact__c, Opportunity__c]
  • Desired State: [e.g., New production system, clean slate, enforces new validation rules for email formats]

4. Data Migration Strategy

The chosen migration strategy is a [Phased / Big Bang / Incremental] approach.

  • [Phased]: Data will be migrated in logical batches (e.g., by module, by geography) to allow for incremental testing and validation. This reduces risk but extends the overall project timeline.
  • [Big Bang]: All in-scope data will be migrated in a single cutover event. This minimizes system coexistence but carries higher risk and requires extensive planning and testing.
  • [Incremental]: Initial bulk migration followed by ongoing synchronization for new/changed data, often used for systems with minimal downtime requirements.

Selected Strategy Rationale: [Justify the chosen strategy, e.g., "A Phased approach is selected to mitigate risk associated with the complexity of integrating diverse data sets and to allow business users to gradually adapt to the new system."]

5. Data Cleansing and Preparation Strategy

Prior to migration, data cleansing is critical for a successful outcome.

  • Identification: Data quality reports will be generated from the source system to identify common issues (e.g., missing values, incorrect formats, duplicates).
  • Standardization: Data will be standardized according to target system requirements and business rules (e.g., address formats, phone number formats, currency codes).
  • De-duplication: A de-duplication strategy will be implemented for key entities (e.g., Accounts, Contacts) using [specify methodology, e.g., fuzzy matching algorithms, manual review of high-confidence matches].
  • Enrichment: Where necessary, data may be enriched from external sources or internal lookups (e.g., adding industry codes, correcting postal codes).
  • Tooling: [Specify tools, e.g., SQL scripts, Python scripts, dedicated ETL data quality modules like Talend Data Quality, Informatica Data Quality].
  • Responsibility: Data owners will be involved in reviewing and approving cleansed data sets.

6. Detailed Migration Plan Components

6.1. Field Mapping (Source to Target)

A detailed field mapping document will be maintained, typically in a spreadsheet format, covering every in-scope field. This document will include:

  • Source System Details: Table/Object Name, Field Name, Data Type, Length, Nullable, Description.
  • Target System Details: Table/Object Name, Field Name, Data Type, Length, Nullable, Description.
  • Transformation Rule ID: Reference to specific transformation rules (see Section 6.2).
  • Comments/Notes: Any specific considerations or questions.
  • Key Identifiers: Identification of primary and foreign keys, and how relationships are maintained.

Example Field Mapping Snippet (for Customer entity):

| Source Table.Field | Source Type | Source Nullable | Target Object.Field | Target Type | Target Nullable | Transformation Rule ID | Notes |
| :----------------- | :---------- | :-------------- | :------------------ | :---------- | :-------------- | :--------------------- | :---- |
| LegacyCRM.Customers.CustID | INT | NO | SFDC.Account.External_ID__c | TEXT(255) | NO | TR-001 | Map to external ID field. |
| LegacyCRM.Customers.Name | VARCHAR(255) | NO | SFDC.Account.Name | TEXT(255) | NO | | Direct map. |
| LegacyCRM.Customers.Addr1 | VARCHAR(255) | YES | SFDC.Account.BillingStreet | TEXT(255) | YES | TR-002 | Concatenate with Addr2. |
| LegacyCRM.Customers.Addr2 | VARCHAR(255) | YES | SFDC.Account.BillingStreet | TEXT(255) | YES | TR-002 | Concatenate with Addr1. |
| LegacyCRM.Customers.City | VARCHAR(100) | YES | SFDC.Account.BillingCity | TEXT(100) | YES | TR-003 | Standardize to proper case. |
| LegacyCRM.Customers.ZipCode | VARCHAR(10) | YES | SFDC.Account.BillingPostalCode | TEXT(20) | YES | TR-004 | Pad with leading zeros if < 5 digits. |
| LegacyCRM.Customers.Status | VARCHAR(20) | NO | SFDC.Account.Status__c | PICKLIST | NO | TR-005 | Map legacy statuses to new picklist values. |
| LegacyCRM.Customers.CreatedDate | DATETIME | NO | SFDC.Account.CreatedDate | DATETIME | NO | TR-006 | Convert to UTC and ensure ISO 8601 format. |
| LegacyCRM.Customers.SalesPersonID | INT | YES | SFDC.Account.OwnerId | LOOKUP | YES | TR-007 | Lookup SFDC User ID based on SalesPersonID. |

6.2. Transformation Rules

Transformation rules define how data is modified during the migration process to meet the target system's requirements and business logic. Each rule will be documented with a unique ID, description, and source/target examples.

Example Transformation Rules:

  • TR-001: External ID Generation

* Description: Concatenate 'LEGACY-' prefix with LegacyCRM.Customers.CustID to form SFDC.Account.External_ID__c.

* Example: CustID = 12345 becomes LEGACY-12345.

  • TR-002: Address Line Concatenation

* Description: Combine LegacyCRM.Customers.Addr1 and LegacyCRM.Customers.Addr2 into SFDC.Account.BillingStreet. If Addr2 is null, use Addr1 alone. If both are present, separate with a comma.

* Example: Addr1 = "123 Main St", Addr2 = "Suite 101" becomes "123 Main St, Suite 101".

  • TR-003: City Name Standardization

* Description: Convert LegacyCRM.Customers.City to proper case (e.g., "new york" becomes "New York").

* Example: City = "los angeles" becomes "Los Angeles".

  • TR-004: Zip Code Formatting

* Description: For US zip codes, if LegacyCRM.Customers.ZipCode is less than 5 digits, pad with leading zeros.

* Example: ZipCode = "9021" becomes "09021".

  • TR-005: Status Code Mapping

* Description: Map legacy status codes to new picklist values in SFDC.

* LegacyCRM.Customers.Status = 'A' -> SFDC.Account.Status__c = 'Active'

* LegacyCRM.Customers.Status = 'I' -> SFDC.Account.Status__c = 'Inactive'

* LegacyCRM.Customers.Status = 'P' -> SFDC.Account.Status__c = 'Prospect'

* Default: If a legacy status is not found, default to 'Inactive'.

  • TR-006: Date Time Conversion

* Description: Convert LegacyCRM.Customers.CreatedDate from local time zone [e.g., EST] to UTC and store in ISO 8601 format in SFDC.Account.CreatedDate.

  • TR-007: User ID Lookup

* Description: Lookup SalesPersonID from a cross-reference table (LegacyUserID_SFDCUserID_Map) to retrieve the corresponding SFDC.User.Id for SFDC.Account.OwnerId. Handle null or unmatched IDs by assigning to a default 'Migration User'.
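Several of these rules are simple enough to sketch directly. This is illustrative only, not production ETL code; the user map and default owner value are assumptions:

```python
def tr_001_external_id(cust_id):
    """TR-001: prefix the legacy ID to form the external ID."""
    return f"LEGACY-{cust_id}"

def tr_003_proper_case(city):
    """TR-003: convert city names to proper case."""
    return city.strip().title()

def tr_004_pad_zip(zipcode):
    """TR-004: left-pad US zip codes shorter than 5 digits with zeros."""
    return zipcode.strip().zfill(5)

STATUS_MAP = {"A": "Active", "I": "Inactive", "P": "Prospect"}

def tr_005_status(legacy_status):
    """TR-005: map legacy status codes; default to 'Inactive' when unmatched."""
    return STATUS_MAP.get(legacy_status, "Inactive")

def tr_007_owner(sales_person_id, user_map, default_user="MIGRATION_USER"):
    """TR-007: cross-reference lookup with a default 'Migration User' fallback."""
    if sales_person_id is None:
        return default_user
    return user_map.get(sales_person_id, default_user)
```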

6.3. Data Load Strategy and Error Handling

  • Load Mechanism: [Specify mechanism, e.g., ETL tool (Informatica, Talend), custom scripts (Python, Java), API calls, direct database inserts, Salesforce Data Loader].
  • Batching: Data will be loaded in batches of [X records] to optimize performance and manage memory.
  • Error Logging: A robust error logging mechanism will be implemented to capture:

* Source record ID

* Target object/field

* Error message (e.g., validation failure, data type mismatch)

* Timestamp

* Strategy: Errors will be logged to a dedicated error table/file, and records failing validation will be quarantined for review and manual correction or re-processing.

  • Retry Mechanism: For transient errors (e.g., network issues), a limited retry mechanism will be implemented.
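The batching and retry behaviour described above could be sketched as follows. The batch size, retry count, and use of ConnectionError to stand in for a transient failure are all assumptions; `load_fn` represents whatever performs the actual insert (API call, bulk load, etc.):

```python
import time

def load_in_batches(records, load_fn, batch_size=200, max_retries=3, backoff=1.0):
    """Load records in fixed-size batches, retrying transient failures.

    A batch that still fails after `max_retries` attempts is returned
    for quarantine rather than aborting the whole run.
    """
    failed_batches = []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        for attempt in range(1, max_retries + 1):
            try:
                load_fn(batch)
                break
            except ConnectionError:
                if attempt == max_retries:
                    failed_batches.append(batch)
                else:
                    time.sleep(backoff * attempt)  # simple linear backoff
    return failed_batches
```

Non-transient errors (e.g., validation failures) would deliberately not be retried; those fall under the error-logging strategy above.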

6.4. Validation Scripts and Procedures

Validation is critical at multiple stages to ensure data quality and successful migration.

6.4.1. Pre-Migration Validation (Source Data)

  • Purpose: Verify source data quality and completeness before extraction.
  • Scripts: SQL queries or custom scripts to identify:

* Missing mandatory fields.

* Incorrect data types.

* Referential integrity violations within the source.

* Records that would violate target system unique constraints.

  • Procedure: Run scripts, review reports, and remediate identified issues with data owners.
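The pre-migration checks above can be expressed as a small battery of SQL queries run against a local snapshot of the source. This sketch uses SQLite and example table/column names consistent with the mapping spec; the actual checks would target the LegacyCRM database and its real schema.

```python
import sqlite3

# Each entry: check name -> query returning the offending rows.
CHECKS = {
    "missing_mandatory_fields":
        "SELECT CustomerID FROM Customers "
        "WHERE Name IS NULL OR Name = ''",
    "orphaned_orders":  # referential integrity violation within the source
        "SELECT o.OrderID FROM Orders o "
        "LEFT JOIN Customers c ON o.CustomerID = c.CustomerID "
        "WHERE c.CustomerID IS NULL",
    "duplicate_emails":  # would violate a target unique constraint
        "SELECT Email FROM Customers "
        "GROUP BY Email HAVING COUNT(*) > 1",
}

def run_checks(conn: sqlite3.Connection) -> dict:
    """Return {check_name: offending row count} for the remediation report."""
    return {name: len(conn.execute(sql).fetchall())
            for name, sql in CHECKS.items()}
```

A non-zero count for any check feeds the remediation report reviewed with data owners before extraction proceeds.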

6.4.2. Post-Migration Validation (Target Data)

  • Purpose: Confirm that data has been migrated accurately and completely into the target system.
  • Scripts:

* Record Count Verification: Compare total record counts for each entity between source and target.

SELECT COUNT(*) FROM LegacyCRM.Customers; vs. SELECT COUNT(*) FROM SFDC.Account;

* Data Summation/Aggregation: Validate aggregate values (e.g., sum of order totals, average customer age).

SELECT SUM(OrderTotal) FROM LegacyCRM.Orders; vs. SELECT SUM(Amount__c) FROM SFDC.Opportunity;

* Random Sample Verification: Select a statistically significant random sample of records and perform a field-by-field comparison between source and target.

* Key Field Comparison: Verify uniqueness and correctness of primary identifiers and foreign keys.

* Business Rule Validation: Run reports/queries against the target system to ensure migrated data adheres to new business rules (e.g., all active accounts have a primary contact).

  • Procedure:

1. Execute validation scripts immediately post-load.

2. Generate validation reports highlighting discrepancies.

3. Investigate and categorize discrepancies (e.g., expected transformation, actual error).

4. Report findings to stakeholders and determine remediation actions.
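The record-count and aggregate reconciliation checks above can be sketched generically. This is an assumed shape, not a prescribed implementation: `source_count` and `target_count` stand in for callables wrapping the actual SQL/SOQL queries, and the optional tolerance accommodates discrepancies that are expected transformations rather than errors.

```python
def reconcile(entity, source_count, target_count, tolerance=0):
    """Compare a source vs. target metric for one entity and flag discrepancies."""
    src = source_count(entity)
    tgt = target_count(entity)
    return {
        "entity": entity,
        "source": src,
        "target": tgt,
        "delta": src - tgt,
        "passed": abs(src - tgt) <= tolerance,
    }

def reconciliation_report(entities, source_count, target_count):
    """Run the comparison over all in-scope entities for the validation report."""
    return [reconcile(e, source_count, target_count) for e in entities]
```

The same function works for counts, sums, or averages; only the injected query callables change, which keeps the validation report format uniform across entities.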

6.4.3. User Acceptance Testing (UAT)

  • Purpose: Business users verify that the migrated data meets their functional requirements and expectations.
  • Procedure:

1. Provide access to the migrated target system to designated UAT testers.

2. Testers execute pre-defined test cases covering key business processes and data queries.

3. Feedback and issues are logged and tracked.

4. Obtain formal UAT sign-off from business stakeholders before proceeding to cutover.
