Data Migration Planner

Plan a complete data migration with field mapping, transformation rules, validation scripts, rollback procedures, and timeline estimates.

Comprehensive Study Plan: Becoming a Data Migration Planner

This document outlines a detailed, 10-week study plan designed to equip an individual with the knowledge and skills necessary to effectively plan and manage data migration projects. This plan covers the entire data migration lifecycle, from initial assessment to post-migration activities, emphasizing best practices, tooling, and critical considerations.


1. Introduction

Purpose: To provide a structured and actionable study roadmap for individuals aspiring to become proficient Data Migration Planners. This plan will systematically build expertise in data analysis, mapping, transformation, validation, execution, and risk management within the context of data migration.

Target Audience: IT professionals, data analysts, project managers, and database administrators looking to specialize in data migration, or anyone involved in large-scale system transitions.

Expected Outcome: By the end of this study plan, the learner will possess a comprehensive understanding of data migration principles, methodologies, and practical skills required to plan, oversee, and troubleshoot data migration initiatives.


2. Overall Learning Objectives

Upon completion of this 10-week study plan, the learner will be able to:

  • Understand Data Migration Fundamentals: Grasp the core concepts, drivers, types, and challenges associated with data migration projects.
  • Conduct Thorough Data Source Analysis: Profile source data, identify data quality issues, and understand data structures.
  • Design Robust Data Mapping and Transformation Rules: Create detailed field-to-field mappings and define complex transformation logic.
  • Develop Data Quality and Cleansing Strategies: Implement processes to ensure data integrity and accuracy before and during migration.
  • Evaluate and Select Appropriate Migration Strategies and Tools: Choose the right approach (e.g., big bang, phased) and technology for different scenarios.
  • Plan and Execute Migration Testing: Design and perform various tests (unit, integration, volume) to validate the migration process.
  • Formulate Validation, Reconciliation, and Cutover Plans: Ensure data consistency post-migration and manage the transition to the new system.
  • Establish Comprehensive Rollback and Contingency Procedures: Develop plans to mitigate risks and recover from potential migration failures.
  • Define Post-Migration Activities: Understand monitoring, archiving, and optimization tasks after a successful migration.
  • Manage Data Migration Projects: Apply project management principles to scope, schedule, budget, and risk manage data migration projects.

3. Weekly Schedule and Detailed Learning Objectives

Each week includes specific learning objectives, key topics, and recommended resources.

Week 1: Fundamentals of Data Migration

  • Learning Objectives:

* Define data migration, its purpose, and common triggers.

* Identify different types of data migration (e.g., storage, database, application, cloud).

* Understand the key phases and stakeholders in a data migration project.

* Recognize common challenges and risks associated with data migration.

  • Topics:

* What is Data Migration? (Definitions, Drivers, Benefits)

* Types of Data Migration (On-prem to On-prem, On-prem to Cloud, Cloud to Cloud)

* Data Migration Lifecycle Overview (Assessment, Design, Execution, Validation, Cutover)

* Key Roles and Responsibilities (Data Architect, PM, DBA, Business Analyst)

* Common Pitfalls and Risk Identification

* Introduction to Data Governance and Compliance in Migration

  • Recommended Resources:

* Book Chapters: "Data Migration" by Johny M. John (Introduction, Why Data Migration, Types)

* Articles: Gartner reports on data migration trends, IBM/Microsoft whitepapers on migration strategies.

* Online Courses: LinkedIn Learning: "Introduction to Data Migration," Coursera: "Data Engineering with Google Cloud" (Module 1 on Data Migration Basics).

Week 2: Data Source Analysis & Profiling

  • Learning Objectives:

* Perform comprehensive analysis of source data structures, schemas, and relationships.

* Identify and document data quality issues, inconsistencies, and redundancies.

* Utilize data profiling tools to gain insights into data characteristics.

* Understand the importance of metadata management in migration.

  • Topics:

* Source System Assessment (Database schemas, file formats, APIs)

* Data Profiling Techniques (Value distributions, null analysis, uniqueness, patterns)

* Identifying Data Quality Issues (Incompleteness, inaccuracy, inconsistency, duplication)

* Metadata Management (Data dictionaries, business glossaries)

* Data Discovery Tools and Techniques

* Understanding Data Volume and Velocity

  • Recommended Resources:

* Tools: SQL Server Management Studio (SSMS), Oracle SQL Developer, Python with Pandas (for CSV/text files), basic data profiling features in ETL tools.

* Book Chapters: "Data Quality: The Field Guide" by Thomas C. Redman (Chapter on Data Profiling).

* Articles: "The Importance of Data Profiling in Data Migration" by various data governance blogs.
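The profiling techniques above can be sketched with pandas, one of the tools listed. This is a minimal illustration against a small in-memory extract; the column names and data are assumptions for the exercise, not a real source system.

```python
import pandas as pd

# Illustrative customer extract; column names are assumptions for this sketch.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@x.com", None, "b@x.com", "b@x.com"],
    "phone": ["555-0100", "555 0101", None, "(555) 0102"],
})

# One-line-per-column profile: type, completeness, cardinality.
profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "null_pct": df.isna().mean().round(3) * 100,   # null analysis
    "distinct": df.nunique(),                      # uniqueness
})
print(profile)

# Duplicate check on the assumed business key.
dupes = df[df.duplicated("customer_id", keep=False)]
print(f"{len(dupes)} rows share a customer_id")
```

The same three metrics (type, null percentage, distinct count) are what most commercial profiling tools surface first, so this makes a good warm-up before using SSMS or an ETL tool's profiler.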

Week 3: Data Mapping & Transformation Rules

  • Learning Objectives:

* Create detailed field-to-field mapping documents between source and target systems.

* Define complex data transformation rules (e.g., aggregation, lookup, concatenation, conditional logic).

* Handle data type conversions and schema differences.

* Document mapping and transformation rules clearly and comprehensively.

  • Topics:

* Principles of Data Mapping (Direct, Complex, Derivations)

* Developing Data Mapping Specifications (Source field, Target field, Data Type, Length, Nullability, Transformation Rule, Business Rule)

* Common Transformation Patterns (Standardization, Normalization, Denormalization, Splitting, Merging)

* Handling Referential Integrity and Foreign Keys

* Managing Data Type Incompatibilities

* Version Control for Mapping Documents

  • Recommended Resources:

* Templates: Sample data mapping document templates (available online).

* Book Chapters: "Mastering Data Management" by David Loshin (Chapter on Data Mapping).

* Online Courses: Specific modules on data mapping within ETL tool training (e.g., Informatica, Talend, SSIS tutorials).

* Practical Exercise: Map a small dataset from one schema to another (e.g., contact data from an old CRM to a new one).
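A minimal sketch of the mapping exercise above: a mapping specification expressed as data (target field, source field(s), transformation), applied to one record. All field names are hypothetical.

```python
# Hypothetical mapping spec: old CRM contact -> new CRM schema.
# Each entry: target field -> (source field(s), transform function).
MAPPING = {
    "full_name": (("first_name", "last_name"), lambda f, l: f"{f} {l}".strip()),
    "email":     ("email_addr", str.lower),
    "phone":     ("phone", lambda p: "".join(ch for ch in p if ch.isdigit())),
}

def transform(record: dict) -> dict:
    out = {}
    for target, (source, fn) in MAPPING.items():
        if isinstance(source, tuple):              # multi-field derivation
            out[target] = fn(*(record.get(s, "") for s in source))
        else:                                      # single-field map
            out[target] = fn(record.get(source, ""))
    return out

old = {"first_name": "Ada", "last_name": "Lovelace",
       "email_addr": "Ada@Example.COM", "phone": "(555) 010-0199"}
print(transform(old))
# {'full_name': 'Ada Lovelace', 'email': 'ada@example.com', 'phone': '5550100199'}
```

Keeping the spec as data rather than inline code mirrors how ETL tools store mappings, and makes the mapping document itself version-controllable.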

Week 4: Data Quality & Cleansing

  • Learning Objectives:

* Develop strategies for data cleansing and enrichment.

* Implement data validation rules to ensure data integrity.

* Understand the role of master data management (MDM) in migration.

* Plan for data deduplication and standardization.

  • Topics:

* Data Cleansing Methodologies (Parsing, Standardization, Matching, Merging)

* Data Validation Rules (Range checks, format checks, consistency checks)

* Error Handling and Reporting for Data Quality Issues

* Introduction to Master Data Management (MDM) and its relevance

* Data Deduplication Strategies (Exact vs. Fuzzy Matching)

* Data Enrichment Techniques (Geocoding, external data sources)

  • Recommended Resources:

* Book Chapters: "Executing Data Quality Projects" by Danette McGilvray (Chapters on Cleansing and Validation).

* Tools: OpenRefine (for interactive data cleaning), Python libraries (e.g., fuzzywuzzy), Excel for basic cleansing.

* Articles: Best practices for data cleansing before migration.
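The fuzzy-matching deduplication topic above can be sketched with the standard library's `difflib` (used here instead of `fuzzywuzzy` so the example is self-contained); the threshold and survivorship rule are illustrative assumptions.

```python
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    """Fuzzy match after basic standardization (trim + casefold)."""
    a, b = a.strip().casefold(), b.strip().casefold()
    return SequenceMatcher(None, a, b).ratio() >= threshold

names = ["Acme Corp", "ACME Corp.", "Globex Inc", " acme corp "]
survivors: list[str] = []
for name in names:
    # Survivorship rule (assumed): first occurrence wins, later near-matches drop.
    if not any(similar(name, kept) for kept in survivors):
        survivors.append(name)
print(survivors)   # near-duplicates of "Acme Corp" are collapsed
```

Exact matching would keep all four names; the fuzzy ratio is what catches punctuation and casing variants, at the cost of needing a tuned threshold.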

Week 5: Data Migration Strategies & Tooling

  • Learning Objectives:

* Evaluate different data migration strategies (Big Bang, Phased, Trickle).

* Understand the advantages and disadvantages of each strategy.

* Identify and compare various data migration tools (ETL, scripting, specialized platforms).

* Select the appropriate strategy and tools based on project requirements and constraints.

  • Topics:

* Migration Strategies: Big Bang, Phased (Coexistence), Trickle Migration, Parallel Run

* Factors Influencing Strategy Choice (Downtime tolerance, data volume, complexity, budget)

* Overview of Data Migration Tools:

* ETL Tools: Informatica PowerCenter, Talend, Microsoft SSIS, AWS Glue, Azure Data Factory

* Scripting: Python, SQL, Shell scripts

* Specialized Migration Tools: Database-specific migration tools, cloud migration services

* Build vs. Buy Decisions for Migration Tools

* Performance Considerations and Optimization

  • Recommended Resources:

* Vendor Documentation: Explore documentation for leading ETL tools (Talend Open Studio, SSIS tutorials).

* Case Studies: Analyze real-world data migration projects and their chosen strategies.

* Articles: Comparisons of data migration tools and strategies.

Week 6: Migration Execution & Testing

  • Learning Objectives:

* Develop a detailed migration execution plan.

* Design and implement various testing strategies for data migration.

* Understand the importance of performance testing and tuning.

* Manage migration environments and configurations.

  • Topics:

* Developing the Migration Execution Plan (Sequencing, dependencies, timelines)

* Types of Migration Testing:

* Unit Testing (individual transformations)

* Integration Testing (end-to-end flow)

* Volume Testing (large datasets)

* Performance Testing (speed, resource utilization)

* User Acceptance Testing (UAT)

* Creating Test Data and Scenarios

* Test Environment Setup and Management

* Error Logging and Monitoring during Execution

  • Recommended Resources:

* Templates: Test plan templates for data migration.

* Book Chapters: "Software Testing Techniques" by Boris Beizer (Chapters on Testing Strategies).

* Practical Exercise: Design a test plan for a hypothetical data migration scenario.
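The "unit testing (individual transformations)" level above can be sketched for one assumed rule, a DATETIME-string-to-DATE conversion, with a happy-path case and a malformed input that must fail loudly rather than load bad data.

```python
from datetime import datetime

def to_date_part(value: str) -> str:
    """Assumed transformation rule: keep only the date component of a timestamp."""
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S").date().isoformat()

# Unit tests: expected value, plus rejection of an out-of-format input.
assert to_date_part("2024-05-01 13:45:00") == "2024-05-01"
try:
    to_date_part("01/05/2024")          # wrong format must not pass silently
except ValueError:
    print("malformed input rejected as expected")
```

Each documented transformation rule should get at least this pair of tests before integration testing begins.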

Week 7: Validation, Reconciliation & Cutover

  • Learning Objectives:

* Define comprehensive data validation and reconciliation procedures.

* Develop scripts and reports to verify data completeness and accuracy post-migration.

* Plan and execute the cutover process, including rollback points.

* Communicate effectively during the cutover phase.

  • Topics:

* Data Validation Checklists (Row counts, sum checks, statistical comparisons)

* Data Reconciliation Techniques (Source vs. Target comparisons, checksums)

* Automating Validation and Reconciliation Scripts (SQL, Python)

* Cutover Planning (Pre-cutover checks, Go/No-Go criteria, downtime management)

* Phased Cutover vs. Big Bang Cutover

* Communication Plan for Cutover

  • Recommended Resources:

* Practical Exercise: Write SQL queries to compare row counts and sums between two tables.

* Articles: Case studies on successful and challenging cutovers.

* Tools: Database comparison tools (e.g., Redgate SQL Compare, ApexSQL Compare).
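The row-count and sum exercise above can be driven from Python so the comparisons are repeatable. This sketch uses an in-memory SQLite database standing in for both systems; table and column names are illustrative.

```python
import sqlite3

# Stand-in source and target tables for the reconciliation exercise.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE source_orders (id INTEGER, amount REAL);
    CREATE TABLE target_orders (id INTEGER, amount REAL);
    INSERT INTO source_orders VALUES (1, 10.0), (2, 25.5), (3, 4.5);
    INSERT INTO target_orders VALUES (1, 10.0), (2, 25.5), (3, 4.5);
""")

def recon(metric_sql: str) -> tuple:
    """Run the same metric against both tables and compare."""
    src = con.execute(metric_sql.format(t="source_orders")).fetchone()[0]
    tgt = con.execute(metric_sql.format(t="target_orders")).fetchone()[0]
    return src, tgt, src == tgt

for label, sql in [("row count", "SELECT COUNT(*) FROM {t}"),
                   ("amount sum", "SELECT SUM(amount) FROM {t}")]:
    src, tgt, ok = recon(sql)
    print(f"{label}: source={src} target={tgt} {'OK' if ok else 'MISMATCH'}")
```

In a real reconciliation the two connections would point at the actual source and target databases, and the metric list would be driven by the validation checklist.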

Week 8: Rollback Procedures & Contingency Planning

  • Learning Objectives:

* Design robust rollback strategies for various failure scenarios.

* Develop detailed contingency plans to mitigate risks.

* Understand the importance of backup and recovery in migration.

* Document rollback and contingency procedures clearly.

  • Topics:

* Why Rollback is Crucial (Minimizing impact of failure)

* Types of Rollback Strategies (Full database restore, incremental rollback, data-only rollback)

* Defining Rollback Points and Triggers

* Contingency Planning (Hardware failure, software bugs, data corruption, network issues)

* Backup and Recovery Best Practices for Migration

* Communication during a Rollback Event

* Lessons Learned from Rollback Scenarios

  • Recommended Resources:

* Book Chapters: "Database Reliability Engineering" by Laine Campbell and Charity Majors (Chapters on Disaster Recovery).

* Articles: "Planning for the Worst: Data Migration Rollback Strategies."

* Practical Exercise: Outline a rollback plan for a specific migration failure scenario.
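One mechanism behind "defining rollback points" above can be sketched with database transactions: load each batch atomically so a failure rolls back only that batch, and the rollback point is the last committed batch. The schema and batching are illustrative assumptions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

batches = [
    [(1, "Acme"), (2, "Globex")],
    [(3, "Initech"), (4, None)],     # NULL name violates NOT NULL -> batch fails
]
for i, batch in enumerate(batches, start=1):
    try:
        with con:                    # commits on success, rolls back on error
            con.executemany("INSERT INTO accounts VALUES (?, ?)", batch)
        print(f"batch {i} committed")
    except sqlite3.IntegrityError:
        print(f"batch {i} rolled back; rollback point = end of batch {i - 1}")

# Only the committed batch's rows survive; the failed batch left no partial data.
print(con.execute("SELECT COUNT(*) FROM accounts").fetchone()[0])
```

Note the failed batch is rolled back in full, including its valid first row; that all-or-nothing property is what makes the rollback point well defined.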

Week 9: Post-Migration Activities & Optimization

  • Learning Objectives:

* Understand necessary post-migration monitoring and support activities.

* Plan for archiving legacy data and decommissioning old systems.

* Identify opportunities for data optimization and performance tuning in the new system.

* Conduct a post-migration review and capture lessons learned.

  • Topics:

* Post-Migration Monitoring and Support (Performance, data integrity, user feedback)

* Legacy System Decommissioning (Data archiving, system shutdown)

* Data Archiving Strategies and Compliance

* Performance Tuning in the New Environment (Indexing, query optimization)

* Data Governance and Ownership in the New System

* Post-Migration Audit and Reporting

* Lessons Learned and Knowledge Transfer

  • Recommended Resources:

* Articles: "Post-Migration Checklist for Success."

* Templates: Post-implementation review report templates.

Week 10: Capstone Project & Best Practices

  • Learning Objectives:

* Apply all learned concepts to plan a comprehensive data migration project.

* Articulate best practices for managing complex data migrations.

* Present a data migration plan to a simulated stakeholder.

* Refine understanding of project management principles for data migration.

  • Topics:

* Capstone Project: Design a complete data migration plan for a hypothetical scenario (e.g., migrating customer data from an old ERP to a new cloud ERP). This includes:

* Project Charter/Scope

* Source/Target Analysis Summary

* Sample Data Mapping & Transformation Rules

* Migration Strategy & Tool Selection Justification

* Testing Plan

* Validation & Reconciliation Approach

* Cutover Plan

* Rollback Plan

* Timeline & Resource Estimate

* Review of Data Migration Best Practices

gemini Output

This document outlines a comprehensive plan for the upcoming data migration, detailing the strategy, technical specifications, validation procedures, and contingency plans necessary for a successful transition. It serves as a foundational deliverable, ensuring all stakeholders have a clear understanding of the migration process from end to end.


Data Migration Planner: Comprehensive Plan

1. Executive Summary

This Data Migration Plan details the strategic approach for migrating critical data from the existing [Source System Name] to the new [Target System Name]. The objective is to ensure a seamless, accurate, and secure transfer of data while minimizing downtime and data integrity risks. This plan covers field mapping, data transformation rules, validation scripts, rollback procedures, and a detailed timeline, providing a robust framework for execution.

2. Scope of Migration

The scope of this migration includes the following critical data entities:

  • In-Scope Data Entities:

* Customer Records (e.g., personal details, contact information)

* Product Catalog (e.g., product descriptions, SKUs, pricing)

* Order History (e.g., order details, line items, status)

* [Add other specific data entities as required, e.g., User Accounts, Inventory, Financial Transactions]

  • Out-of-Scope Data:

* Archived data older than [X years] (unless specifically requested)

* Temporary or transient system data (e.g., session data, log files)

* [Add other specific out-of-scope data]

Source System(s): [e.g., Legacy CRM Database (MySQL), ERP System (SQL Server)]

Target System(s): [e.g., Salesforce CRM, SAP S/4HANA, Custom Microservices Platform (PostgreSQL)]

3. Data Migration Strategy & Approach

Migration Method:

  • [Choose One: Big Bang / Phased Approach]

* Big Bang: All data migrated simultaneously over a defined cutover period. Suitable for smaller datasets or when the target system replaces the source entirely with minimal parallel operations.

* Phased Approach: Data migrated in stages (e.g., by module, by geography, by data entity). Allows for incremental testing and reduced risk, but requires managing data consistency across systems during the transition.

Recommendation: A Phased Approach is recommended to mitigate risks, allow for iterative testing, and minimize impact on business operations. The migration will be segmented by [e.g., data entity type: Customers first, then Products, then Orders].

Migration Environment:

Data migration will follow a structured environment progression:

  1. Development (DEV): Initial script development and unit testing.
  2. Test (TEST): Integration testing, end-to-end data flow validation.
  3. Staging (UAT/PRE-PROD): User Acceptance Testing (UAT) with business users, performance testing, and dry runs mirroring production.
  4. Production (PROD): Final migration execution during a planned maintenance window.

Tools & Technologies:

  • ETL Tool: [e.g., Apache Nifi, Talend, SSIS, Informatica, Custom Python/Java Scripts]
  • Database Tools: SQL Client tools for data extraction, loading, and validation.
  • Version Control: Git for managing all scripts and configuration files.

4. Field Mapping Documentation

This section details the precise mapping of fields from the source system to the target system. This is a critical step to ensure data integrity and compatibility. A sample table structure is provided below; a full mapping document will be maintained in a separate, version-controlled spreadsheet or dedicated tool.

Example Field Mapping Table:

| Source System Table | Source Field Name | Source Data Type | Target System Table | Target Field Name | Target Data Type | Transformation Rule ID | Notes/Comments |
| :------------------ | :---------------- | :--------------- | :------------------ | :---------------- | :--------------- | :--------------------- | :------------- |
| Customers | CustomerID | INT | CRM_Accounts | AccountID | UUID | TRF-001 | Primary Key. Generate UUID. |
| Customers | FirstName | VARCHAR(50) | CRM_Accounts | FirstName | VARCHAR(100) | - | Direct map. |
| Customers | LastName | VARCHAR(50) | CRM_Accounts | LastName | VARCHAR(100) | - | Direct map. |
| Customers | AddressLine1 | VARCHAR(100) | CRM_Accounts | BillingAddress | VARCHAR(255) | TRF-002 | Concatenate Address fields. |
| Customers | City | VARCHAR(50) | CRM_Accounts | BillingCity | VARCHAR(100) | - | Direct map. |
| Customers | PhoneNum | VARCHAR(20) | CRM_Contacts | PhoneNumber | VARCHAR(30) | TRF-003 | Format to E.164. |
| Products | ProductID | INT | Catalog_Items | ItemID | INT | - | Primary Key. Direct map. |
| Products | CategoryCode | VARCHAR(10) | Catalog_Items | Category | VARCHAR(50) | TRF-004 | Map code to descriptive name. |
| Orders | OrderDate | DATETIME | Sales_Orders | OrderPlacedDate | DATE | TRF-005 | Extract date part only. |

Key Considerations for Field Mapping:

  • Primary Keys & Foreign Keys: Ensure unique identification and referential integrity are maintained or re-established. New UUIDs may be generated for target system PKs.
  • Data Type Compatibility: Address mismatches (e.g., INT to VARCHAR, DATETIME to DATE).
  • Data Length: Ensure target fields can accommodate source data lengths.
  • Mandatory Fields: Identify and ensure all mandatory target fields are populated, either directly mapped or through transformation.
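The data-length consideration above can be made machine-checkable. This is a sketch of a mapping row as a small data structure with an automated truncation check; the class and example rows are illustrative, not part of the formal mapping document.

```python
from dataclasses import dataclass

@dataclass
class FieldMapping:
    """One row of the mapping table, reduced to what the length check needs."""
    source_field: str
    source_len: int      # declared VARCHAR length; 0 = non-string type
    target_field: str
    target_len: int
    rule_id: str = "-"

    def length_ok(self) -> bool:
        # A target field must be at least as wide as its source.
        return self.source_len == 0 or self.target_len >= self.source_len

mappings = [
    FieldMapping("FirstName", 50, "FirstName", 100),
    FieldMapping("AddressLine1", 100, "BillingAddress", 255, "TRF-002"),
    FieldMapping("Notes", 500, "ShortNote", 100),   # would truncate
]
for m in mappings:
    if not m.length_ok():
        print(f"WARNING: {m.source_field} -> {m.target_field} may truncate")
```

Running a check like this over the full mapping spreadsheet catches truncation risks before any data moves.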

5. Data Transformation Rules

Transformation rules define how source data is manipulated to fit the target system's structure, format, and business logic. Each rule will be documented with a unique ID, description, and the fields it impacts.

Example Transformation Rules:

  • TRF-001: CustomerID to AccountID (UUID Generation)

* Description: Generate a new Universally Unique Identifier (UUID) for each CustomerID from the source system to serve as the AccountID in the target system. A lookup table will maintain the mapping between old and new IDs for historical reference if needed.

* Source Field(s): Customers.CustomerID

* Target Field(s): CRM_Accounts.AccountID

* Logic: GENERATE_UUID() for each unique CustomerID.

  • TRF-002: Address Concatenation

* Description: Combine AddressLine1, AddressLine2, City, State, and ZipCode from the source Customers table into a single BillingAddress field in the target CRM_Accounts table.

* Source Field(s): Customers.AddressLine1, Customers.AddressLine2, Customers.City, Customers.State, Customers.ZipCode

* Target Field(s): CRM_Accounts.BillingAddress

* Logic: CONCAT(AddressLine1, ', ', AddressLine2, ', ', City, ', ', State, ' ', ZipCode) (handle NULLs appropriately).

  • TRF-003: Phone Number Formatting

* Description: Standardize phone numbers to E.164 format (e.g., +15551234567). Remove all non-numeric characters and prepend the country code if missing.

* Source Field(s): Customers.PhoneNum

* Target Field(s): CRM_Contacts.PhoneNumber

* Logic: REGEX_REPLACE(PhoneNum, '[^0-9]', ''), then apply formatting rules and country code logic.

  • TRF-004: Category Code Mapping

* Description: Map numeric or abbreviated CategoryCode from the source Products table to full descriptive Category names in the target Catalog_Items table using a predefined lookup table.

* Source Field(s): Products.CategoryCode

* Target Field(s): Catalog_Items.Category

* Logic: LOOKUP(CategoryCode, 'Category_Lookup_Table', 'Description')

* e.g., 'ELC' -> 'Electronics', 'CLO' -> 'Clothing'

  • TRF-005: Date Part Extraction

* Description: Extract only the date component from the OrderDate (which includes time) in the source Orders table to populate the OrderPlacedDate field in the target Sales_Orders table.

* Source Field(s): Orders.OrderDate

* Target Field(s): Sales_Orders.OrderPlacedDate

* Logic: CAST(OrderDate AS DATE) or equivalent database function.

  • TRF-006: Default Value Assignment

* Description: If Customers.Status is NULL or empty, default the CRM_Accounts.AccountStatus to 'Active'.

* Source Field(s): Customers.Status

* Target Field(s): CRM_Accounts.AccountStatus

* Logic: IF(Status IS NULL OR Status = '', 'Active', Status)

  • TRF-007: Data Cleansing (Trim Spaces)

* Description: Remove leading and trailing whitespace from all string fields during migration.

* Source Field(s): All VARCHAR/TEXT fields

* Target Field(s): Corresponding VARCHAR/TEXT fields

* Logic: TRIM(SourceField)
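Several of the rules above (TRF-001, TRF-002, TRF-003, TRF-006, TRF-007) can be sketched together in Python. The record shape, the US country-code default, and the comma-joined address format are assumptions for illustration.

```python
import re
import uuid

# TRF-001 crosswalk: old CustomerID -> generated AccountID UUID.
id_lookup: dict[int, str] = {}

def transform_customer(row: dict) -> dict:
    new_id = id_lookup.setdefault(row["CustomerID"], str(uuid.uuid4()))

    # TRF-002 + TRF-007: concatenate address parts, trimming and skipping NULLs.
    parts = [row.get(f) for f in
             ("AddressLine1", "AddressLine2", "City", "State", "ZipCode")]
    address = ", ".join(p.strip() for p in parts if p)

    # TRF-003: strip non-digits, then prepend assumed +1 country code if missing.
    digits = re.sub(r"[^0-9]", "", row.get("PhoneNum") or "")
    phone = "+1" + digits if len(digits) == 10 else "+" + digits

    # TRF-006: default NULL/empty status to 'Active'.
    status = row.get("Status") or "Active"

    return {"AccountID": new_id, "BillingAddress": address,
            "PhoneNumber": phone, "AccountStatus": status}

row = {"CustomerID": 7, "AddressLine1": " 1 Main St ", "AddressLine2": None,
       "City": "Springfield", "State": "IL", "ZipCode": "62701",
       "PhoneNum": "(555) 123-4567", "Status": ""}
print(transform_customer(row))
```

The `id_lookup` dictionary plays the role of the lookup table TRF-001 calls for; in practice it would be persisted so old-to-new ID mappings survive the run.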

6. Data Validation Scripts & Procedures

Robust validation is crucial to ensure data accuracy, completeness, and integrity post-migration.

6.1. Pre-Migration Data Profiling & Quality Checks:

  • Purpose: Identify and address data quality issues in the source system before migration.
  • Scripts:

* Duplicate Record Identification: SQL queries to find duplicate records based on key identifiers (e.g., CustomerID, Email).

* Missing Value Analysis: Identify fields with a high percentage of NULL or empty values.

* Data Type & Format Conformance: Check if data adheres to expected types and formats (e.g., dates are valid, numbers are numeric).

* Referential Integrity Checks: Verify foreign key relationships within the source system.

  • Procedure: Run profiling scripts, report findings, work with business users to cleanse or enrich data in the source system or define specific transformation rules for identified issues.
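The duplicate-identification query above can be run from Python as part of a scripted profiling pass. This sketch uses an in-memory SQLite table with illustrative data in place of the real source extract.

```python
import sqlite3

# Stand-in source table for the duplicate check.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Customers (CustomerID INTEGER, Email TEXT);
    INSERT INTO Customers VALUES (1, 'a@x.com'), (2, 'b@x.com'), (3, 'a@x.com');
""")

# Duplicate Record Identification on the Email key.
dupes = con.execute("""
    SELECT Email, COUNT(*) AS n
    FROM Customers
    GROUP BY Email
    HAVING COUNT(*) > 1
""").fetchall()
print(dupes)   # duplicate business keys to resolve before extraction
```

The same GROUP BY / HAVING pattern works for any candidate business key; the findings feed the cleansing work or a deduplication transformation rule.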

6.2. Post-Migration Validation (Target System):

  • Purpose: Verify the successful and accurate transfer of data to the target system.
  • Scripts/Methods:

* Row Count Validation:

* Script: SELECT COUNT(*) FROM SourceTable; vs. SELECT COUNT(*) FROM TargetTable;

* Expected Outcome: Counts should match for directly mapped tables, or reflect expected transformations (e.g., deduplication).

* Sum/Aggregate Validation:

* Script: SELECT SUM(Amount) FROM SourceTable; vs. SELECT SUM(Amount) FROM TargetTable;

* Expected Outcome: Sums of financial values, quantities, etc., should match or align with expected transformations.

* Data Integrity Checks:

* Primary Key Uniqueness: SELECT Field, COUNT(*) FROM TargetTable GROUP BY Field HAVING COUNT(*) > 1;

* Referential Integrity (Foreign Keys): Verify that foreign key values in target tables correctly reference existing primary keys.

* Mandatory Field Population: Ensure all NOT NULL fields in the target system are populated.

* Sample Data Verification:

* Method: Randomly select a statistically significant sample of records (e.g., 5-10% of the total) from both source and target and compare them field by field.
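The sampling method above can be sketched as a script. The record shapes are illustrative, and one mismatch is seeded deliberately so the comparison has something to find; the fixed random seed makes the audit sample reproducible.

```python
import random

# Stand-in source and target stores keyed by record ID.
source = {i: {"name": f"cust{i}", "city": "Springfield"} for i in range(100)}
target = dict(source)
target[42] = {"name": "cust42", "city": "Shelbyville"}   # seeded mismatch

random.seed(0)                                 # reproducible audit sample
sample = random.sample(sorted(source), k=10)   # 10% of records

# Field-by-field comparison over the sampled keys.
mismatches = [
    (key, field, source[key][field], target[key][field])
    for key in sample
    for field in source[key]
    if source[key][field] != target[key][field]
]
print(f"checked {len(sample)} records, {len(mismatches)} field mismatches")
```

Any mismatch tuple (key, field, source value, target value) goes straight into the defect log for triage before sign-off.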

\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}