Data Migration Planner
Run ID: 69cc7b043e7fb09ff16a24ce | 2026-04-01 | Development

This document outlines the detailed code and structural components for planning a comprehensive data migration. It encompasses field mapping, transformation rules, validation scripts, rollback procedures, and a framework for timeline estimates. The provided code snippets are in Python, a widely used language for data migration scripting due to its versatility and extensive libraries.


Data Migration Planner: Code Generation Deliverable

This deliverable provides a foundational set of code components designed to facilitate the planning and execution of your data migration. Each section includes well-commented, production-ready code snippets and explanations to help you understand, adapt, and extend them for your specific migration needs.

Introduction

A successful data migration requires meticulous planning and robust execution. This code generation step provides a practical framework, using Python, to define the critical aspects of your migration: how data fields map from source to target, what transformations are applied, how data integrity is validated, how to recover from potential issues, and how to structure your project timeline.

The following sections detail the code for each critical component.

1. Data Migration Configuration (High-Level)

A central configuration file or module is crucial for managing migration parameters. This example demonstrates a basic structure to define source and target connection details, and other global settings.

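A minimal sketch of such a configuration module, using the names described in the explanation below (`MigrationConfig`, `TABLES_TO_MIGRATE`, `get_connection_string`); the specific database types, environment variable names, and defaults are illustrative assumptions:

```python
import os

class MigrationConfig:
    """Centralized migration settings (illustrative sketch).

    Credentials are read from environment variables so they never live
    in source control; non-sensitive knobs are plain class attributes.
    """
    TABLES_TO_MIGRATE = ["customers", "products", "orders", "order_items"]
    BATCH_SIZE = 5000
    # Database engine per role; both assumed PostgreSQL for this sketch.
    DB_TYPES = {"SOURCE": "postgresql", "TARGET": "postgresql"}

    @classmethod
    def get_connection_string(cls, role: str) -> str:
        """Build a SQLAlchemy-style URL for 'SOURCE' or 'TARGET'."""
        role = role.upper()
        host = os.environ.get(f"{role}_DB_HOST", "localhost")
        port = os.environ.get(f"{role}_DB_PORT", "5432")
        name = os.environ.get(f"{role}_DB_NAME", "migration_db")
        user = os.environ.get(f"{role}_DB_USER", "migrator")
        password = os.environ.get(f"{role}_DB_PASSWORD", "")
        return f"{cls.DB_TYPES[role]}://{user}:{password}@{host}:{port}/{name}"
```

Keeping connection details behind one method makes it easy to swap database types per environment without touching the migration scripts themselves.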
**Explanation:**
This `MigrationConfig` class provides a centralized place to manage all configuration parameters. It utilizes environment variables for sensitive information (like passwords) and allows for easy modification of database types, host details, and migration-specific settings like `TABLES_TO_MIGRATE`. The `get_connection_string` methods abstract the database-specific connection string generation.

2. Field Mapping Definition

Field mapping defines how fields from your source system correspond to fields in your target system, including any transformations that need to occur.
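One common way to express such a mapping is as plain data, so it can be reviewed and versioned separately from the ETL code. The field names and transformation stubs below are hypothetical placeholders, not the project's actual schema:

```python
# Each entry: source field -> (target field, transformation name or None).
# All field names here are hypothetical, for illustration only.
FIELD_MAPPING = {
    "cust_id":    ("customer_id",  None),
    "cust_name":  ("full_name",    "clean_string"),
    "email_addr": ("email",        "validate_email"),
    "signup_dt":  ("created_date", "format_date"),
}

TRANSFORMATIONS = {
    "clean_string":   lambda v: v.strip() if isinstance(v, str) else v,
    "validate_email": lambda v: v.strip().lower() if v and "@" in v else None,
    "format_date":    lambda v: v,  # stub; a real date rule would live elsewhere
}

def apply_mapping(source_row: dict) -> dict:
    """Produce a target-shaped row from a source row using FIELD_MAPPING."""
    target = {}
    for src_field, (tgt_field, transform) in FIELD_MAPPING.items():
        value = source_row.get(src_field)
        if transform is not None:
            value = TRANSFORMATIONS[transform](value)
        target[tgt_field] = value
    return target
```

Separating the mapping (data) from the transformations (code) lets analysts review the mapping document while engineers maintain the rules.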


Detailed Study Plan: Mastering Data Migration Planning and Architecture

This document outlines a comprehensive study plan designed to equip an individual with the knowledge and skills required to effectively plan and architect complex data migration projects. This plan integrates theoretical understanding with practical application, covering key phases from initial assessment to post-migration validation.


Study Plan Goal

To develop a robust understanding of data migration principles, methodologies, tools, and best practices, enabling the successful planning, design, and execution of data migration projects with a focus on architecture, data integrity, and business continuity.

Learning Objectives

Upon completion of this study plan, the learner will be able to:

  1. Understand Data Migration Fundamentals: Define data migration, identify common triggers, challenges, and types of migration strategies (e.g., "big bang," "phased," "trickle").
  2. Assess Source & Target Systems: Conduct thorough analyses of source data systems, target environments, and associated business processes to identify requirements and constraints.
  3. Design Data Migration Architecture: Develop robust migration architectures, including data flow diagrams, staging areas, and integration points.
  4. Master Data Mapping & Transformation: Create detailed field-level mappings, define complex transformation rules, and handle data quality issues.
  5. Implement Validation & Testing Strategies: Design comprehensive data validation scripts, testing plans (unit, integration, user acceptance), and reconciliation procedures.
  6. Develop Rollback & Contingency Plans: Formulate effective rollback procedures and disaster recovery strategies to mitigate risks.
  7. Estimate Timelines & Resources: Accurately estimate project timelines, resource requirements, and budget for data migration initiatives.
  8. Select Appropriate Tools & Technologies: Evaluate and recommend suitable data migration tools, ETL platforms, and scripting languages.
  9. Manage Project Risks & Stakeholders: Identify potential risks, develop mitigation strategies, and communicate effectively with project stakeholders.

Weekly Schedule

This 8-week schedule provides a structured approach to learning, with each week building upon the previous one. Allocate approximately 10-15 hours per week for study, including reading, exercises, and project work.


Week 1: Introduction to Data Migration & Fundamentals

  • Topics:
    * What is Data Migration? (Definition, Triggers, Business Drivers)
    * Types of Data Migration (Storage, Database, Application, Cloud)
    * Migration Methodologies (Big Bang vs. Phased vs. Trickle)
    * Common Challenges and Pitfalls
    * Data Migration Lifecycle Overview (Phases: Scope, Analyze, Design, Build, Test, Execute, Validate)
  • Activities:
    * Read introductory articles and whitepapers.
    * Watch overview videos on data migration.
    * Identify a hypothetical data migration scenario (e.g., ERP migration, cloud migration) to use as a case study throughout the plan.

Week 2: Source & Target System Analysis

  • Topics:
    * Data Profiling and Discovery Techniques
    * Understanding Source Data Models, Schemas, and Data Dictionaries
    * Assessing Data Quality (Completeness, Accuracy, Consistency, Uniqueness, Timeliness)
    * Target System Requirements and Constraints
    * Business Process Impact Analysis
    * Stakeholder Identification and Requirements Gathering
  • Activities:
    * Practice data profiling using a sample dataset (e.g., publicly available datasets).
    * Document the source and target system characteristics for your hypothetical case study.
    * Outline initial business requirements for the migration.
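The profiling metrics named above (completeness, uniqueness) need only a few lines of plain Python; the row/column shapes here are assumptions for illustration:

```python
def profile_column(rows, column):
    """Basic profile for one column of a list-of-dicts dataset:
    completeness (share of non-empty values) and uniqueness
    (share of distinct values among the non-empty ones)."""
    values = [r.get(column) for r in rows]
    non_null = [v for v in values if v not in (None, "")]
    total = len(values)
    return {
        "completeness": len(non_null) / total if total else 0.0,
        "uniqueness": len(set(non_null)) / len(non_null) if non_null else 0.0,
    }
```

Running this over every column of a sample extract gives a quick first-pass data quality report before any tooling is chosen.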

Week 3: Data Migration Architecture Design

  • Topics:
    * Designing the Migration Architecture (Staging Areas, Data Lakes, Data Warehouses)
    * Choosing a Migration Approach (ETL, ELT, Custom Scripts)
    * Network and Infrastructure Considerations
    * Security and Compliance in Data Migration
    * High-Level Data Flow Diagrams
  • Activities:
    * Draw a high-level architectural diagram for your case study, including source, staging, and target.
    * Justify your chosen migration approach (ETL/ELT/Custom) for the case study.
    * Consider security implications and how to address them.

Week 4: Data Mapping & Transformation Rules

  • Topics:
    * Field-Level Mapping Techniques
    * Defining Transformation Rules (Data Type Conversion, Aggregation, Derivation, Cleansing)
    * Handling Missing Data and Defaults
    * Referential Integrity and Key Management
    * Creating a Data Mapping Document
  • Activities:
    * Develop a detailed data mapping document for a critical entity (e.g., customer, product) in your case study, including transformation rules.
    * Practice writing transformation logic in pseudo-code or a scripting language (e.g., SQL, Python).

Week 5: Data Quality, Validation & Testing

  • Topics:
    * Data Quality Management throughout the Migration Process
    * Developing Data Validation Scripts (Pre-migration, Post-migration, In-flight)
    * Designing a Comprehensive Testing Strategy (Unit, Integration, UAT, Performance)
    * Reconciliation Procedures and Reporting
    * Error Handling and Logging Mechanisms
  • Activities:
    * Draft validation scripts (pseudo-code/SQL) for your case study to check data integrity post-migration.
    * Outline a testing plan for your case study, specifying test cases and expected outcomes.
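A validation script of the kind described above often starts with simple count reconciliation between source and target. A minimal sketch (entity names are hypothetical):

```python
def reconcile_counts(source_counts: dict, target_counts: dict) -> list:
    """Compare per-entity record counts between source and target.

    Returns a list of (entity, source_count, target_count) tuples for
    every entity whose counts do not match; an empty list means the
    count check passed.
    """
    mismatches = []
    for entity, src in source_counts.items():
        tgt = target_counts.get(entity)
        if tgt != src:
            mismatches.append((entity, src, tgt))
    return mismatches
```

Count checks catch dropped or duplicated records cheaply; field-level checks (formats, referential integrity) then run only on entities that pass.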

Week 6: Rollback Procedures & Contingency Planning

  • Topics:
    * Importance of Rollback Strategies
    * Designing Rollback Procedures (Database backups, Application snapshots, Data retention)
    * Developing Disaster Recovery Plans for Migration
    * Contingency Planning for Unexpected Issues
    * Go/No-Go Decision Criteria
  • Activities:
    * Develop a detailed rollback plan for a critical phase of your case study migration.
    * Identify potential failure points and propose contingency measures.
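The core idea of a rollback procedure, leaving the target untouched whenever a load fails, can be illustrated with a transactional sketch. SQLite (Python stdlib) stands in for the real target database here, and the table and columns are hypothetical:

```python
import sqlite3

def load_with_rollback(conn, rows):
    """Insert rows inside an explicit transaction.

    On any database error the whole batch is rolled back, so the
    target table is left exactly as it was before the load started.
    Returns True on success, False after a rollback.
    """
    conn.isolation_level = None          # manage transactions explicitly
    cur = conn.cursor()
    try:
        cur.execute("BEGIN")
        cur.executemany("INSERT INTO customers (id, name) VALUES (?, ?)", rows)
        cur.execute("COMMIT")
        return True
    except sqlite3.Error:
        cur.execute("ROLLBACK")          # undo every row of the failed batch
        return False
```

Real rollback plans add database backups and snapshots on top of this, but the transactional all-or-nothing load is the first line of defense.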

Week 7: Project Planning, Estimation & Tooling

  • Topics:
    * Timeline Estimation Techniques (e.g., PERT, Three-Point Estimation)
    * Resource Planning and Allocation
    * Budgeting for Data Migration Projects
    * Evaluating Data Migration Tools (e.g., Informatica, Talend, SQL Server Integration Services, custom scripts)
    * Risk Management and Mitigation Strategies
    * Stakeholder Communication and Reporting
  • Activities:
    * Create a high-level project plan and timeline estimate for your case study.
    * Research and compare 2-3 data migration tools relevant to your case study, justifying a selection.
    * Identify key risks and propose mitigation strategies.
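The three-point (PERT) estimation technique named above combines an optimistic, a most-likely, and a pessimistic estimate into an expected duration with a spread:

```python
def three_point_estimate(optimistic, most_likely, pessimistic):
    """PERT (beta-distribution) estimate for one task.

    expected = (O + 4M + P) / 6   -- weighted toward the most-likely value
    std_dev  = (P - O) / 6        -- rough measure of estimate uncertainty
    """
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev
```

For example, a task estimated at 4 / 6 / 14 days yields an expected duration of 7 days with a standard deviation of about 1.7 days; summing expected values (and combining variances) across tasks gives the project-level estimate.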

Week 8: Comprehensive Case Study Review & Presentation

  • Topics:
    * Review of all study plan components
    * Integration of knowledge into a cohesive migration plan
    * Best practices and lessons learned
  • Activities:
    * Consolidate all documentation for your hypothetical case study into a complete "Data Migration Plan" document.
    * Prepare a presentation summarizing your migration plan, architecture, and key considerations.
    * Self-assessment and identification of areas for further study.


Recommended Resources

Books:

  • "Data Migration" by Johny Morris: A comprehensive guide covering various aspects of data migration.
  • "The Data Warehouse Toolkit" by Ralph Kimball and Margy Ross: While focused on data warehousing, it provides invaluable insights into data modeling, ETL, and data quality.
  • "Designing Data-Intensive Applications" by Martin Kleppmann: Excellent for understanding underlying systems and architectural decisions.

Online Courses & Platforms:

  • Coursera/edX: Look for courses on "Data Engineering," "ETL Development," "Cloud Data Migration" (e.g., AWS, Azure, GCP specific courses).
  • LinkedIn Learning/Udemy: Search for specific tool tutorials (e.g., "Informatica PowerCenter," "Talend Open Studio," "SSIS").
  • Microsoft Learn / AWS Training / Google Cloud Training: Official certifications and learning paths for cloud-specific data migration.

Articles & Whitepapers:

  • Gartner/Forrester Reports: Search for data migration trends, vendor comparisons, and best practices.
  • Industry Blogs: Read blogs from data migration specialists, consulting firms, and technology vendors.
  • Academic Papers: For deeper dives into specific algorithms or methodologies.

Tools (Hands-on Practice):

  • SQL Database: PostgreSQL, MySQL, SQL Server (for data profiling, scripting, validation).
  • ETL Tools (Community/Trial Editions): Talend Open Studio, Pentaho Data Integration (Kettle), Apache NiFi.
  • Spreadsheet Software: Microsoft Excel/Google Sheets (for initial data mapping, small-scale profiling).
  • Diagramming Tools: Lucidchart, draw.io, Microsoft Visio (for architecture diagrams, data flow).
  • Programming Languages: Python (with libraries like Pandas for data manipulation, cleaning) and SQL.

Milestones

  • End of Week 2: Completed initial source/target assessment and documented business requirements for case study.
  • End of Week 4: Designed high-level migration architecture and completed detailed data mapping for a key entity.
  • End of Week 6: Developed draft validation scripts and a rollback plan for the case study.
  • End of Week 8: Finalized comprehensive "Data Migration Plan" document and prepared a summary presentation for the case study.

Assessment Strategies

  1. Self-Assessment Quizzes: Regularly test understanding of key concepts using flashcards or self-made quizzes.
  2. Case Study Deliverables: Completion of documented outputs for the chosen case study (e.g., architectural diagrams, data mapping, validation scripts, project plan). These will serve as practical demonstrations of acquired skills.
  3. Peer Review (Optional): If possible, share your case study outputs with a colleague or mentor for feedback.
  4. Presentation: Deliver a summary presentation of your data migration plan for the case study, demonstrating your ability to articulate the strategy and justify decisions.
  5. Practical Exercises: Apply learned concepts by solving small data transformation or validation challenges using SQL or Python.
  6. Certification: Consider pursuing relevant industry certifications (e.g., cloud data engineer certifications) as a formal validation of skills.

This detailed study plan provides a robust framework for mastering data migration planning and architecture. Consistent effort and practical application of knowledge will be key to achieving the defined learning objectives.

transformations.py:

```python
from datetime import datetime
import re
from typing import Optional

def clean_string(value: Optional[str]) -> Optional[str]:
    """Strip leading/trailing whitespace; pass None through."""
    if value is None:
        return None
    return str(value).strip()

# Common fallback formats tried when the stated input format fails.
POSSIBLE_DATE_FORMATS = [
    '%Y-%m-%d', '%Y/%m/%d', '%m-%d-%Y', '%m/%d/%Y',
    '%Y-%m-%d %H:%M:%S', '%Y-%m-%d %H:%M:%S.%f',
]

def format_date(value: Optional[str],
                input_format: str = '%Y-%m-%d',
                output_format: str = '%Y-%m-%d') -> Optional[str]:
    """Convert a date string from one format to another.

    Tries `input_format` first, then each common fallback format;
    logs a warning and returns None if nothing parses.
    """
    if value is None:
        return None
    for fmt in [input_format] + POSSIBLE_DATE_FORMATS:
        try:
            return datetime.strptime(str(value), fmt).strftime(output_format)
        except (ValueError, TypeError):
            continue
    # Log the failure for review rather than aborting the batch.
    print(f"Warning: Could not format date '{value}'. Returning None.")
    return None

def format_date_or_null(value: Optional[str]) -> Optional[str]:
    """Format a date using the default formats, returning None if invalid."""
    return format_date(value)

def validate_email(value: Optional[str]) -> Optional[str]:
    """Return the cleaned, lowercased email, or None if the format is invalid."""
    if value is None:
        return None
    email = str(value).strip().lower()
    if re.match(r"[^@]+@[^@]+\.[^@]+", email):
        return email
    print(f"Warning: Invalid email format '{value}'. Returning None.")
    return None

def map_user_status(value: Optional[str]) -> str:
    """Map source user status codes to the target system's status values."""
    if value is None:
        return 'inactive'  # default status for missing values
    status_map = {
        'ACTIVE': 'active',
        'PENDING': 'pending',
        'INACTIVE': 'inactive',
        'SUSPENDED': 'suspended',
    }
    return status_map.get(str(value).upper(), 'unknown')

def lookup_address_id(old_address_id: Optional[int]) -> Optional[int]:
    """Look up the new address ID for a legacy one.

    In practice this queries a staging mapping table or service; the
    in-memory dict below is a sketch standing in for that lookup.
    """
    if old_address_id is None:
        return None
    address_id_map = {}  # populated during the address migration step
    return address_id_map.get(old_address_id)
```

This document outlines a comprehensive plan for the upcoming data migration, covering all critical aspects from field mapping and transformation rules to validation, rollback procedures, and a detailed timeline. This plan serves as a foundational deliverable to ensure a smooth, efficient, and successful transition of data from the source system to the new target environment.


Data Migration Plan: [Project Name]

Document Version: 1.0

Date: October 26, 2023

Prepared For: [Customer Name]

Prepared By: PantheraHive Solutions Team


Executive Summary

This Data Migration Plan details the strategy and execution roadmap for migrating critical business data from [Source System Name, e.g., Legacy CRM] to [Target System Name, e.g., New ERP System]. The primary objective is to ensure the integrity, accuracy, and completeness of all migrated data, minimizing business disruption and facilitating a seamless transition to the new system. This plan encompasses detailed field mapping, data transformation rules, robust validation procedures, comprehensive rollback strategies, and a phased timeline with key milestones.


1. Introduction & Scope

This section defines the purpose and boundaries of the data migration project.

1.1. Purpose

The purpose of this document is to provide a structured, detailed, and actionable plan for the data migration, ensuring all stakeholders have a clear understanding of the process, responsibilities, and expected outcomes. It aims to mitigate risks associated with data loss, corruption, or inconsistency during the transition.

1.2. Scope of Migration

The migration will encompass the following data entities and their associated attributes:

  • In-Scope Data Entities:
    * Customers (including contact information, addresses, historical orders)
    * Products (including descriptions, SKUs, pricing, inventory levels)
    * Sales Orders (historical and open orders)
    * Invoices (historical and open)
    * Employees (basic HR data, roles)
    * [Add any other specific entities, e.g., Vendors, Projects, etc.]
  • Out-of-Scope Data Entities:
    * Archived data older than [e.g., 5 years] (unless specifically requested)
    * Temporary records or transactional logs not required for business operations in the new system.
    * [Specify any other data not being migrated]

1.3. Objectives

  • Migrate 100% of in-scope data accurately and completely.
  • Ensure data integrity and consistency between source and target systems.
  • Minimize downtime for critical business operations during the migration window.
  • Provide robust validation and rollback mechanisms to address potential issues.
  • Deliver a high-quality data set that fully supports the functionality of the new [Target System Name].

2. Data Migration Strategy

The overall strategy involves a phased approach, beginning with discovery and planning, followed by iterative development, testing, and ultimately, the production cutover.

2.1. High-Level Approach

  1. Discovery & Analysis: Understand source data, target schema, and business requirements.
  2. Mapping & Design: Define field mappings, transformation rules, and validation logic.
  3. Development: Build extraction, transformation, and loading (ETL) scripts/tools.
  4. Testing & Iteration: Perform multiple cycles of data migration tests, validating data and refining scripts.
  5. Pre-Migration Activities: Data cleansing, system readiness checks, backups.
  6. Production Migration: Execute the final migration during a defined cutover window.
  7. Post-Migration Activities: Validation, reconciliation, system go-live, and ongoing monitoring.

2.2. Migration Tooling

  • Primary ETL Tool: [e.g., Apache Nifi, Talend, SSIS, Custom Python Scripts, Dell Boomi, Salesforce Data Loader, etc.]
  • Data Validation Tools: SQL queries, custom scripts, data comparison tools.
  • Version Control: Git for all scripts and mapping documents.

3. Source and Target Systems

3.1. Source System Details

  • System Name: [e.g., Legacy CRM, Oracle E-Business Suite 11i]
  • Database Type: [e.g., SQL Server 2012, Oracle 11g, MySQL]
  • Key Data Schemas/Modules: [e.g., CRM.dbo, OE.OrderEntry]
  • Access Method: [e.g., Direct DB connection, API, Flat File Export]
  • Data Volume (Estimated): [e.g., 500 GB, 10 million records]

3.2. Target System Details

  • System Name: [e.g., Salesforce Sales Cloud, SAP S/4HANA, Microsoft Dynamics 365]
  • Database Type: [e.g., Salesforce Objects, HANA DB, SQL Server]
  • Key Data Modules/Objects: [e.g., Account, Contact, Opportunity, Product2]
  • Access Method: [e.g., Salesforce API, SAP IDOCs/BAPIs, Direct DB Load]
  • Data Volume (Estimated): [e.g., 600 GB (post-transformation), 12 million records]

4. Data Inventory & Scope

A detailed inventory of all data entities and their attributes will be maintained in a separate "Data Migration Mapping Document." This section provides a summary.

4.1. Key Data Entities and Relationships

  • Customers: Customer_ID, Name, Address, Email, Phone, Account_Status, Creation_Date.
    * Relationships: One-to-many with Orders, One-to-many with Invoices.
  • Products: Product_ID, SKU, Name, Description, Price, Category, Inventory_Level.
    * Relationships: One-to-many with Order Items.
  • Sales Orders: Order_ID, Customer_ID, Order_Date, Total_Amount, Status, Shipping_Address.
    * Relationships: One-to-many with Order Items.
  • Order Items: Order_Item_ID, Order_ID, Product_ID, Quantity, Unit_Price.

4.2. Data Volume Estimates (per Entity)

| Data Entity  | Source Records (Est.) | Target Records (Est.) | Source Size (Est.) | Target Size (Est.) |
| :----------- | :-------------------- | :-------------------- | :----------------- | :----------------- |
| Customers    | 2,500,000             | 2,500,000             | 10 GB              | 12 GB              |
| Products     | 500,000               | 500,000               | 2 GB               | 2.5 GB             |
| Sales Orders | 10,000,000            | 10,000,000            | 40 GB              | 45 GB              |
| Order Items  | 50,000,000            | 50,000,000            | 100 GB             | 110 GB             |
| Total        | 63,000,000            | 63,000,000            | 152 GB             | 169.5 GB           |


5. Data Mapping & Transformation

This is the core of the migration, defining how each piece of data moves and changes.

5.1. Field Mapping (Example for 'Customer' Entity)

A detailed "Data Migration Mapping Document" will be maintained separately, containing all entities and fields. Below is an illustrative example for the 'Customer' entity.

| Source Table.Field (Legacy CRM) | Target Object.Field (New ERP) | Data Type (Source) | Data Type (Target) | Transformation Rule (if any) | Validation Rule (Post-Mig.) | Notes / Comments |
| :------------------------------ | :---------------------------- | :----------------- | :----------------- | :--------------------------- | :-------------------------- | :---------------------------------------------------- |
| CRM.Customers.CustomerID | Account.Legacy_ID__c | INT | Text (External ID) | Direct Map | NOT NULL, Unique | Used for reconciliation and rollback. |
| CRM.Customers.CompanyName | Account.Name | VARCHAR(255) | Text (255) | Direct Map | NOT NULL | Mandatory field. |
| CRM.Customers.FirstName | Contact.FirstName | VARCHAR(100) | Text (100) | Direct Map | NOT NULL | Migrated to associated Contact record. |
| CRM.Customers.LastName | Contact.LastName | VARCHAR(100) | Text (100) | Direct Map | NOT NULL | Migrated to associated Contact record. |
| CRM.Customers.AddressLine1 | Account.BillingStreet | VARCHAR(255) | Text (255) | Concatenate with AddressLine2 | N/A | |
| CRM.Customers.AddressLine2 | Account.BillingStreet | VARCHAR(255) | Text (255) | Concatenate with AddressLine1 | N/A | |
| CRM.Customers.City | Account.BillingCity | VARCHAR(100) | Text (100) | Direct Map | N/A | |
| CRM.Customers.State | Account.BillingState | VARCHAR(50) | Text (50) | Standardize to 2-letter code | N/A | e.g., "California" -> "CA" |
| CRM.Customers.ZipCode | Account.BillingPostalCode | VARCHAR(20) | Text (20) | Direct Map | N/A | |
| CRM.Customers.Email | Contact.Email | VARCHAR(255) | Email | Direct Map, Lowercase | Valid Email Format | Only primary email migrated. |
| CRM.Customers.AccountStatus | Account.Status__c | VARCHAR(50) | Picklist (Text) | Map Active->Open, Inactive->Closed, Pending->Prospect | N/A | Default to Open if null in source. |
| CRM.Customers.CreationDate | Account.CreatedDate | DATETIME | DateTime | Direct Map | NOT NULL, Future Date Check | Ensure original creation date is preserved. |
| CRM.Customers.LastUpdated | Account.LastModifiedDate | DATETIME | DateTime | Use migration timestamp | N/A | Target system will overwrite with actual modification. |

5.2. Transformation Rules

Detailed transformation logic will be documented for each field requiring manipulation. Examples include:

  • Standardization:
    * States/Provinces: Convert full names (e.g., "California") to 2-letter ISO codes (e.g., "CA").
    * Phone Numbers: Format all phone numbers to (XXX) XXX-XXXX format.
    * Dates: Convert all date formats to YYYY-MM-DD HH:MM:SS (UTC).
  • Concatenation:
    * Combine AddressLine1 and AddressLine2 into a single BillingStreet field.
    * Combine FirstName and LastName into a FullName field if required (while also mapping separately).
  • Splitting:
    * If a source field contains multiple values (e.g., "Product Tags: Tag1, Tag2"), split them into multiple target records or a multi-select picklist.
  • Lookup & Mapping:
    * Map AccountStatus from source values (Active, Inactive, Pending) to target picklist values (Open, Closed, Prospect).
    * Look up ProductCategoryID from a mapping table based on SourceProductCategoryName.
  • Defaulting:
    * If a mandatory target field is null in the source, assign a default value (e.g., Account.Status__c defaults to Open).
  • Data Type Conversion:
    * Convert VARCHAR to INT or DECIMAL where applicable, handling non-numeric values gracefully (e.g., converting to NULL or 0).
  • Currency Conversion:
    * If multiple currencies exist, convert all historical transaction amounts to a single base currency (e.g., USD) using historical exchange rates as of the transaction date.
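Two of the standardization rules above can be sketched directly in Python. The state-code dictionary here is deliberately partial and illustrative; a real implementation would use a complete reference table:

```python
import re

# Partial, illustrative lookup; a real rule uses a full reference table.
STATE_CODES = {"california": "CA", "texas": "TX", "new york": "NY"}

def standardize_state(value):
    """Convert a full US state name to its 2-letter code.

    Values that are already 2 characters are assumed to be codes and
    are uppercased; unknown names pass through unchanged for review.
    """
    if value is None:
        return None
    v = value.strip()
    if len(v) == 2:
        return v.upper()
    return STATE_CODES.get(v.lower(), v)

def format_phone(value):
    """Normalize a US phone number to (XXX) XXX-XXXX.

    Strips every non-digit character first; returns None when the
    result is not exactly 10 digits so bad values surface for review.
    """
    digits = re.sub(r"\D", "", value or "")
    if len(digits) != 10:
        return None
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
```

Passing unknown values through (rather than silently dropping them) keeps the rule auditable: anything that was not standardized shows up unchanged in the validation reports.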


6. Data Quality & Validation

Ensuring data quality is paramount. This section details procedures to verify the integrity and accuracy of the migrated data.

6.1. Pre-Migration Data Cleansing

Prior to migration, the following data cleansing activities will be performed on the source system:

  • Duplicate Detection & Merging: Identify and merge duplicate customer and product records.
  • Incomplete Data Remediation: Fill in missing mandatory fields where possible (e.g., default values for null addresses).
  • Invalid Data Correction: Correct malformed email addresses, phone numbers, or invalid date formats.
  • Deactivation/Archiving: Mark or remove old, irrelevant, or test data from the source system that should not be migrated.
  • Referential Integrity Checks: Ensure all foreign key relationships are valid within the source system.

6.2. Validation Scripts & Procedures

Validation will occur at multiple stages:

6.2.1. Extraction Validation (Source)

  • Script: validate_source_counts.sql
  • Purpose: Verify the number of records extracted matches expected counts from source queries.
  • Example: SELECT COUNT(*) FROM CRM.Customers;

6.2.2. Transformation Validation (Staging)

  • Script: validate_transformed_data.py (Python)
  • Purpose: Check data quality rules after transformation but before loading.
  • Example:
    * Count records with Account.Name IS NULL.
    * Verify Account.BillingState is a valid 2-letter code.
    * Check for duplicates based on target system's unique keys (e.g., Account.Legacy_ID__c).

6.2.3. Loading Validation (Target)

  • Script: validate_target_counts.sql or API calls.
  • Purpose: Confirm that all records intended for loading have been successfully inserted into the target system.
  • Example: SELECT COUNT(*) FROM Account WHERE Legacy_ID__c IS NOT NULL;

6.2.4. Post-Migration Data Reconciliation

  • Script: reconciliation_report.sql / Data
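Beyond raw counts, a reconciliation report of this kind can compare field-level fingerprints between source and target extracts. The sketch below is illustrative (row shape, key, and field names are assumptions, not the project's schema):

```python
import hashlib

def row_fingerprint(row: dict, fields) -> str:
    """Deterministic hash of selected fields, independent of field order."""
    canonical = "|".join(f"{f}={row.get(f)}" for f in sorted(fields))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def reconcile(source_rows, target_rows, key, fields):
    """Return the keys whose compared fields differ between source and target.

    A key present in the source but missing from the target is also
    reported, since its fingerprint cannot match.
    """
    src = {r[key]: row_fingerprint(r, fields) for r in source_rows}
    tgt = {r[key]: row_fingerprint(r, fields) for r in target_rows}
    return sorted(k for k in src if src[k] != tgt.get(k))
```

Hashing a canonical string of the compared fields keeps the comparison cheap even for millions of rows, and the list of mismatched keys feeds directly into the reconciliation report.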
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}