Data Migration Planner

Data Migration Planner: Code Generation Deliverable

This document provides a comprehensive set of code components and configuration examples designed to support the planning and execution of your data migration. These scripts and configurations serve as a foundational framework, illustrating best practices for field mapping, data transformation, validation, and rollback procedures.

The code is written in Python, a widely adopted language for data manipulation and scripting, ensuring readability, maintainability, and extensibility.


1. Overview of the Code Deliverable

The generated code aims to provide a modular and configurable approach to data migration. It separates concerns into distinct components:

  • Configuration (migration_config.yaml): connection details, field mappings, transformation rules, and validation settings.
  • Utilities (utils.py): logging setup, configuration loading, and credential resolution.
  • Migration logic: data extraction, field mapping and transformation, validation, and rollback procedures.

This framework is designed to be adapted to your specific data sources (e.g., relational databases, flat files, APIs) and target systems.


2. Data Migration Configuration (YAML Example)

A configuration-driven approach is crucial for managing complex migrations. This migration_config.yaml file defines all parameters, making the migration process transparent and easy to modify without changing core logic.

migration_config.yaml

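The original configuration content is not reproduced here. As an illustration only, a file of this kind might look like the sketch below; every key, table name, and environment-variable name is an assumption to be adapted to your actual source and target systems (the password_env_var convention matches get_db_credentials() in utils.py).

```yaml
# migration_config.yaml -- illustrative sketch only; all keys and values are assumptions
logging:
  level: INFO

source:
  type: postgresql
  host: source-db.example.com
  database: legacy_crm
  user: migration_reader
  password_env_var: SOURCE_DB_PASSWORD   # resolved from the environment by get_db_credentials()

target:
  type: postgresql
  host: target-db.example.com
  database: new_crm
  user: migration_writer
  password_env_var: TARGET_DB_PASSWORD

tables:
  - source_table: customers
    target_table: accounts
    field_mappings:
      customer_id: external_id
      company_name: name
    transformations:
      - {field: name, rule: trim_whitespace}
    validations:
      - {rule: row_count_match}
      - {rule: not_null, fields: [external_id, name]}

rollback:
  strategy: restore_from_backup    # alternative: delete_migrated_batch
```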
---

### 3. Core Migration Framework (Python Code)

This section provides the Python modules that implement the logic defined in the `migration_config.yaml`.

#### 3.1. `utils.py` - General Utilities


While the overarching workflow is "Data Migration Planner" and the current step is "plan_architecture," the specific instruction for this output is to "Create a detailed study plan with: weekly schedule, learning objectives, recommended resources, milestones, and assessment strategies."

To serve that request within the context of data migration, the following study plan is designed for an individual or team aiming to master the principles and practices of Data Migration Planning and Architecture.


Comprehensive Study Plan: Data Migration Planning & Architecture

1. Introduction and Purpose

This study plan outlines an intensive 8-week program designed to equip individuals with the knowledge and practical skills required to plan, design, and execute successful data migration projects. The curriculum covers foundational concepts, architectural considerations, practical methodologies, and essential tools, culminating in the ability to architect robust data migration solutions.

Target Audience: Aspiring Data Migration Specialists, Solution Architects, Data Engineers, Project Managers involved in data-intensive projects.

2. Learning Objectives

Upon successful completion of this study plan, participants will be able to:

  • Understand Data Migration Fundamentals: Define data migration, identify common challenges, and explain its lifecycle.
  • Analyze Source & Target Systems: Conduct thorough assessments of source and target environments, including data models, schemas, and dependencies.
  • Design Data Extraction Strategies: Select appropriate methods (full, incremental, CDC) and tools for efficient data extraction.
  • Develop Data Transformation Rules: Define and implement complex data mapping, cleansing, standardization, and aggregation logic.
  • Formulate Data Loading Methodologies: Choose effective loading techniques (bulk, API, streaming) while considering performance and integrity.
  • Implement Robust Data Validation: Design and execute pre- and post-migration validation scripts and quality assurance processes.
  • Plan Rollback and Error Handling: Develop comprehensive strategies for error detection, resolution, and full migration rollback.
  • Architect Secure & Compliant Solutions: Integrate security, privacy, and compliance requirements into migration plans.
  • Estimate Timelines & Resources: Develop realistic project timelines, resource allocations, and budget estimates for data migration.
  • Utilize Industry Tools & Best Practices: Gain familiarity with leading data migration tools and apply industry best practices.

3. Weekly Schedule

This schedule provides a structured approach, dedicating approximately 15-20 hours per week (mix of self-study, practical exercises, and resource review).

Week 1: Data Migration Fundamentals & Project Initiation

  • Topics:

* Introduction to Data Migration: Definition, types, common triggers (mergers, system upgrades, cloud adoption).

* Data Migration Lifecycle: Discovery, Design, Build, Test, Execute, Validate, Decommission.

* Key Roles & Responsibilities in Data Migration.

* Project Scoping & Feasibility Analysis.

* Identifying Stakeholders & Communication Planning.

* Introduction to Data Governance and Compliance in Migration.

  • Activities:

* Read foundational articles/chapters.

* Participate in discussion forums on migration challenges.

* Begin drafting a high-level project charter for a hypothetical migration scenario.

Week 2: Source & Target System Analysis, Data Profiling & Discovery

  • Topics:

* Deep Dive into Source System Analysis: Data models (relational, NoSQL), schemas, data types, constraints, relationships, data volume.

* Target System Analysis: Design goals, new schema definition, integration points.

* Data Profiling Techniques: Identifying data quality issues (missing values, inconsistencies, duplicates), data distribution, cardinality.

* Metadata Management: Importance and tools.

* Legacy System Challenges & Strategies.

  • Activities:

* Practice data profiling using sample datasets (e.g., SQL queries, Python scripts with Pandas); a starting sketch follows this list.

* Document schema differences between a hypothetical source and target system.
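For the profiling activity above, a minimal Pandas sketch; the CSV path and the status column are placeholders, not part of any real dataset referenced by this plan.

```python
import pandas as pd

# Load a sample extract of the source data (path is a placeholder).
df = pd.read_csv("customers_sample.csv")

# Basic profile: volume, types, missing values, duplicates, cardinality.
print(df.shape)                # row and column counts
print(df.dtypes)               # inferred data types
print(df.isna().sum())         # missing values per column
print(df.duplicated().sum())   # fully duplicated rows
print(df.nunique())            # distinct values per column (cardinality)

# Distribution of a categorical field, e.g. a status column (column name is an assumption).
if "status" in df.columns:
    print(df["status"].value_counts(dropna=False))
```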

Week 3: Data Extraction Strategies & ETL Tooling

  • Topics:

* Extraction Methods: Full extract, incremental extraction (CDC - Change Data Capture), API-based extraction, database replication.

* Performance Optimization during Extraction: Batching, parallelization, indexing.

* Introduction to ETL/ELT Tools: Overview of popular tools (e.g., Apache NiFi, Talend, Informatica, AWS DMS, Azure Data Factory, Google Cloud Dataflow).

* Scripting for Extraction: SQL, Python, Shell scripting.

  • Activities:

* Research and compare 2-3 ETL tools relevant to data migration.

* Write a Python script to extract data from a CSV file and perform basic initial cleansing.
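A starting point for the extraction exercise above; the file names and the customer_id and email columns are assumptions, and the cleansing is deliberately simple.

```python
import csv

def extract_and_clean(input_path, output_path):
    """Read a CSV extract, apply basic cleansing, and write the result.

    Cleansing here: trim whitespace, drop rows missing the (assumed)
    'customer_id' key, and lowercase email addresses.
    """
    rows_written = 0
    with open(input_path, newline="", encoding="utf-8") as src, \
         open(output_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            cleaned = {k: (v.strip() if isinstance(v, str) else v) for k, v in row.items()}
            if not cleaned.get("customer_id"):   # skip rows without a primary key
                continue
            if cleaned.get("email"):
                cleaned["email"] = cleaned["email"].lower()
            writer.writerow(cleaned)
            rows_written += 1
    return rows_written

if __name__ == "__main__":
    print(extract_and_clean("customers_raw.csv", "customers_clean.csv"))
```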

Week 4: Data Transformation & Mapping Rules

  • Topics:

* Data Mapping: Field-level mapping, data type conversions, composite keys.

* Transformation Rules: Cleansing (deduplication, standardization, imputation), enrichment, aggregation, derivation, lookup transformations.

* Handling Complex Data Structures: Hierarchical data, XML/JSON transformations.

* Error Handling during Transformation: Logging, error rows, data rejection.

* Business Rule Implementation & Validation.

  • Activities:

* Create a detailed data mapping document for a small dataset, including transformation rules.

* Implement several transformation rules using SQL or Python on a sample dataset.
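For the hands-on part of this week, a minimal sketch of a few common rule types (direct map, lookup with default, concatenation, cleansing and case standardization); the field names, status mapping, and target record shape are illustrative assumptions.

```python
# Illustrative field-level transformations for one source record.
STATUS_MAP = {"Active": "Active", "Inactive": "Inactive", "Pending": "Prospect"}  # assumed lookup table

def transform_customer(source):
    """Map an assumed source 'customer' row to an assumed target 'account' shape."""
    errors = []

    name = (source.get("company_name") or "").strip().title()   # cleanse + standardize case
    if len(name) < 3:
        errors.append("company_name too short")

    status = STATUS_MAP.get(source.get("status"), "Prospect")    # lookup transformation with default

    street = (source.get("address_line1") or "").strip()
    if source.get("address_line2"):                              # concatenation rule
        street = f"{street}\n{source['address_line2'].strip()}"

    target = {
        "external_id": source.get("customer_id"),                # direct map
        "name": name,
        "status": status,
        "billing_street": street,
    }
    return target if not errors else {"_errors": errors, "_source": source}

print(transform_customer({
    "customer_id": 42, "company_name": " ACME CORP ",
    "status": "Pending", "address_line1": "1 Main St", "address_line2": "Suite 5",
}))
```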

Week 5: Data Loading Strategies & Performance

  • Topics:

* Loading Methods: Direct inserts, bulk loading, API-based loading, streaming inserts.

* Performance Considerations for Loading: Indexing, constraints, triggers, transaction management.

* Rollback Mechanisms: Planning for failure, transaction control.

* Error Handling during Loading: Logging, retry mechanisms.

* Migration Cutover Strategies: Big Bang vs. Phased Approach, Parallel Run.

  • Activities:

* Simulate a bulk load operation (e.g., using COPY command in PostgreSQL/Snowflake or a Python bulk insert); a minimal sketch follows this list.

* Design a basic rollback plan for a failed loading phase.
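A minimal sketch of the bulk-load exercise, assuming a PostgreSQL target reachable via psycopg2; the DSN, table, and columns are placeholders. Wrapping the batch in one transaction gives the simplest rollback point: any failure rolls the whole batch back. For genuinely large volumes, COPY (via copy_expert) or the warehouse's native bulk loader would replace executemany.

```python
import psycopg2

def bulk_load(rows, dsn="dbname=target user=loader"):
    """Bulk-insert rows into an assumed 'accounts' staging table.

    The whole batch runs in a single transaction: any error rolls the
    batch back, which is the simplest form of load-level rollback.
    """
    conn = psycopg2.connect(dsn)
    try:
        with conn, conn.cursor() as cur:   # 'with conn' commits on success, rolls back on error
            cur.executemany(
                "INSERT INTO accounts (external_id, name, status) VALUES (%s, %s, %s)",
                rows,
            )
    finally:
        conn.close()

bulk_load([(1, "Acme Corp", "Active"), (2, "Globex", "Prospect")])
```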

Week 6: Data Validation, Quality Assurance & Security

  • Topics:

* Pre-Migration Validation: Source data quality checks, data profiling revisit.

* Post-Migration Validation: Row counts, checksums, reconciliation reports, sample data verification, business rule validation.

* Data Quality Metrics & Reporting.

* Testing Strategies: Unit, Integration, User Acceptance Testing (UAT).

* Security in Data Migration: Data at rest/in transit encryption, access controls, anonymization/pseudonymization.

* Compliance: GDPR, HIPAA, PCI-DSS considerations.

  • Activities:

* Develop a set of SQL queries for post-migration data validation (e.g., row counts, sum of key columns); a sketch follows this list.

* Outline a data security plan for a cloud migration project.
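For the validation-query activity above, a sketch of how row-count and aggregate reconciliation checks might be driven from Python; sqlite3 stands in for the real source and target drivers, and the table and column names are assumptions.

```python
import sqlite3  # stand-in for the actual source/target database drivers

# Each check is run against both systems and the results compared.
CHECKS = [
    ("account row count", "SELECT COUNT(*) FROM accounts"),
    ("order total sum",   "SELECT ROUND(SUM(total_amount), 2) FROM orders"),
]

def reconcile(source_conn, target_conn):
    """Run each reconciliation query on source and target and report mismatches."""
    failures = []
    for label, sql in CHECKS:
        src = source_conn.execute(sql).fetchone()[0]
        tgt = target_conn.execute(sql).fetchone()[0]
        status = "OK" if src == tgt else "MISMATCH"
        print(f"{label:18s} source={src} target={tgt} -> {status}")
        if src != tgt:
            failures.append(label)
    return failures
```

In practice the source and target schemas and SQL dialects usually differ, so each check would carry a pair of statements rather than a single shared query.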

Week 7: Cloud Migrations, Advanced Topics & Project Management

  • Topics:

* Cloud Data Migration Strategies: Lift-and-shift, re-platforming, refactoring.

* Cloud-Native Migration Services (e.g., AWS DMS, Azure Migrate, Google Cloud Migrate for Compute Engine).

* Real-time Data Migration vs. Batch.

* DevOps for Data Migration: CI/CD pipelines.

* Project Management for Data Migration: Risk management, change management, communication.

* Vendor Selection & Management.

  • Activities:

* Research a specific cloud migration service and its capabilities.

* Create a risk register for a hypothetical data migration project.

Week 8: Capstone Project & Review

  • Topics:

* Comprehensive review of all previous topics.

* Presentation skills for project proposals.

  • Activities:

* Capstone Project: Design a complete data migration plan (architecture, mapping, transformation, validation, rollback, timeline) for a complex hypothetical scenario. This will involve integrating all learned concepts.

* Peer review and feedback sessions on capstone projects.

* Final Q&A and knowledge consolidation.

4. Recommended Resources

  • Books:

* "Designing Data-Intensive Applications" by Martin Kleppmann (for foundational data system knowledge).

* "Data Migration" by Johny S. John and Paul M. M. John (specific to data migration).

* "The DAMA Guide to the Data Management Body of Knowledge (DMBOK2)" (for data governance and quality).

  • Online Courses/Platforms:

* Coursera/edX: "Data Engineering with Google Cloud," "AWS Data Analytics Specialization," "Microsoft Azure Data Engineer Associate."

* Udemy/Pluralsight: Courses on specific ETL tools (Talend, Informatica), SQL, Python for Data Engineering.

* Cloud Provider Documentation: AWS, Azure, Google Cloud data migration services documentation.

  • Tools & Technologies (Hands-on Practice):

* Databases: PostgreSQL, MySQL, SQL Server, MongoDB (for source/target practice).

* ETL/ELT Tools: Talend Open Studio, Apache NiFi, dbt (data build tool).

* Programming Languages: Python (with libraries like Pandas, SQLAlchemy), SQL.

* Version Control: Git/GitHub.

  • Articles/Blogs:

* Blogs from major cloud providers (AWS, Azure, GCP) on data migration.

* Medium articles on data engineering and migration case studies.

* Gartner/Forrester reports on data migration trends and tools.

  • Certifications (Optional, but Recommended):

* AWS Certified Data Analytics – Specialty

* Microsoft Certified: Azure Data Engineer Associate

* Google Cloud Professional Data Engineer

5. Milestones

  • End of Week 2: Completion of Source/Target System Analysis Document for a case study.

utils.py

```python
import os
import yaml
import logging
from datetime import datetime


# Setup logging
def setup_logging(log_level="INFO"):
    """
    Configures the global logger.
    """
    logging.basicConfig(
        level=getattr(logging, log_level.upper(), logging.INFO),
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        handlers=[
            logging.FileHandler(f"migration_{datetime.now().strftime('%Y%m%d_%H%M%S')}.log"),
            logging.StreamHandler()
        ]
    )
    return logging.getLogger(__name__)


logger = setup_logging()


def load_config(config_path="migration_config.yaml"):
    """
    Loads the YAML configuration file.

    Args:
        config_path (str): Path to the YAML configuration file.

    Returns:
        dict: The loaded configuration.
    """
    try:
        with open(config_path, 'r') as f:
            config = yaml.safe_load(f)
        logger.info(f"Configuration loaded successfully from {config_path}")
        return config
    except FileNotFoundError:
        logger.error(f"Configuration file not found at {config_path}")
        raise
    except yaml.YAMLError as e:
        logger.error(f"Error parsing YAML configuration: {e}")
        raise


def get_db_credentials(db_config):
    """
    Retrieves database credentials, prioritizing environment variables for passwords.

    Args:
        db_config (dict): Dictionary containing database connection details.

    Returns:
        dict: Updated database configuration with resolved password.
    """
    credentials = db_config.copy()
    if 'password_env_var' in credentials:
        env_var_name = credentials['password_env_var']
        password = os.getenv(env_var_name)
        if password:
            credentials['password'] = password
            logger.debug(f"Password for {credentials.get('database')} retrieved from environment variable '{env_var_name}'.")
        else:
            logger.warning(f"Environment variable '{env_var_name}' not set for database password. Proceeding without password or with default.")
            credentials['password'] = credentials.get('password', '')  # Fallback to empty string if not found
    return credentials


# Placeholder for database connection factory - replace with actual DB connectors
def get_db_connection(db_config):
    """
    Placeholder function to establish a database connection.
    In a real scenario, this would import and use specific DB drivers (e.g., psycopg2, sqlalchemy).

    Args:
        db_config (dict): Database connection details.

    Returns:
        object: A database connection object (e.g., psycopg2 connection).

    Raises:
        NotImplementedError: If the specific database type is not handled.
    """
    credentials = get_db_credentials(db_config)
    db_type = credentials.get('type', '').lower()
    # Dispatch on db_type (e.g., 'postgresql', 'mysql') to the appropriate driver here.
    raise NotImplementedError(f"No connection handler implemented for database type '{db_type}'.")
```
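A short usage sketch for these utilities; the "source" config section and the SOURCE_DB_PASSWORD variable are assumptions consistent with the configuration example in Section 2.

```python
# run_migration.py -- illustrative driver, not part of the generated framework
import os
from utils import load_config, get_db_credentials

# Normally the password is exported by the operator or a secrets manager, not set in code.
os.environ.setdefault("SOURCE_DB_PASSWORD", "example-only")

config = load_config("migration_config.yaml")          # logs success, or logs and re-raises on failure
source_creds = get_db_credentials(config["source"])    # resolves password_env_var from the environment
print(source_creds["host"], source_creds["database"])
```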


This document outlines the comprehensive plan for the upcoming data migration, serving as a critical deliverable for all stakeholders. It details the strategy, technical specifications, operational procedures, and timeline estimates required to ensure a successful and seamless transition of data from the [Source System Name] to the [Target System Name].


Data Migration Planner: Comprehensive Deliverable

1. Executive Summary

This document details the complete data migration plan from [Source System Name, e.g., Legacy CRM 1.0] to [Target System Name, e.g., Salesforce Sales Cloud]. The objective is to securely and accurately transfer all defined customer, product, and historical transaction data, ensuring data integrity, minimizing downtime, and providing a robust foundation for future operations. This plan covers data mapping, transformation rules, validation procedures, rollback strategies, and a detailed timeline, designed to mitigate risks and ensure a smooth transition.

2. Project Overview

  • Project Name: [e.g., Legacy CRM to Salesforce Migration]
  • Source System: [e.g., MySQL Database for Legacy CRM, Version X.Y]
  • Target System: [e.g., Salesforce Sales Cloud Enterprise Edition]
  • Migration Scope: Customer data, Product catalog, Sales orders (last 5 years), Support cases (last 2 years), User profiles.
  • Primary Objective: To enable the full operational capability of the new [Target System Name] by accurately migrating essential business data, retiring the [Source System Name], and improving data quality.
  • Key Success Metrics:

* Data accuracy: >99.5% match between source and target for key fields.

* Data completeness: 100% of scoped records migrated.

* Downtime: Max [X] hours for critical systems during cutover.

* Zero critical data loss or corruption.

* Successful user acceptance testing (UAT).

3. Data Migration Strategy

The migration will follow a phased approach to minimize risk and allow for iterative testing and validation.

  • Phase 1: Discovery & Planning: Requirements gathering, system analysis, detailed mapping, and strategy finalization.
  • Phase 2: Development & Testing: ETL script development, unit testing, data cleansing, and comprehensive integration testing with sample data.
  • Phase 3: Pilot Migration (UAT): Migration of a subset of production data to a UAT environment for business user validation and performance testing.
  • Phase 4: Full Production Migration: Execution of the final migration plan during a scheduled maintenance window.
  • Phase 5: Post-Migration & Decommissioning: Data validation, system stabilization, user support, and eventual decommissioning of the source system.

4. Source and Target Systems Details

  • Source System:

* Name: [e.g., "Legacy Customer Management System (LCMS)"]

* Database: [e.g., SQL Server 2016]

* Key Tables/Objects: Customers, Products, Orders, OrderItems, Users, Cases

* Authentication: [e.g., SQL Server Authentication]

* Access Method: [e.g., ODBC connection via secure VPN]

  • Target System:

* Name: [e.g., "Salesforce Sales Cloud"]

* Objects: Account, Contact, Product2, Opportunity, Order, Case, User

* Authentication: [e.g., OAuth 2.0 via connected app]

* API: [e.g., Salesforce Bulk API 2.0 for large volumes, SOAP API for individual records]

5. Data Scope and Estimated Volume

| Data Entity | Source Table/Object | Target Object | Estimated Record Count (Source) | Estimated Data Volume (GB) | Comments |
| :---------- | :------------------ | :------------ | :------------------------------ | :------------------------- | :------- |
| Customers | Customers | Account | 500,000 | 0.5 | Includes company accounts |
| Contacts | Customers | Contact | 1,200,000 | 1.2 | Linked to Accounts |
| Products | Products | Product2 | 15,000 | 0.01 | Active products only |
| Orders | Orders, OrderItems | Order (with related OrderItem) | 3,000,000 (last 5 years) | 3.0 | Includes line items |
| Cases | Cases | Case | 800,000 (last 2 years) | 0.8 | Closed and Open cases |
| Users | Users | User | 150 | 0.001 | Active users only |
| Total | | | ~5.5 Million Records | ~5.5 GB | Excludes attachments/blobs |

6. Data Mapping and Transformation

This section outlines the detailed field-level mapping and the necessary transformation rules to ensure data compatibility and quality in the target system.

6.1. Example Field Mapping Table (Excerpt)

| Source Object.Field | Source Data Type | Target Object.Field | Target Data Type | Transformation Rule | Validation Check |
| :------------------ | :--------------- | :------------------ | :--------------- | :------------------ | :--------------- |
| Customers.CustomerID | INT | Account.External_ID__c | Text (Unique) | Direct Map | Not Null, Unique |
| Customers.CompanyName | NVARCHAR(255) | Account.Name | Text (255) | Direct Map, Trim | Not Null, Min Length 3 |
| Customers.Status | VARCHAR(10) | Account.Status__c | Picklist | CASE statement: Active->Active, Inactive->Inactive, Pending->Prospect | Valid Picklist Value |
| Customers.CreatedDate | DATETIME | Account.CreatedDate | DateTime | Direct Map, UTC Conversion | Not Null, Valid Date |
| Customers.AddressLine1 | NVARCHAR(255) | Account.BillingStreet | Text (255) | Concatenate with AddressLine2 if not null | Not Null |
| Customers.AddressLine2 | NVARCHAR(255) | (Part of BillingStreet) | - | Concatenate with AddressLine1 | - |
| Customers.City | NVARCHAR(100) | Account.BillingCity | Text (100) | Direct Map | Not Null |
| Customers.ZipCode | VARCHAR(10) | Account.BillingPostalCode | Text (20) | Format to XXXXX-XXXX if needed | Valid US Zip Code |
| Orders.OrderTotal | DECIMAL(18,2) | Order.TotalAmount | Currency (18,2) | Direct Map, Round to 2 decimal places | > 0 |
| Users.LoginName | VARCHAR(50) | User.Username | Email | Convert to email format: login@yourdomain.com | Valid Email Format, Unique |

6.2. Detailed Transformation Rules

  1. Data Type Conversion:

* All date/time fields from source (e.g., DATETIME, TIMESTAMP) will be converted to ISO 8601 format and stored as DateTime in the target system, with UTC conversion where applicable.

* Numeric fields (e.g., DECIMAL, INT) will be converted to the corresponding Currency, Number, or Integer types in the target system, with precision and scale adjustments as per target system requirements.

  2. Lookup and Reference Data:

* Status Codes: Source Customers.Status values (Active, Inactive, Pending, Archived) will be mapped to Account.Status__c picklist values (Active, Inactive, Prospect, Archived) via a lookup table.

* User IDs: Source Users.UserID will be mapped to existing or newly created User.Id in the target system. Ownership of records will be assigned based on this mapping.

  3. Data Concatenation/Splitting:

* Customers.AddressLine1 and Customers.AddressLine2 will be concatenated into Account.BillingStreet, separated by a newline character if AddressLine2 is present.

* Customers.FirstName and Customers.LastName will be mapped directly to Contact.FirstName and Contact.LastName.

  4. Default Values:

* If a non-nullable target field has no corresponding source field or the source field is null, a predefined default value will be applied (e.g., Account.RecordType will default to 'Standard Account').

  5. Data Cleansing & Standardization:

* Trim Whitespace: All text fields will have leading/trailing whitespace removed.

* Case Standardization: Company names will be converted to Title Case where appropriate (e.g., "ACME CORP" -> "Acme Corp").

* Phone Numbers: Standardize to E.164 format (e.g., +14155550123, no spaces or separators).

* Email Addresses: Validate format; flag invalid emails for review or skip migration.

  6. De-duplication Logic:

* Prior to migration, a de-duplication pass will be performed on source data based on a combination of CompanyName and PrimaryContactEmail for Accounts, and FirstName, LastName, Email for Contacts.

* During migration, the target system's native de-duplication rules will be leveraged, and any identified duplicates will be logged for manual review.
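To make rules 5 and 6 concrete, a small sketch of phone standardization, email-format checking, and the Contact de-duplication key; the default country code and the regular expression are simplifying assumptions, not production-grade validation.

```python
import re

def standardize_phone(raw, default_country_code="1"):
    """Normalize a phone number to E.164 (+<country code><number>, digits only)."""
    digits = re.sub(r"\D", "", raw or "")
    if len(digits) == 10:                 # assume a national number; prepend the default country code
        digits = default_country_code + digits
    return "+" + digits if 11 <= len(digits) <= 15 else None   # E.164 allows at most 15 digits

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")   # coarse format check; invalid values get flagged for review

def contact_dedup_key(first_name, last_name, email):
    """Key for the pre-migration de-duplication pass on Contacts (FirstName, LastName, Email)."""
    return (first_name.strip().lower(), last_name.strip().lower(), email.strip().lower())

print(standardize_phone("(415) 555-0123"))        # -> +14155550123
print(bool(EMAIL_RE.match("user@example.com")))   # -> True
```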

7. Data Validation Strategy

A multi-stage validation approach will be implemented to ensure data quality and integrity throughout the migration process.

7.1. Pre-Migration Validation

  • Source Data Profiling: Analyze source data for completeness, consistency, uniqueness, and adherence to expected patterns. Identify potential data quality issues (e.g., null values in mandatory fields, inconsistent formats).
  • Schema Validation: Compare source and target schema definitions to identify discrepancies before mapping.
  • Data Cleansing Reports: Generate reports on data flagged for cleansing or requiring manual intervention (e.g., invalid email formats, missing critical fields).
  • Referential Integrity Checks: Verify relationships between tables in the source system (e.g., ensure all OrderItems have a valid Order).

7.2. Post-Migration Validation (Scripts & Procedures)

  • Record Count Verification:

* Script: Python/SQL script to compare COUNT(*) of records for each entity in the source and target systems.

* Expected Outcome: COUNT(Source.Entity) == COUNT(Target.Entity)

* Tolerance: 0% deviation for critical entities (Accounts, Contacts, Orders); <0.5% for less critical (e.g., historical cases where some might be intentionally skipped).

  • Data Sample Verification:

* Script: Randomly select 5% of records for each major entity. Extract key fields from source and target for these records.

* Procedure: Manual review by business users and QA team to visually confirm accuracy of mapped fields, especially transformed data.

  • Data Reconciliation Reports:

* Script: Generate reports comparing sum of key numeric fields (e.g., SUM(OrderTotal), AVG(ProductPrice)) between source and target for migrated data.

* Expected Outcome: SUM(Source.Field) == SUM(Target.Field)

* Tolerance: 0% deviation for financial values.

  • Referential Integrity Verification:

* Script: Verify parent-child relationships in the target system (e.g., all Contacts are linked to an Account, all OrderItems are linked to an Order).

  • Error Logging and Reporting:

* All transformation errors, skipped records, and validation failures will be logged with detailed messages, timestamps, and source record identifiers.

* Daily/hourly reports will be generated during migration execution to track progress and identify issues promptly.

  • Business Validation (UAT):

* Business users will perform a comprehensive User Acceptance Testing (UAT) on the migrated data in a dedicated sandbox environment.

* Test cases will include:

* Searching for specific customers/orders.

* Creating new records and verifying auto-population/defaults.

* Running standard reports to check aggregated data.

* Verifying workflows and automation rules with migrated data.
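As a sketch of the record-count verification and tolerance rules described above (entity names and counts follow the scope table in Section 5; gathering the counts from the live systems is omitted):

```python
# Illustrative record-count verification with per-entity tolerances.
TOLERANCES = {"Account": 0.0, "Contact": 0.0, "Order": 0.0, "Case": 0.005}   # allowed fractional deviation

def verify_counts(source_counts, target_counts):
    """Compare per-entity record counts and return the entities that exceed tolerance."""
    failures = []
    for entity, src in source_counts.items():
        tgt = target_counts.get(entity, 0)
        deviation = abs(src - tgt) / src if src else 0.0
        allowed = TOLERANCES.get(entity, 0.0)
        print(f"{entity}: source={src} target={tgt} deviation={deviation:.3%} (allowed {allowed:.1%})")
        if deviation > allowed:
            failures.append(entity)
    return failures

print(verify_counts({"Account": 500_000, "Case": 800_000},
                    {"Account": 500_000, "Case": 797_500}))
```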

8. Rollback Procedures

A comprehensive rollback plan is essential to mitigate risks in case of unforeseen issues or critical failures during the migration.

8.1. Contingency Planning

  • Dedicated Rollback Team: A designated team will be on standby during the migration window, trained on rollback procedures.
  • Communication Protocol: Clear communication channels and escalation matrix in case of rollback decision.
  • Decision Point: A "Go/No-Go" decision will be made at critical checkpoints, including post-pilot migration and immediately prior to production cutover. A rollback will be initiated if key success metrics are not met or critical errors are identified.

8.2. Rollback Steps (Example Scenario: Full Production Migration Failure)

  1. Halt Migration Process: Immediately stop all ETL jobs and data loading processes into the target system.
  2. Isolate Target System: Prevent user access to the partially migrated target system to avoid data corruption or confusion (e.g., put target system in maintenance mode).
  3. Restore Target System (Option 1 - Preferred):

* If a full backup of the target system was taken immediately prior to the migration attempt, restore the target system to its pre-migration state. This is the fastest and most reliable method for a clean slate.

* Estimated Time: [X] hours (dependent on backup size and restoration speed).

  4. Clear Target System (Option 2 - If backup not feasible/too slow):

* Execute scripts to delete all data imported during the failed migration attempt from the target system. This requires robust delete scripts that handle related records and referential integrity.

* Estimated Time: [Y] hours (dependent on data volume and target system API limits).

  5. Re-enable Source System: Ensure the source system remains fully operational and accessible to business users. If any temporary data freezes or read-only modes were applied, they should be lifted.
  6. Verify Source System Integrity: Perform quick checks on the source system to ensure no unintended changes occurred during the migration attempt.
  7. Communicate Rollback: Notify all stakeholders about the rollback, the reasons, and the revised plan/timeline.
  8. Post-Rollback Analysis: Conduct a root cause analysis to understand the failure, rectify issues, and update the migration plan for a subsequent attempt.
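For step 4 (clearing the target system when a restore is not feasible), a minimal sketch of batch-scoped deletes issued in child-before-parent order. It assumes migrated rows were stamped with a migration_batch_id column and uses a generic DB-API connection; the table names, the column, and the "?" placeholder style are assumptions that depend on the actual target system and driver.

```python
# Delete rows loaded by a failed batch, children before parents, in one transaction.
DELETE_ORDER = ["order_items", "orders", "cases", "contacts", "accounts"]  # assumed table names

def rollback_batch(conn, batch_id):
    """Remove every row tagged with the failed migration batch id."""
    with conn:                                  # commits on success, rolls back on any error
        cur = conn.cursor()
        for table in DELETE_ORDER:              # fixed whitelist, so the f-string is safe here
            cur.execute(f"DELETE FROM {table} WHERE migration_batch_id = ?", (batch_id,))
            print(f"{table}: deleted {cur.rowcount} rows")
```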

9. Migration Timeline and Phases

The following timeline provides estimated durations for each phase. Actual durations may vary based on discovery findings and unforeseen challenges.

| Phase | Start Date | End Date | Duration (Weeks) | Key Activities
