Data Migration Planner
PantheraHive BOS

Data Migration Plan: Comprehensive Strategy and Implementation Blueprint

Project: Data Migration Planner

Workflow Step: Generate Code for Data Migration Plan

Deliverable Date: October 26, 2023


1. Executive Summary

This document outlines a comprehensive plan for the upcoming data migration, encompassing detailed field mapping, transformation rules, robust validation scripts, clear rollback procedures, and realistic timeline estimates. The goal is to ensure a smooth, accurate, and secure transfer of data from the source system to the target system with minimal downtime and maximum data integrity. This plan provides the foundational blueprint and illustrative code examples to guide the migration process, serving as a critical deliverable for successful execution.


2. Introduction to the Data Migration Strategy

A successful data migration requires meticulous planning and execution. This strategy focuses on minimizing risks, ensuring data accuracy, and providing clear operational procedures. We will adopt a phased approach, including analysis, design, development (including scripting), testing, execution, and post-migration validation.

The following sections detail the core components of our migration strategy, with illustrative code examples where applicable. These snippets are adaptable starting points for the actual implementation; specific database connection details, table names, and field names must be customized for your environment.


3. Core Data Migration Components

3.1. Field Mapping

Field mapping defines the precise relationship between source system fields and target system fields. It's the foundation for understanding how data will flow and where transformations will be required.

Explanation:

The field mapping is represented as a dictionary (or JSON structure) where keys are source field identifiers (e.g., source_table.source_column_name) and values are target field identifiers (e.g., target_table.target_column_name). This document also details the data type mapping and any specific notes or constraints.

Illustrative Code: field_mapping.py

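As described above, the mapping can be expressed as a dictionary keyed by source field identifiers. A minimal sketch of what field_mapping.py might contain, using hypothetical table and column names:

```python
# field_mapping.py
# Illustrative field mapping: keys are source field identifiers,
# values describe the target field, its type, and any notes.
# All table and column names below are placeholder assumptions.

FIELD_MAPPING = {
    "legacy_users.user_id": {
        "target": "customers.customer_id",
        "source_type": "INT",
        "target_type": "UUID",
        "notes": "Old INT IDs are mapped to deterministic UUIDs.",
    },
    "legacy_users.status": {
        "target": "customers.customer_status",
        "source_type": "VARCHAR(20)",
        "target_type": "ENUM",
        "notes": "Free-text statuses are standardized to target ENUM values.",
    },
    "legacy_users.created": {
        "target": "customers.created_on",
        "source_type": "DATE",
        "target_type": "TIMESTAMP",
        "notes": "Date strings are parsed into timestamps.",
    },
}


def target_field(source_field: str) -> str:
    """Return the target field identifier for a source field."""
    return FIELD_MAPPING[source_field]["target"]
```

The nested-dictionary layout keeps the type mapping and constraint notes next to each mapping, so the same structure can drive both documentation and the ETL scripts.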
3.2. Transformation Rules

Transformation rules define how data is modified during the migration process to fit the target system's schema, data types, and business logic.

Explanation:

A Python module collects the transformation functions, each responsible for a specific data transformation. These functions are called during the ETL (Extract, Transform, Load) process.

Illustrative Code: transformation_rules.py




Data Migration Architecture Plan: [Project Name - Placeholder]

1. Executive Summary

This document outlines the architectural plan for the complete data migration for [Client/Project Name]. The objective is to seamlessly transition critical data from [Source System(s)] to [Target System(s)], ensuring data integrity, minimal downtime, and adherence to business requirements. This plan covers the strategy, scope, technical architecture, data mapping, transformation rules, validation, error handling, rollback procedures, security, performance, and high-level timeline estimates necessary for a successful migration.

2. Project Goals & Objectives

The primary goals of this data migration are to:

  • Consolidate/Migrate Data: Successfully transfer all identified critical data from [Source System(s)] to [Target System(s)].
  • Ensure Data Integrity: Maintain accuracy, completeness, and consistency of data throughout the migration process.
  • Minimize Business Disruption: Execute the migration with the least possible impact on ongoing business operations, targeting [e.g., a specific cutover window, phased approach].
  • Improve Data Quality: Implement cleansing and transformation rules to enhance data quality in the target system.
  • Establish a Robust Migration Framework: Develop repeatable and auditable processes for future data migrations.
  • Comply with Regulations: Ensure all data handling and storage comply with relevant data privacy and security regulations (e.g., GDPR, HIPAA, CCPA).

3. Scope Definition

3.1. In-Scope Systems & Data

  • Source Systems:

* [System 1 Name] (e.g., Legacy CRM Database - Oracle 12c)

* [System 2 Name] (e.g., On-premise ERP System - SQL Server 2016)

* [List specific modules/tables/datasets to be migrated, e.g., Customer Master Data, Order History (last 5 years), Product Catalog]

  • Target Systems:

* [System 1 Name] (e.g., Salesforce CRM)

* [System 2 Name] (e.g., SAP S/4HANA)

* [List specific modules/tables/datasets to receive data]

  • Data Entities: [e.g., Customers, Accounts, Products, Orders, Invoices, Employees, Historical Transactions (specify retention period)].

3.2. Out-of-Scope Systems & Data

  • [System 1 Name] (e.g., Archival System - data will not be migrated but potentially linked or accessed via API)
  • [Specific data entities/attributes not being migrated, e.g., very old historical data beyond 5 years, temporary staging data, specific reports that will be re-generated in the new system]
  • Migration of custom applications or integrations not directly related to the specified data entities.

4. Source & Target System Analysis

4.1. Source System Details

  • System Name: [e.g., Legacy CRM]
  • Database/Platform: [e.g., Oracle 12c, SQL Server 2016, Custom Flat Files]
  • Schema Complexity: [e.g., Highly normalized, denormalized, star schema]
  • Estimated Data Volume: [e.g., 500 GB, 10 million customer records]
  • Connectivity: [e.g., JDBC, ODBC, REST API, SFTP for file transfer]
  • Data Quality Observations: [e.g., Known inconsistencies, missing values, duplicate records, free-text fields needing standardization]
  • Dependencies: [e.g., Other systems relying on this data, batch jobs]
  • Access Credentials & Permissions: [Required for extraction]

4.2. Target System Details

  • System Name: [e.g., Salesforce CRM]
  • Database/Platform: [e.g., Salesforce Objects, SAP S/4HANA tables, PostgreSQL]
  • Schema Complexity: [e.g., Standard Salesforce objects, custom objects, specific table structures]
  • Expected Data Volume: [e.g., 600 GB (post-transformation), 12 million customer records]
  • Connectivity: [e.g., Salesforce API (SOAP/REST), SAP RFC/BAPI, JDBC/ODBC]
  • Data Model Requirements: [e.g., Mandatory fields, data types, lookup values, unique constraints]
  • Dependencies: [e.g., New integrations, reporting tools]
  • Access Credentials & Permissions: [Required for loading]

5. Data Inventory & Profiling

A detailed data inventory will be created, listing all tables, fields, and their characteristics from source systems. Data profiling will be conducted to:

  • Identify data types, formats, and constraints.
  • Assess data completeness (null rates).
  • Detect data inconsistencies and anomalies.
  • Identify primary and foreign key relationships.
  • Estimate record counts and total data volume per entity.
  • Highlight potential data quality issues that require cleansing or transformation.

Deliverables: Data Dictionary for Source Systems, Data Profiling Reports.
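The profiling checks above can be sketched in plain Python (in practice an ETL or profiling tool would be used); the rows and field names below are hypothetical:

```python
from collections import Counter

# Hypothetical extracted rows from a source table.
rows = [
    {"customer_id": 1, "email": "a@example.com", "state": "CA"},
    {"customer_id": 2, "email": None,            "state": "NY"},
    {"customer_id": 2, "email": "b@example.com", "state": None},
]


def profile(rows, key_field):
    """Compute per-column null rates and detect duplicate key values."""
    total = len(rows)
    columns = rows[0].keys()
    null_rates = {
        col: sum(1 for r in rows if r[col] is None) / total
        for col in columns
    }
    key_counts = Counter(r[key_field] for r in rows)
    duplicates = [k for k, n in key_counts.items() if n > 1]
    return null_rates, duplicates


null_rates, duplicates = profile(rows, "customer_id")
print(null_rates)
print(duplicates)  # [2]
```

The same counts, scaled to full tables via SQL aggregates, feed directly into the Data Profiling Reports listed above.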

6. Migration Strategy & Approach

6.1. Migration Methodology

  • Phased Migration (Recommended): Data will be migrated in logical stages or by specific business units/data entities. This approach reduces risk, allows for iterative testing, and provides opportunities for learning and adjustment.

* Phase 1: [e.g., Static reference data, product catalog]

* Phase 2: [e.g., Customer master data]

* Phase 3: [e.g., Historical orders/transactions]

* Phase N: [Final cutover]

  • Big Bang Migration (Alternative/Justification): All data migrated simultaneously during a defined downtime window. This is high risk but may be necessary for tightly coupled systems where phased migration is impractical.
  • Hybrid Approach: A combination of phased migration for initial data sets, culminating in a big bang cutover for critical transactional data.

6.2. Cutover Strategy

  • Downtime Window: A specific, pre-approved period during which source systems will be read-only or offline, and target systems will be populated. [e.g., Weekend from Friday 8 PM to Monday 6 AM].
  • Data Freeze: A point in time where no new data can be entered or modified in the source system before migration.
  • Incremental Load (for phased approaches): Strategy for migrating data changes that occur between initial load phases and the final cutover. This may involve change data capture (CDC) mechanisms.
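One lightweight incremental-load technique is a high-water-mark extract on a last-modified timestamp; a true CDC mechanism would read the database log instead. A sketch using an in-memory SQLite table with hypothetical names and data:

```python
import sqlite3


def extract_changes(conn, last_sync: str):
    """Fetch rows modified since the previous sync (the high-water mark)."""
    cur = conn.execute(
        "SELECT id, name, updated_at FROM source_customers "
        "WHERE updated_at > ? ORDER BY updated_at",
        (last_sync,),
    )
    return cur.fetchall()


# Demo with an in-memory database and hypothetical rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_customers (id INTEGER, name TEXT, updated_at TEXT)")
conn.executemany(
    "INSERT INTO source_customers VALUES (?, ?, ?)",
    [(1, "Ann", "2023-10-01T09:00:00"),
     (2, "Bob", "2023-10-20T14:30:00")],
)

# Only rows changed after the high-water mark are picked up.
changes = extract_changes(conn, "2023-10-10T00:00:00")
```

ISO-8601 timestamps compare correctly as strings, which keeps the query portable; the new high-water mark is the max `updated_at` seen in each batch.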

6.3. Migration Tool Selection

  • ETL Tool (Recommended): [e.g., Informatica PowerCenter, Talend, SSIS, Google Cloud Dataflow, AWS Glue, Azure Data Factory]. These tools offer robust capabilities for extraction, transformation, loading, error handling, and orchestration.
  • Custom Scripts: [e.g., Python, Java, SQL scripts] for highly specific or complex transformations, or when commercial tools are not viable.
  • Vendor-Specific Tools: [e.g., Salesforce Data Loader, SAP BODS] for direct integration with specific target systems.

7. Detailed Data Mapping & Transformation Rules

This is the core of the migration, defining how each piece of data moves and changes.

7.1. Field Mapping Document Structure

A comprehensive Field Mapping Document will be created for each source-to-target entity relationship. It will include:

  • Source System/Table/Field: Original data location and name.
  • Source Data Type/Length:
  • Target System/Table/Field: Destination data location and name.
  • Target Data Type/Length:
  • Mapping Type: (e.g., Direct Map, Transformation Required, Lookup, Derivation, Concatenation, Split, Default Value).
  • Transformation Rule ID/Description: Reference to specific transformation logic.
  • Business Rule/Comments: Explanation of the mapping or transformation rationale.
  • Validation Rule ID/Description: Reference to validation checks for this field.
  • Mandatory Field (Target): Yes/No.
  • Primary Key/Foreign Key: Indication of key fields.
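The columns above map naturally onto a small record type; a sketch with assumed example values:

```python
from dataclasses import dataclass


@dataclass
class FieldMappingRow:
    """One row of the Field Mapping Document described above."""
    source_field: str               # Source System/Table/Field
    source_type: str                # Source Data Type/Length
    target_field: str               # Target System/Table/Field
    target_type: str                # Target Data Type/Length
    mapping_type: str               # Direct Map, Lookup, Derivation, ...
    transformation_rule: str = ""   # Transformation Rule ID/Description
    comments: str = ""              # Business Rule/Comments
    validation_rule: str = ""       # Validation Rule ID/Description
    mandatory: bool = False         # Mandatory Field (Target)
    key_role: str = ""              # Primary Key/Foreign Key


# Hypothetical example row.
row = FieldMappingRow(
    source_field="legacy_users.status",
    source_type="VARCHAR(20)",
    target_field="customers.customer_status",
    target_type="ENUM",
    mapping_type="Transformation Required",
    transformation_rule="TR-003: standardize_status",
    mandatory=True,
)
```

Keeping the mapping document in a structured form like this makes it loadable by the ETL scripts, so the spreadsheet and the pipeline cannot drift apart.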

Example Mapping (Simplified):

| Source System | Source Table | Source Field | Source Type | Target System | Target Table | Target Field | Target Type | Mapping Type | Transformation Rule ID | Business Rule/Comments

transformation_rules.py

```python
import hashlib
import uuid
from datetime import datetime

# --- Global lookup tables (to be populated during migration phases) ---
# Maps old INT IDs to new UUIDs so that foreign key relationships survive
# the migration. Example: { old_user_id (int): new_customer_id (uuid.UUID) }
USER_ID_MAP = {}


def generate_uuid_from_int(old_id: int) -> uuid.UUID:
    """Generate a consistent UUID from an integer ID.

    Useful for maintaining referential integrity when old INT IDs become
    UUIDs. For production, consider a more robust strategy if collision
    risk or true randomness is a concern; a common approach is to persist
    the generated UUIDs in a lookup table.
    """
    if not isinstance(old_id, int):
        raise TypeError("Input must be an integer.")
    # A fixed namespace makes the UUID deterministic across runs.
    namespace_uuid = uuid.UUID('f8000000-0000-4000-8000-000000000000')
    return uuid.uuid5(namespace_uuid, str(old_id))


def map_user_id_to_customer_uuid(old_user_id: int) -> uuid.UUID:
    """Look up the new customer UUID for an old user ID.

    Assumes USER_ID_MAP has been populated by the user migration script.
    """
    if old_user_id not in USER_ID_MAP:
        # In a real scenario this should be logged or raised, since a
        # missing user ID indicates a data integrity issue.
        print(f"WARNING: Old user ID {old_user_id} not found in USER_ID_MAP. Generating new UUID.")
        USER_ID_MAP[old_user_id] = generate_uuid_from_int(old_user_id)  # fallback
    return USER_ID_MAP[old_user_id]


def standardize_status(source_status: str) -> str:
    """Transform source status strings to target ENUM values.

    Example: 'Active' -> 'active', 'Deactivated' -> 'inactive'.
    Input is lowercased before lookup, so the map needs lowercase keys only.
    """
    if not isinstance(source_status, str):
        return 'unknown'  # or raise an error
    status_map = {
        'active': 'active',
        'inactive': 'inactive',
        'deactivated': 'inactive',
        'suspended': 'suspended',
        'pending': 'pending',
    }
    return status_map.get(source_status.strip().lower(), 'unknown')


def format_date_to_timestamp(date_str: str) -> datetime | None:
    """Convert a 'YYYY-MM-DD' date string into a datetime object suitable
    for a TIMESTAMP column (time set to 00:00:00)."""
    if not isinstance(date_str, str) or not date_str:
        return None
    try:
        return datetime.strptime(date_str, '%Y-%m-%d')
    except ValueError:
        print(f"WARNING: Could not parse date string '{date_str}'. Returning None.")
        return None


def ensure_decimal_precision(value: float, precision: int = 12, scale: int = 4):
    """Format a float with the desired scale (fractional digits) as a
    string for database insertion.

    For actual database operations, prefer Python's Decimal type to avoid
    floating-point inaccuracies.
    """
    if value is None:
        return None
    try:
        return f"{value:.{scale}f}"
    except (ValueError, TypeError):
        print(f"WARNING: Could not format value '{value}' to decimal. Returning original.")
        return value


def hash_sensitive_data(data: str) -> str | None:
    """Produce a SHA-256 hex digest of a sensitive value (e.g., to
    pseudonymize PII in the target system). Returns None for empty input."""
    if not data:
        return None
    return hashlib.sha256(data.encode('utf-8')).hexdigest()
```


Data Migration Planner: Comprehensive Strategy Document

Project: [Your Project Name/Identifier]

Document Version: 1.0

Date: October 26, 2023


1. Executive Summary

This document outlines a comprehensive plan for the data migration from [Legacy System X] to [New System Y]. It details the critical components required for a successful migration, including field mapping, data transformation rules, validation procedures, robust rollback strategies, and a projected timeline. The goal is to ensure data integrity, minimize downtime, and provide a seamless transition to the new system, thereby supporting [specific business objectives, e.g., enhanced operational efficiency, improved data analytics capabilities].

2. Introduction

Data migration is a critical process involving the transfer of data between storage types, formats, or computer systems. This plan addresses the migration of essential business data from [Legacy System X] (the source system) to [New System Y] (the target system). A meticulous approach is vital to prevent data loss, corruption, or inconsistencies, which could adversely impact business operations. This document serves as a foundational blueprint for all migration activities.

3. Scope of Migration

The scope of this migration encompasses the following key data entities and their associated attributes:

  • Customer Data: Customer profiles, contact information, account details.
  • Product Data: Product catalog, pricing, inventory levels.
  • Order Data: Historical and active orders, line items, status.
  • Financial Data: Invoices, payments, general ledger entries (specific modules to be identified).
  • [Add other key data entities as required, e.g., Employee Data, Vendor Data, Project Data]

Out of Scope:

  • Archived data older than [X years/date].
  • Unused or deprecated data fields identified during discovery.
  • Non-production environments (unless specifically for testing purposes).

4. Source and Target Systems

Source System:

  • Name: [Legacy System X]
  • Type: [e.g., On-Premise ERP, Custom Application, Legacy Database]
  • Database: [e.g., Oracle 11g, SQL Server 2008, MySQL 5.x]
  • Key Data Access Methods: [e.g., SQL queries, API, Flat File Exports]

Target System:

  • Name: [New System Y]
  • Type: [e.g., Cloud-based CRM, SaaS ERP, Custom Web Application]
  • Database: [e.g., PostgreSQL 12, SQL Server 2019, MongoDB]
  • Key Data Ingestion Methods: [e.g., REST API, ETL Tool Connector, CSV/JSON Imports]

5. Data Migration Strategy

The migration will follow a phased approach, emphasizing thorough testing and validation at each stage:

  1. Discovery & Planning: Detailed analysis of source data, target schema, and business requirements.
  2. Design & Development: Creation of field mappings, transformation rules, and ETL scripts.
  3. Testing & Validation: Iterative testing cycles using sample data and full datasets in a test environment.
  4. Execution (Cutover): Controlled migration of data to the production target system.
  5. Post-Migration & Support: Data validation, issue resolution, and system stabilization.

A "big bang" approach is planned for the final cutover to ensure data consistency, with multiple dry runs conducted beforehand to minimize risk.

6. Detailed Migration Plan

6.1. Field Mapping

Field mapping is the process of defining how each relevant data field from the source system corresponds to a field in the target system. This includes identifying data types, constraints, and any required transformations.

Actionable Steps:

  1. Inventory Source Fields: Document all relevant fields from [Legacy System X].
  2. Inventory Target Fields: Document all target fields in [New System Y], including mandatory fields and data types.
  3. Establish Direct Mappings: Identify fields that can be directly transferred.
  4. Identify Transformation Needs: Mark fields requiring data manipulation before transfer.
  5. Document Unmapped Fields: Note any source fields not required in the target and target fields requiring default values or derivation.
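Steps 3–5 reduce to a set comparison between the two inventories; a sketch using hypothetical field names:

```python
# Hypothetical field inventories from discovery.
source_fields = {"Legacy_CustomerID", "Legacy_FirstName", "Legacy_Comments"}
target_fields = {"CustomerID", "FirstName", "LastModifiedBy"}

# Mappings established so far (source -> target).
mapped = {
    "Legacy_CustomerID": "CustomerID",
    "Legacy_FirstName": "FirstName",
}

# Source fields with no mapping: candidates for exclusion or a new rule.
unmapped_source = source_fields - mapped.keys()
# Target fields nothing maps to: need a default value or derivation.
unmapped_target = target_fields - set(mapped.values())

print(sorted(unmapped_source))  # ['Legacy_Comments']
print(sorted(unmapped_target))  # ['LastModifiedBy']
```

Running this check after every mapping-workshop revision keeps the "unmapped fields" list in step 5 current.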

Example Field Mapping Table Structure:

| Source System (Legacy System X) | Target System (New System Y) | Transformation Rule | Notes / Business Logic |
| :------------------------------ | :--------------------------- | :------------------ | :--------------------- |
| Legacy_CustomerID (INT) | CustomerID (UUID) | GENERATE_UUID() | Unique identifier for customer. Legacy INT is mapped to a new UUID. |
| Legacy_FirstName (VARCHAR(50)) | FirstName (VARCHAR(100)) | DIRECT_MAP | Direct copy. Target field length is greater. |
| Legacy_LastName (VARCHAR(50)) | LastName (VARCHAR(100)) | DIRECT_MAP | Direct copy. |
| Legacy_AddressLine1 (VARCHAR(100)) | Address_Street (VARCHAR(200)) | DIRECT_MAP | Direct copy. |
| Legacy_City (VARCHAR(50)) | Address_City (VARCHAR(100)) | DIRECT_MAP | Direct copy. |
| Legacy_StateCode (VARCHAR(2)) | Address_State (VARCHAR(50)) | LOOKUP_STATE_NAME() | Convert 2-letter code to full state name (e.g., "CA" -> "California"). |
| Legacy_ZipCode (VARCHAR(10)) | Address_PostalCode (VARCHAR(10)) | DIRECT_MAP | Direct copy. |
| Legacy_CreationDate (DATETIME) | CreatedOn (TIMESTAMP) | CONVERT_TO_UTC | Convert local time to UTC timestamp. |
| Legacy_Status (INT) | CustomerStatus (ENUM) | MAP_STATUS_CODE() | Map 1->'ACTIVE', 2->'INACTIVE', 3->'PENDING'. Default to 'ACTIVE'. |
| Legacy_Amount (DECIMAL(10,2)) | OrderTotal (DECIMAL(12,2)) | DIRECT_MAP | Direct copy. Target precision allows for larger values. |
| Legacy_Comments (TEXT) | N/A | ARCHIVE_ONLY | Not migrated to New System Y; stored in an archive. |
| N/A | LastModifiedBy (VARCHAR(50)) | DEFAULT_VALUE("MigrationUser") | Target field required; set a default value for migrated records. |

(A complete field mapping document will be provided as an appendix once discovery is finalized.)

6.2. Transformation Rules

Transformation rules define the specific logic applied to source data to make it compatible with the target system's schema and business rules.

Categories of Transformations:

  1. Data Type Conversion: Changing data from one format to another (e.g., String to Integer, Date to Timestamp).

* Example: Legacy_CreationDate (DATETIME) to CreatedOn (TIMESTAMP UTC).

  2. Data Cleansing & Standardization: Correcting inconsistencies, removing duplicates, formatting data uniformly.

* Example: Trimming whitespace from Legacy_FirstName, converting all Legacy_Email to lowercase.

* Example: Standardizing phone numbers to E.164 format.

  3. Lookup & Reference Data Mapping: Translating codes or values from the source to corresponding values in the target system.

* Example: Legacy_StateCode ('CA') to Address_State ('California') using a predefined lookup table.

* Example: Legacy_Status (INT) to CustomerStatus (ENUM: 'ACTIVE', 'INACTIVE').

  4. Concatenation & Splitting: Combining multiple source fields into one target field or splitting a single source field.

* Example (Concatenation): Combining Legacy_FirstName and Legacy_LastName into FullName in the target (if required).

* Example (Splitting): Splitting Legacy_FullAddress into Address_Street, Address_City, Address_State, Address_PostalCode.

  5. Derivation & Calculation: Creating new data based on existing source fields.

* Example: Calculating CustomerAge from Legacy_DateOfBirth and the current date.

* Example: Setting the IsActive flag based on Legacy_LastActivityDate.

  6. Default Values & Null Handling: Assigning default values where source data is missing or mapping NULL values appropriately.

* Example: If Legacy_PhoneNumber is NULL, set PhoneNumber in the target to an empty string or a specific default.

* Example: Assigning DEFAULT_VALUE("MigrationUser") to LastModifiedBy for all migrated records.

  7. Aggregation: Summarizing data from multiple source records into a single target record.

* Example: Summing Legacy_OrderLineItem_Amount for a given Legacy_OrderID to populate OrderTotal in the target.

Transformation Rule Examples (based on Field Mapping Table):

  • GENERATE_UUID() for CustomerID: For each record, generate a new universally unique identifier (UUID) for the CustomerID field in the target system. A mapping table will be maintained to link the old Legacy_CustomerID to the new CustomerID for historical reference and relational integrity during migration.
  • LOOKUP_STATE_NAME() for Address_State: Utilize a predefined lookup table (e.g., State_Codes.csv or a database table) to convert the 2-character state code from Legacy_StateCode (e.g., "NY") to its full name ("New York") for Address_State.
  • MAP_STATUS_CODE() for CustomerStatus: Apply conditional logic:

* If Legacy_Status = 1, then CustomerStatus = 'ACTIVE'.

* If Legacy_Status = 2, then CustomerStatus = 'INACTIVE'.

* If Legacy_Status = 3, then CustomerStatus = 'PENDING'.

* Else (unrecognized code), default CustomerStatus = 'ACTIVE' and log an error/warning.

  • CONVERT_TO_UTC for CreatedOn: Convert the DATETIME value from Legacy_CreationDate (assumed to be in local time zone [e.g., PST]) to a UTC TIMESTAMP for CreatedOn in the target system.
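Two of these rules sketched in Python, assuming the example status codes above and a Pacific-time source system:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Assumed legacy status codes, per the mapping table above.
STATUS_CODES = {1: "ACTIVE", 2: "INACTIVE", 3: "PENDING"}


def map_status_code(legacy_status: int) -> str:
    """MAP_STATUS_CODE: translate the legacy INT code to the target ENUM,
    defaulting to 'ACTIVE' for unrecognized codes (which should be logged)."""
    if legacy_status not in STATUS_CODES:
        print(f"WARNING: unrecognized status code {legacy_status}; defaulting to ACTIVE")
    return STATUS_CODES.get(legacy_status, "ACTIVE")


def convert_to_utc(local_dt: datetime, source_tz: str = "America/Los_Angeles") -> datetime:
    """CONVERT_TO_UTC: interpret a naive legacy DATETIME as source-local
    time and convert it to a UTC timestamp."""
    aware = local_dt.replace(tzinfo=ZoneInfo(source_tz))
    return aware.astimezone(ZoneInfo("UTC"))


print(map_status_code(2))                            # INACTIVE
print(convert_to_utc(datetime(2023, 10, 26, 9, 0)))  # 2023-10-26 16:00:00+00:00
```

The source time zone is a parameter rather than a constant because legacy systems frequently store naive timestamps whose zone must be confirmed during discovery.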

6.3. Validation Scripts and Procedures

Robust validation is crucial to ensure the accuracy, completeness, and integrity of the migrated data. Validation will occur at multiple stages:

6.3.1. Pre-Migration Validation (Source Data Profiling & Cleansing)

  • Purpose: Identify and resolve data quality issues in the source system before migration begins.
  • Scripts/Checks:

* Data Type Conformance: Identify fields with data that doesn't match its declared type (e.g., text in a numeric field).

* Completeness Checks: Identify records with missing mandatory fields (e.g., Legacy_CustomerID is NULL).

* Uniqueness Checks: Verify primary keys and unique identifiers (e.g., duplicate Legacy_CustomerID).

* Referential Integrity Checks: Identify orphaned records (e.g., an order referencing a non-existent customer).

* Data Range/Format Checks: Verify values are within expected ranges (e.g., Legacy_Amount is positive) or conform to specific formats (e.g., Legacy_Email regex).

  • Procedure: Run profiling scripts, generate reports, work with business users to cleanse or define handling for problematic data.
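A minimal sketch of these pre-migration checks over hypothetical extracted rows:

```python
import re
from collections import Counter

# Hypothetical extracted rows; one deliberately bad record.
rows = [
    {"customer_id": 1,    "email": "a@example.com", "amount": 10.0},
    {"customer_id": None, "email": "bad-email",     "amount": -5.0},
    {"customer_id": 1,    "email": "c@example.com", "amount": 3.5},
]

issues = []

# Completeness: mandatory fields must not be NULL.
issues += [f"row {i}: customer_id is NULL"
           for i, r in enumerate(rows) if r["customer_id"] is None]

# Uniqueness: customer_id must be unique among non-NULL values.
counts = Counter(r["customer_id"] for r in rows if r["customer_id"] is not None)
issues += [f"duplicate customer_id {k}" for k, n in counts.items() if n > 1]

# Format check: email must match a simple pattern.
email_re = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
issues += [f"row {i}: bad email {r['email']!r}"
           for i, r in enumerate(rows) if not email_re.match(r["email"])]

# Range check: amounts must be positive.
issues += [f"row {i}: non-positive amount"
           for i, r in enumerate(rows) if r["amount"] <= 0]

for issue in issues:
    print(issue)
```

Each issue line becomes a row in the profiling report handed to business users for cleansing decisions.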

6.3.2. During-Migration Validation (ETL Process Checks)

  • Purpose: Monitor the migration process for errors and ensure data is moving as expected.
  • Scripts/Checks:

* Record Counts: Compare the number of records extracted from the source to the number loaded into the target (per table/entity).

* Error Logging: Capture and log any records that fail transformation rules or target system constraints.

* Skipped Record Tracking: Identify records that were intentionally skipped based on migration rules.

* Performance Monitoring: Track migration speed and resource utilization.

  • Procedure: Implement logging within ETL tools/scripts. Monitor dashboards and alerts during migration runs.
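The record-count check can be a simple per-entity reconciliation; the counts below are hypothetical:

```python
# Hypothetical per-entity counts captured by the ETL run.
extracted = {"customers": 10_000, "orders": 52_340}
loaded    = {"customers": 10_000, "orders": 52_337}
errored   = {"customers": 0,      "orders": 2}
skipped   = {"customers": 0,      "orders": 1}


def reconcile(entity: str) -> str:
    """Extracted rows must equal loaded + errored + skipped."""
    diff = extracted[entity] - (loaded[entity] + errored[entity] + skipped[entity])
    status = "OK" if diff == 0 else f"MISMATCH ({diff} unaccounted)"
    return f"{entity}: extracted={extracted[entity]} loaded={loaded[entity]} {status}"


for entity in extracted:
    print(reconcile(entity))
```

Accounting explicitly for errored and skipped records means every extracted row is traceable, which is what an auditor will ask for after cutover.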

6.3.3. Post-Migration Validation (Target Data Verification)

  • Purpose: Confirm that the migrated data in the target system is accurate, complete, and consistent with business expectations.
  • Scripts/Checks:

* Record Count Comparison: Final verification of total records migrated vs. expected.

* Data Integrity Checks:

* Random Data Sampling: Select a statistically significant sample of records from both source and target and compare field-by-field.

* Key Field Comparison: Verify primary keys and foreign keys are correctly linked.

* Checksum/Hash Comparison: For critical fields or entire records, compare hashes to ensure exact data transfer.

* Referential Integrity: Ensure all foreign key relationships in the target system resolve to valid records.

"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}