Data Migration Planner

Deliverable: Data Migration Architectural Plan

Project: Data Migration Planner

Workflow Step: 1 of 3 - Plan Architecture (gemini → plan_architecture)

Date: October 26, 2023

Version: 1.0


Executive Summary

This document outlines the architectural plan for the upcoming data migration. It details the high-level strategy, source and target system landscapes, data flow, tooling, security, performance, and monitoring considerations essential for a successful and robust migration. This plan serves as the foundational blueprint for subsequent detailed design, development, and execution phases.


1. Introduction and Scope

The objective of this architectural plan is to define the technical framework and strategic approach for migrating data from [Source System Name/Description] to [Target System Name/Description]. This document focuses on the architecture of the migration, addressing how data will move, what infrastructure will support it, and how key non-functional requirements (security, performance, reliability) will be met.

Scope of this document:

Out of Scope for this document (to be covered in subsequent steps):


2. Migration Strategy Overview

The chosen migration strategy will dictate the overall approach and impact on business operations.

Proposed Strategy: [Choose one or propose a hybrid, e.g., "Phased Migration with Coexistence"]

Advantages of this Strategy:

Key Phases (High-Level):

  1. Discovery & Planning: Initial analysis, architectural design (this document), detailed planning.
  2. Pilot Migration: Migrate a small, non-critical dataset to validate the end-to-end process, tools, and infrastructure.
  3. Phased Data Migration: Migrate data in defined logical batches (e.g., by module, by business unit, by data type).
  4. Coexistence & Validation: Operate both systems concurrently for migrated data, performing extensive validation.
  5. Cutover & Decommissioning: Switch all operations to the target system and eventually decommission the source.

3. Source and Target System Architecture

Understanding the landscape of both systems is crucial for designing an effective migration.

3.1. Source System Architecture

3.2. Target System Architecture


4. Data Flow Architecture

The data flow architecture defines the path and stages data will traverse during migration.

Conceptual Data Flow Diagram:

[Source System]
      |
      | (Extraction)
      V
[Data Extraction Layer/Tools]
      |
      | (Staging & Transformation)
      V
[Staging Area (Data Lake/Warehouse)]
      |
      | (Data Cleansing, Enrichment, Validation)
      V
[Data Transformation Engine/ETL Tool]
      |
      | (Loading)
      V
[Data Loading Layer/Tools]
      |
      | (Pre-load Validation)
      V
[Target System]

4.1. Data Extraction

  • Methodology: [e.g., Full database dumps, incremental extracts via change data capture (CDC), API calls for specific objects, direct SQL queries.]
  • Tools: [e.g., Custom scripts (Python/Java), database native tools (Oracle Data Pump, SQL Server SSIS), commercial ETL tools (Informatica PowerCenter, Talend), cloud-native services (AWS DMS, Azure Data Factory).]
  • Frequency: [e.g., One-time full extraction for historical data, continuous CDC for delta updates during coexistence.]
  • Security: Encrypted connections (SSL/TLS), secure credentials management.
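As a minimal sketch of the timestamp-based incremental extraction mentioned above (table and column names are illustrative; a real run would take them from the mapping documents and record the watermark only after the batch is safely staged):

```python
import sqlite3
from datetime import datetime, timezone

def extract_incremental(conn, table, watermark_column, last_watermark):
    """Extract only the rows changed since the last successful run."""
    cursor = conn.execute(
        # Table/column names come from trusted mapping metadata, not user input;
        # the watermark itself is parameterised.
        f"SELECT * FROM {table} WHERE {watermark_column} > ?",
        (last_watermark,),
    )
    rows = cursor.fetchall()
    # Record the new watermark for the next delta run.
    new_watermark = datetime.now(timezone.utc).isoformat()
    return rows, new_watermark

# Usage: an in-memory SQLite database stands in for the source system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER, updated_at TEXT)")
conn.executemany(
    "INSERT INTO contacts VALUES (?, ?)",
    [(1, "2024-01-01T00:00:00"), (2, "2024-06-01T00:00:00")],
)
rows, watermark = extract_incremental(conn, "contacts", "updated_at", "2024-03-01T00:00:00")
print(len(rows))  # only the row updated after the previous watermark
```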

4.2. Staging Area

  • Purpose: A temporary, isolated environment to store extracted data before transformation and loading. Facilitates data cleansing, validation, and complex transformations without impacting source or target systems.
  • Technology: [e.g., Cloud Storage (AWS S3, Azure Blob Storage, Google Cloud Storage), Data Lake (Apache HDFS, Databricks Delta Lake), Relational Database (PostgreSQL, Snowflake).]
  • Schema: Raw data schema mirroring source, potentially with additional metadata (e.g., extraction timestamp).
  • Security: Encryption at rest and in transit, strict access controls.

4.3. Data Transformation

  • Purpose: Convert source data into a format and structure compatible with the target system, applying business rules, cleansing, and enrichment.
  • Tools: [e.g., Commercial ETL tools (Informatica, Talend, Matillion), Custom scripts (Python with Pandas, Spark), SQL transformations within a data warehouse.]
  • Key Activities:

* Field Mapping: Source field to target field.

* Data Type Conversion: Adapting data types (e.g., VARCHAR to INT).

* Data Cleansing: Removing duplicates, correcting inconsistencies, handling missing values.

* Data Enrichment: Adding derived data, integrating with external data sources.

* Data Aggregation/Disaggregation: Restructuring data as needed.

* Data Validation: Applying rules to ensure data quality before loading.

  • Transformation Rules Engine: A mechanism (e.g., configuration files, metadata-driven ETL) to manage and apply transformation logic.
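The metadata-driven approach can be sketched as follows: transformation logic lives in a configuration structure rather than in code, so rules can be reviewed and versioned separately. Rule IDs and field names below are illustrative only, not taken from any specific system.

```python
# Rule registry: each rule ID maps to a small, testable transformation.
RULES = {
    "TR_LOWER_EMAIL": lambda v: v.strip().lower() if v else None,
    "TR_STATUS_MAP": lambda v: {"Active": "Current", "Inactive": "Archived"}.get(v, "Unknown"),
}

FIELD_MAP = [
    # (source_field, target_field, rule_id or None for a direct copy)
    ("EMAIL", "email", "TR_LOWER_EMAIL"),
    ("STATUS", "status", "TR_STATUS_MAP"),
    ("FIRST_NAME", "first_name", None),
]

def transform(record: dict) -> dict:
    """Apply the configured field map and rules to one source record."""
    out = {}
    for src, tgt, rule_id in FIELD_MAP:
        value = record.get(src)
        out[tgt] = RULES[rule_id](value) if rule_id else value
    return out

print(transform({"EMAIL": " Test@Example.COM ", "STATUS": "Active", "FIRST_NAME": "Ada"}))
# {'email': 'test@example.com', 'status': 'Current', 'first_name': 'Ada'}
```

Because the mapping and rules are plain data, the same engine can be driven from a spreadsheet export or configuration file.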

4.4. Data Loading

  • Methodology: [e.g., Bulk API uploads, database inserts/updates, direct file imports, message queue ingestion.]
  • Tools: [e.g., Target system native loaders (Salesforce Data Loader, SAP Loaders), commercial ETL tools, custom API clients.]
  • Loading Strategy:

* Initial Load: Full historical data.

* Incremental Loads: Delta changes during coexistence.

* Error Handling: Mechanisms to log and manage failed records without stopping the entire load.

  • Pre-load Validation: Final checks before committing data to the target system.
  • Post-load Validation: Verifying data integrity and completeness in the target system.
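The batching and error-handling points above can be sketched as follows; `load_one` is a hypothetical stand-in for a target-system insert call (API or SQL), so a single bad record is logged and skipped rather than aborting the whole load:

```python
def load_in_batches(records, load_one, batch_size=200):
    """Load records in batches, collecting failures instead of stopping."""
    loaded, failed = [], []
    for start in range(0, len(records), batch_size):
        for record in records[start:start + batch_size]:
            try:
                load_one(record)
                loaded.append(record)
            except Exception as exc:  # log and continue; do not stop the load
                failed.append((record, str(exc)))
    return loaded, failed

# Usage with a fake loader that rejects records missing an id.
def fake_loader(rec):
    if "id" not in rec:
        raise ValueError("missing id")

ok, bad = load_in_batches([{"id": 1}, {}, {"id": 3}], fake_loader)
print(len(ok), len(bad))  # 2 1
```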

5. Migration Tooling and Technologies

The selection of appropriate tools is critical for efficiency, reliability, and maintainability.

  • Primary ETL/ELT Tool: [e.g., Informatica PowerCenter, Talend Data Fabric, AWS Glue, Azure Data Factory, Google Cloud Dataflow, Matillion, Fivetran, Custom Python/Spark framework.]

* Rationale: [e.g., Existing organizational expertise, cloud-native integration, scalability for large datasets, robust error handling, metadata management capabilities.]

  • Data Storage (Staging): [e.g., AWS S3 with Glue Catalog, Azure Data Lake Storage Gen2, Snowflake, Google Cloud Storage.]

* Rationale: [e.g., Cost-effectiveness, scalability, integration with chosen ETL tool, support for various data formats.]

  • Data Validation Framework: [e.g., Custom Python scripts, SQL-based validation rules, data quality tools (e.g., Ataccama, Collibra).]

* Rationale: [e.g., Flexibility, integration with existing CI/CD pipelines, comprehensive rule definition.]

  • Version Control: [e.g., Git (GitHub, GitLab, Bitbucket)] for all scripts, configurations, and documentation.
  • Orchestration/Scheduling: [e.g., Apache Airflow, AWS Step Functions, Azure Logic Apps, Control-M] for managing migration job dependencies and scheduling.
  • Monitoring & Alerting: [e.g., Prometheus/Grafana, ELK Stack, Splunk, cloud-native monitoring (CloudWatch, Azure Monitor, GCP Operations).]

6. Security Architecture

Security must be paramount throughout the entire migration process.

  • Data Encryption:

* In Transit: All data transfers will use encrypted protocols (e.g., TLS 1.2+ for APIs, SFTP for file transfers, VPNs for network tunnels).

* At Rest: Data stored in staging areas and temporary locations will be encrypted using industry-standard algorithms (e.g., AES-256).

  • Access Control:

* Least Privilege: Access to source systems, staging areas, and target systems will be granted on a "need-to-know" and "least privilege" basis.

* Role-Based Access Control (RBAC): Defined roles and permissions for migration team members.

* Multi-Factor Authentication (MFA): Enforced for all critical access points.

  • Credential Management:

* Secure storage of API keys, database credentials, and other sensitive information using dedicated secrets management services (e.g., AWS Secrets Manager, Azure Key Vault, HashiCorp Vault).

* No hardcoding of credentials.

  • Network Security:

* Firewall rules and Security Groups to restrict network access to migration components.

* Use of private endpoints or VPNs for sensitive data transfers between environments.

  • Auditing and Logging: Comprehensive logging of all migration activities, access attempts, and data changes, with logs securely stored and regularly reviewed.
  • Data Masking/Anonymization: Consideration for masking or anonymizing sensitive non-production data used for testing and validation environments.
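The masking consideration above can be sketched with deterministic pseudonymization: emails in non-production copies are replaced with a stable token so joins and duplicate checks still work, but the real address never leaves production. The salt handling here is deliberately simplified and the reserved `.invalid` domain is used for the placeholder.

```python
import hashlib

def mask_email(email: str, salt: str = "migration-salt") -> str:
    """Replace an email with a stable, non-reversible pseudonym."""
    digest = hashlib.sha256((salt + email.lower()).encode()).hexdigest()[:12]
    return f"user_{digest}@example.invalid"

a = mask_email("Jane.Doe@corp.com")
b = mask_email("jane.doe@CORP.com")
print(a == b)  # True: masking is case-insensitive and deterministic
```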

7. Performance and Scalability Considerations

The architecture must support the required data volumes and velocity within acceptable timeframes.

  • Parallel Processing: Design migration jobs to run in parallel where possible (e.g., migrating different tables or partitions concurrently).
  • Batching: Optimize data loading by processing records in efficient batches rather than individually.
  • Indexing: Ensure appropriate indexing on source and target databases to facilitate efficient extraction and loading.
  • Resource Provisioning: Dynamically scale compute and storage resources (e.g., cloud-based ETL tools, auto-scaling databases) to handle peak loads during migration windows.
  • Network Bandwidth: Ensure sufficient network bandwidth between source, staging, and target systems.
  • Incremental Loads: Implement Change Data Capture (CDC) or timestamp-based incremental loading for large datasets to minimize the volume of data transferred during subsequent runs.
  • Performance Testing: Plan for dedicated performance testing of extraction, transformation, and loading processes using realistic data volumes.
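The parallel-processing point above can be sketched with a thread pool that migrates independent tables (or partitions) concurrently; `migrate_table` is a hypothetical placeholder for a full extract-transform-load cycle for one table:

```python
from concurrent.futures import ThreadPoolExecutor

def migrate_table(name: str) -> str:
    # Real work would be: extract -> stage -> transform -> load for this table.
    return f"{name}: done"

tables = ["accounts", "contacts", "opportunities"]
with ThreadPoolExecutor(max_workers=3) as pool:
    # pool.map preserves input order, which keeps reporting deterministic.
    results = list(pool.map(migrate_table, tables))
print(results)
```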

8. High Availability and Disaster Recovery

The migration architecture should be resilient to failures.

  • Redundant Components: Utilize redundant infrastructure for critical migration components (e.g., highly available ETL tool instances, replicated staging databases).
  • Checkpointing/Restartability: Design migration jobs to be restartable from the point of failure without re-processing all data from the beginning.
  • Automated Retries: Implement automated retry mechanisms for transient failures during data extraction or loading.
  • Backup and Restore: Regular backups of the staging area and ETL configurations.
  • Disaster Recovery Plan: Define procedures for recovering the migration environment in case of a major outage (e.g., region failure for cloud services).
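The automated-retry idea above can be sketched as a small wrapper with exponential backoff for transient failures (delays are shortened here for illustration; production values would be tuned to the target system's rate limits):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # exhausted: surface the failure to the orchestrator
            time.sleep(base_delay * 2 ** attempt)  # 0.01s, 0.02s, ...

# Usage with a stand-in call that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky))  # ok
```

Checkpointing would layer on top of this: the orchestrator records the last committed batch, so a rerun resumes there instead of reprocessing everything.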

9. Monitoring and Logging Architecture

Robust monitoring is essential for visibility, troubleshooting, and ensuring migration success.

  • Centralized Logging: Aggregate logs from all migration components (extraction scripts, ETL jobs, staging area operations, loading tools) into a centralized logging system.
  • Key Metrics to Monitor:

* Data volume processed (records, bytes).

* Processing rates (records/second).

* Error rates (failed records, job failures).

* Latency at each stage (extraction, transformation, loading).

* Resource utilization (CPU, memory, disk I/O, network).

* Job status and completion times.

  • Alerting: Set up automated alerts for critical events (e.g., job failures, high error rates, performance degradation, security breaches).
  • Dashboards: Create intuitive dashboards to track overall migration progress, per-stage throughput, and error rates for both technical teams and business stakeholders.
Data Migration Planner: Comprehensive Plan & Documentation

This document outlines the detailed plan for the upcoming data migration, encompassing field mapping, transformation rules, validation scripts, rollback procedures, and timeline estimates. This comprehensive approach ensures a smooth, secure, and successful transition of data from the source to the target system.


1. Executive Summary

This Data Migration Plan details the strategy and tactical steps required to migrate critical data from [Source System Name/Type] to [Target System Name/Type]. The primary objective is to ensure data integrity, minimize downtime, and achieve a seamless transition, enabling [Target System Name/Type] to operate effectively with accurate and complete historical data. This document serves as the foundational guide for all migration activities, providing clear instructions and accountability.


2. Project Scope & Objectives

2.1. Scope Definition

  • Source System: [Name of Source System, e.g., Legacy CRM (Salesforce Classic)]
  • Target System: [Name of Target System, e.g., Salesforce Lightning Enterprise Edition]
  • Data Entities to be Migrated:

* [List specific entities, e.g., Accounts, Contacts, Opportunities, Products, Custom Objects (e.g., Project__c)]

* [Specify data age/range, e.g., All active data from the last 5 years, historical data for reference]

  • Data Not in Scope:

* [List specific entities or types of data not being migrated, e.g., Archived email logs older than 3 years, temporary scratchpad data]

  • Migration Type: [e.g., Big-Bang, Phased, Incremental] - Recommendation: Phased migration to allow for iterative testing and validation.

2.2. Migration Objectives

  • Achieve 100% data integrity and accuracy in the target system for all in-scope data.
  • Ensure complete transfer of all identified historical data required for business operations.
  • Minimize disruption to business operations during the migration window.
  • Validate data post-migration to confirm successful transfer and functionality.
  • Establish robust rollback procedures to mitigate risks.
  • Provide a clear audit trail for all migrated data.

3. Source & Target Systems Overview

| Aspect | Source System | Target System | Notes |
| :--- | :--- | :--- | :--- |
| Name/Type | [e.g., Custom-built SQL Database (MS SQL Server)] | [e.g., SaaS Application (Salesforce.com)] | |
| Version/Env | [e.g., SQL Server 2012, Production Environment] | [e.g., Spring '24 Release, Production Sandbox] | Migration will initially target a Sandbox environment |
| Key Integrations | [e.g., ERP System (SAP), Marketing Automation] | [e.g., ERP System (SAP), Customer Service Portal] | Ensure compatibility post-migration |
| Access Method | [e.g., ODBC, Direct Database Access] | [e.g., Salesforce API (SOAP/REST), Data Loader] | |
| Estimated Data Volume | [e.g., 50 GB, 10 Million Records across entities] | [e.g., Expected increase of 50 GB] | Critical for performance planning |


4. Data Migration Strategy

The migration will follow a [Phased/Big-Bang/Incremental] approach, specifically:

  • Extraction: Data will be extracted from the source system using [specify method, e.g., SQL queries, custom scripts, ETL tool] into a staging area.
  • Transformation: Data in the staging area will undergo necessary transformations (cleaning, formatting, remapping) as defined in Section 6.2.
  • Loading: Transformed data will be loaded into the target system using [specify method, e.g., Salesforce Data Loader, custom API scripts, ETL tool].
  • Validation: Post-load validation will be performed using automated scripts and manual checks to ensure data integrity and completeness.
  • Iteration: For phased migrations, this cycle will repeat for subsequent data entities. For big-bang, it will be a single, concentrated effort.

5. Detailed Migration Plan Components

This section details the core elements of the migration plan.

5.1. Field Mapping Document

The Field Mapping Document provides a comprehensive, field-by-field translation from the source to the target system. This is a critical artifact for ensuring data accuracy and completeness.

Purpose: To define how each relevant field in the source system corresponds to a field in the target system, including data type considerations and any direct transformations.

Structure: A detailed spreadsheet will be maintained for each major data entity (e.g., Accounts, Contacts). An example mapping for a Contact entity is provided below:

| Source Object | Source Field Name | Source Data Type | Target Object | Target Field Name | Target Data Type | Transformation Rule ID | Notes / Comments |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Legacy_CRM.Contact | ContactID | INT | Contact | External_ID__c | TEXT(255) | TR001 | Unique identifier, mapped to custom external ID field |
| Legacy_CRM.Contact | FirstName | VARCHAR(50) | Contact | FirstName | TEXT(40) | N/A | Direct map |
| Legacy_CRM.Contact | LastName | VARCHAR(50) | Contact | LastName | TEXT(80) | N/A | Direct map |
| Legacy_CRM.Contact | Email | VARCHAR(100) | Contact | Email | EMAIL | TR002 | Validate format, handle duplicates |
| Legacy_CRM.Contact | Status | VARCHAR(20) | Contact | Status__c | PICKLIST | TR003 | Map legacy status values to new picklist values |
| Legacy_CRM.Contact | DateCreated | DATETIME | Contact | CreatedDate | DATETIME | TR004 | Convert to target system's UTC format |
| Legacy_CRM.Contact | Phone_Primary | VARCHAR(20) | Contact | Phone | PHONE | TR005 | Cleanse for non-numeric characters |
| Legacy_CRM.Contact | AddressLine1, AddressLine2, City, State, Zip | VARCHAR | Contact | MailingStreet, MailingCity, MailingState, MailingPostalCode | TEXT | TR006 | Concatenate AddressLine1 & AddressLine2 into MailingStreet |

Process:

  1. Discovery: Analyze source and target schemas.
  2. Initial Mapping: Create a preliminary mapping based on field names and descriptions.
  3. Business Review: Review with business stakeholders to confirm semantic correctness and identify transformation needs.
  4. Technical Review: Review with technical teams to confirm data type compatibility and transformation feasibility.
  5. Finalization: Obtain sign-off on all mapping documents.
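To keep the signed-off spreadsheet and the ETL jobs from drifting apart, the mapping rows can also be held in a machine-readable form and checked automatically. The rows below mirror the example table; the required-field list is an illustrative assumption, not a target-system requirement.

```python
# Mapping rows as data: each row is one line of the Field Mapping Document.
MAPPING = [
    {"source": "ContactID", "target": "External_ID__c", "rule": "TR001"},
    {"source": "FirstName", "target": "FirstName", "rule": None},
    {"source": "LastName", "target": "LastName", "rule": None},
    {"source": "Email", "target": "Email", "rule": "TR002"},
]

# Hypothetical list of fields the target system requires to be populated.
REQUIRED_TARGET_FIELDS = {"External_ID__c", "FirstName", "LastName", "Email", "Status__c"}

mapped = {row["target"] for row in MAPPING}
missing = sorted(REQUIRED_TARGET_FIELDS - mapped)
print(missing)  # unmapped required fields, flagged during technical review
```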

5.2. Transformation Rules

Transformation rules define the specific logic applied to source data before it is loaded into the target system. Each rule is uniquely identified for traceability.

Purpose: To ensure source data conforms to the target system's data model, business rules, and data quality standards.

Structure: A dedicated document or section within the mapping document will detail each transformation rule.

| Rule ID | Source Field(s) | Target Field | Description | Logic / Pseudocode | Example (Source -> Target) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| TR001 | ContactID | External_ID__c | Direct copy of the unique contact identifier. | TARGET.External_ID__c = SOURCE.ContactID | 12345 -> 12345 |
| TR002 | Email | Email | Validate email format and convert to lowercase. Handle duplicates by flagging or merging based on policy. | IF IS_VALID_EMAIL(SOURCE.Email) THEN TARGET.Email = LOWER(SOURCE.Email) ELSE TARGET.Email = NULL (and log error) | TEST@EXAMPLE.com -> test@example.com |
| TR003 | Status | Status__c | Map legacy status values to new picklist values. | CASE SOURCE.Status WHEN 'Active' THEN 'Current' WHEN 'Inactive' THEN 'Archived' WHEN 'Lead' THEN 'Prospect' ELSE 'Unknown' | Active -> Current |
| TR004 | DateCreated | CreatedDate | Convert to UTC and ensure target system's datetime format. | TARGET.CreatedDate = CONVERT_TO_UTC(SOURCE.DateCreated, 'YYYY-MM-DDTHH:MM:SSZ') | 2023-10-26 10:30:00 PST -> 2023-10-26T18:30:00Z |
| TR005 | Phone_Primary | Phone | Remove all non-numeric characters. | TARGET.Phone = REGEX_REPLACE(SOURCE.Phone_Primary, '[^0-9]', '') | (123) 456-7890 -> 1234567890 |
| TR006 | AddressLine1, AddressLine2 | MailingStreet | Concatenate address lines. | TARGET.MailingStreet = CONCAT(SOURCE.AddressLine1, ' ', SOURCE.AddressLine2) | 123 Main St, Apt 4B -> 123 Main St Apt 4B |
| TR007 | Legacy_Amount | Amount__c | Convert currency from USD to EUR (example). | TARGET.Amount__c = SOURCE.Legacy_Amount * 0.92 (as of current exchange rate) | 100 USD -> 92 EUR |
| TR008 | OwnerID | OwnerId | Map legacy owner IDs to new user IDs in the target system using a lookup table. | TARGET.OwnerId = LOOKUP(SOURCE.OwnerID, 'LegacyUserID_to_NewUserID_Map') | L101 -> U007 |

Common Transformation Types:

  • Data Type Conversion: e.g., String to Integer, Date to Datetime.
  • Format Conversion: e.g., Phone number formatting, date format standardization.
  • Concatenation/Splitting: Combining multiple source fields into one target field or vice-versa.
  • Lookup/Mapping: Translating source codes/IDs to target system equivalents (e.g., status codes, user IDs).
  • Default Values: Assigning a default value if the source field is null or empty.
  • Aggregation/Derivation: Calculating a new value based on multiple source fields.
  • Data Cleansing: Removing invalid characters, trimming whitespace, deduplication.
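Two of the rules from the table above (TR004 and TR005) can be sketched directly in Python; the fixed UTC-8 offset for the source timezone is an assumption for illustration:

```python
import re
from datetime import datetime, timezone, timedelta

def tr005_cleanse_phone(raw: str) -> str:
    """TR005: strip every non-numeric character from a phone number."""
    return re.sub(r"[^0-9]", "", raw)

def tr004_to_utc(local_dt: datetime, utc_offset_hours: int) -> str:
    """TR004: convert a source-local datetime to the target's UTC ISO format."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    return local_dt.replace(tzinfo=tz).astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(tr005_cleanse_phone("(123) 456-7890"))            # 1234567890
print(tr004_to_utc(datetime(2023, 10, 26, 10, 30), -8))  # 2023-10-26T18:30:00Z
```

Both match the worked examples in the rules table, which makes the table itself usable as a test fixture.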

5.3. Validation Scripts & Strategy

Validation is crucial to confirm the successful and accurate migration of data. It will be performed at multiple stages.

Purpose: To verify data completeness, accuracy, consistency, and integrity after migration.

Strategy:

  1. Pre-Migration (Source Data Profiling):

* Analyze source data for anomalies, inconsistencies, and missing values.

* Generate summary statistics (row counts, min/max values, distinct counts) for all in-scope data.

* Identify potential data quality issues that require cleansing before migration.

* Tooling: SQL queries, data profiling tools.

  2. Post-Migration (Target Data Verification):

* Record Count Verification: Compare total record counts for each entity between source and target.

Script Example: SELECT COUNT(*) FROM SourceDB.Contacts; vs SELECT COUNT(*) FROM TargetDB.Contacts;

* Aggregate Sum Checks: Verify numerical fields by summing values in source and target.

Script Example: SELECT SUM(Amount) FROM SourceDB.Opportunities; vs SELECT SUM(Amount__c) FROM TargetDB.Opportunities;

* Random Sample Data Verification: Manually inspect a statistically significant sample of records (e.g., 5-10% or N records per entity) to confirm visual accuracy and field-level mapping.

* Referential Integrity Checks: Verify relationships between records (e.g., all Contacts have a valid Account).

Script Example: SELECT COUNT(*) FROM TargetDB.Contacts WHERE AccountId NOT IN (SELECT Id FROM TargetDB.Accounts);

* Business Logic Validation: Run reports or queries that reflect critical business logic in the target system to ensure data behaves as expected.

Script Example: Verify that all "Open" opportunities have an associated product.

* Duplicate Data Checks: Identify and report any duplicate records created during migration.

Script Example: SELECT Email, COUNT(*) FROM TargetDB.Contacts GROUP BY Email HAVING COUNT(*) > 1;

* Data Type and Length Verification: Ensure data types and lengths are correctly maintained as per target schema.

Tooling:

  • SQL scripts for direct database comparison.
  • Target system reporting tools (e.g., Salesforce Reports, custom dashboards).
  • ETL tool's built-in validation features.
  • Custom Python/Java scripts for complex validation logic.
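A minimal sketch of the record-count and aggregate-sum checks, run here against two in-memory SQLite tables standing in for the source and target databases (real connections and SQL would come from the tooling above):

```python
import sqlite3

def reconcile(src_conn, tgt_conn, src_sql, tgt_sql):
    """Run one validation query on each side and compare the results."""
    src_val = src_conn.execute(src_sql).fetchone()[0]
    tgt_val = tgt_conn.execute(tgt_sql).fetchone()[0]
    return src_val == tgt_val, src_val, tgt_val

# Stand-in source and target with matching opportunity amounts.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE opportunities (amount REAL)")
src.executemany("INSERT INTO opportunities VALUES (?)", [(100.0,), (250.0,)])

tgt = sqlite3.connect(":memory:")
tgt.execute("CREATE TABLE opportunities (amount REAL)")
tgt.executemany("INSERT INTO opportunities VALUES (?)", [(100.0,), (250.0,)])

ok, s, t = reconcile(src, tgt,
                     "SELECT SUM(amount) FROM opportunities",
                     "SELECT SUM(amount) FROM opportunities")
print(ok)  # True: source and target totals match
```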

5.4. Rollback Procedures

A robust rollback plan is essential to mitigate risks and ensure business continuity in case of migration failure or unforeseen issues.

Purpose: To restore systems and data to their pre-migration state if the migration is unsuccessful or causes critical errors.

Phases of Rollback:

  1. Pre-Migration Backup:

* Source System Backup: A full, verified backup of the source database/application will be taken and validated immediately before the migration window begins.

## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}