Data Migration Planner
Run ID: 69cc5edeb4d97b7651475cdf (2026-03-31, Development)
PantheraHive BOS

As part of the PantheraHive workflow "Data Migration Planner," this deliverable is the output of step 2: gemini → generate_code. It comprises detailed configuration examples and a Python script designed to support robust data migration planning, execution, and validation.


Project Overview: Data Migration Planner

The goal of this project step is to generate code and configuration templates that encapsulate the critical aspects of a data migration. This includes defining field mappings, specifying transformation rules, outlining validation procedures, establishing rollback strategies, and providing a framework for timeline estimation. The generated code is designed to be highly configurable, maintainable, and extensible, serving as a foundation for your data migration efforts.

Deliverable: Data Migration Planner - Code & Configuration

This section provides the core components for your data migration planning: a structured YAML configuration file and a Python script that interprets and executes the migration logic.

1. Configuration File: migration_config.yaml

Purpose

This YAML file serves as the central definition for your data migration. It allows you to declaratively define source and target systems, specify which entities (tables/collections) to migrate, detail field mappings, and outline transformation and validation rules, all without modifying core code.

Detailed Structure

The migration_config.yaml is structured to cover all necessary aspects of a migration for multiple entities.

* name: A unique identifier for the migration entity (e.g., 'Users', 'Orders').
* source_entity_name: The actual name of the entity in the source system (e.g., 'TBL_USERS').
* target_entity_name: The actual name of the entity in the target system (e.g., 'users').
* field_mapping: A dictionary mapping source field names to target field names.
  * source_field: The name of the field in the source entity.
  * target_field: The name of the field in the target entity.
* transformation_rules: A dictionary where keys are target_field names and values are transformation rule definitions.
  * target_field: The field in the target entity that requires transformation.
  * rule: A dictionary specifying the transformation function and its arguments.
    * function: The name of a predefined transformation function (see data_migration_planner.py).
    * args: A list of arguments to pass to the transformation function. These can be literal values or references to other source fields (e.g., {'type': 'source_field', 'value': 'first_name'}).
* validation_rules: A list of validation rules to apply to the target data.
  * field: The target field to validate.
  * rule: A dictionary specifying the validation type and parameters.
    * type: The type of validation (e.g., 'not_null', 'unique', 'regex', 'custom_function').
    * params: Parameters specific to the validation type (e.g., pattern for regex).

Example migration_config.yaml

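A minimal illustrative configuration, assuming a hypothetical Users entity; the top-level key names and field names are examples following the structure described above, not a fixed schema:

```yaml
migration:
  entities:
    - name: Users
      source_entity_name: TBL_USERS
      target_entity_name: users
      field_mapping:
        USER_ID: user_id
        EMAIL_ADDR: email
      transformation_rules:
        email:
          function: to_lower
          args:
            - {type: source_field, value: EMAIL_ADDR}
      validation_rules:
        - field: email
          rule:
            type: regex
            params:
              pattern: "^[^@\\s]+@[^@\\s]+$"
```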
---

### 2. Python Script: `data_migration_planner.py`

This Python script is the operational core of your data migration plan. It's designed to read the `migration_config.yaml`, provide extensible transformation and validation logic, and outline a framework for execution and rollback.

#### Purpose

*   **Configuration Interpretation:** Load and parse the `migration_config.yaml`.
*   **Extensible Logic:** Provide a framework for custom transformation and validation functions.
*   **Migration Orchestration:** Outline the steps for data extraction, transformation, loading, and validation.
*   **Rollback Framework:** Define a structured approach to rollback procedures.
*   **Timeline Estimation:** Offer a module for estimating migration effort.

#### Core Components Explained

##### Configuration Loading (`load_migration_config`)
Loads the YAML configuration file, ensuring it's well-formed and accessible.

##### Data Transformation Engine (`TransformationEngine`)
A class that manages and applies transformation rules defined in the configuration. It includes common, reusable transformation functions and allows for easy addition of custom logic.

*   **Predefined Transformations:** Examples include `to_upper`, `to_lower`, `format_datetime`, `concatenate_fields`, `map_boolean_to_string`, `generate_uuid_if_null`, `passthrough`, `convert_to_decimal`, `capitalize_first_letter`.
*   **`apply_transformation` Method:** Takes a source row, a target field, and its rule, then executes the specified function, resolving arguments from source fields or literal values.
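The argument-resolution logic described above can be sketched as follows. This is a minimal, self-contained subset (only two registered functions) following the rule shape from the configuration section; the engine internals are illustrative, not the definitive implementation:

```python
class TransformationEngine:
    """Illustrative subset of the transformation engine described above."""

    def __init__(self):
        # Only two functions registered for this sketch.
        self.transformations = {
            "to_upper": lambda v: v.upper(),
            "concatenate_fields": lambda *parts: " ".join(parts),
        }

    def apply_transformation(self, source_row: dict, target_field: str, rule: dict):
        """Resolve the rule's args (literals or source-field references), then apply."""
        func = self.transformations[rule["function"]]
        resolved = []
        for arg in rule.get("args", []):
            # A dict of the form {'type': 'source_field', 'value': ...} is a reference.
            if isinstance(arg, dict) and arg.get("type") == "source_field":
                resolved.append(source_row.get(arg["value"]))
            else:
                resolved.append(arg)
        return func(*resolved)
```

For example, concatenating `first_name` and `last_name` from a source row into a target `full_name` field uses two source-field references as arguments.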

##### Data Validation Engine (`ValidationEngine`)
A class responsible for applying validation rules to transformed data. This helps ensure data quality before and after loading into the target system.

*   **Predefined Validations:** Examples include `not_null`, `not_empty_string`, `is_numeric`, `is_datetime`, `regex`, `in_list`, `unique_check` (which would require external data access).
*   **`validate_row` Method:** Iterates through all validation rules for an entity and reports any failures.
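A sketch of `validate_row`, assuming the rule shape from the configuration section and implementing only two of the validations listed above; the failure-message format is an illustrative choice:

```python
import re

class ValidationEngine:
    """Illustrative subset of the validation engine described above."""

    def __init__(self):
        # Each validator takes (value, params) and returns True on success.
        self.validators = {
            "not_null": lambda value, params: value is not None,
            "regex": lambda value, params: bool(re.match(params["pattern"], str(value))),
        }

    def validate_row(self, row: dict, rules: list) -> list:
        """Apply every rule to the row; return failure messages (empty means pass)."""
        failures = []
        for r in rules:
            check = self.validators[r["rule"]["type"]]
            if not check(row.get(r["field"]), r["rule"].get("params", {})):
                failures.append(f"{r['field']} failed {r['rule']['type']}")
        return failures
```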

##### Rollback Procedure Framework (`RollbackProcedure`)
A conceptual class outlining the essential steps for a robust rollback strategy. This is crucial for minimizing risks during migration.

*   **`pre_migration_backup`:** Steps to back up critical data in the target system.
*   **`post_migration_restore`:** Steps to restore the target system to its pre-migration state.

##### Timeline Estimation Module (`estimate_timeline`)
A function that provides a high-level estimate of migration effort based on configurable parameters like data volume, complexity, and number of entities. This helps in project planning.
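Such an estimator can be sketched as below. The coefficients (per-entity effort, assumed throughput) are illustrative placeholders, not calibrated figures, and would need tuning against real project data:

```python
def estimate_timeline(num_entities: int, total_rows: int, complexity: float = 1.0) -> float:
    """Very rough migration effort estimate, in person-days.

    Coefficients are illustrative assumptions, not calibrated figures.
    """
    per_entity_days = 0.5        # mapping + validation design per entity
    rows_per_day = 5_000_000     # assumed bulk throughput incl. verification
    base = num_entities * per_entity_days + total_rows / rows_per_day
    return round(base * complexity, 1)
```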

#### Example `data_migration_planner.py`


The following detailed study plan for Data Migration Excellence accompanies the planner deliverable. It is designed to equip a learner with the comprehensive knowledge and practical skills required for successful data migration projects.


Comprehensive Study Plan for Data Migration Excellence

1. Introduction & Purpose

This study plan outlines a structured approach to mastering the complexities of data migration. It is tailored for professionals seeking to develop expertise in planning, executing, and validating data migration initiatives. The plan integrates theoretical knowledge with practical application, covering all critical phases from initial assessment to post-migration activities.

2. Learning Objectives

Upon successful completion of this study plan, the learner will be able to:

  • Understand Data Migration Fundamentals: Grasp core concepts, types, methodologies, and common challenges associated with data migration.
  • Conduct Thorough Discovery & Assessment: Perform source and target system analysis, data profiling, and dependency mapping.
  • Develop Robust Migration Strategies: Design comprehensive data migration plans, including scope definition, risk assessment, and tool selection.
  • Master Data Mapping & Transformation: Create detailed field mappings, define complex transformation rules, and implement data cleansing strategies.
  • Implement Data Extraction & Loading: Understand various techniques for extracting data from source systems and loading it into target systems, including ETL/ELT processes.
  • Design & Execute Validation Procedures: Develop effective data validation scripts and quality assurance processes to ensure data integrity and accuracy post-migration.
  • Plan & Manage Cutover & Rollback: Define clear cutover strategies, understand downtime considerations, and establish robust rollback procedures for risk mitigation.
  • Apply Best Practices & Security: Incorporate industry best practices, ensure data security, privacy, and compliance throughout the migration lifecycle.
  • Utilize Data Migration Tools: Gain familiarity with leading data migration and ETL tools, understanding their capabilities and application.
  • Solve Real-World Migration Challenges: Apply learned knowledge to analyze and propose solutions for complex data migration scenarios through case studies and project simulations.

3. Weekly Schedule (12 Weeks)

This schedule assumes approximately 10-15 hours of study per week, including reading, exercises, and project work.


Week 1: Introduction to Data Migration & Fundamentals

  • Topics: What is data migration? Types of migration (on-premise to cloud, database upgrades, application consolidation), common challenges, benefits, and risks. Overview of the data migration lifecycle.
  • Key Concepts: Big Bang vs. Phased migration, ETL vs. ELT, data integrity, data quality.
  • Activities: Read foundational articles, watch introductory videos.

Week 2: Discovery & Assessment – Source & Target Analysis

  • Topics: Understanding source and target systems (databases, applications, file formats), data profiling techniques, schema analysis, data volume estimation, identifying data owners.
  • Key Concepts: Metadata management, data dictionary, data lineage, data dependencies.
  • Activities: Analyze sample source/target schemas, perform basic data profiling on a small dataset.

Week 3: Planning & Strategy – Methodologies & Scope

  • Topics: Defining migration scope, objectives, success criteria. Choosing migration methodologies. Stakeholder identification, communication plan. Risk assessment and mitigation strategies. Tool selection criteria.
  • Key Concepts: Project charter, scope document, risk log, migration strategy document.
  • Activities: Draft a high-level migration plan for a hypothetical scenario.

Week 4: Field Mapping & Data Modeling

  • Topics: Detailed field-level mapping (source to target), data type conversions, handling nulls, primary/foreign key relationships. Understanding target data models and schema design.
  • Key Concepts: Data mapping document, referential integrity, surrogate keys.
  • Activities: Create a detailed field mapping document for a sample dataset.

Week 5: Data Transformation & Cleansing Rules

  • Topics: Defining transformation rules (lookup, concatenation, splitting, aggregation, conditional logic). Data cleansing techniques (deduplication, standardization, correction). Handling historical data.
  • Key Concepts: Business rules, data quality rules, data governance.
  • Activities: Develop transformation rules for specific data quality issues in a sample dataset.

Week 6: Data Extraction Techniques

  • Topics: Methods for data extraction (APIs, direct database queries, file exports, change data capture (CDC)). Performance considerations. Handling large volumes of data.
  • Key Concepts: Incremental vs. full extraction, batch processing, streaming.
  • Activities: Practice simple data extraction using SQL or a scripting language from a sample database.

Week 7: Data Loading Techniques

  • Topics: Methods for data loading (inserts, updates, upserts). Understanding target system constraints. Error handling during loading. Performance optimization for loading.
  • Key Concepts: Bulk loading, trickle loading, indexing considerations.
  • Activities: Practice loading data into a target database using SQL or an ETL tool.

Week 8: Data Validation & Quality Assurance

  • Topics: Pre-migration validation, post-migration validation. Data count checks, checksums, reconciliation reports, referential integrity checks, business rule validation.
  • Key Concepts: Data quality metrics, validation scripts, audit trails.
  • Activities: Write sample SQL queries/scripts for validating data counts and specific business rules post-migration.
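As one concrete Week 8 exercise, the data count check above can be sketched as a per-entity row-count reconciliation; the function name and report shape are illustrative:

```python
def reconcile_counts(source_counts: dict, target_counts: dict) -> dict:
    """Compare per-entity row counts; return only the entities whose counts differ."""
    mismatches = {}
    for entity, src in source_counts.items():
        tgt = target_counts.get(entity, 0)
        if src != tgt:
            mismatches[entity] = {"source": src, "target": tgt, "delta": tgt - src}
    return mismatches
```

An empty result means counts reconcile; any entry is a candidate for deeper checksum or field-level reconciliation.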

Week 9: Testing, Cutover & Rollback Procedures

  • Topics: Unit testing, integration testing, user acceptance testing (UAT). Planning the cutover window, communication plan. Developing comprehensive rollback procedures and contingency plans.
  • Key Concepts: Test plan, test cases, go/no-go decision, disaster recovery.
  • Activities: Outline a test plan for a migration scenario, including UAT steps.

Week 10: Post-Migration Activities & Monitoring

  • Topics: Decommissioning old systems, archiving data, performance monitoring of new systems, ongoing data quality management, user training.
  • Key Concepts: Post-migration audit, performance benchmarks, continuous improvement.
  • Activities: Research tools for post-migration monitoring and performance analysis.

Week 11: Security, Compliance & Best Practices

  • Topics: Data security during migration (encryption, access control). Regulatory compliance (GDPR, HIPAA, CCPA). Industry best practices, common pitfalls, lessons learned.
  • Key Concepts: Data masking, access management, regulatory frameworks.
  • Activities: Analyze a data migration scenario for potential security and compliance risks.

Week 12: Project Simulation & Review

  • Topics: End-to-end review of the data migration lifecycle. Advanced topics, troubleshooting common issues. Comprehensive case study analysis.
  • Activities: Work through a complete data migration simulation project, applying all learned concepts. Final project presentation.

4. Recommended Resources

  • Books:

* "Building a Data Warehouse for Dummies" by Wiley Publishing (for foundational data concepts).

* "The Data Warehouse Toolkit" by Ralph Kimball and Margy Ross (for data modeling and ETL principles).

* Specific books on ETL tools (e.g., "Microsoft SQL Server 2019 Integration Services: A Practical Guide").

  • Online Courses & Certifications:

* Coursera/edX/Udemy: "Data Engineering with Google Cloud," "AWS Certified Database – Specialty," "Azure Data Engineer Associate." Look for courses on ETL, data warehousing, and cloud migration.

* Vendor-Specific Training: Official training programs from Microsoft (Azure Data Factory, SSIS), AWS (DMS, Glue), Google Cloud (Dataflow), Informatica, Talend.

  • Documentation & Whitepapers:

* Cloud Providers: AWS Database Migration Service (DMS) documentation, Azure Data Factory documentation, Google Cloud Dataflow documentation.

* ETL Tools: Official documentation for Talend, Informatica PowerCenter, Microsoft SSIS, Apache NiFi.

* Industry Reports: Gartner, Forrester reports on data migration trends, tools, and best practices.

  • Articles & Blogs:

* Tech blogs from major cloud providers (AWS, Azure, Google Cloud).

* Blogs from data migration solution providers and consulting firms.

* Medium, Towards Data Science for practical guides and case studies.

  • Tools for Hands-on Practice:

* Databases: PostgreSQL, MySQL, SQL Server (free developer editions).

* ETL Tools: Talend Open Studio (free), SQL Server Integration Services (SSIS - part of SQL Server Developer Edition), Python with pandas, petl libraries.

* Cloud Services (Free Tiers): AWS Free Tier (DMS, Glue), Azure Free Account (Data Factory), Google Cloud Free Tier (Dataflow).

5. Milestones

  • Milestone 1 (End of Week 3): Submission of a high-level data migration strategy document for a defined scenario.
  • Milestone 2 (End of Week 5): Completion of a detailed field mapping and transformation rule set for a sample dataset.
  • Milestone 3 (End of Week 8): Development of a simple data extraction and loading script, along with basic validation queries.
  • Milestone 4 (End of Week 9): Outline of a comprehensive test plan and rollback procedure for a critical migration phase.
  • Milestone 5 (End of Week 12): Successful completion and presentation of a full-cycle data migration simulation project (e.g., migrating a small application database).

6. Assessment Strategies

  • Weekly Quizzes/Exercises: Short assessments to test understanding of theoretical concepts and practical application (e.g., writing SQL queries for profiling or transformation).
  • Practical Assignments: Hands-on tasks such as developing mapping documents, creating transformation rules, or writing validation scripts for given datasets.
  • Mid-Program Project (End of Week 6): A mini-project encompassing data profiling, mapping, and initial transformation design.
  • Final Project/Case Study: An end-to-end data migration simulation, requiring the learner to apply all learned skills to plan, execute, validate, and document a migration. This will involve designing the architecture, mapping, transformation, validation, and rollback procedures.
  • Peer Reviews: Engaging with peers to review migration plans, scripts, and strategies, providing constructive feedback.
  • Self-Assessment & Reflection: Regularly reflecting on challenges encountered and solutions found, documenting lessons learned.

This detailed study plan provides a robust framework for developing expertise in data migration. Consistent effort, hands-on practice, and engagement with real-world scenarios will be key to achieving the defined learning objectives and becoming a proficient data migration planner.

data_migration_planner.py:

```python
import yaml  # requires PyYAML
import datetime
import re
import uuid
import logging
from decimal import Decimal, InvalidOperation

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')


def load_migration_config(config_path: str) -> dict:
    """
    Loads the data migration configuration from a YAML file.

    Args:
        config_path (str): The path to the YAML configuration file.

    Returns:
        dict: The loaded configuration dictionary.

    Raises:
        FileNotFoundError: If the configuration file does not exist.
        yaml.YAMLError: If there is an error parsing the YAML file.
    """
    try:
        with open(config_path, 'r') as file:
            config = yaml.safe_load(file)
        logging.info(f"Configuration loaded successfully from {config_path}")
        return config
    except FileNotFoundError:
        logging.error(f"Configuration file not found at {config_path}")
        raise
    except yaml.YAMLError as e:
        logging.error(f"Error parsing YAML configuration file: {e}")
        raise


class TransformationEngine:
    """
    Manages and applies data transformation rules defined in the migration configuration.
    """

    def __init__(self):
        # Register common transformation functions
        self.transformations = {
            "passthrough": self._passthrough,
            "to_upper": self._to_upper,
            "to_lower": self._to_lower,
            "capitalize_first_letter": self._capitalize_first_letter,
            "format_datetime": self._format_datetime,
            "concatenate_fields": self._concatenate_fields,
            "map_boolean_to_string": self._map_boolean_to_string,
            "generate_uuid_if_null": self._generate_uuid_if_null,
            "convert_to_decimal": self._convert_to_decimal,
            # Add more custom transformations here
        }

    # Minimal implementations of the predefined transformations registered above.
    def _passthrough(self, value):
        return value

    def _to_upper(self, value):
        return value.upper() if isinstance(value, str) else value

    def _to_lower(self, value):
        return value.lower() if isinstance(value, str) else value

    def _capitalize_first_letter(self, value):
        return value.capitalize() if isinstance(value, str) else value

    def _format_datetime(self, value, input_format, output_format):
        return datetime.datetime.strptime(value, input_format).strftime(output_format)

    def _concatenate_fields(self, *values, separator=" "):
        return separator.join(str(v) for v in values if v is not None)

    def _map_boolean_to_string(self, value, true_value="Yes", false_value="No"):
        return true_value if value else false_value

    def _generate_uuid_if_null(self, value):
        return value if value else str(uuid.uuid4())

    def _convert_to_decimal(self, value):
        try:
            return Decimal(str(value))
        except InvalidOperation:
            logging.warning(f"Could not convert {value!r} to Decimal")
            return None
```
Gemini Output

Data Migration Planner: Comprehensive Migration Plan

Document Version: 1.0

Date: October 26, 2023

Prepared For: [Client Name]

Prepared By: PantheraHive Solutions


1. Executive Summary

This document outlines the comprehensive plan for the data migration from [Source System Name] to [Destination System Name]. The objective is to ensure a secure, efficient, and accurate transfer of all identified data, minimizing business disruption and preserving data integrity. This plan details the scope, methodology, data mapping, transformation rules, validation procedures, rollback strategy, and an estimated timeline to guide the successful execution of this critical initiative.

2. Introduction & Project Scope

2.1 Purpose

The purpose of this document is to provide a detailed roadmap for migrating existing data from [Source System Name] (the "Source System") to [Destination System Name] (the "Destination System"). This plan will serve as the foundational guide for all project stakeholders, ensuring clarity, accountability, and a structured approach to the migration.

2.2 Project Goals

  • Accuracy: Ensure 100% data fidelity and integrity in the Destination System.
  • Completeness: Migrate all in-scope data records without loss.
  • Efficiency: Execute the migration within the defined timeline and budget.
  • Minimal Disruption: Minimize downtime and impact on business operations.
  • Security: Maintain data confidentiality, integrity, and availability throughout the process.
  • Validation: Implement robust validation mechanisms to confirm successful migration.

2.3 Scope of Migration

The migration will encompass the following data entities and modules:

  • Customer Data: Customer profiles, contact information, account details.
  • Product Data: Product catalog, pricing, inventory levels.
  • Order Data: Historical and active orders, line items, shipping details.
  • Financial Data: Invoices, payments, general ledger entries (as specified).
  • [Specific Module/Data Type 1]: [Description]
  • [Specific Module/Data Type 2]: [Description]

2.4 Out of Scope

The following data or functionalities are explicitly excluded from this migration plan:

  • Archived data older than [e.g., 5 years].
  • Unstructured data (e.g., documents, images) not directly linked to core entities (unless specified).
  • System configurations or user preferences not directly tied to core data records.
  • Integration with third-party systems (beyond data transfer into the Destination System).

3. Source and Destination Systems Overview

3.1 Source System

  • System Name: [Source System Name]
  • Type: [e.g., ERP, CRM, Custom Application]
  • Database: [e.g., Microsoft SQL Server 2017, Oracle 12c, PostgreSQL]
  • Key Modules: [e.g., Sales, Inventory, Finance, Customer Management]
  • Data Volume: Approximately [e.g., 500 GB, 10 million records] across [e.g., 150 tables].
  • Connectivity: [e.g., Direct database access, API endpoints, Flat file exports]

3.2 Destination System

  • System Name: [Destination System Name]
  • Type: [e.g., New ERP, Cloud-based CRM, Custom Application]
  • Database: [e.g., Azure SQL Database, AWS Aurora PostgreSQL, MongoDB]
  • Key Modules: [e.g., Customer Management, Order Processing, Product Catalog]
  • Connectivity: [e.g., API for bulk inserts, Direct database connection via secure tunnel]

4. Migration Strategy

4.1 Chosen Approach: Phased Migration

Given the complexity and volume of data, a Phased Migration approach is recommended. This strategy allows for:

  • Reduced risk by migrating data in manageable batches (e.g., by module or data type).
  • Early identification and resolution of issues.
  • Minimizing downtime for specific business functions.
  • Better control over data quality and validation.

Phases:

  1. Phase 1: Master Data (Customers, Products, Vendors)
  2. Phase 2: Transactional Data (Historical Orders, Invoices)
  3. Phase 3: Supporting Data (Payments, Shipments, Custom Entities)
  4. Final Cutover: Delta load and full system switch.

4.2 Downtime Strategy

  • Scheduled Outage: A planned downtime window of [e.g., 12-24 hours] will be required during the final cutover phase for the delta load and system switch.
  • Business Impact: Users will be notified well in advance. Critical business operations will be suspended or shifted to manual processes during this window.
  • Contingency: A detailed rollback plan is in place (see Section 11) to mitigate extended downtime risks.

5. Data Identification & Extraction

5.1 Data Sources

  • Primary Database: [Source System Database Name]
  • Flat Files: [e.g., CSV exports for specific legacy modules]
  • API Endpoints: [e.g., for real-time customer updates]

5.2 Extraction Method

Data will be extracted using a combination of:

  • Direct Database Queries: SQL scripts will be developed to extract data directly from the Source System database, ensuring data integrity and allowing for initial filtering.
  • ETL Tool Connectors: Utilizing [e.g., Talend, Azure Data Factory, AWS Glue] connectors for specific data sources that require more complex extraction logic or API interaction.
  • Export Utilities: For certain legacy modules, native export utilities may be used to generate flat files (e.g., CSV, XML).

5.3 Extraction Frequency

  • Initial Full Load: A complete dataset will be extracted once at the beginning of each migration phase.
  • Delta Loads: For the final cutover, a delta load mechanism will be implemented to capture changes (inserts, updates, deletes) in the Source System since the last full load. This will minimize the final cutover window.
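The delta-load extraction described above can be sketched as a query builder keyed on a last-modified timestamp. The table and column names are hypothetical, and a production version should use parameterized queries rather than string interpolation:

```python
def build_delta_query(table: str, ts_column: str, last_sync_iso: str) -> str:
    """Compose a SQL query extracting only rows changed since the last sync.

    Assumes the source table carries a reliable last-modified timestamp column.
    Illustrative only: use parameterized queries in real code to avoid injection.
    """
    return (
        f"SELECT * FROM {table} "
        f"WHERE {ts_column} > '{last_sync_iso}' "
        f"ORDER BY {ts_column}"
    )
```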

6. Data Field Mapping

This section outlines the detailed mapping of source fields to destination fields. A comprehensive mapping document will be maintained in [e.g., an Excel spreadsheet, a dedicated mapping tool] and will include all identified entities. Below is an illustrative example:

Entity: Customer

| Source Table.Field Name | Source Data Type | Destination Table.Field Name | Destination Data Type | Transformation Rule(s) | Notes / Comments |
| :---------------------- | :--------------- | :--------------------------- | :-------------------- | :--------------------- | :--------------- |
| Customers.CustomerID | INT | CRM.Customer.ExternalID | VARCHAR(50) | CAST(CustomerID AS VARCHAR(50)) | Source PK, mapped to an external ID field in CRM for traceability. |
| Customers.Name | NVARCHAR(255) | CRM.Customer.FullName | NVARCHAR(255) | TRIM() | Remove leading/trailing spaces. |
| Customers.Address1 | NVARCHAR(255) | CRM.Customer.StreetAddress | NVARCHAR(255) | TRIM() | |
| Customers.City | NVARCHAR(100) | CRM.Customer.City | NVARCHAR(100) | TRIM() | |
| Customers.StateCode | CHAR(2) | CRM.Customer.State | VARCHAR(2) | UPPER() | Standardize to uppercase two-letter codes. |
| Customers.ZipCode | VARCHAR(10) | CRM.Customer.PostalCode | VARCHAR(10) | TRIM() | |
| Customers.Email | NVARCHAR(255) | CRM.Customer.Email | NVARCHAR(255) | LOWER(), TRIM() | Convert to lowercase, remove spaces. |
| Customers.Phone | VARCHAR(20) | CRM.Customer.PhoneNumber | VARCHAR(20) | CleanPhoneNumber() | Custom function: remove non-numeric characters, format as (XXX) XXX-XXXX. |
| Customers.CreationDate | DATETIME | CRM.Customer.CreatedAt | DATETIME | CONVERT_TZ('UTC') | Convert to UTC timezone. |
| Orders.OrderID | INT | CRM.Order.ExternalOrderID | VARCHAR(50) | CAST(OrderID AS VARCHAR(50)) | Source PK for orders. |
| Orders.OrderTotal | DECIMAL(10,2) | CRM.Order.TotalAmount | DECIMAL(10,2) | ROUND(Value, 2) | Ensure 2 decimal places. |
| Orders.OrderStatus | VARCHAR(50) | CRM.Order.Status | VARCHAR(50) | MapOrderStatus() | Custom function: map source statuses. |
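The custom `CleanPhoneNumber()` rule referenced in the mapping could be sketched as below; the 10-digit US format and the fallback behavior for other lengths are assumptions:

```python
import re

def clean_phone_number(raw: str) -> str:
    """Normalize a 10-digit US phone number to (XXX) XXX-XXXX.

    Strips all non-digit characters; if the result is not exactly 10 digits
    (e.g., it includes a country code), the bare digits are returned unchanged.
    """
    digits = re.sub(r"\D", "", raw or "")
    if len(digits) != 10:
        return digits
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
```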

"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}