Data Migration Planner

This document outlines a comprehensive plan for your upcoming data migration, providing detailed definitions, actionable strategies, and illustrative code examples for key components. This deliverable serves as a foundational blueprint for executing a successful, secure, and validated data transfer.


Data Migration Planner: Detailed Execution Blueprint

1. Introduction and Migration Overview

This deliverable provides the detailed planning artifacts for your data migration project. Our goal is to ensure a seamless transition of data from the [Source System Name] to the [Target System Name], minimizing downtime, preserving data integrity, and enabling immediate operational readiness post-migration.

The migration will encompass the components detailed in the sections below.

2. Key Migration Components

2.1. Field Mapping Definitions

Field mapping is the cornerstone of any data migration, explicitly defining how each source data element corresponds to a target data element. It includes source and target field names, data types, nullability, and references to any required transformation rules.

Structure: A mapping document will be maintained, typically in a spreadsheet or a configuration file (e.g., JSON, YAML), but for programmatic use, a Python dictionary can represent this effectively.

Example Code: migration_config.py (Field Mapping)

This Python dictionary defines the mapping from source fields to target fields, including metadata and references to transformation functions.

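The referenced code sample did not survive this export; the following is a minimal sketch of such a mapping dictionary, using placeholder table and field names consistent with the examples later in this plan. The transformation rule IDs reference the rules catalog in Section 4.2.

```python
# migration_config.py
# Minimal field-mapping sketch; table/field names are illustrative placeholders.

FIELD_MAPPINGS = {
    "CRM_Contacts.ContactID": {
        "target": "ERP_Customers.CustomerID",
        "source_type": "INT",
        "target_type": "INT",
        "nullable": False,
        "transform_rule": "T001",  # primary key mapping
    },
    "CRM_Contacts.Email": {
        "target": "ERP_Customers.EmailAddress",
        "source_type": "VARCHAR(100)",
        "target_type": "NVARCHAR(100)",
        "nullable": True,
        "transform_rule": "T003",  # email format validation
    },
}

def lookup_mapping(source_field: str) -> dict:
    """Return the mapping entry for a fully qualified source field."""
    return FIELD_MAPPINGS[source_field]
```

In practice this dictionary would be generated from, and kept in sync with, the spreadsheet mapping document so that ETL code and documentation never diverge.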
2.3. Validation Scripts

Validation scripts are crucial for ensuring data quality and integrity throughout the migration lifecycle. They run at various stages:

  • Pre-migration validation: On source data, to identify and rectify issues before extraction.
  • Post-transformation validation: On staged data, after transformations, to ensure rules were applied correctly and no new issues were introduced.
  • Post-load validation: On target data, to confirm all data was loaded accurately and completely.

Example Code: validation_scripts.py

This module provides functions for common data validation checks. It assumes data is loaded into a Pandas DataFrame for ease of manipulation.




Detailed Study Plan: Mastering Data Migration

This comprehensive study plan is designed to guide you through the intricate process of data migration, from initial planning and architecture to execution, validation, and post-migration activities. By following this structured approach, you will develop a robust understanding of best practices, tools, and methodologies crucial for managing complex data transitions.

1. Introduction and Overview

Data migration is a critical process involving the transfer of data between storage types, formats, or computer systems. Whether moving to a new database, upgrading systems, or consolidating applications, a well-planned data migration is essential for business continuity and data integrity. This study plan breaks down the learning journey into manageable weekly modules, ensuring a holistic understanding of all key aspects.

2. Learning Objectives

Upon successful completion of this study plan, you will be able to:

  • Comprehend Data Migration Lifecycle: Understand the end-to-end phases of a data migration project, including planning, execution, and post-migration.
  • Perform Data Assessment & Profiling: Analyze source data for quality, consistency, and completeness, identifying potential challenges.
  • Design Field Mappings & Transformation Rules: Create accurate source-to-target data mappings and define robust data transformation logic.
  • Develop Data Validation Strategies: Implement pre-migration and post-migration validation techniques to ensure data integrity and accuracy.
  • Formulate Rollback & Contingency Plans: Design effective backup and rollback procedures to mitigate risks and ensure business continuity.
  • Evaluate Migration Methodologies: Understand different migration approaches (e.g., Big Bang, Trickle Migration) and select appropriate strategies.
  • Utilize Data Migration Tools: Gain familiarity with various ETL tools, scripting languages, and cloud-native migration services.
  • Manage Data Migration Projects: Apply project management principles to plan, execute, monitor, and control data migration initiatives.
  • Identify and Mitigate Risks: Proactively identify common data migration risks and develop strategies for mitigation.

3. Weekly Schedule (10 Weeks)

This schedule provides a structured path, allocating specific topics and activities for each week.

Week 1: Introduction to Data Migration & Project Planning

  • Learning Objectives: Understand data migration fundamentals, types, phases, and initial project setup.
  • Topics:

* What is Data Migration? Why is it necessary?

* Types of Data Migration (Storage, Database, Application, Cloud).

* Key Phases of a Data Migration Project (Analyze, Design, Build, Test, Execute, Validate).

* Stakeholder Identification and Management.

* Defining Project Scope, Goals, and Success Criteria.

  • Activities:

* Read foundational articles on data migration.

* Draft a high-level Data Migration Project Charter for a hypothetical scenario.

* Research common data migration challenges and failure points.

  • Estimated Study Time: 8-10 hours

Week 2: Data Assessment & Profiling

  • Learning Objectives: Master techniques for analyzing source data quality and structure.
  • Topics:

* Data Profiling: Tools and Methodologies.

* Identifying Data Quality Issues (duplicates, inconsistencies, missing values).

* Source System Analysis: Schema, Data Types, Relationships.

* Data Volume and Velocity Assessment.

* Data Cleansing Strategies.

  • Activities:

* Practice using a data profiling tool (e.g., OpenRefine, basic SQL queries, Python with Pandas) on a sample dataset.

* Generate a data quality report for the sample data.

* Identify potential data cleansing rules.

  • Estimated Study Time: 10-12 hours
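For the profiling activity above, a few lines of Pandas cover the basic checks; the sample records here are invented to include deliberate quality issues:

```python
import pandas as pd

# Invented sample data: one missing email, one duplicate key, one bad format.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@example.com", None, "b@example.com", "not-an-email"],
})

profile = {
    "row_count": len(df),
    "null_counts": df.isnull().sum().to_dict(),                  # missing values per column
    "duplicate_ids": int(df["customer_id"].duplicated().sum()),  # repeated keys
    "dtypes": df.dtypes.astype(str).to_dict(),                   # inferred types
}
print(profile)
```

The same four metrics (counts, nulls, duplicates, types) form the skeleton of the data quality report asked for in the activities.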

Week 3: Field Mapping & Schema Design

  • Learning Objectives: Develop accurate source-to-target data mappings and understand target schema considerations.
  • Topics:

* Source-to-Target Field Mapping Principles.

* Data Type Conversion and Compatibility.

* Schema Evolution and Design for the Target System.

* Handling Primary Keys, Foreign Keys, and Unique Identifiers.

* Documentation Standards for Mapping.

  • Activities:

* Create a detailed field mapping document (Excel/Google Sheets) for a given source and target schema.

* Identify and document any schema discrepancies or required target system modifications.

* Review examples of good mapping documentation.

  • Estimated Study Time: 10-12 hours

Week 4: Data Transformation Rules

  • Learning Objectives: Define and document complex data transformation logic.
  • Topics:

* ETL (Extract, Transform, Load) Principles.

* Common Transformation Types: Lookup, Merge, Split, Aggregation, Derivation, Cleansing, Standardization.

* Conditional Logic and Business Rules.

* Handling Historical Data and Versioning.

* Performance Considerations for Transformations.

  • Activities:

* For the mapping document created in Week 3, add detailed transformation rules using pseudo-code or a specific scripting language (e.g., Python).

* Design a transformation for a complex scenario (e.g., combining multiple source fields into one target field with specific formatting).

  • Estimated Study Time: 10-12 hours
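The second activity above (combining multiple source fields into one formatted target field) can be sketched in a few lines of Python; the field names are hypothetical:

```python
# Sketch of a multi-field concatenation transformation: address parts are
# merged into a single formatted target field, skipping blank components.

def combine_address(line1, line2, city, postcode):
    """Join address parts into 'line1 line2, city postcode', skipping blanks."""
    street = " ".join(p.strip() for p in (line1, line2) if p and p.strip())
    return f"{street}, {city} {postcode}"

print(combine_address("12 High St", None, "Leeds", "LS1 4AB"))
```

Handling nulls and stray whitespace inside the transformation, rather than downstream, is the usual convention so the target field is clean on first load.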

Week 5: Data Extraction & Loading Strategies

  • Learning Objectives: Understand various methods for data extraction and loading, and their implications.
  • Topics:

* Extraction Methods: API, Database Dumps, File Extracts (CSV, XML, JSON), Change Data Capture (CDC).

* Loading Methods: Direct Inserts, Updates, Upserts, Bulk Loading Utilities.

* Incremental vs. Full Load Strategies.

* Performance Tuning for Extraction and Loading.

* Security Considerations during Data Transfer.

  • Activities:

* Research and compare 3 different data extraction methods for a large database.

* Outline a strategy for an initial full load followed by incremental updates.

* Explore a cloud-native migration service (e.g., AWS DMS, Azure Database Migration Service).

  • Estimated Study Time: 8-10 hours
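One common way to realize the full-load-then-incremental strategy in the activities above is a high-watermark extract. A minimal sketch, with sqlite3 standing in for the source database and an assumed `updated_at` column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "2024-01-01"), (2, "2024-02-01"), (3, "2024-03-01")])

def extract_incremental(conn, watermark):
    """Fetch only rows changed since the last successful extract."""
    cur = conn.execute(
        "SELECT id, updated_at FROM orders WHERE updated_at > ? ORDER BY id",
        (watermark,))
    return cur.fetchall()

# The initial full load passes a minimal watermark; each later run passes the
# highest updated_at value seen in the previous successful run.
print(extract_incremental(conn, "2024-01-15"))
```

The watermark must be persisted transactionally with the load itself, otherwise a crash between extract and checkpoint can drop or duplicate rows.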

Week 6: Data Validation & Quality Assurance

  • Learning Objectives: Design and implement robust data validation and reconciliation processes.
  • Topics:

* Pre-Migration Validation: Source data quality checks, transformation rule validation.

* Post-Migration Validation: Data reconciliation (row counts, sum checks, statistical comparisons).

* Data Integrity Checks: Referential integrity, uniqueness, domain constraints.

* Error Handling and Reporting Mechanisms.

* Defining Data Migration Success Criteria.

  • Activities:

* Develop a set of validation scripts (SQL or Python) for the migrated data based on Week 3 & 4's work.

* Define a data reconciliation strategy, including checksums and record counts.

* Document error handling procedures for common migration failures.

  • Estimated Study Time: 10-12 hours

Week 7: Rollback & Contingency Planning

  • Learning Objectives: Create comprehensive plans for risk mitigation and system recovery.
  • Topics:

* Backup Strategies for Source and Target Systems.

* Developing a Detailed Rollback Plan: Steps, Triggers, Roles, Responsibilities.

* Go/No-Go Decision Criteria.

* Disaster Recovery and Business Continuity Implications.

* Risk Assessment and Mitigation Strategies specific to data migration.

  • Activities:

* Draft a detailed rollback procedure for a critical system migration.

* Identify potential failure points in a migration process and outline mitigation strategies.

* Define the Go/No-Go criteria for a migration cutover.

  • Estimated Study Time: 8-10 hours

Week 8: Migration Execution & Monitoring

  • Learning Objectives: Understand the cutover process, execution methodologies, and monitoring techniques.
  • Topics:

* Cutover Planning: Downtime management, communication plan.

* Migration Methodologies: Big Bang vs. Trickle Migration (Phased Migration).

* Parallel Run Strategies.

* Monitoring Tools and Metrics during Migration.

* Incident Management and Troubleshooting.

  • Activities:

* Outline a cutover plan for a small application's database migration.

* Compare the pros and cons of Big Bang vs. Trickle migration for a specific business scenario.

* Identify key metrics to monitor during a migration execution.

  • Estimated Study Time: 8-10 hours

Week 9: Post-Migration & Optimization

  • Learning Objectives: Learn about post-migration activities, data archiving, and performance optimization.
  • Topics:

* Post-Migration Audit and Reporting.

* Data Archiving and Retention Policies for Source Systems.

* Performance Tuning of the New System post-migration.

* User Acceptance Testing (UAT) and User Adoption Strategies.

* Lessons Learned and Continuous Improvement.

  • Activities:

* Design a post-migration audit checklist.

* Propose a strategy for archiving the old system's data.

* Develop a "lessons learned" template for a migration project.

  • Estimated Study Time: 8-10 hours

Week 10: Tools, Best Practices & Case Studies

  • Learning Objectives: Consolidate knowledge, explore common tools, and analyze real-world scenarios.
  • Topics:

* Overview of Commercial and Open-Source ETL/Migration Tools (e.g., Talend, SSIS, Informatica, AWS DMS, Azure Data Factory, Google Cloud Dataflow).

* Scripting for Migration (Python, PowerShell, Shell Scripting).

* Industry Best Practices and Common Pitfalls.

* Security, Compliance, and Governance in Data Migration.

* Review of successful and failed data migration case studies.

  • Activities:

* Research and compare features of 2-3 prominent data migration tools.

* Analyze a provided data migration case study, identifying strengths and weaknesses.

* Consolidate all learned concepts into a final comprehensive data migration plan for a hypothetical scenario.

  • Estimated Study Time: 12-15 hours

4. Recommended Resources

  • Books:

* "The Data Warehouse Toolkit" by Ralph Kimball (focus on ETL chapters).

* "Designing Data-Intensive Applications" by Martin Kleppmann (for understanding distributed systems and data consistency).

* "Data Migration" by Marc Anthony (specific guides may vary, search for recent editions).

  • Online Courses:

* Coursera/edX: Data Engineering Specializations, Database Design courses, Cloud Data Migration courses (AWS, Azure, GCP specific).

* Udemy/LinkedIn Learning: Courses on specific ETL tools (Talend, SSIS), Python for Data Engineering, SQL advanced concepts.

  • Documentation & Whitepapers:

* Cloud Provider Migration Services: AWS Database Migration Service (DMS), Azure Database Migration Service, and Google Cloud Database Migration Service documentation.

* Vendor Documentation: Informatica, Talend, Microsoft SSIS, Oracle Data Integrator.

* Industry Blogs: DZone, Towards Data Science, specific vendor blogs (e.g., AWS Architecture Blog).

  • Tools (Hands-on Practice):

* Databases: PostgreSQL, MySQL, SQL Server (for source/target practice).

```python
# validation_scripts.py
import logging
from typing import Any, Dict, List, Optional

import pandas as pd

# Configure logging
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s - %(levelname)s - %(message)s')

# --- Validation Functions ---

def validate_not_null(df: pd.DataFrame, column: str,
                      record_id_col: Optional[str] = None) -> List[Dict[str, Any]]:
    """Checks for NULL values in a specified column."""
    issues = []
    null_records = df[df[column].isnull()]
    if not null_records.empty:
        for idx, row in null_records.iterrows():
            record_id = row[record_id_col] if record_id_col else idx
            issues.append({
                'check': 'not_null',
                'column': column,
                'record_id': record_id,
                'message': f"NULL value found in column '{column}'",
            })
        logging.warning("validate_not_null: %d NULL value(s) in '%s'",
                        len(null_records), column)
    return issues
```


Data Migration Planner: Comprehensive Migration Plan

Project: [Project Name - e.g., Legacy CRM to New ERP Migration]

Date: [Current Date]

Version: 1.0


Executive Summary

This document outlines a comprehensive plan for the data migration from [Source System Name, e.g., Legacy CRM] to [Target System Name, e.g., New ERP]. The migration aims to ensure a seamless transition of critical business data, maintaining data integrity, accuracy, and completeness. This plan details the scope, strategy, field mapping, transformation rules, validation procedures, rollback mechanisms, and a high-level timeline to guide the successful execution of this crucial project. Our objective is to minimize business disruption while maximizing data quality in the target system.


1. Introduction & Project Scope

1.1 Project Overview

The purpose of this project is to migrate all relevant historical and active data from [Source System Name] to [Target System Name]. This migration is a foundational step for the successful implementation and adoption of [Target System Name], enabling enhanced operational efficiency, improved reporting, and better data management capabilities.

1.2 Migration Scope

The scope of this data migration includes:

  • Source System: [e.g., Legacy CRM (version X.Y.Z)]
  • Target System: [e.g., New ERP (version A.B.C)]
  • Data Entities Included:

* Customer Records (Accounts, Contacts)

* Sales Orders

* Products/Services Catalog

* Historical Transactions (e.g., Invoices, Payments)

* Support Cases (if applicable)

* [Add other specific entities as required]

  • Data Entities Excluded:

* Archived data older than [X years] (unless specifically requested)

* Audit logs not critical for business operations

* Temporary or transient data

* [Add other specific exclusions as required]

  • Data Volume: Estimated [X GB / Y Million Records]
  • Migration Type: [e.g., Full historical data migration, Delta migration for ongoing data]

1.3 Goals and Objectives

  • Migrate 100% of in-scope data entities accurately and completely to [Target System Name].
  • Ensure data integrity and consistency throughout the migration process.
  • Minimize downtime and business disruption during the cutover phase.
  • Provide robust data validation and reconciliation mechanisms.
  • Establish clear rollback procedures in case of unforeseen issues.
  • Deliver a clean, usable dataset in [Target System Name] that meets business requirements.

2. Migration Strategy

2.1 Migration Approach

We recommend a [e.g., Phased / Big Bang] migration approach.

  • [Phased Approach Example]: Data will be migrated in logical stages (e.g., Master Data first, then Transactional Data, or by business unit). This allows for incremental testing, reduces the overall risk, and provides opportunities for learning and adjustment between phases. This approach is suitable for large, complex migrations where extended downtime is not feasible.
  • [Big Bang Approach Example]: All in-scope data will be migrated simultaneously during a planned outage window. This approach is typically faster, reduces the complexity of managing parallel systems, but carries higher risk and requires thorough preparation and testing. This approach is suitable for smaller, less complex migrations or when the business can tolerate a longer single outage.

2.2 Migration Tooling

The migration will utilize a combination of the following tools and methods:

  • ETL Tool: [e.g., Talend, Informatica, SSIS, custom Python scripts, etc.] for extraction, transformation, and loading.
  • Data Export/Import Utilities: Native tools provided by [Source/Target System Names] (e.g., CSV exports, API integrations).
  • Custom Scripts: SQL scripts for data extraction, manipulation, and validation.
  • Version Control: Git or similar for managing all migration scripts and configurations.

3. Data Analysis & Preparation

3.1 Source Data Analysis

A thorough analysis of the [Source System Name] data will be performed, including:

  • Data Profiling: Identifying data types, formats, uniqueness, nullability, and distribution.
  • Schema Comparison: Mapping source tables/fields to target tables/fields.
  • Data Quality Assessment: Identifying inconsistencies, missing values, duplicates, and erroneous data that require cleansing.
  • Dependency Mapping: Understanding relationships between different data entities.

3.2 Data Cleansing Strategy

Prior to migration, a data cleansing strategy will be implemented to address identified data quality issues:

  • De-duplication: Identifying and merging duplicate records (e.g., customer accounts).
  • Standardization: Enforcing consistent formats (e.g., address formats, phone numbers).
  • Correction: Correcting erroneous data based on business rules or external data sources.
  • Enrichment: Adding missing critical data where possible (e.g., country codes).
  • Archiving: Identifying and marking data not in scope for migration (e.g., old, inactive records).
  • Business Review: Critical cleansing decisions will be reviewed and approved by business stakeholders.
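The standardization and de-duplication steps above can be sketched with Pandas (the sample records are invented):

```python
import pandas as pd

raw = pd.DataFrame({
    "email": ["A@Example.com", "a@example.com", "b@example.com"],
    "phone": ["(555) 010-0001", "555 010 0001", "555-010-0002"],
})

# Standardization: lowercase emails, digits-only phone numbers.
raw["email"] = raw["email"].str.lower()
raw["phone"] = raw["phone"].str.replace(r"\D", "", regex=True)

# De-duplication: after standardizing, the first two rows collapse into one.
clean = raw.drop_duplicates(subset=["email", "phone"]).reset_index(drop=True)
print(len(clean))
```

Note that the order matters: standardizing before de-duplicating catches duplicates that differ only in formatting, which raw string comparison would miss.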

4. Detailed Migration Plan

4.1 Field Mapping (Source to Target)

A detailed field mapping document will be maintained, outlining the exact correspondence between source and target fields. This will include:

  • Source Table/Field Name: Original field name in [Source System Name].
  • Source Data Type: Original data type (e.g., VARCHAR(50), INT, DATETIME).
  • Target Table/Field Name: Corresponding field name in [Target System Name].
  • Target Data Type: Required data type in [Target System Name].
  • Transformation Rule ID: Reference to specific transformation logic (see Section 4.2).
  • Notes/Comments: Any specific considerations or business rules for the field.

Example Field Mapping Table:

| Source Table | Source Field Name | Source Data Type | Target Table | Target Field Name | Target Data Type | Transformation Rule ID | Notes |
| :----------- | :---------------- | :--------------- | :----------- | :---------------- | :--------------- | :--------------------- | :---- |
| CRM_Contacts | ContactID | INT | ERP_Customers | CustomerID | INT | T001 | Primary Key Mapping |
| CRM_Contacts | FirstName | VARCHAR(50) | ERP_Customers | FirstName | NVARCHAR(50) | T002 | Case conversion |
| CRM_Contacts | LastName | VARCHAR(50) | ERP_Customers | LastName | NVARCHAR(50) | T002 | Case conversion |
| CRM_Contacts | Email | VARCHAR(100) | ERP_Customers | EmailAddress | NVARCHAR(100) | T003 | Validate format |
| CRM_Contacts | Phone | VARCHAR(20) | ERP_Customers | PhoneNumber | NVARCHAR(20) | T004 | Format to E.164 |
| CRM_Orders | OrderDate | DATETIME | ERP_Orders | OrderPlacedDate | DATETIME2(7) | T005 | Timezone adjustment |
| CRM_Products | ProductCategory | VARCHAR(50) | ERP_Products | CategoryCode | NVARCHAR(10) | T006 | Lookup and map to ERP codes |

4.2 Transformation Rules

Each transformation rule will be documented with a unique ID, a clear description, and the logic applied.

Example Transformation Rules:

  • T001: Primary Key Mapping

* Description: Retain ContactID from CRM_Contacts as CustomerID in ERP_Customers. Ensure uniqueness. If conflicts arise, a new ID will be generated, and a cross-reference table maintained.

* Logic: CustomerID = Source.ContactID (if unique); else CustomerID = GenerateNewID(), CrossReferenceTable.Add(Source.ContactID, NewID).

  • T002: Case Conversion

* Description: Convert FirstName and LastName to proper case (first letter capitalized, rest lowercase).

* Logic: Target.FirstName = ProperCase(Source.FirstName), Target.LastName = ProperCase(Source.LastName).

  • T003: Email Format Validation

* Description: Validate email addresses against a standard regex pattern. If invalid, log the record and either nullify the email or flag for manual review.

* Logic: IF IsValidEmail(Source.Email) THEN Target.EmailAddress = Source.Email ELSE Target.EmailAddress = NULL (or 'INVALID').

  • T004: Phone Number Formatting

* Description: Clean and format phone numbers to E.164 international standard.

* Logic: Target.PhoneNumber = FormatPhoneNumber(Source.Phone, 'E.164').

  • T005: Timezone Adjustment

* Description: Convert OrderDate from [Source System's Timezone, e.g., PST] to [Target System's Timezone, e.g., UTC].

* Logic: Target.OrderPlacedDate = ConvertTimeZone(Source.OrderDate, 'PST', 'UTC').

  • T006: Category Code Lookup

* Description: Map ProductCategory from source system (e.g., 'Electronics', 'Books') to target system's standardized CategoryCode (e.g., 'ELC', 'BOK'). A lookup table will be used.

* Logic: Target.CategoryCode = Lookup(Source.ProductCategory, 'CategoryMappingTable'). If no match, default to 'MISC' and flag.

  • T007: Concatenation

* Description: Combine AddressLine1 and AddressLine2 from source into a single StreetAddress field in the target.

* Logic: Target.StreetAddress = Source.AddressLine1 + ' ' + Source.AddressLine2. Handle nulls gracefully.

  • T008: Default Value Assignment

* Description: If CustomerType is missing in the source, default it to 'Individual'.

* Logic: Target.CustomerType = ISNULL(Source.CustomerType, 'Individual').
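For illustration, a few of the rules above (T002, T003, T008) sketched as Python functions; the email regex and the default value are assumptions for this sketch, not confirmed target-system requirements:

```python
import re

def proper_case(name: str) -> str:
    """T002: first letter capitalized, rest lowercase."""
    return name[:1].upper() + name[1:].lower() if name else name

# Simple illustrative pattern; production validation may be stricter.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_email(email):
    """T003: return the email if it matches the pattern, else None (nullify)."""
    return email if email and EMAIL_RE.match(email) else None

def default_customer_type(value):
    """T008: substitute 'Individual' when the source value is missing."""
    return value if value else "Individual"

print(proper_case("mcDONALD"), validate_email("bad@@x"), default_customer_type(None))
```

Keeping each rule as a small named function, keyed by its rule ID, makes the mapping document directly executable and testable.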

4.3 Data Validation Scripts

Robust validation is crucial to ensure data integrity and accuracy post-migration. Validation will occur at multiple stages:

  • Pre-Migration Validation (Source System):

* Purpose: Identify data quality issues before extraction.

* Scripts: SQL queries to identify duplicates, referential integrity violations, invalid formats, and missing mandatory fields in the source system.

* Action: Report identified issues for cleansing or business decision.

  • Mid-Migration Validation (ETL Process):

* Purpose: Monitor data quality during transformation and loading.

* Scripts/Tools: ETL tool's built-in validation rules, logging of rejected records, error handling for data type mismatches, constraint violations.

* Action: Log errors, quarantine bad records, and alert administrators.

  • Post-Migration Validation (Target System):

* Purpose: Verify successful and accurate migration in the target system.

* Key Checks:

* Row Counts: Compare record counts for each entity between source and target.

SELECT COUNT(*) FROM Source.Table vs. SELECT COUNT(*) FROM Target.Table

* Sum/Average Checks: Validate numerical fields (e.g., total sales, average order value).

* SELECT SUM(Amount) FROM Source.Orders vs. SELECT SUM(Amount) FROM Target.Orders

* Uniqueness Constraints: Verify primary keys and unique indices are enforced in the target.

SELECT CustomerID, COUNT(*) FROM Target.Customers GROUP BY CustomerID HAVING COUNT(*) > 1

* Referential Integrity: Ensure relationships between tables are maintained (e.g., all CustomerID in Orders exist in Customers).

* SELECT DISTINCT OrderCustomerID FROM Target.Orders WHERE OrderCustomerID NOT IN (SELECT CustomerID FROM Target.Customers)

* Random Sample Data Verification: Manually inspect a statistically significant sample of records (e.g., 5-10% of records) for accuracy of all fields.

* Business Rule Validation: Verify specific business logic (e.g., an order status can only be 'Pending' or 'Completed').

* Reporting: Generate detailed validation reports highlighting discrepancies and errors.

* Action: Investigate discrepancies, re-run specific parts of the migration if necessary, or manually correct data errors.
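The row-count and sum checks listed above can also be scripted end to end. A minimal reconciliation sketch, with sqlite3 standing in for the real source and target databases (table names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source_orders (id INTEGER, amount REAL);
    CREATE TABLE target_orders (id INTEGER, amount REAL);
    INSERT INTO source_orders VALUES (1, 10.0), (2, 32.5);
    INSERT INTO target_orders VALUES (1, 10.0), (2, 32.5);
""")

def reconcile(conn, source, target):
    """Compare row counts and amount totals between two tables."""
    checks = {}
    for metric, sql in (("count", "SELECT COUNT(*) FROM {}"),
                        ("total", "SELECT COALESCE(SUM(amount), 0) FROM {}")):
        s = conn.execute(sql.format(source)).fetchone()[0]
        t = conn.execute(sql.format(target)).fetchone()[0]
        checks[metric] = (s, t, s == t)
    return checks

print(reconcile(conn, "source_orders", "target_orders"))
```

In a real migration the two queries would run against separate connections, and floating-point totals would be compared with a tolerance rather than strict equality.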

4.4 Rollback Procedures

A comprehensive rollback plan is essential to mitigate risks and provide a safety net in case of critical failures during or after the migration.

  • 1. Pre-Migration Backups:

* Source System: Perform a full database backup of [Source System Name] immediately before the migration window.

* Target System (if applicable): If [Target System Name] contains existing data, perform a full database backup before loading new data.

* ETL Staging Area: Backup any intermediate staging databases or files.

  • 2. Transactional Control:

* Database Transactions: Utilize database transactions for loading data into the target system. If an error occurs during a batch load, the entire batch can be rolled back.

* Batching: Break down the migration into smaller, manageable batches to limit the impact of a failed load.

  • 3. Rollback Steps (if migration fails during execution):

* Step 3.1: Stop Migration Processes: Immediately halt all ETL jobs and data loading processes.

* Step 3.2: Restore Target System: Restore [Target System Name] database to its pre-migration state using the backup taken in Step 1.

* Step 3.3: Revert Configuration Changes: Roll back any configuration changes made in [Target System Name] specifically for the migration.

* Step 3.4: Re-enable Source System (if frozen): If the source system was frozen, unfreeze it to allow business operations to resume.

* Step 3.5: Communicate: Inform all stakeholders about the rollback and the revised plan.

* Step 3.6: Root Cause Analysis: Conduct a thorough investigation to identify the cause of the failure before rescheduling the migration.
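The transactional-control and batching measures described in item 2 above can be sketched as follows; sqlite3 stands in for the target database and the batch size is arbitrary:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

def load_in_batches(conn, rows, batch_size=2):
    """Insert rows batch by batch; a failure rolls back only the current batch."""
    loaded = 0
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        try:
            with conn:  # transaction: commits on success, rolls back on error
                conn.executemany("INSERT INTO customers VALUES (?, ?)", batch)
            loaded += len(batch)
        except sqlite3.IntegrityError:
            pass  # batch rolled back; a real migration would log and quarantine it
    return loaded

rows = [(1, "Ada"), (2, "Grace"), (2, "Dup"), (3, "Alan")]
print(load_in_batches(conn, rows))
```

Here the duplicate key in the second batch aborts and rolls back that batch only; the first batch remains committed, which is exactly the blast-radius limit batching is meant to provide.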

"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}