Data Migration Planner

Plan a complete data migration with field mapping, transformation rules, validation scripts, rollback procedures, and timeline estimates.

Detailed Study Plan: Becoming a Data Migration Planner

This document outlines a comprehensive, structured study plan designed to equip an individual with the necessary knowledge and skills to excel as a Data Migration Planner. The plan covers core concepts, practical application, and essential tools required to plan and manage complex data migration projects effectively.


1. Program Overview & Goal

Program Goal: To develop a deep understanding of data migration principles, methodologies, and best practices, enabling the planning, design, and oversight of complete data migration projects, including field mapping, transformation rules, validation, rollback procedures, and timeline estimation.

Target Audience: Aspiring Data Migration Specialists, Data Engineers, Solution Architects, Project Managers, or anyone looking to specialize in data migration planning.

Duration: 6 Weeks (intensive, approximately 15-20 hours of study per week)


2. Weekly Schedule & Learning Objectives

This section details the weekly breakdown of topics, specific learning objectives, and practical application goals.

Week 1: Data Migration Fundamentals & Strategy

  • Learning Objectives:

* Understand the full data migration lifecycle and its key phases (Discovery, Design, Build, Test, Execute, Validate).

* Identify common business drivers and challenges for data migration projects.

* Differentiate between various migration strategies (e.g., Big Bang, Phased, Hybrid) and their suitability.

* Learn about stakeholder identification, communication, and risk management in migration projects.

* Grasp the importance of data governance and compliance (e.g., GDPR, HIPAA) in migration.

  • Key Activities:

* Research and summarize data migration best practices.

* Analyze a hypothetical business case for data migration and propose an initial strategy.

* Identify potential risks and mitigation strategies for the proposed case.

  • Deliverable: Initial Data Migration Strategy Outline (for a sample scenario)

Week 2: Data Profiling & Field Mapping

  • Learning Objectives:

* Master techniques for data profiling to assess data quality, completeness, and consistency.

* Identify common data quality issues (duplicates, inconsistencies, missing values, invalid formats).

* Design and document comprehensive Source-to-Target Mapping (STTM) specifications.

* Understand how to handle schema differences, data types, and constraint mappings.

* Learn to use tools for automated data profiling and mapping where applicable.

  • Key Activities:

* Perform data profiling on a sample dataset (e.g., using SQL queries or Excel functions); a minimal profiling query is sketched after this week's deliverable.

* Create a detailed STTM document for a given source and target schema, including data type conversions and basic business rules.

  • Deliverable: Detailed Source-to-Target Mapping (STTM) Document for a sample dataset.
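
To make the profiling activity concrete, here is a minimal sketch of a profiling query driven from Python. The customers table, its columns, and the use of sqlite3 are illustrative assumptions (sqlite3 is used only so the snippet runs standalone); the same kind of query can be pointed at the real source database.

```python
import sqlite3

# Hypothetical source table; in practice, connect to the real source database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER, first_name TEXT, email TEXT);
    INSERT INTO customers VALUES (1, 'Ana', 'ana@example.com'),
                                 (2, NULL, 'ana@example.com'),
                                 (3, 'Bo',  NULL);
""")

# One query per table: row count, null counts for key fields, distinct counts.
profile_sql = """
SELECT COUNT(*)                                              AS row_count,
       SUM(CASE WHEN first_name IS NULL THEN 1 ELSE 0 END)   AS null_first_name,
       SUM(CASE WHEN email IS NULL THEN 1 ELSE 0 END)        AS null_email,
       COUNT(DISTINCT email)                                 AS distinct_emails
FROM customers;
"""
columns = ["row_count", "null_first_name", "null_email", "distinct_emails"]
print(dict(zip(columns, conn.execute(profile_sql).fetchone())))
```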

Week 3: Data Transformation Rules & Logic

  • Learning Objectives:

* Develop a strong understanding of various data transformation types (e.g., lookup, aggregation, normalization, denormalization, string manipulation, date formatting).

* Design and document complex transformation rules based on business requirements.

* Understand the principles of ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) architectures.

* Gain practical experience in expressing transformation logic using SQL and basic scripting (e.g., Python).

* Learn about data cleansing techniques and their integration into transformation.

  • Key Activities:

* Write SQL queries and/or Python scripts to implement specified data transformation rules on a sample dataset; a short Python sketch follows this week's deliverable.

* Document the transformation logic clearly for a given migration scenario.

  • Deliverable: Transformation Rule Specification Document with sample SQL/Python scripts.
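
As a starting point for the transformation scripts, a minimal pandas sketch is shown below. The column names, the status-code mapping, and the use of pandas are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical source extract; column names are illustrative only.
source = pd.DataFrame({
    "FirstName": ["  jane ", "BOB", None],
    "Status":    ["Active", "Pending", "Archived"],
})

STATUS_MAP = {"Active": "Active Customer", "Inactive": "Inactive Customer",
              "Pending": "Prospect"}  # unmapped values fall back to "Unknown"

target = pd.DataFrame({
    # String cleansing rule: trim whitespace and apply proper case.
    "FirstName": source["FirstName"].fillna("").str.strip().str.title(),
    # Value mapping / lookup rule with a default for unexpected codes.
    "Status__c": source["Status"].map(STATUS_MAP).fillna("Unknown"),
})
print(target)
```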

Week 4: Data Validation & Quality Assurance

  • Learning Objectives:

* Design and implement robust data validation scripts and procedures.

* Differentiate between pre-migration, in-migration, and post-migration validation strategies.

* Define key data quality checks (e.g., referential integrity, uniqueness, range checks, business rule validation).

* Understand data reconciliation techniques to ensure data integrity and completeness post-migration.

* Learn about error logging, reporting, and exception handling mechanisms.

  • Key Activities:

* Develop SQL/Python scripts for pre- and post-migration data validation for a sample migration; a minimal validation sketch follows this week's deliverable.

* Design a data reconciliation plan, including checksums and record counts.

* Simulate data errors and test validation scripts.

  • Deliverable: Data Validation Plan and Sample Validation Scripts.
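
A minimal sketch of rule-based validation checks, assuming a hypothetical target schema and using sqlite3 purely so the example runs standalone. The pattern (one SQL check per rule; zero failing rows means pass) carries over to any target database.

```python
import sqlite3

# Hypothetical target tables with deliberately bad data to trip the checks.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (account_id INTEGER, name TEXT);
    CREATE TABLE contacts (contact_id INTEGER, account_id INTEGER, email TEXT);
    INSERT INTO accounts VALUES (1, 'Acme'), (1, 'Acme dup');
    INSERT INTO contacts VALUES (10, 1, NULL), (11, 99, 'x@example.com');
""")

checks = {
    "duplicate_account_ids":       # uniqueness check
        "SELECT COUNT(*) FROM (SELECT account_id FROM accounts "
        "GROUP BY account_id HAVING COUNT(*) > 1)",
    "contacts_missing_email":      # mandatory-field check
        "SELECT COUNT(*) FROM contacts WHERE email IS NULL",
    "orphan_contacts":             # referential integrity check
        "SELECT COUNT(*) FROM contacts c "
        "LEFT JOIN accounts a ON a.account_id = c.account_id "
        "WHERE a.account_id IS NULL",
}
for name, sql in checks.items():
    failures = conn.execute(sql).fetchone()[0]
    status = "PASS" if failures == 0 else f"FAIL ({failures} rows)"
    print(f"{name}: {status}")
```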

Week 5: Rollback Procedures & Cutover Planning

  • Learning Objectives:

* Develop comprehensive rollback strategies and contingency plans for various failure scenarios.

* Understand the importance of data backup and recovery mechanisms in a migration context.

* Plan detailed cutover activities, including downtime management, communication, and resource allocation.

* Learn about performance testing and tuning strategies relevant to migration.

* Integrate security considerations and compliance requirements into cutover and rollback planning.

  • Key Activities:

* Create a detailed rollback plan for a hypothetical critical data migration project; a minimal snapshot-and-restore sketch follows this week's deliverable.

* Develop a cutover checklist including communication plan and go/no-go criteria.

* Research and document different backup and restore technologies.

  • Deliverable: Rollback Plan and Cutover Checklist.
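
One building block of a rollback plan is a pre-load snapshot of each affected target table. The sketch below is illustrative only (hypothetical table, in-memory sqlite3); a real plan would also cover full database backups, cutover go/no-go criteria, and communication steps.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (account_id INTEGER, status TEXT)")
conn.execute("INSERT INTO accounts VALUES (1, 'Active')")

# 1. Snapshot the target table immediately before the load.
conn.execute("CREATE TABLE accounts_premigration AS SELECT * FROM accounts")

# 2. Simulated load that goes wrong (corrupts statuses).
conn.execute("UPDATE accounts SET status = NULL")

# 3. Rollback: restore the snapshot taken in step 1.
conn.execute("DELETE FROM accounts")
conn.execute("INSERT INTO accounts SELECT * FROM accounts_premigration")
print(conn.execute("SELECT * FROM accounts").fetchall())  # [(1, 'Active')]
```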

Week 6: Project Management, Estimation & Tools

  • Learning Objectives:

* Estimate data migration timelines, effort, and resource requirements accurately.

* Understand different project management methodologies (Agile, Waterfall) as applied to migration.

* Explore common data migration tools and platforms (e.g., ETL tools, cloud-native services, scripting languages).

* Learn about data archiving strategies post-migration.

* Consolidate all learned concepts into a complete data migration project plan.

  • Key Activities:

* Develop a project schedule and resource estimate for a medium-complexity data migration; a simple throughput-based estimate is sketched after this week's deliverable.

* Research and compare 2-3 data migration tools/platforms for a specific use case.

* Capstone Project: Create a comprehensive Data Migration Plan for a simulated end-to-end scenario.

  • Deliverable: Comprehensive Data Migration Project Plan (Capstone Project).
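
A timeline estimate can start from measured load throughput per entity plus a contingency buffer. The row counts, rates, and 30% buffer below are hypothetical placeholders, not benchmarks; real figures should come from a trial load.

```python
# Hypothetical figures; replace with measured throughput from a trial load.
entities = {                     # entity -> (row count, rows loaded per hour)
    "accounts":      (250_000,  50_000),
    "contacts":      (900_000, 120_000),
    "opportunities": (400_000,  80_000),
}
CONTINGENCY = 1.3                # 30% buffer for reruns and error handling

total_hours = sum(rows / rate for rows, rate in entities.values())
print(f"Raw load time: {total_hours:.1f} h")
print(f"With buffer:   {total_hours * CONTINGENCY:.1f} h")
```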

3. Recommended Resources

This section provides a curated list of resources to support learning throughout the program.

  • Books:

* "The DAMA Guide to the Data Management Body of Knowledge (DMBOK2)": Essential for understanding data governance, quality, and architecture. Focus on Data Integration, Data Quality, and Data Governance chapters.

* "Data Migration: Planning and Execution" by John Schmidt: A practical guide specifically on data migration.

* "Designing Data-Intensive Applications" by Martin Kleppmann: For a deeper understanding of underlying data systems and architecture.

  • Online Courses & Platforms:

* Coursera/edX: Look for specializations in Data Engineering, Database Management, or Cloud Data Solutions (e.g., AWS Data Analytics, Google Cloud Data Engineering).

* Udemy/Pluralsight: Courses on specific ETL tools (e.g., SSIS, Talend, Informatica), SQL for Data Analysts, Python for Data Operations, Cloud Data Migration Services (AWS DMS, Azure Data Factory, GCP Dataflow).

* LinkedIn Learning: Courses on Project Management for IT, Data Governance.

  • Documentation & Whitepapers:

* Cloud Provider Documentation: AWS Database Migration Service (DMS), Azure Data Factory (ADF), Google Cloud Dataflow/Dataproc documentation.

* Database Vendor Documentation: SQL Server, Oracle, PostgreSQL, MySQL documentation on backup, restore, and data movement.

* Industry Whitepapers: From Gartner, Forrester, Deloitte, PwC on data migration best practices, trends, and case studies.

  • Tools for Practice:

* SQL Database: PostgreSQL, MySQL, SQL Server Express (free versions) for hands-on query writing and schema management.

* Python: Jupyter Notebooks or IDE (e.g., VS Code) with libraries like Pandas for data manipulation and scripting.

* Spreadsheets: Microsoft Excel or Google Sheets for initial mapping, profiling, and data analysis.

* Open-Source ETL: Pentaho Data Integration Community Edition (Kettle) for basic ETL flow design.

* Cloud Free Tiers: Experiment with AWS, Azure, or GCP free tiers for their data migration services.


4. Milestones

Achieving these milestones demonstrates a progressive mastery of data migration planning skills.

  • End of Week 2: Successful completion and detailed documentation of a Source-to-Target Mapping (STTM) for a complex sample dataset, demonstrating understanding of data types, transformations, and constraints.
  • End of Week 4: Development and successful execution of a set of data validation and reconciliation scripts (SQL/Python) that accurately identify and report data quality issues for a sample migration scenario.
  • End of Week 6 (Capstone Project): Presentation of a comprehensive Data Migration Project Plan for a simulated end-to-end scenario, covering strategy, detailed mapping, transformation rules, validation, rollback, cutover, and timeline estimation.

5. Assessment Strategies

Progress will be measured through a combination of practical assignments, project work, and conceptual understanding.

  • Weekly Practical Assignments (60%):

* Submission of deliverables outlined in the "Weekly Schedule" section (e.g., STTM document, transformation scripts, validation plan).

* Evaluated on completeness, accuracy, adherence to best practices, and clarity of documentation.

  • Capstone Project (30%):

* The final comprehensive Data Migration Project Plan will be assessed based on its depth, practicality, logical flow, and demonstration of integrated knowledge across all study areas.

* Includes a brief presentation of the plan.

  • Conceptual Quizzes/Self-Assessment (10%):

* Short weekly quizzes or self-assessment exercises to test understanding of theoretical concepts, terminology, and strategic considerations.

  • Peer Review & Feedback:

* (Optional, but highly recommended) Engage with a mentor or peer group to review assignments and provide constructive feedback, enhancing learning and critical thinking.

This detailed study plan provides a robust framework for developing into a proficient Data Migration Planner, capable of tackling the complexities of modern data migration initiatives.

gemini Output

This document outlines the professional code artifacts essential for planning a robust and successful data migration. It covers programmatic representations for field mapping, data transformation rules, validation scripts, conceptual rollback procedures, and a structure for timeline estimates. The provided code, primarily in Python, is designed for clarity, maintainability, and adaptability, serving as a foundational blueprint for your data migration strategy.


Data Migration Planner: Code Artifacts for a Comprehensive Plan

A well-executed data migration relies on meticulous planning and clear definitions of every step. This deliverable provides concrete, professional code examples and structures that form the backbone of your data migration plan. These artifacts ensure consistency, automate processes where possible, and provide a clear reference for all stakeholders.

1. Core Data Structures for Migration Definition

This section defines Python data structures that capture the field mappings, transformation rules, and validation checks described above, so that every downstream migration script works from the same definitions.
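
As one possible shape for such structures, the sketch below uses a dataclass to represent a single STTM row; the field names and example mappings are illustrative only and would in practice be generated from the approved mapping document.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class FieldMapping:
    """One source-to-target mapping row (mirrors the STTM document)."""
    source_field: str                  # e.g. "CRM.Customers.FirstName"
    target_field: str                  # e.g. "SFDC.Contact.FirstName"
    target_type: str                   # e.g. "Text(40)"
    mandatory: bool = True
    rule_id: Optional[str] = None      # e.g. "TR-002"
    transform: Callable = lambda v: v  # applied during the transform phase

# Illustrative entries only; real mappings come from the approved STTM.
MAPPINGS = [
    FieldMapping("CRM.Customers.CustomerID", "SFDC.Account.External_ID__c",
                 "Text(255)", rule_id="TR-001", transform=str),
    FieldMapping("CRM.Customers.FirstName", "SFDC.Contact.FirstName",
                 "Text(40)", rule_id="TR-002",
                 transform=lambda v: (v or "").strip().title()),
]
print(MAPPINGS[1].transform("  jane "))   # -> "Jane"
```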

gemini Output

Data Migration Plan: [Project Name Placeholder]

Document Version: 1.0

Date: October 26, 2023


Executive Summary

This document outlines the comprehensive plan for the data migration from [Source System Name/Description] to [Target System Name/Description]. The primary objective is to ensure a secure, accurate, and efficient transfer of all required data, minimizing business disruption and preserving data integrity. This plan details the strategy, field mapping, transformation rules, validation procedures, rollback mechanisms, and estimated timelines necessary for a successful migration.


1. Introduction

1.1 Purpose

The purpose of this document is to provide a detailed and actionable plan for the data migration project. It serves as a foundational guide for all project stakeholders, outlining the methodology, technical specifications, and procedural steps required to transition data from the existing system(s) to the new target platform.

1.2 Scope

This plan covers the migration of all identified master data, transactional data, and historical records from [Specify Source System(s), e.g., Legacy CRM, ERP Module X, Flat Files] to [Specify Target System(s), e.g., New Salesforce Instance, SAP S/4HANA Module Y]. Out-of-scope items include [Specify, e.g., archival of old system, migration of unstructured data like documents unless specified].

1.3 Objectives

  • Data Integrity: Ensure 100% accuracy and completeness of migrated data in the target system.
  • Minimal Downtime: Execute the migration with the least possible impact on business operations.
  • Data Quality Improvement: Implement cleansing and transformation rules to enhance data quality.
  • Security & Compliance: Maintain data security and adhere to all relevant regulatory compliance standards throughout the migration process.
  • Successful Go-Live: Enable a smooth transition to the new system with all necessary data available and validated.

2. Source and Target Systems Overview

2.1 Source System(s)

  • System Name: [e.g., Legacy CRM, Oracle E-Business Suite 11i]
  • Database/Technology: [e.g., SQL Server 2012, Oracle 11g]
  • Key Modules/Data Domains: [e.g., Customer Master, Sales Orders, Product Catalog]
  • Approximate Data Volume: [e.g., 500 GB, 10 million records]
  • Known Data Quality Issues: [e.g., duplicate customer records, inconsistent address formats, missing mandatory fields]

2.2 Target System(s)

  • System Name: [e.g., Salesforce Sales Cloud, SAP S/4HANA]
  • Database/Technology: [e.g., Cloud-based, HANA DB]
  • Key Modules/Data Domains: [e.g., Accounts, Opportunities, Products]
  • Integration Points: [e.g., ERP integration, Marketing Automation]
  • Data Model Constraints: [e.g., strict field lengths, mandatory lookup fields, specific date formats]

3. Data Migration Strategy

3.1 Overall Approach

The data migration will follow a [Phased / Big Bang / Incremental] approach.

  • [Phased Approach]: Data will be migrated in logical batches (e.g., master data first, then transactional data, or by business unit) to allow for incremental validation and reduced risk.
  • [Big Bang Approach]: All data will be migrated simultaneously during a predefined cutover window. This approach avoids a prolonged period of running both systems in parallel, but it concentrates risk in a single event and therefore requires extensive planning and testing.
  • [Incremental Approach]: Initial bulk migration followed by continuous synchronization for new/updated data until cutover.

3.2 Data Volume and Complexity

Considering the estimated [X] GB of data and [Y] distinct data entities, the migration will involve a significant volume and moderate complexity due to various data types and interdependencies.

3.3 Downtime Considerations

A planned downtime window of approximately [X] hours/days will be required for the final cutover and data load. This window will be communicated well in advance to minimize business impact. During this period, [specify impacted systems/services].


4. Data Analysis and Design

4.1 Data Field Mapping

A detailed field-level mapping document will be maintained in a separate artifact, but key examples are provided below. This mapping ensures each source field has a corresponding target field, specifying data types, lengths, and any necessary transformations.

| Source System Field (Table.Field) | Source Data Type | Target System Field (Object.Field) | Target Data Type | Mandatory | Transformation Rule ID | Notes/Comments |
| :-------------------------------- | :--------------- | :--------------------------------- | :--------------- | :-------- | :--------------------- | :--------------------------------------------------------- |
| CRM.Customers.CustomerID | INT | SFDC.Account.External_ID__c | Text(255) | Yes | TR-001 | Map to a custom external ID field for unique identification |
| CRM.Customers.FirstName | VARCHAR(50) | SFDC.Contact.FirstName | Text(40) | Yes | TR-002 | Trim whitespace, capitalize first letter |
| CRM.Customers.LastName | VARCHAR(50) | SFDC.Contact.LastName | Text(80) | Yes | TR-002 | Trim whitespace, capitalize first letter |
| CRM.Customers.Address1 | VARCHAR(100) | SFDC.Account.BillingStreet | Text(255) | Yes | TR-003 | Concatenate with Address2 if it exists |
| CRM.Customers.Status | VARCHAR(20) | SFDC.Account.Status__c | Picklist | Yes | TR-004 | Map legacy status values to new picklist values |
| CRM.Orders.OrderDate | DATETIME | SFDC.Opportunity.CloseDate | Date | Yes | TR-005 | Extract date part only |
| CRM.Products.LegacyCode | VARCHAR(10) | SFDC.Product.ProductCode | Text(255) | Yes | TR-006 | Direct map |

4.2 Data Transformation Rules

Transformation rules address discrepancies between source and target data models, improve data quality, and ensure business logic is maintained. Worked examples for two of the rules below are sketched after the list.

  • TR-001: Data Type Conversion & Formatting

* Rule: Convert Source INT (CustomerID) to Target Text (External_ID__c). Ensure leading zeros are preserved if applicable (e.g., LPAD(CustomerID, 10, '0') if CustomerID is numeric and needs fixed length text).

  • TR-002: String Manipulation & Cleansing

* Rule: For FirstName, LastName, City, State:

* Trim leading/trailing whitespace.

* Convert to Proper Case (capitalize the first letter of each word).

* Remove invalid characters (e.g., REGEXP_REPLACE(field, '[^a-zA-Z0-9 ]', '')).

  • TR-003: Concatenation & Derivation

* Rule: Concatenate Address1 and Address2 from Source into BillingStreet in Target. If Address2 is null, use Address1 only, e.g., TRIM(CONCAT(Address1, ' ', COALESCE(Address2, ''))).

* Rule: Derive Full Name in Target from FirstName and LastName in Source.

  • TR-004: Value Mapping & Lookup

* Rule: Map Source CRM.Customers.Status values to Target SFDC.Account.Status__c picklist values:

* 'Active' -> 'Active Customer'

* 'Inactive' -> 'Inactive Customer'

* 'Pending' -> 'Prospect'

* Any other -> 'Unknown' (default value)

  • TR-005: Date/Time Formatting

* Rule: Extract only the date part from CRM.Orders.OrderDate (DATETIME) to SFDC.Opportunity.CloseDate (Date), e.g., CAST(OrderDate AS DATE).

  • TR-006: Data Aggregation/Splitting

* Rule: If a source field contains multiple values (e.g., comma-separated interests), split them into multiple target records or a multi-select picklist field.

* Rule: Aggregate multiple related source records into a single summary field in the target (e.g., sum of all order totals for a customer).
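
As a worked example, the sketch below expresses TR-002 and TR-004 in Python. The regular expression and status lookup table mirror the rules above, while the sample inputs are invented for illustration.

```python
import re

STATUS_MAP = {"Active": "Active Customer", "Inactive": "Inactive Customer",
              "Pending": "Prospect"}          # TR-004 lookup table

def apply_tr_002(value: str) -> str:
    """TR-002: trim, strip invalid characters, and apply proper case."""
    cleaned = re.sub(r"[^a-zA-Z0-9 ]", "", (value or "").strip())
    return cleaned.title()

def apply_tr_004(status: str) -> str:
    """TR-004: map legacy status codes to target picklist values."""
    return STATUS_MAP.get(status, "Unknown")

print(apply_tr_002("  o'brien  "))   # -> "Obrien" (apostrophe removed per rule)
print(apply_tr_004("Archived"))      # -> "Unknown"
```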

4.3 Data Cleansing Strategy

Prior to migration, a dedicated data cleansing phase will be executed. This includes:

  • Duplicate Detection & Merging: Identify and merge duplicate records based on defined matching rules (e.g., email, address, name combinations); a minimal matching sketch follows this list.
  • Standardization: Apply formatting rules for addresses, phone numbers, and names using data quality tools or scripts.
  • Completeness Checks: Identify and fill missing mandatory fields (e.g., using default values or through manual intervention).
  • Invalid Data Removal: Remove or quarantine records that do not meet minimum quality thresholds.
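
A minimal duplicate-detection sketch using pandas, matching on a normalized email only; real matching rules would combine several fields as noted above, and the sample records are hypothetical.

```python
import pandas as pd

# Illustrative customer extract; real matching usually combines email,
# normalized name, and address fields.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "email": ["ana@example.com", "ANA@example.com ", "bo@example.com"],
})

# Build a normalized match key, then flag all members of any duplicate group.
customers["match_key"] = customers["email"].str.strip().str.lower()
dupes = customers[customers.duplicated("match_key", keep=False)]
print(dupes)   # customer_id 1 and 2 are candidates for review/merging
```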

4.4 Error Handling Strategy

  • Logging: All transformation and loading errors will be logged with detailed information (record ID, error message, timestamp).
  • Quarantine: Records failing validation or transformation rules will be quarantined in a separate error table for review and manual correction (a minimal logging-and-quarantine sketch follows this list).
  • Reporting: Regular error reports will be generated and distributed to the data migration team for resolution.
  • Retry Mechanism: For transient errors (e.g., network issues), a retry mechanism will be implemented.
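
A minimal sketch of the logging-and-quarantine pattern described above, using Python's logging module and an in-memory sqlite table as a stand-in for the real error table; the record identifier and failing operation are hypothetical.

```python
import logging
import sqlite3
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE migration_errors (
                  record_id TEXT, error_message TEXT, logged_at TEXT)""")

def quarantine(record_id: str, error: Exception) -> None:
    """Log the failure and park the record for manual review."""
    logging.error("record %s failed: %s", record_id, error)
    conn.execute("INSERT INTO migration_errors VALUES (?, ?, ?)",
                 (record_id, str(error),
                  datetime.now(timezone.utc).isoformat()))

try:
    int("not-a-number")            # stand-in for a failing transformation
except ValueError as exc:
    quarantine("CRM.Customers:42", exc)

print(conn.execute("SELECT * FROM migration_errors").fetchall())
```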

5. Data Migration Execution Plan

5.1 Pre-Migration Activities

  • Environment Setup: Provision and configure migration environments (Dev, QA, Staging, Production).
  • Tooling Configuration: Set up and configure chosen migration tools (e.g., custom scripts, ETL tools like Talend, Informatica, or cloud-native tools).
  • Baseline Data Capture: Capture current state metrics and data counts from the source system for post-migration comparison.
  • Source System Lock-down: Implement read-only mode or freeze changes in the source system during the final cutover.

5.2 Cutover Strategy

  • Notification: All impacted users and stakeholders will be notified of the cutover schedule.
  • Source System Freeze: At the start of the cutover window, the source system will be put into a read-only state or taken offline.
  • Final Data Extraction: The last delta or full data extraction from the source system will commence.
  • Target System Preparation: Target system configurations, user permissions, and integrations will be finalized.

5.3 Migration Phases and Steps

  1. Extract: Extract data from the source system(s) using [specify method, e.g., SQL queries, API calls, flat file exports].
  2. Transform: Apply defined transformation and cleansing rules using [specify tool/scripts].
  3. Load: Load the transformed data into the target system(s) using [specify method, e.g., API, bulk loader, data loader tool].
  4. Validate: Perform comprehensive data validation after each load phase (refer to Section 6).
  5. Reconcile: Reconcile record counts and key financial/operational summaries between source and target. A minimal skeleton tying these five steps together is sketched below.
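
The skeleton below shows how these five phases might be wired together in a single Python driver; every function body is a placeholder to be replaced by the real extraction, transformation, load, and validation logic for the chosen tools.

```python
def extract():        # placeholder: pull rows from the source system
    return [{"CustomerID": 1, "FirstName": "  ana "}]

def transform(rows):  # placeholder: apply the documented transformation rules
    return [{"External_ID__c": str(r["CustomerID"]),
             "FirstName": r["FirstName"].strip().title()} for r in rows]

def load(rows):       # placeholder: bulk-load into the target system
    print(f"loaded {len(rows)} rows")

def validate(rows):   # placeholder: run the checks defined in Section 6
    assert all(r["External_ID__c"] for r in rows), "missing external IDs"

def reconcile(source_rows, target_rows):
    assert len(source_rows) == len(target_rows), "record count mismatch"

source_rows = extract()
target_rows = transform(source_rows)
load(target_rows)
validate(target_rows)
reconcile(source_rows, target_rows)
```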

6. Validation Strategy

Data validation is critical to ensure accuracy and completeness. It will be performed at multiple stages:

6.1 Pre-Migration Validation

  • Source Data Profiling: Analyze source data for completeness, uniqueness, consistency, and validity.
  • Schema Validation: Verify source schema against expected structure.
  • Data Volume Check: Confirm expected record counts and data sizes.

6.2 In-Migration Validation

  • Record Counts: Verify that the number of records extracted equals the number of records transformed and loaded (a minimal count-reconciliation sketch follows this list).
  • Error Logging Review: Monitor and review all errors generated during extraction, transformation, and loading.
  • Sample Data Review: Perform spot checks on a random sample of migrated records for accuracy and correct transformation.
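
A minimal count-reconciliation sketch; the counts are hard-coded placeholders, whereas in practice they would come from COUNT(*) queries run against each stage (source extract, staging, target).

```python
# Illustrative counts per entity and stage; replace with live query results.
counts = {
    "accounts": {"extracted": 250_000, "transformed": 250_000, "loaded": 249_998},
    "contacts": {"extracted": 900_000, "transformed": 900_000, "loaded": 900_000},
}

for entity, stages in counts.items():
    ok = stages["extracted"] == stages["transformed"] == stages["loaded"]
    status = "OK" if ok else f"MISMATCH {stages}"
    print(f"{entity}: {status}")
```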

6.3 Post-Migration Validation

  • Record Count Reconciliation: Compare total record counts for each entity between source and target systems.
  • Data Integrity Checks:

* Referential Integrity: Verify relationships
