Disaster Recovery Plan

This document outlines a comprehensive Disaster Recovery Plan (DRP) designed to ensure the continuity and rapid restoration of critical business functions and IT services in the event of a disruptive incident. This plan establishes clear objectives, strategies, and procedures to minimize downtime, prevent data loss, and maintain stakeholder confidence.



Key Definitions

  • Disaster Recovery (DR): The process of restoring IT services and data after a disruptive event.
  • Business Continuity Plan (BCP): A broader plan that encompasses all aspects of an organization's recovery after a disaster, including IT, operations, personnel, and public relations.
  • Recovery Time Objective (RTO): The maximum tolerable duration of time that a computer system, network, or application can be down after a failure or disaster.
  • Recovery Point Objective (RPO): The maximum tolerable amount of data loss, measured in time, that an organization can sustain during a disaster. It determines the frequency of backups.
  • Failover: The process of switching to a redundant or standby system upon the failure or abnormal termination of the previously active system.
  • Failback: The process of returning systems to their original primary location once the primary site has been restored after a failover.
  • Warm Site: A disaster recovery site with essential equipment in place but without current data; it requires some configuration and data loading before becoming operational.
  • Hot Site: A fully equipped disaster recovery site that has near real-time data replication and can be operational with minimal downtime.



Version: 1.0

Date: October 26, 2023

Prepared For: [Customer Name/Organization]

Prepared By: PantheraHive


1. Introduction and Purpose

This Disaster Recovery Plan (DRP) outlines the procedures, resources, and responsibilities required to restore critical IT systems, applications, and data following a disruptive event. The primary objectives are to minimize downtime, prevent data loss, and ensure the rapid resumption of business-critical operations, thereby safeguarding the organization's continuity, reputation, and financial stability.

This plan is a living document and will be reviewed and updated regularly to reflect changes in our IT infrastructure, business processes, and risk profile.

2. Scope

This DRP covers the recovery of essential IT infrastructure, applications, and data hosted within [Specify Primary Data Center/Cloud Environment, e.g., "Our primary on-premise data center and AWS cloud infrastructure"]. It encompasses the procedures for responding to major incidents that render primary systems inoperable, requiring activation of a secondary recovery environment.

In-Scope Systems & Data (Examples - detailed inventory in Appendix A):

  • Core Business Applications (e.g., ERP, CRM, Billing System)
  • Database Servers (e.g., SQL Server, Oracle, PostgreSQL)
  • Application Servers (e.g., Web Servers, Application Logic Servers)
  • Network Infrastructure (e.g., Firewalls, Routers, Switches)
  • Virtualization Platforms (e.g., VMware, Hyper-V)
  • Storage Systems (SAN, NAS, Object Storage)
  • Critical Data Repositories (e.g., customer data, financial records, intellectual property)
  • Key User Productivity Tools (e.g., Email, Collaboration Platforms)

Out-of-Scope (Examples):

  • Minor IT incidents (e.g., single server failure, software bug) handled by standard IT operations.
  • Physical office relocation (unless directly caused by a disaster impacting IT infrastructure).
  • Individual user workstation recovery (addressed by standard IT support).

3. Disaster Recovery Team & Responsibilities

The Disaster Recovery Team is responsible for executing this plan. Specific roles and responsibilities are outlined below. Contact information for all team members is provided in Appendix B.

| Role | Primary Responsibility | Backup/Alternate |
| :--- | :--- | :--- |
| DR Coordinator | Overall plan management, disaster declaration, communication hub. | [Name/Role] |
| Infrastructure Lead | Network, server, and storage recovery. | [Name/Role] |
| Application Lead | Application restoration, configuration, and testing. | [Name/Role] |
| Database Lead | Database restoration, integrity checks, and synchronization. | [Name/Role] |
| Network Lead | Network connectivity, DNS, VPN, and firewall configuration at DR site. | [Name/Role] |
| Communications Lead | Internal/external communication, media liaison. | [Name/Role] |
| Business Continuity Lead | Liaison with business units, ensuring alignment with continuity objectives. | [Name/Role] |
| Security Lead | Ensuring security protocols are maintained during recovery; incident response. | [Name/Role] |

4. Disaster Declaration Criteria & Process

A disaster is defined as an event that renders critical IT systems and/or the primary data center unavailable for an extended period, exceeding defined RTOs, or causing significant data loss.

Declaration Process:

  1. Incident Detection: Monitoring systems, user reports, or external notifications identify a major disruption.
  2. Initial Assessment: The IT Operations team assesses the impact, estimated duration of outage, and potential for data loss.
  3. Escalation: If the incident meets disaster criteria, the IT Operations Manager or a designated alternate escalates to the DR Coordinator.
  4. DR Coordinator Review: The DR Coordinator, in consultation with relevant leads, reviews the assessment and formally recommends disaster declaration to Executive Management.
  5. Executive Approval: Executive Management provides formal approval for disaster declaration.
  6. DR Plan Activation: Upon approval, the DR Coordinator activates the DRP and convenes the DR Team.
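For teams that track incident state in tooling, the declaration process above can be represented as an ordered progression of stages. This is a hypothetical sketch, not part of the plan itself; the stage names simply mirror the six numbered steps.

```python
# Declaration workflow stages, mirroring the numbered steps in Section 4.
STAGES = [
    "incident_detected",
    "initial_assessment",
    "escalated_to_dr_coordinator",
    "declaration_recommended",
    "executive_approved",
    "drp_activated",
]

def advance(current: str) -> str:
    """Move the declaration workflow to its next stage.

    Raises ValueError if the DRP is already activated, since there is
    no stage after activation.
    """
    i = STAGES.index(current)
    if i == len(STAGES) - 1:
        raise ValueError("DRP already activated")
    return STAGES[i + 1]
```

Encoding the workflow this way makes it easy to enforce that no stage (e.g., executive approval) is skipped.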

5. Recovery Time Objectives (RTO) & Recovery Point Objectives (RPO)

RTO and RPO targets are critical metrics defining the acceptable downtime and data loss for different systems. These targets are prioritized based on business criticality.

| System/Application Tier | Description | Recovery Time Objective (RTO) | Recovery Point Objective (RPO) |
| :--- | :--- | :--- | :--- |
| Tier 0 (Mission Critical) | Immediate and direct impact on core business functions, revenue, or legal compliance. | < 1 Hour | < 15 Minutes |
| Tier 1 (Business Critical) | Significant impact on business operations, revenue, or customer service if unavailable. | < 4 Hours | < 1 Hour |
| Tier 2 (Important) | Noticeable impact, but business can operate with manual workarounds for a limited time. | < 12 Hours | < 4 Hours |
| Tier 3 (Supporting) | Minor impact; systems not directly revenue-generating but support business functions. | < 24 Hours | < 24 Hours |

  • RTO (Recovery Time Objective): The maximum tolerable duration of time in which a system, application, or service can be unavailable after an incident or disaster.
  • RPO (Recovery Point Objective): The maximum tolerable amount of data loss measured in time (e.g., 1 hour of data loss) from a service disruption.
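The tier targets above can be encoded so that monitoring or assessment tooling checks them automatically. A minimal sketch, with tier names and `timedelta` values taken directly from the table; the function names are illustrative, not defined by this plan.

```python
from datetime import timedelta

# RTO/RPO targets per tier, mirroring the table in Section 5.
TIERS = {
    "tier0": {"rto": timedelta(hours=1),  "rpo": timedelta(minutes=15)},
    "tier1": {"rto": timedelta(hours=4),  "rpo": timedelta(hours=1)},
    "tier2": {"rto": timedelta(hours=12), "rpo": timedelta(hours=4)},
    "tier3": {"rto": timedelta(hours=24), "rpo": timedelta(hours=24)},
}

def breaches_rto(tier: str, estimated_outage: timedelta) -> bool:
    """True if the estimated outage exceeds the tier's RTO, which is a
    candidate trigger for the declaration process in Section 4."""
    return estimated_outage > TIERS[tier]["rto"]

def max_backup_interval(tier: str) -> timedelta:
    """The RPO bounds how far apart backup or replication points may be."""
    return TIERS[tier]["rpo"]
```

For example, a six-hour estimated outage of a Tier 1 system breaches its four-hour RTO and should be escalated.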

6. Backup Strategies

Our backup strategy is designed to ensure data integrity, availability, and recoverability across all critical systems.

  • Backup Types:
    * Full Backups: Performed weekly for all critical systems and databases.
    * Differential Backups: Performed daily for critical systems, capturing changes since the last full backup.
    * Incremental Backups: Performed hourly/daily for highly volatile data, capturing changes since the last full or incremental backup.
    * Database Transaction Logs: Continuously backed up or replicated for Tier 0/1 databases to achieve granular RPO.
  • Backup Frequency & Retention:
    * Tier 0/1 Data: Hourly backups/replication, 7-day retention on-site, 30-day off-site.
    * Tier 2 Data: Daily backups, 14-day retention on-site, 90-day off-site.
    * Tier 3 Data: Weekly backups, 30-day retention on-site, 1-year off-site.
    * Archival Data: Monthly/quarterly, long-term retention per compliance requirements (e.g., 7 years).
  • Backup Locations:
    * On-site Storage: Used for immediate recovery (short-term retention).
    * Off-site Storage: Data replicated to a geographically separate, secure facility (e.g., [Specify Off-site Location/Provider, e.g., "AWS S3 in a separate region"]).
    * Cloud Storage: Leveraging object storage (e.g., AWS S3, Azure Blob Storage) for cost-effective, durable, and scalable off-site backups.
  • Encryption: All data at rest and in transit during backup operations is encrypted using [Specify Encryption Standard, e.g., "AES-256"].
  • Integrity Checks: Regular (e.g., weekly) verification of backup data integrity and restorability through automated checks and periodic test restores.
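The retention windows above lend themselves to an automated pruning pass. The sketch below covers only the on-site windows from Section 6 (off-site and archival copies would follow the same pattern); the identifiers are illustrative.

```python
from datetime import datetime, timedelta

# On-site retention windows (in days) per tier, from Section 6.
ONSITE_RETENTION_DAYS = {"tier0": 7, "tier1": 7, "tier2": 14, "tier3": 30}

def expired_onsite(backups, now):
    """Return the IDs of backups whose on-site copy is past its window.

    `backups` is an iterable of (backup_id, tier, taken_at) tuples,
    where `taken_at` is a datetime.
    """
    expired = []
    for backup_id, tier, taken_at in backups:
        limit = timedelta(days=ONSITE_RETENTION_DAYS[tier])
        if now - taken_at > limit:
            expired.append(backup_id)
    return expired
```

Pruning should run only after the off-site copy of each backup has been confirmed, so no restore point is lost.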

7. Recovery Sites

Our disaster recovery strategy leverages a Warm Standby approach for Tier 0/1 applications and a Cold Standby/Backup & Restore for Tier 2/3 applications.

  • Primary Site: [Specify Location, e.g., "On-premise Data Center, 123 Main St, Anytown"].
  • Recovery Site: [Specify Location/Cloud Provider, e.g., "AWS Region us-east-1 (N. Virginia)"].
    * Infrastructure: The recovery site maintains pre-provisioned virtual machine instances, network configurations, and storage capacity. Databases are replicated asynchronously, or synchronously for Tier 0.
    * Connectivity: Dedicated VPN tunnels or Direct Connect links provide secure, high-bandwidth connectivity between the primary and recovery sites and for remote access during a disaster.

8. Failover Procedures

These procedures detail the steps to transition operations from the primary site to the recovery site.

8.1. Pre-Disaster Activities (Continuous)

  • Maintain up-to-date documentation of system configurations, network diagrams, and application dependencies.
  • Ensure continuous replication of critical data and systems to the recovery site.
  • Regularly monitor replication health and DR site readiness.
  • Verify backup integrity and restorability.

8.2. Disaster Declaration & Initial Response

  1. DR Coordinator formally declares a disaster and activates the DR Plan.
  2. DR Team convenes (in-person or virtually) and reviews the incident details.
  3. Communications Lead initiates internal and external communication protocols.
  4. Security Lead assesses potential security implications and ensures incident response protocols are active.

8.3. Failover Execution Steps

Phase 1: Infrastructure Activation (Infrastructure Lead)

  1. Isolate Primary Site (if applicable): If the primary site is compromised, ensure it is isolated to prevent further data corruption or security breaches.
  2. Activate Recovery Site Network:
     * Verify network connectivity at the DR site.
     * Configure firewalls, routing, and VPNs as per the DR site design.
     * Update DNS records (internal and external) to point to DR site IP addresses (TTL reduction initiated prior to disaster if possible).
  3. Provision/Activate Compute Resources:
     * Power on pre-provisioned VMs/instances or scale up cloud resources.
     * Verify resource allocation (CPU, RAM, storage).
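The DNS update step can be scripted in advance so it is not improvised during an incident. A minimal sketch, assuming AWS Route 53 as the DNS provider; the record name, IP address, and hosted zone ID below are placeholders, not values from this plan.

```python
def failover_change_batch(record_name, dr_ip, ttl=60):
    """Build a Route 53 UPSERT that repoints an A record at the DR site.

    The low TTL reflects the pre-disaster TTL reduction noted above, so
    clients pick up the new address quickly.
    """
    return {
        "Comment": "DR failover: repoint to recovery site",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": dr_ip}],
            },
        }],
    }

# Applying the change requires boto3 and credentials; the zone ID here
# is hypothetical:
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z0000000000000EXAMPLE",
#     ChangeBatch=failover_change_batch("app.example.com.", "203.0.113.10"),
# )
```

Keeping the change batch as a reviewed, version-controlled function lets the Network Lead execute it under time pressure with one call.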

Phase 2: Data Restoration & Database Recovery (Database Lead)

  1. Database Failover/Restoration (Tier 0/1):
     * Initiate database failover to the standby instances at the DR site (if replication is active).
     * Perform necessary data integrity checks and point-in-time recovery to the latest RPO.
  2. Database Restoration (Tier 2/3):
     * Restore databases from the latest available off-site backups to DR site database servers.
     * Apply transaction logs as needed.
  3. Data Synchronization: Ensure any required data synchronization or consistency checks are performed.
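Selecting the correct restore chain for step 2 (the newest full backup at or before the target point, plus the transaction logs to roll forward) can be sketched as a pure function. Timestamps here can be any comparable values; the names are illustrative.

```python
def restore_chain(fulls, logs, target):
    """Pick the restore chain for a point-in-time recovery to `target`.

    `fulls` and `logs` are lists of (name, timestamp) tuples. Returns the
    newest full backup taken at or before `target`, and the transaction
    logs after that backup up to `target`, in replay order.
    """
    base = max((f for f in fulls if f[1] <= target), key=lambda f: f[1])
    replay = sorted((l for l in logs if base[1] < l[1] <= target),
                    key=lambda l: l[1])
    return base, replay
```

Real restore tooling must also validate that the log sequence is unbroken before replay; a gap means falling back to an earlier recovery point.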

Phase 3: Application Recovery (Application Lead)

  1. Restore/Deploy Applications:
     * Deploy application code and configurations to the activated application servers at the DR site.
     * Install necessary middleware and dependencies.
  2. Configure Application Services:
     * Update application configuration files to point to DR site databases and other services.
     * Configure load balancers and application gateways.
  3. Start Applications: Start applications in their defined priority order (refer to Appendix A for application dependencies).

Phase 4: Verification & User Access (Application Lead, Network Lead, DR Coordinator)

  1. Internal Testing: Perform comprehensive internal testing of all recovered applications and systems.
     * Functionality tests.
     * Connectivity tests.
     * Performance checks.
  2. User Acceptance Testing (UAT): Engage key business users for UAT of critical functionalities.
  3. External Access: Once verified, enable external access (e.g., update external DNS, open firewall rules for public access).
  4. Monitor: Continuously monitor system health, performance, and security at the DR site.
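The internal testing step can be driven by a simple check runner that gates external access on every check passing. The check names and callables below are placeholders; in practice each would wrap a real functionality, connectivity, or performance probe.

```python
def run_verification(checks):
    """Run named check callables and summarize the results.

    `checks` maps a check name to a zero-argument callable returning a
    truthy value on success. Any exception counts as a failure. External
    access should only be enabled when `passed` is True.
    """
    results = {}
    for name, check in checks.items():
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False
    return {"passed": all(results.values()), "results": results}
```

Recording per-check results, not just the overall verdict, gives the DR Coordinator a concrete punch list when verification fails.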

9. Communication Plan

Effective communication is paramount during a disaster.

9.1. Internal Communication (DR Coordinator, Communications Lead)

  • DR Team: Dedicated communication channel (e.g., emergency conference bridge, Slack/Teams channel). Hourly updates during active recovery.
  • Executive Management: Regular updates (e.g., every 2-4 hours initially, then daily) on status, estimated recovery times, and business impact.
  • Employees: Information regarding system availability, expected timelines, and alternative work arrangements (e.g., email, SMS, company intranet, emergency hotline).
  • Channels:
    * Emergency Call Tree (Appendix B)
    * Dedicated collaboration channels (Slack, Microsoft Teams)
    * SMS/Mass Notification System
    * Internal Email

if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}