Disaster Recovery Plan
Run ID: 69ccb3bd3e7fb09ff16a45522026-04-01Infrastructure
PantheraHive BOS
BOS Dashboard

Generate a comprehensive disaster recovery plan with RTO/RPO targets, backup strategies, failover procedures, communication plans, and testing schedules.

Disaster Recovery Plan: Comprehensive Framework

This document outlines a comprehensive Disaster Recovery Plan (DRP) designed to ensure the continuity of critical business operations, minimize downtime, and facilitate rapid recovery from disruptive events. It encompasses strategic targets, detailed procedures, communication protocols, and a structured testing regimen to maintain organizational resilience.


1. Executive Summary

This Disaster Recovery Plan (DRP) provides a structured approach to prevent, mitigate, and recover from potential disasters that could impact critical IT systems and business operations. It establishes clear objectives, roles, responsibilities, and procedures to ensure the timely restoration of services, protection of data integrity, and continuity of essential business functions. Key components include defined Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO), robust backup strategies, detailed failover/failback procedures, a comprehensive communication plan, and a rigorous testing schedule to validate effectiveness.


2. Introduction and Purpose

The purpose of this DRP is to provide a clear, actionable framework for responding to and recovering from disruptive incidents, whether natural disasters, cyberattacks, equipment failures, or other unforeseen events. It aims to:

  • Minimize the impact of disasters on critical business processes.
  • Ensure the availability and integrity of vital data and systems.
  • Facilitate the rapid restoration of operations to acceptable levels.
  • Protect the organization's reputation and financial stability.
  • Comply with regulatory requirements and industry best practices.

3. Scope

This DRP covers all critical IT infrastructure, applications, data, and associated business processes essential for the organization's core operations. This includes, but is not limited to:

  • Primary Data Center infrastructure (servers, storage, networking).
  • Cloud-based services and applications (SaaS, PaaS, IaaS).
  • Critical business applications (e.g., ERP, CRM, financial systems, email).
  • Network connectivity (LAN, WAN, Internet access).
  • Key operational data and databases.
  • Personnel and communication channels involved in recovery efforts.

Exclusions:

  • Detailed Business Continuity Plan (BCP) for non-IT related business functions (though closely linked).
  • Specific crisis management plans beyond IT and data recovery.

4. Disaster Recovery Team and Roles

A dedicated Disaster Recovery Team is essential for effective execution of the DRP. Roles and responsibilities must be clearly defined and understood.

4.1. Core DR Team Roles:

  • DR Coordinator/Incident Commander: Overall lead, decision-maker, communication hub, liaison with executive management.
  • IT Infrastructure Lead: Oversees server, storage, and network recovery.
  • Applications Lead: Manages recovery and validation of critical applications.
  • Data Recovery Lead: Responsible for data restoration and integrity.
  • Network Operations Lead: Focuses on network connectivity and security.
  • Communications Lead: Manages internal and external communications during an incident.
  • Security Lead: Ensures security protocols are maintained throughout recovery.

4.2. Contact Information:

(Placeholder for a table with names, roles, primary phone, secondary phone, email. This table should be maintained in an appendix and accessible offline.)


5. Risk Assessment Summary (High-Level)

A brief overview of potential threats and their potential impact, informing the DRP's design.

  • Natural Disasters: Fires, floods, earthquakes, severe weather, power outages.
  • Technological Failures: Hardware failure, software corruption, network outages, power grid failure.
  • Human Error: Accidental data deletion, misconfigurations, security breaches.
  • Cyber Threats: Ransomware, DDoS attacks, data breaches, insider threats.
  • Third-Party Failures: Cloud provider outages, ISP failures, vendor supply chain disruptions.

6. Business Impact Analysis (BIA) Summary & RTO/RPO Targets

The BIA identifies critical business processes, their dependencies, and the impact of their disruption. This section summarizes key findings and defines recovery targets.

6.1. Critical Business Processes & Systems:

| Process/System Name | Description | Business Impact if Unavailable | Dependencies |

| :------------------ | :---------- | :--------------------------- | :----------- |

| Financial Reporting | Monthly/Quarterly reporting | Regulatory non-compliance, financial loss | ERP, Database, Network |

| Customer Order Entry | Processing customer orders | Revenue loss, customer dissatisfaction | CRM, E-commerce platform, Database |

| Email Service | Internal/External communication | Communication breakdown, operational paralysis | Exchange/O365, Network |

| Data Analytics | Business intelligence | Impaired decision-making | Data Warehouse, BI tools |

| ... | ... | ... | ... |

6.2. Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO):

RTO defines the maximum tolerable downtime for a critical application or system. RPO defines the maximum tolerable period in which data might be lost from an IT service due to a major incident.

| Application/System | Priority | RTO (Hours) | RPO (Hours) | Recovery Tier |

| :----------------- | :------- | :---------- | :---------- | :------------ |

| Tier 1: Critical | | | | |

| ERP System | High | 4 | 1 | Hot Standby |

| Core Database | High | 2 | 0.5 | Always-On AG/Replication |

| Email Service | High | 6 | 2 | Warm Standby |

| Tier 2: Important| | | | |

| CRM System | Medium | 12 | 4 | Warm Standby/Cloud Backup |

| File Shares | Medium | 24 | 6 | Offsite Backup |

| Tier 3: Non-Critical| | | | |

| Development Env. | Low | 48 | 24 | Cold Site/Backup |

| ... | ... | ... | ... | ... |


7. Backup and Data Recovery Strategy

A robust backup strategy is fundamental to achieving RPO targets and ensuring data integrity.

7.1. Backup Types and Frequencies:

  • Full Backups: Weekly (e.g., Saturday night) for all critical systems and data.
  • Differential Backups: Daily (e.g., nightly) for changes since the last full backup.
  • Incremental Backups: Hourly/Daily for changes since the last full or incremental backup (for high RPO systems).
  • Database Transaction Logs: Continuously shipped/replicated for critical databases (e.g., SQL Server Always-On, Oracle Data Guard).
  • Cloud-Native Backups: Leverage native snapshot and backup services for cloud resources (e.g., AWS EBS snapshots, Azure Backup).

7.2. Backup Storage Locations:

  • On-Premise (Short-Term): Local disk arrays for rapid recovery of recent data.
  • Offsite (Long-Term): Secure, geographically separate location (e.g., tape library, cloud storage like AWS S3/Glacier, Azure Blob Storage) to protect against site-wide disasters.
  • Cloud-to-Cloud: For SaaS applications, ensure vendor provides adequate backup/recovery, or implement third-party cloud-to-cloud backup solutions.

7.3. Data Retention Policies:

  • Daily Backups: Retain for 7-14 days.
  • Weekly Backups: Retain for 4-8 weeks.
  • Monthly Backups: Retain for 12 months.
  • Annual Backups: Retain for 7 years (or as per regulatory compliance).

7.4. Encryption:

All backups, both in transit and at rest, must be encrypted using industry-standard encryption protocols (e.g., AES-256).

7.5. Backup Software/Services:

(List specific tools used, e.g., Veeam Backup & Replication, Azure Backup, AWS Backup, Commvault, Rubrik, etc.)


8. Failover and Failback Procedures

Detailed steps for switching to a redundant system (failover) and returning to the primary system (failback).

8.1. Failover Strategy by Recovery Tier:

  • Hot Standby (Tier 1):

* Description: A fully functional duplicate system running concurrently or nearly concurrently with the primary, ready to take over immediately.

* Example: Active-Passive clusters, database Always-On Availability Groups, multi-region cloud deployments with active load balancing.

* Procedure: Automated or near-instantaneous switchover.

  • Warm Standby (Tier 2):

* Description: A scaled-down or partially configured duplicate system that requires some configuration and data synchronization before becoming fully operational.

* Example: Replicated VMs in a DR site, pre-configured cloud instances with data restored from backups.

* Procedure: Manual intervention to power on, configure, and restore latest data.

  • Cold Site/Backup & Restore (Tier 3):

* Description: A basic facility with necessary infrastructure but no active hardware or data. Requires full setup and data restoration from scratch.

* Example: Empty office space, cloud region where resources are provisioned on demand.

* Procedure: Provision infrastructure, install OS/applications, restore data from offsite backups.

8.2. General Failover Procedure (Example for a critical application):

  1. Declare Disaster: DR Coordinator officially declares a disaster based on predefined criteria.
  2. Activate DR Team: Notify and assemble the DR team.
  3. Assess Damage: Determine the extent of the outage and primary system status.
  4. Initiate Failover:

* Network Rerouting: Update DNS records, load balancer configurations, or VPN settings to point to the DR site/systems.

* System Activation: Power on/activate DR systems.

* Data Synchronization/Restoration: Ensure the latest available data is synchronized or restored.

* Application Startup: Start critical applications in the DR environment.

  1. Verification and Testing:

* Perform smoke tests and end-to-end user acceptance testing (UAT) to confirm functionality.

* Validate data integrity and consistency.

  1. User Communication: Inform stakeholders that services have been restored in the DR environment.
  2. Monitor: Continuously monitor DR systems for performance and stability.

8.3. General Failback Procedure (Returning to Primary Site):

  1. Assess Primary Site Readiness: Ensure primary site infrastructure is fully restored, stable, and secure.
  2. Plan Failback Window: Schedule a maintenance window to minimize disruption.
  3. Synchronize Data: Replicate data changes from the DR environment back to the primary site.
  4. Initiate Failback:

* Network Rerouting: Update DNS/load balancer to point back to the primary site.

* System Activation: Activate primary systems.

* Application Startup: Start applications on primary systems.

  1. Verification and Testing:

* Perform smoke tests and UAT on primary systems.

* Validate data integrity.

  1. Deactivate DR Systems: Power down or decommission DR resources (if not a continuous hot standby).
  2. User Communication: Inform stakeholders that services are fully restored on primary systems.
  3. Post-Incident Review: Conduct a thorough review of the incident, failover, and failback process.

9. Communication Plan

Effective communication is paramount during a disaster to manage expectations, coordinate efforts, and maintain confidence.

9.1. Internal Communication:

  • DR Team: Dedicated communication channels (e.g., secure chat, conference bridge, emergency SMS group).
  • Employees:

* Initial Notification: Email, SMS, emergency website/portal.

* Status Updates: Regular updates via email, intranet, or dedicated communication platform.

* Instructions: Guidance on alternative work arrangements, system access, or manual workarounds.

  • Executive Management: Regular briefings from the DR Coordinator on incident status, recovery progress, and potential impacts.

9.2. External Communication:

  • Customers:

* Initial Notification: Website banner, social media, mass email (if email system is operational), automated phone message.

* Status Updates: Regular updates through the same channels, setting clear expectations.

* Customer Support: Provide clear channels for customer inquiries.

  • Vendors/Suppliers: Notify critical vendors (e.g., cloud providers, ISPs, hardware support) to coordinate recovery efforts.
  • Regulators/Legal Counsel: As required by law or contract, provide timely notification.
  • Public Relations/Media: All external communication to media must be handled by a designated spokesperson (e.g., PR Lead or DR Coordinator with PR guidance).

9.3. Communication Tools:

  • Emergency notification system (e.g., Everbridge, AlertMedia).
  • Conference bridge numbers (pre-assigned).
  • Secure messaging apps (e.g., Slack, Microsoft Teams, if independent of primary network).
  • Pre-drafted communication templates for various scenarios.
  • Offline contact lists for key personnel.

10. Testing and Maintenance Schedule

Regular testing and maintenance are crucial to ensure the DRP remains effective and current.

10.1. Testing Objectives:

  • Validate the accuracy and completeness of the DRP.
  • Confirm RTO/RPO targets can be met.
  • Identify weaknesses, gaps, or single points of failure.
  • Train DR team members and familiarize them with procedures.
  • Ensure systems and data can be successfully restored and applications function correctly.

10.2. Testing Types and Frequency:

  • Tabletop Exercises (Annual): Discussion-based walk-through of the DRP without actual system activation. Focuses on roles, procedures, and decision-making.
  • Component-Level Testing (Quarterly/Semi-Annually): Testing specific components like backup restoration, individual application failover, or network rerouting.
  • Full DR Simulation (Annual): A comprehensive test involving actual failover to the DR site/environment, followed by verification of critical systems and applications. This should ideally include a failback to the primary site.
  • Unannounced Drills (Ad-Hoc): Random, unannounced tests to gauge the team's readiness and response time under pressure.

10.3. Documentation and Review:

  • Test Reports: Document all test results, including successes, failures, lessons learned, and identified action items.
  • Post-Test Review: Conduct a meeting after each test to discuss findings and update the DRP.
  • DRP Review (Annual): The entire DRP document should be reviewed and updated annually, or whenever significant changes occur in IT infrastructure, business processes, or personnel.

10.4. Maintenance Activities:

  • Backup Verification: Regularly verify the integrity and restorability of backups.
  • DR Site Updates: Ensure the DR environment is kept in sync with the primary production environment (e.g., patching, software updates, configuration changes).
  • Contact List Updates: Keep all contact information current.
  • Vendor Contract Review: Ensure DR-related vendor contracts (e.g., cloud SLAs, support agreements) are up-to-date.

11. Training

All members of the DR team and relevant personnel must receive adequate training on their roles and responsibilities within the DRP.

  • Initial Training: For new DR team members.
  • Refresher Training: Annual refresher training for all DR team members.
  • Awareness Training: General awareness for all employees on emergency procedures and communication channels.

12.

gemini Output

Disaster Recovery Plan (DRP)

Document Version: 1.0

Date: October 26, 2023

Prepared For: [Customer Name/Organization]

Prepared By: PantheraHive Solutions


1. Executive Summary

This Disaster Recovery Plan (DRP) outlines the strategies, procedures, and responsibilities for responding to and recovering from a disruptive event that impacts critical IT systems and business operations. The primary objective is to minimize downtime and data loss, ensuring business continuity and rapid restoration of essential services. This plan details Recovery Time Objectives (RTOs), Recovery Point Objectives (RPOs), backup strategies, failover procedures, communication protocols, and a robust testing schedule to maintain readiness.

2. Introduction

2.1. Purpose

The purpose of this DRP is to provide a structured and actionable framework for restoring IT infrastructure, applications, and data following a disaster. It aims to protect the organization's assets, maintain critical business functions, and safeguard its reputation by enabling a swift and efficient recovery.

2.2. Scope

This DRP covers all critical IT systems, applications, data, and associated infrastructure located at [Primary Data Center Location] and extends to cloud-based services utilized by the organization. It addresses potential disaster scenarios including, but not limited to, natural disasters, cyberattacks, major equipment failures, and widespread service disruptions.

2.3. Objectives

  • Minimize the financial and operational impact of a disaster.
  • Ensure the safety and well-being of personnel.
  • Achieve defined RTOs and RPOs for critical systems and data.
  • Restore essential business functions within acceptable timeframes.
  • Establish clear roles, responsibilities, and communication channels.
  • Provide a framework for regular testing, review, and maintenance of recovery capabilities.

3. Recovery Objectives: RTO and RPO Targets

Recovery objectives are critical metrics that define the acceptable limits for downtime and data loss.

3.1. Recovery Time Objective (RTO)

The RTO specifies the maximum tolerable duration for which a critical application or system can be unavailable after an incident. It dictates how quickly systems must be restored.

  • Tier 0: Mission-Critical Systems (e.g., Core ERP, Primary Database, Customer-Facing Web Applications)

* RTO: 1-4 Hours

* Description: Systems whose unavailability would result in immediate and severe business impact, significant financial loss, or regulatory non-compliance.

  • Tier 1: Business-Critical Systems (e.g., CRM, Email, Financial Reporting, Internal Collaboration Tools)

* RTO: 4-8 Hours

* Description: Systems essential for daily operations, whose prolonged unavailability would significantly disrupt business processes.

  • Tier 2: Important Systems (e.g., Development Environments, Secondary Internal Applications)

* RTO: 12-24 Hours

* Description: Systems that support non-immediate business functions, whose temporary unavailability can be tolerated with some manual workarounds.

  • Tier 3: Non-Critical Systems (e.g., Test Environments, Archival Systems with low access frequency)

* RTO: 24-72 Hours

* Description: Systems with minimal impact on immediate business operations if unavailable for an extended period.

3.2. Recovery Point Objective (RPO)

The RPO specifies the maximum tolerable period in which data might be lost from an IT service due to a major incident. It defines the point in time to which systems and data must be recovered.

  • Tier 0: Mission-Critical Data (e.g., Transactional Data, Customer Orders, Financial Records)

* RPO: 0-15 Minutes

* Description: Data with extremely high integrity requirements, where any loss would be catastrophic. Achieved via synchronous or near-synchronous replication.

  • Tier 1: Business-Critical Data (e.g., Employee Records, Project Files, Configuration Data)

* RPO: 1-4 Hours

* Description: Data vital for ongoing operations, where minor data loss can be managed. Achieved via frequent snapshots or asynchronous replication.

  • Tier 2: Important Data (e.g., Historical Reports, Non-production Database Snapshots)

* RPO: 4-12 Hours

* Description: Data that can be recreated or whose loss has limited operational impact. Achieved via daily backups.

  • Tier 3: Non-Critical Data (e.g., Logs, Temporary Files, Development Code not yet committed)

* RPO: 24 Hours

* Description: Data that can be easily recovered from existing sources or is not critical to immediate operations. Achieved via daily or less frequent backups.

4. Backup and Data Protection Strategies

A multi-layered approach to data protection ensures resilience against various failure types.

4.1. Backup Types and Frequency

  • Mission-Critical Data (Tier 0 RPO):

* Strategy: Real-time replication (synchronous or asynchronous) to a geographically separate hot standby site or cloud region.

* Frequency: Continuous.

* Example Technologies: Database AlwaysOn Availability Groups, SAN Replication, Cloud native replication services.

  • Business-Critical Data (Tier 1 RPO):

* Strategy: Hourly snapshots/incremental backups with daily full backups.

* Frequency: Hourly (incremental/snapshots), Daily (full).

* Example Technologies: VM snapshots, application-aware backups, block-level incremental backups.

  • Important & Non-Critical Data (Tier 2 & 3 RPO):

* Strategy: Daily incremental backups with weekly full backups.

* Frequency: Daily (incremental), Weekly (full).

* Example Technologies: File-level backups, system image backups.

4.2. Backup Retention Policies

  • Short-Term Retention:

* Daily Backups: Retain for 7-14 days.

* Weekly Backups: Retain for 4 weeks.

  • Long-Term Retention:

* Monthly Backups: Retain for 12 months.

* Annual Backups: Retain for 7 years (or as per regulatory requirements).

  • Archival: Specific data sets may be archived indefinitely on immutable storage.

4.3. Backup Locations

  • On-Premises (Local): Fast recovery for minor incidents. Used for immediate operational restore points.
  • Off-Site (Geographically Separated): Data replicated or transferred to a secondary data center or secure facility at least [e.g., 50 miles] away. Protects against site-wide disasters.
  • Cloud (e.g., AWS S3, Azure Blob, Google Cloud Storage): Cost-effective, scalable, and highly durable storage for long-term retention and disaster recovery. Utilizes multiple availability zones for resilience.

4.4. Backup Verification and Integrity

  • Regular Restore Testing: Conduct quarterly spot checks by restoring random files and databases to an isolated environment to verify data integrity and restorability.
  • Checksum Verification: Automated checks during backup processes to ensure data hasn't been corrupted.
  • Alerting: Implement alerts for failed backups or replication issues.
  • Documentation: Maintain detailed logs of all backup and verification activities.

4.5. Encryption

  • Data at Rest: All backups stored on-premises, off-site, or in the cloud must be encrypted using industry-standard encryption protocols (e.g., AES-256).
  • Data in Transit: Data transferred between primary site, off-site storage, and cloud repositories must be encrypted using secure protocols (e.g., TLS 1.2+).

5. Disaster Recovery Procedures: Failover & Failback

These procedures detail the step-by-step actions required to activate the DR plan, recover systems, and eventually restore operations to the primary site.

5.1. Disaster Detection and Declaration

  1. Detection: Monitoring systems (NOC, SIEM, application performance monitors) detect a critical incident or outage.
  2. Initial Assessment: DR Coordinator and IT Lead assess the scope, severity, and potential duration of the outage.
  3. Declaration Criteria: A disaster is declared if:

* A critical system outage exceeds its defined RTO.

* The primary data center is physically inaccessible or severely damaged.

* There is widespread data corruption or loss impacting critical systems.

* A major cyberattack compromises core infrastructure.

  1. Notification: Once declared, the DR Coordinator formally notifies the DR Team and Executive Management.

5.2. DR Team Activation and Initial Response

  1. DR Team Assembly: DR Coordinator initiates the DR call tree. All team members report to the designated command center (physical or virtual).
  2. Situation Briefing: DR Coordinator provides an initial briefing on the disaster, confirmed impacts, and objectives.
  3. Roles and Responsibilities Confirmation: Each team member confirms their assigned role and immediate tasks.
  4. Secure Communication Channels: Establish dedicated communication channels for the DR team (e.g., conference bridge, secure chat).

5.3. Failover Procedures (Recovery Site Activation)

The following steps are high-level; detailed runbooks will be maintained in an appendix.

  1. Network Infrastructure:

* Activate recovery site network (VPNs, firewalls, routing).

* Update DNS records (internal and external) to point to recovery site IP addresses. TTLs should be low (e.g., 5 minutes) for critical services.

  1. Virtualization/Compute Infrastructure:

* Initiate failover of replicated VMs/containers to the recovery site.

* Provision new compute resources if necessary (e.g., in cloud environments).

  1. Storage & Database Recovery:

* Activate replicated storage volumes or cloud storage.

* Perform database failover (e.g., SQL AlwaysOn to secondary replica, Oracle Data Guard switchover).

* Restore databases from the latest available backups if replication is not viable (this will impact RPO).

* Verify data integrity post-restore/failover.

  1. Application Recovery:

* Start critical applications in the defined recovery order (dependencies first).

* Configure application settings to point to recovery site databases and services.

* Perform post-recovery configuration and health checks.

  1. User Access Restoration:

* Restore authentication services (e.g., Active Directory, identity providers).

* Verify user connectivity and application access from designated access points (e.g., VPN, VDI).

* Communicate access instructions to employees.

  1. Verification and Handover:

* Perform end-to-end testing of critical business processes.

* Validate application functionality and performance.

* Monitor systems for stability and performance.

* Formally hand over operational control to the Operations Team once stable.

5.4. Failback Procedures (Return to Primary Site)

Failback is a planned, controlled process to return operations to the primary data center once it has been fully restored and validated.

  1. Primary Site Restoration:

* Repair or rebuild the primary data center infrastructure.

* Install and configure necessary hardware and software.

* Perform comprehensive testing of the primary site infrastructure.

  1. Data Synchronization:

* Initiate reverse replication or data synchronization from the recovery site back to the restored primary site.

* Ensure all changes made at the recovery site are propagated to the primary site without data loss.

  1. Planned Failback:

* Schedule a maintenance window for the failback operation.

* Communicate downtime to stakeholders.

* Perform a controlled failover from the recovery site back to the primary site.

* Update DNS records and network configurations to point back to the primary site.

  1. Post-Failback Verification:

* Conduct thorough testing of all systems and applications on the primary site.

* Monitor for stability and performance.

* Decommission recovery site resources if they are no longer needed (cost optimization).

  1. Post-Incident Review:

* Conduct a lessons learned session with the DR team and relevant stakeholders.

* Identify areas for improvement in the DRP and recovery procedures.

* Update the DRP based on findings.

6. Communication Plan

Effective communication is paramount during a disaster to manage expectations, coordinate efforts, and minimize panic.

6.1. Internal Communication

  • DR Team:

* Method: Dedicated conference bridge, secure chat (e.g., Microsoft Teams, Slack), emergency SMS group.

* Frequency: Continuous updates during active recovery, hourly formal briefings

gemini Output

Comprehensive Disaster Recovery Plan

Version: 1.0

Date: October 26, 2023

Prepared For: [Customer Name/Organization]


1. Executive Summary

This Disaster Recovery Plan (DRP) outlines the strategies, procedures, and responsibilities required to restore critical business functions and IT systems in the event of a disruptive incident. The primary objective is to minimize downtime, prevent data loss, and ensure business continuity by meeting defined Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs). This plan covers essential infrastructure, applications, and data, detailing backup strategies, failover procedures, communication protocols, and a rigorous testing schedule to maintain readiness.

2. Introduction and Purpose

2.1. Purpose:

The purpose of this DRP is to provide a structured framework for responding to and recovering from major disruptions, including natural disasters, cyberattacks, significant system failures, or other catastrophic events. It aims to:

  • Minimize the duration of service disruption and data loss.
  • Ensure the timely and orderly recovery of critical IT infrastructure and business applications.
  • Protect the organization's reputation and financial stability.
  • Comply with regulatory requirements and industry best practices.

2.2. Scope:

This plan covers the recovery of the following critical IT systems and services:

  • [List specific critical applications, e.g., CRM, ERP, Financial Systems, E-commerce Platform]
  • [List specific critical infrastructure components, e.g., Primary Data Center, Virtualization Platform, Core Network Services, Database Servers, Web Servers]
  • Associated data storage and network connectivity.
  • Personnel and communication processes essential for recovery.

2.3. Assumptions:

  • A dedicated Disaster Recovery (DR) team is established and trained.
  • Off-site backup storage is accessible.
  • Necessary hardware, software licenses, and network connectivity for the DR environment are available.
  • Third-party vendor support agreements are in place and current.
  • Personnel contact information is up-to-date and accessible off-site.

3. Disaster Recovery Team and Roles

A dedicated DR team is essential for effective incident response and recovery. Each member has specific responsibilities and is equipped with the necessary knowledge and authority.

3.1. DR Team Structure and Key Roles:

| Role | Primary Responsibility | Backup | Contact (Primary) | Contact (Backup) |

| :------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------- | :----------------------------- | :----------------------------- |

| DR Coordinator | Overall plan activation, management, communication, and decision-making during a disaster. | [Backup DR Coordinator Name] | [Phone/Email] | [Phone/Email] |

| IT Infrastructure Lead | Recovery of servers, virtualization platforms, storage, and network infrastructure. | [Backup Infra Lead Name] | [Phone/Email] | [Phone/Email] |

| Applications Lead | Recovery and configuration of critical business applications. | [Backup Apps Lead Name] | [Phone/Email] | [Phone/Email] |

| Data & Database Lead | Data restoration, database recovery, and integrity validation. | [Backup Data Lead Name] | [Phone/Email] | [Phone/Email] |

| Network Lead | Restoration of network connectivity, firewall rules, VPNs, and DNS. | [Backup Network Lead Name] | [Phone/Email] | [Phone/Email] |

| Communications Lead | Internal and external communication management, public relations, and stakeholder updates. | [Backup Comms Lead Name] | [Phone/Email] | [Phone/Email] |

| Business Continuity Lead | Coordination with business units, prioritization of business functions, and impact assessment. | [Backup BC Lead Name] | [Phone/Email] | [Phone/Email] |

3.2. Activation Criteria:

The DR Plan will be activated by the DR Coordinator (or their designated backup) upon confirmation of a major incident that:

  • Renders primary IT systems or facilities unavailable for an extended period.
  • Poses a significant threat to data integrity or security that cannot be resolved through standard operational procedures.
  • Is declared a disaster by senior management or emergency services.

4. Critical Systems and Services Analysis

4.1. Business Impact Analysis (BIA) Summary:

The following table identifies critical business functions, the IT systems supporting them, and their prioritization for recovery.

| Business Function | Supporting IT System(s) | Impact if Unavailable (High/Med/Low) | Priority (1=Highest) |

| :------------------------- | :---------------------- | :----------------------------------- | :------------------- |

| [e.g., Order Processing] | [e.g., ERP System] | High | 1 |

| [e.g., Customer Support] | [e.g., CRM System] | High | 1 |

| [e.g., Financial Reporting]| [e.g., Financial App] | Medium | 2 |

| [e.g., Website/E-commerce] | [e.g., Web Servers, DB] | High | 1 |

| [e.g., Email/Collaboration]| [e.g., Exchange/M365] | Medium | 2 |

5. Recovery Time Objective (RTO) and Recovery Point Objective (RPO) Targets

RTO defines the maximum tolerable duration of downtime after an incident. RPO defines the maximum tolerable amount of data loss measured in time.

| Critical System/Service | RTO Target (Hours) | RPO Target (Minutes) | Justification/Comment |

| :------------------------- | :----------------- | :------------------- | :----------------------------------------------------------------------------------------------------------------------- |

| ERP System | 4 | 15 | Critical for order fulfillment, inventory, and finance. High business impact for every hour of downtime. |

| CRM System | 4 | 15 | Essential for customer interaction and sales. Data loss impacts customer history and ongoing operations. |

| E-commerce Platform | 2 | 5 | Direct revenue generation. Every minute of downtime results in lost sales. Real-time data sync is crucial. |

| Financial Reporting System | 8 | 60 | Required for daily financial operations and compliance. Can tolerate slightly more downtime than core revenue systems. |

| Core Network Services | 1 | 0 | Foundation for all IT operations. Must be restored immediately to enable other recoveries. |

| Database Servers | 2 | 5 | Underpins most critical applications. Data integrity is paramount. |

6. Backup and Data Protection Strategies

6.1. Backup Types and Frequency:

  • Full Backups: Performed weekly on all critical systems and data. Stored off-site.
  • Incremental Backups: Performed nightly for all critical systems, capturing changes since the last full or incremental backup. Stored off-site.
  • Database Transaction Logs: Continuously shipped (e.g., every 5 minutes) for critical databases (e.g., ERP, CRM databases) to enable point-in-time recovery.
  • VM Snapshots: Taken hourly for critical virtual machines (e.g., application servers) and retained for 24 hours.

6.2. Backup Retention Policies:

  • Daily Incremental: 7 days
  • Weekly Full: 4 weeks
  • Monthly Full: 12 months (archived)
  • Yearly Full: 7 years (archived)

6.3. Backup Locations:

  • Primary On-site Storage: Local NAS/SAN for immediate recovery (short-term retention).
  • Primary Off-site Storage: Encrypted cloud storage (e.g., AWS S3, Azure Blob, Google Cloud Storage) for long-term retention and disaster recovery.
  • Secondary Off-site Storage: Physically separate data center or another cloud region for extreme disaster scenarios.

6.4. Data Encryption:

  • All data at rest in backup locations is encrypted using AES-256 encryption.
  • All data in transit during backup operations is encrypted using TLS 1.2 or higher.

6.5. Data Integrity and Verification:

  • Automated daily checks for backup job completion and success.
  • Monthly verification of random backup sets by attempting a restore to a test environment.
  • Annual full restore test of critical systems as part of the DR testing schedule.

7. Failover and Recovery Procedures

This section outlines detailed, step-by-step procedures for failover to the DR site and subsequent recovery.

7.1. Disaster Declaration and Activation:

  1. Incident Detection: Monitoring systems alert DR Coordinator of critical system failure.
  2. Assessment & Verification: DR Coordinator assesses the incident severity and potential impact.
  3. Disaster Declaration: DR Coordinator (or designated backup) formally declares a disaster and authorizes DRP activation.
  4. DR Team Notification: DR Coordinator activates the communication plan (Section 8) to notify the DR team and key stakeholders.
  5. Relocation (if necessary): If the primary facility is inaccessible, DR team members move to the designated DR command center or work remotely.

7.2. Failover Procedures (Example for a Critical Application - ERP System):

  • DR Site: [e.g., Azure West US Region / Secondary Data Center]
  • Target RTO: 4 hours
  • Target RPO: 15 minutes
  1. Verify DR Site Readiness (IT Infrastructure Lead):

* Confirm network connectivity between DR site and corporate VPN.

* Ensure virtual infrastructure (hypervisors, storage) is operational.

* Verify firewall rules and security groups are configured for ERP access.

  1. Restore Core Network Services (Network Lead):

* Activate DR site VPN tunnels.

* Update DNS records (internal and external) to point to DR site IP addresses for ERP services (TTL reduced to 5 mins pre-disaster).

* Configure DR site load balancers/application gateways.

  1. Restore Database Servers (Data & Database Lead):

* Provision new database server VMs at DR site if not pre-provisioned.

* Restore the latest full database backup.

* Apply incremental backups and transaction logs to achieve RPO.

* Perform database integrity checks and bring online.

  1. Restore Application Servers (Applications Lead):

* Provision new application server VMs at DR site if not pre-provisioned.

* Restore the latest application server image/backup.

* Install and configure ERP application software.

* Update application configuration to point to the restored database.

  1. Restore Web Servers (Applications Lead):

* Provision new web server VMs at DR site.

* Restore web application content and configurations.

* Configure web servers to connect to the DR application servers.

  1. Functional Testing (Applications Lead, Business Continuity Lead):

* Perform internal functional tests of the ERP application.

* Engage key business users for user acceptance testing (UAT) of critical functions.

  1. Service Go-Live (DR Coordinator):

* Once UAT is successful, DR Coordinator authorizes re-opening of services to end-users.

* Announce system availability via communication channels.

7.3. Data Synchronization and Failback Procedures:

  1. Monitor Primary Site: Continuously monitor the primary site for recovery and stability.
  2. Replication Setup: Once the primary site is stable, establish reverse replication from the DR site back to the primary site.
  3. Delta Synchronization: Allow sufficient time for all changes made at the DR site to synchronize back to the primary site.
  4. Failback Planning: Schedule the failback during a low-impact maintenance window.
  5. Failback Execution:

* Briefly halt services at the DR site.

* Verify data consistency on the primary site.

* Switch DNS entries and network routes back to the primary site.

* Perform functional testing on the primary site.

  1. Post-Failback: Decommission temporary DR resources and revert configurations.

8. Communication Plan

Effective communication is critical during a disaster. This plan defines how information will be disseminated internally and externally.

8.1. Internal Communication (DR Team, Employees, Management):

  • Initial Notification: SMS, dedicated crisis communication app (e.g., PagerDuty, Everbridge), and personal phone calls to DR team members within 15 minutes of declaration.
  • Regular Updates: Hourly updates to the DR team and executive management via conference calls, dedicated chat channels, or a crisis management portal.
  • Employee Communication: Regular updates to all employees (e.g., every 2-4 hours) via company-wide email, intranet portal, or dedicated status page, informing them of system status and expected recovery times.
  • Pre-approved Templates: Utilize pre-approved communication templates for various stages of the disaster (e.g., "Incident Declared," "Recovery in Progress," "Services Restored").

8.2. External Communication (Customers, Partners, Vendors, Media, Regulatory Bodies):

  • Customers:

* Initial Notification: Public status page, email blast, and social media updates within 1 hour of impact, confirming an issue and that recovery efforts are underway.

* Regular Updates: Every 2-4 hours via status page, email, and social media, providing progress updates and estimated resolution times.

* Resolution Notification: Final notification once services are fully restored.

  • Partners/Vendors: Direct email and phone calls from the Communications Lead to critical partners/vendors, informing them of the situation and potential impact on shared services.
  • Media: All media inquiries must be directed to the Communications Lead. Only pre-approved statements will be issued.
  • Regulatory Bodies: If required, the Legal/Compliance team (in coordination with the Communications Lead) will notify relevant regulatory bodies within the specified timeframe (e.g., 72 hours for data breaches under GDPR).

8.3. Communication Channels and Tools:

  • Primary: Dedicated crisis management platform (e.g., Microsoft Teams, Slack with dedicated channels), conference bridge, SMS alerts.
  • Secondary: Personal mobile phones, satellite phones (for extreme scenarios).
  • Public Status Page: [URL to Status Page]
  • Emergency Contact List: Stored off-site and in a secure, accessible cloud location.

9. Testing and Maintenance Schedule

Regular testing and continuous maintenance are crucial to ensure the DRP remains effective and current.

9.1. Types of Tests:

  • Tabletop Exercises (Quarterly): Discussion-based walk-through of the
disaster_recovery_plan.md
Download as Markdown
Copy all content
Full output as text
Download ZIP
IDE-ready project ZIP
Copy share link
Permanent URL for this run
Get Embed Code
Embed this result on any website
Print / Save PDF
Use browser print dialog
"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react' import ReactDOM from 'react-dom/client' import App from './App' import './index.css' ReactDOM.createRoot(document.getElementById('root')!).render( ) "); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react' import './App.css' function App(){ return(

"+slugTitle(pn)+"

Built with PantheraHive BOS

) } export default App "); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e} .app{min-height:100vh;display:flex;flex-direction:column} .app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px} h1{font-size:2.5rem;font-weight:700} "); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` ## Open in IDE Open the project folder in VS Code or WebStorm. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "build": "vue-tsc -b && vite build", "preview": "vite preview" }, "dependencies": { "vue": "^3.5.13", "vue-router": "^4.4.5", "pinia": "^2.3.0", "axios": "^1.7.9" }, "devDependencies": { "@vitejs/plugin-vue": "^5.2.1", "typescript": "~5.7.3", "vite": "^6.0.5", "vue-tsc": "^2.2.0" } } '); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite' import vue from '@vitejs/plugin-vue' import { resolve } from 'path' export default defineConfig({ plugins: [vue()], resolve: { alias: { '@': resolve(__dirname,'src') } } }) "); zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]} '); zip.file(folder+"tsconfig.app.json",'{ "compilerOptions":{ "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"], "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true, "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue", "strict":true,"paths":{"@/*":["./src/*"]} }, "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"] } '); zip.file(folder+"env.d.ts","/// "); zip.file(folder+"index.html"," "+slugTitle(pn)+"
"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue' import { createPinia } from 'pinia' import App from './App.vue' import './assets/main.css' const app = createApp(App) app.use(createPinia()) app.mount('#app') "); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue"," "); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547} "); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` Open in VS Code or WebStorm. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "scripts": { "ng": "ng", "start": "ng serve", "build": "ng build", "test": "ng test" }, "dependencies": { "@angular/animations": "^19.0.0", "@angular/common": "^19.0.0", "@angular/compiler": "^19.0.0", "@angular/core": "^19.0.0", "@angular/forms": "^19.0.0", "@angular/platform-browser": "^19.0.0", "@angular/platform-browser-dynamic": "^19.0.0", "@angular/router": "^19.0.0", "rxjs": "~7.8.0", "tslib": "^2.3.0", "zone.js": "~0.15.0" }, "devDependencies": { "@angular-devkit/build-angular": "^19.0.0", "@angular/cli": "^19.0.0", "@angular/compiler-cli": "^19.0.0", "typescript": "~5.6.0" } } '); zip.file(folder+"angular.json",'{ "$schema": "./node_modules/@angular/cli/lib/config/schema.json", "version": 1, "newProjectRoot": "projects", "projects": { "'+pn+'": { "projectType": "application", "root": "", "sourceRoot": "src", "prefix": "app", "architect": { "build": { "builder": "@angular-devkit/build-angular:application", "options": { "outputPath": "dist/'+pn+'", "index": "src/index.html", "browser": "src/main.ts", "tsConfig": "tsconfig.app.json", "styles": ["src/styles.css"], "scripts": [] } }, "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"} } } } } '); zip.file(folder+"tsconfig.json",'{ "compileOnSave": false, "compilerOptions": {"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]}, "references":[{"path":"./tsconfig.app.json"}] } '); zip.file(folder+"tsconfig.app.json",'{ "extends":"./tsconfig.json", "compilerOptions":{"outDir":"./dist/out-tsc","types":[]}, "files":["src/main.ts"], "include":["src/**/*.d.ts"] } '); zip.file(folder+"src/index.html"," "+slugTitle(pn)+" "); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser'; import { appConfig } from './app/app.config'; import { AppComponent } from './app/app.component'; bootstrapApplication(AppComponent, appConfig) .catch(err => console.error(err)); "); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; } body { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; } "); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core'; import { RouterOutlet } from '@angular/router'; @Component({ selector: 'app-root', standalone: true, imports: [RouterOutlet], templateUrl: './app.component.html', styleUrl: './app.component.css' }) export class AppComponent { title = '"+pn+"'; } "); zip.file(folder+"src/app/app.component.html","

"+slugTitle(pn)+"

Built with PantheraHive BOS

"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1} "); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core'; import { provideRouter } from '@angular/router'; import { routes } from './app.routes'; export const appConfig: ApplicationConfig = { providers: [ provideZoneChangeDetection({ eventCoalescing: true }), provideRouter(routes) ] }; "); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router'; export const routes: Routes = []; "); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install ng serve # or: npm start ``` ## Build ```bash ng build ``` Open in VS Code with Angular Language Service extension. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}