Disaster Recovery Plan

This document outlines a comprehensive Disaster Recovery Plan (DRP) designed to enable rapid recovery of critical business operations and IT infrastructure in the event of a disruptive incident. It defines recovery objectives, strategies, procedures, and communication protocols to minimize downtime and data loss, ensuring business continuity.



1. Introduction and Purpose

This Disaster Recovery Plan (DRP) provides a structured approach for an organization to respond to and recover from disruptive events that impact its IT infrastructure and operations. The primary goal is to restore critical business functions within predefined timeframes, minimize data loss, and ensure the ongoing availability of essential services. This plan serves as a living document, subject to regular review and updates.

2. Key Definitions

  • Disaster Recovery (DR): The process of restoring data, hardware, and software in the aftermath of a natural or human-induced catastrophe.
  • Business Continuity Plan (BCP): A comprehensive plan that ensures business functions can continue during and after a disaster. DRP is a critical component of BCP.
  • Recovery Time Objective (RTO): The maximum tolerable duration for a system, application, or service to be unavailable following an incident. It dictates how quickly systems must be restored.
  • Recovery Point Objective (RPO): The maximum tolerable amount of data loss, measured in time, that an application or system can sustain during a disaster. It dictates how frequently data must be backed up.
  • Failover: The process of switching to a redundant or standby system upon the failure or abnormal termination of the previously active system.
  • Failback: The process of restoring operations to the primary system or site after the disaster has been resolved and the primary environment is fully recovered.

3. Scope

This DRP covers the recovery of critical IT infrastructure, applications, and data essential for the organization's core business operations. This includes, but is not limited to:

  • Core Business Applications: (e.g., ERP, CRM, Financial Systems, E-commerce Platforms)
  • Data Services: (e.g., Databases, File Shares, Document Management Systems)
  • Communication Systems: (e.g., Email, Internal Chat, VoIP)
  • Network Infrastructure: (e.g., Routers, Switches, Firewalls, VPN access)
  • Server Infrastructure: (e.g., Virtualization platforms, Operating Systems)
  • End-User Computing: (e.g., VDI, workstation restoration procedures)

Specific systems and their criticality are detailed in the Appendix (e.g., "Critical Systems Inventory").

4. Recovery Objectives (RTO/RPO Targets)

Recovery objectives are defined based on Business Impact Analysis (BIA) and criticality assessments. These targets represent the maximum acceptable downtime (RTO) and data loss (RPO) for different tiers of systems.

| System Tier | System/Application Examples | Recovery Time Objective (RTO) | Recovery Point Objective (RPO) |
| :---------- | :-------------------------- | :---------------------------- | :----------------------------- |
| Tier 0 | Core Production Databases, Primary ERP/CRM, E-commerce | < 1 Hour | < 15 Minutes |
| Tier 1 | Mission-Critical Applications (e.g., Email, VoIP, Key Financial Reporting) | 2 - 4 Hours | < 1 Hour |
| Tier 2 | Business-Critical Applications (e.g., Internal Portals, HR Systems, Development Environments) | 8 - 24 Hours | < 4 Hours |
| Tier 3 | Support Systems, Non-Critical Applications, Archive Data | 24 - 72 Hours | < 24 Hours |

Note: Specific RTO/RPO for each application/system will be maintained in the "Critical Systems Inventory" appendix.
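To make these targets operational, here is a minimal sketch (Python) of how an incident's measured downtime and data loss can be checked against the tier targets above. The tier values mirror the table's upper bounds; the function, timestamps, and data structure are hypothetical illustrations, not existing tooling.

```python
from datetime import datetime, timedelta

# Hypothetical targets mirroring the upper bounds of the table above.
TIER_TARGETS = {
    "tier0": {"rto": timedelta(hours=1),  "rpo": timedelta(minutes=15)},
    "tier1": {"rto": timedelta(hours=4),  "rpo": timedelta(hours=1)},
    "tier2": {"rto": timedelta(hours=24), "rpo": timedelta(hours=4)},
    "tier3": {"rto": timedelta(hours=72), "rpo": timedelta(hours=24)},
}

def check_recovery(tier, outage_start, service_restored, last_good_copy):
    """Compare measured downtime and data loss against a tier's targets."""
    targets = TIER_TARGETS[tier]
    downtime = service_restored - outage_start   # how long the service was down
    data_loss = outage_start - last_good_copy    # worst-case data-loss window
    return {
        "rto_met": downtime <= targets["rto"],
        "rpo_met": data_loss <= targets["rpo"],
        "downtime": downtime,
        "data_loss": data_loss,
    }

# Hypothetical Tier 0 incident: outage at 02:00, restored at 02:45,
# last replicated copy taken at 01:50.
print(check_recovery(
    "tier0",
    outage_start=datetime(2026, 4, 1, 2, 0),
    service_restored=datetime(2026, 4, 1, 2, 45),
    last_good_copy=datetime(2026, 4, 1, 1, 50),
))  # rto_met=True (45 min < 1 h), rpo_met=True (10 min < 15 min)
```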

5. Disaster Declaration Criteria

A disaster is declared when an incident significantly impairs the organization's ability to conduct critical business operations, and recovery cannot be achieved through standard incident management procedures. Criteria for declaration typically include:

  • Loss of a primary data center or critical IT facility.
  • Widespread system failure affecting Tier 0 or Tier 1 applications beyond RTO.
  • Significant data corruption or loss that cannot be recovered locally.
  • Major network outage affecting connectivity to critical services.
  • Severe environmental events (fire, flood, earthquake) rendering the primary site unusable.
  • Cyber-attack (e.g., ransomware) that encrypts or destroys critical data and systems.

The DR Coordinator (or designated alternate) is responsible for initiating the disaster declaration process in consultation with the Crisis Management Team.

6. Roles and Responsibilities

A well-defined command structure is crucial during a disaster.

  • Crisis Management Team (CMT):
    * Chair: CEO / COO
    * Members: Senior Management (IT, Operations, HR, Legal, Communications, Finance)
    * Responsibilities: Overall strategic decision-making, external communications, financial implications, legal compliance, employee welfare.
  • DR Coordinator:
    * Role: Head of IT Operations / Designated Senior IT Manager
    * Responsibilities: Oversee DR plan activation, coordinate all recovery efforts, communicate status to the CMT, manage the DR budget.
  • Technical Recovery Teams (each team executes the technical recovery steps for its domain):
    * Network Team: Restore network connectivity, VPNs, firewalls.
    * Server Team: Recover virtual/physical servers and operating systems.
    * Database Team: Restore and recover databases.
    * Application Team: Deploy and configure business applications.
    * Data Storage Team: Manage storage array recovery and data replication.
  • Communication Team:
    * Role: Marketing / PR Manager, HR Representative
    * Responsibilities: Manage internal and external communications as per the communication plan.

Detailed contact lists for all roles are maintained in the Appendix.

7. Backup and Data Recovery Strategies

A robust backup strategy is the foundation of any DRP.

  • Backup Types & Frequency:
    * Full Backups: Weekly (e.g., Sunday night) for all critical systems.
    * Incremental Backups: Daily (nightly) for all critical systems, capturing changes since the last backup.
    * Differential Backups: Daily (nightly) for selected Tier 0/1 systems, capturing changes since the last full backup.
    * Real-time Replication: For Tier 0 databases and critical file shares (e.g., synchronous replication to a secondary site or cloud region).
  • Retention Policies:
    * Daily backups: 30 days
    * Weekly backups: 3 months
    * Monthly backups: 1 year
    * Annual backups: 7 years (for regulatory compliance)
  • Backup Storage Locations:
    * On-site: Short-term recovery, accessible for quick restores.
    * Off-site (secure facility): Encrypted tapes/disks rotated to a geographically separate, secure location.
    * Cloud storage (e.g., AWS S3, Azure Blob): Primary off-site storage for critical data, leveraging object storage with redundancy and versioning. Data is encrypted in transit and at rest.
  • Data Encryption: All backups, whether in transit or at rest, are encrypted using industry-standard protocols (e.g., AES-256).
  • Data Restoration Procedures:
    * Documented procedures for restoring individual files, databases, and entire systems.
    * Regular verification of backup integrity and restorability (see the sketch below).
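One way to automate that verification step is sketched below, assuming a hypothetical manifest of SHA-256 digests written at backup time (the manifest format and paths are illustrative, not an existing tool): a test restore passes only if every restored file hashes to the digest recorded when the backup was taken.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large backups need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(manifest_path: Path, restore_root: Path) -> list:
    """Return restored files that are missing or do not match the manifest.

    Assumes a hypothetical manifest written at backup time, e.g.:
      {"files": [{"path": "db/orders.dump", "sha256": "ab12..."}]}
    """
    manifest = json.loads(manifest_path.read_text())
    failures = []
    for entry in manifest["files"]:
        restored = restore_root / entry["path"]
        if not restored.exists() or sha256_of(restored) != entry["sha256"]:
            failures.append(entry["path"])
    return failures

# Hypothetical usage during a scheduled restore test:
# failures = verify_restore(Path("backup-manifest.json"), Path("/mnt/restore"))
# if failures:
#     raise SystemExit(f"Integrity check failed for: {failures}")
```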

8. Failover Procedures

Failover procedures detail the step-by-step process for activating redundant systems and services at the recovery site or cloud environment.

  • Activation Checklist:
    1. Disaster Declaration: Confirm disaster status and activate the DRP.
    2. Notify Teams: Alert all DR team members.
    3. Isolate Primary Site (if applicable): Prevent further data corruption or access to compromised systems.
    4. Initiate Recovery Site Activation:
       * Power on/provision recovery infrastructure (servers, network devices).
       * Verify network connectivity to the recovery site.
    5. Restore/Replicate Data: Ensure the latest available data is present at the recovery site.
    6. Start Critical Services: Boot servers and start applications in order of criticality (Tier 0 first).
    7. Configure DNS/Load Balancers: Redirect user traffic to the recovery site.
    8. Validation & Testing: Perform functional tests to ensure applications are operational.
    9. Communicate Status: Inform internal stakeholders and external parties.

  • System-Specific Failover Steps (Examples):
    * Network Infrastructure:
      * Activate redundant firewalls/routers at the recovery site.
      * Update DNS records (internal/external) to point to recovery site IP addresses (see the DNS sketch below).
      * Establish VPN tunnels for remote access.
      * Verify internet egress and ingress.
    * Virtual Servers (VMware/Hyper-V/Cloud IaaS):
      * Initiate VM replication failover (e.g., VMware Site Recovery Manager, Azure Site Recovery).
      * Power on VMs at the recovery site.
      * Reconfigure IP addresses/network settings if necessary.
      * Verify VM accessibility and performance.
    * Databases (e.g., SQL Server, Oracle, PostgreSQL):
      * Activate database replication (e.g., AlwaysOn Availability Groups, Data Guard, or managed cloud database failover).
      * Promote the standby database to primary.
      * Run database consistency checks.
      * Verify application connectivity to the new primary database.
    * Applications:
      * Deploy application code/binaries to recovered servers.
      * Configure application settings (database connection strings, environment variables).
      * Perform functional testing for all critical application modules.
      * Ensure integration points with other systems are active.
    * Storage:
      * Activate storage replication or restore from backups to recovery site storage.
      * Map LUNs/volumes to recovered servers.
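Among these steps, DNS redirection is particularly amenable to scripting. The sketch below assumes the zone is hosted in Amazon Route 53; the hosted zone ID, record name, and recovery-site address are hypothetical placeholders, and other DNS providers expose equivalent APIs.

```python
import boto3  # AWS SDK; assumes credentials are already configured

route53 = boto3.client("route53")

# Hypothetical values; substitute your own zone, record, and DR address.
HOSTED_ZONE_ID = "Z0000000EXAMPLE"
RECORD_NAME = "app.example.com."
RECOVERY_SITE_IP = "203.0.113.10"

def point_dns_at_recovery_site():
    """UPSERT the A record so new lookups resolve to the recovery site.

    The TTL here is 60 s to keep cutover fast; it must already be low
    before the disaster, since resolvers cache the previous TTL.
    """
    return route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Comment": "DR failover: redirect traffic to recovery site",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": RECOVERY_SITE_IP}],
                },
            }],
        },
    )

# response = point_dns_at_recovery_site()
# print(response["ChangeInfo"]["Status"])  # "PENDING" until the change propagates
```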

9. Failback Procedures

Failback is the process of returning operations to the primary site once it has been fully restored and deemed stable.

  • Failback Prerequisites:
    * Primary site infrastructure fully restored and tested.
    * All necessary data synchronized from the recovery site to the primary site.
    * Approval from the Crisis Management Team.
  • Failback Steps:
    1. Prepare Primary Site: Ensure all necessary hardware, software, and network components are ready.
    2. Synchronize Data: Replicate data changes from the recovery site back to the primary site (reverse replication). This step is critical to prevent data loss (a verification sketch follows this list).
    3. Schedule Downtime: Plan a maintenance window for the failback process, if required.
    4. Stop Operations at Recovery Site: Gracefully shut down applications and services at the recovery site.
    5. Activate Primary Site: Direct traffic back to the primary site (e.g., update DNS, reconfigure load balancers).
    6. Verify Operations: Conduct thorough testing of all systems and applications at the primary site.
    7. Deactivate Recovery Site (Optional): Once confident in primary site operations, power down or decommission recovery site resources to save costs.
    8. Post-Failback Review: Conduct a lessons-learned session.
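To support step 2, the sketch below offers one way to confirm that reverse replication has converged before cutover. It assumes both sites' critical file shares are reachable as local mount points (the paths are hypothetical); database replication would instead be checked with the engine's own lag metrics.

```python
import hashlib
from pathlib import Path

def digest_tree(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    digests = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            # Reads whole files; fine for a spot check, stream for large data.
            digests[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return digests

def replication_diff(recovery_root: Path, primary_root: Path) -> list:
    """Return paths that differ between sites or exist on only one side."""
    recovery = digest_tree(recovery_root)
    primary = digest_tree(primary_root)
    return sorted(
        p for p in set(recovery) | set(primary)
        if recovery.get(p) != primary.get(p)
    )

# Hypothetical mount points for the two sites' critical shares:
# diff = replication_diff(Path("/mnt/dr-site/shares"), Path("/mnt/primary/shares"))
# if diff:
#     raise SystemExit(f"Reverse replication incomplete: {diff[:10]}")
```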

10. Communication Plan

Effective communication is paramount during a disaster to manage expectations and provide timely updates.

  • Internal Communication:
    * Audience: DR Team, Employees, Management, Board of Directors.
    * Methods:
      * Emergency notification system (SMS, automated calls, email).
      * Dedicated internal status page/portal.
      * Team-specific chat channels (e.g., Slack, Teams).
      * Regular conference calls/briefings.
    * Templates: Pre-approved templates for initial notification, status updates, and all-clear messages (a rendering sketch follows at the end of this section).
  • External Communication:
    * Audience: Customers, Vendors, Partners, Regulators, Media, Public.
    * Methods:
      * Public website status page.
      * Dedicated customer email updates.
      * Social media channels (e.g., Twitter, LinkedIn).
      * Press releases (if applicable).
      * Dedicated hotline for critical customer inquiries.
    * Templates: Pre-approved statements for various scenarios, including initial outage notification, estimated recovery times, and full recovery announcements.
  • Key Information to Communicate:
    * Nature of the incident (high-level, non-technical).
    * Affected services/systems.
    * Estimated time to recovery (ETR).
    * Actions being taken.
    * Impact on customers/partners.
    * Contact information for inquiries.

Detailed contact lists for internal and external stakeholders are maintained in the Appendix.
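Pre-approved templates are most useful when they can be filled in and dispatched in seconds. The sketch below uses only Python's standard library with a hypothetical SMTP relay and distribution list; a production deployment would typically hand the rendered message to an emergency notification service instead.

```python
import smtplib
from email.message import EmailMessage
from string import Template

# Hypothetical pre-approved status-update template.
STATUS_UPDATE = Template(
    "Status update $seq for the ongoing incident.\n"
    "Affected services: $services\n"
    "Current status: $status\n"
    "Estimated time to recovery: $etr\n"
    "Next update by: $next_update\n"
)

def send_status_update(seq, services, status, etr, next_update):
    """Render the template and email it to the internal distribution list."""
    msg = EmailMessage()
    msg["Subject"] = f"[DR] Incident status update #{seq}"
    msg["From"] = "dr-comms@example.com"    # hypothetical sender address
    msg["To"] = "all-staff@example.com"     # hypothetical distribution list
    msg.set_content(STATUS_UPDATE.substitute(
        seq=seq, services=services, status=status,
        etr=etr, next_update=next_update,
    ))
    with smtplib.SMTP("smtp.example.com") as smtp:  # hypothetical relay
        smtp.send_message(msg)

# send_status_update(2, "ERP, E-commerce", "Failover in progress",
#                    "14:00 UTC", "13:00 UTC")
```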

11. Testing and Maintenance Schedule

Regular testing and maintenance ensure the DRP remains effective and up-to-date.

  • Testing Types:
    * Tabletop Exercises (Annually): A facilitated discussion of the DRP, reviewing roles, responsibilities, and decision points without actual system activation.
    * Component-Level Failover Tests (Quarterly): Testing specific system components (e.g., database failover, single application recovery).
    * Simulated Full DR Drills (Annually): A full-scale failover exercise to the recovery site, validating end-to-end recovery procedures, RTO/RPO attainment, and team readiness, followed by failback to the primary site.
  • Plan Maintenance: The DRP is reviewed and updated at least annually, after every test, and after any significant change to infrastructure, applications, or personnel, consistent with its status as a living document (Section 1).
