Disaster Recovery Plan
Run ID: 69ccec153e7fb09ff16a65712026-04-01Infrastructure
PantheraHive BOS
BOS Dashboard

Generate a comprehensive disaster recovery plan with RTO/RPO targets, backup strategies, failover procedures, communication plans, and testing schedules.

Comprehensive Disaster Recovery Plan

This document outlines a comprehensive Disaster Recovery Plan (DRP) designed to ensure business continuity and minimize the impact of disruptive events on critical IT systems and operations. It details strategies for data backup, system recovery, communication protocols, and regular testing to maintain organizational resilience.


1. Introduction and Purpose

The purpose of this Disaster Recovery Plan (DRP) is to provide a structured approach to resuming critical business operations and IT services following a disaster or significant disruption. This plan aims to minimize downtime, prevent data loss, and ensure the rapid restoration of essential functions, thereby safeguarding business continuity and stakeholder confidence.

Scope: This DRP covers all critical IT infrastructure, applications, and data essential for core business operations. It addresses various disaster scenarios, including natural disasters, cyberattacks, major equipment failures, and human error.


2. Recovery Objectives: RTO and RPO Targets

Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are critical metrics defining the acceptable downtime and data loss following a disaster. These targets are critical for prioritizing recovery efforts and selecting appropriate technologies.

2.1. Definitions

  • Recovery Time Objective (RTO): The maximum acceptable duration of time that a business process or system can be down following an incident before significant damage is incurred.
  • Recovery Point Objective (RPO): The maximum acceptable amount of data (measured in time) that can be lost from an IT service due to a major incident. It dictates the frequency of data backups.

2.2. Targeted RTO/RPO Matrix

The following table outlines general RTO/RPO targets. These should be further refined based on a detailed Business Impact Analysis (BIA) for each critical system and application.

| System/Application Category | Example Systems/Data | RTO (Time) | RPO (Time) | Justification |

| :-------------------------- | :------------------- | :--------- | :--------- | :------------ |

| Tier 1: Mission-Critical | CRM, ERP, Core Databases, Financial Systems, Primary Websites | 1-4 Hours | 0-1 Hour | Direct revenue impact, legal/regulatory compliance, customer-facing |

| Tier 2: Business-Critical | Email, File Servers, HR Systems, Development Environments | 4-24 Hours | 1-4 Hours | Significant operational impact, internal productivity |

| Tier 3: Business-Support | Test/Dev environments, Non-essential internal applications, Archival Data | 24-72 Hours | 4-24 Hours | Minimal immediate impact, can tolerate longer downtime |

| Tier 4: Non-Critical | Legacy archives, Non-essential tools | > 72 Hours | > 24 Hours | No immediate operational impact, low priority for recovery |

Actionable Insight: A detailed BIA must be conducted to classify all applications and data into these tiers and assign specific RTO/RPO values.


3. Backup Strategies

A robust backup strategy is fundamental to achieving RPO targets and ensuring data recoverability. This section outlines the approach to data backup, storage, and retention.

3.1. Backup Types and Frequency

  • Full Backups: Complete copies of all selected data. Performed weekly for critical systems.
  • Incremental Backups: Copies only data that has changed since the last backup (full or incremental). Performed daily for critical systems.
  • Differential Backups: Copies all data that has changed since the last full backup. Performed daily for critical systems.
  • Database Transaction Logs: Continuous logging for critical databases to enable point-in-time recovery and achieve near-zero RPO.

3.2. Backup Scope

  • Operating Systems: Full images of critical servers.
  • Applications: Application binaries, configurations, and associated data.
  • Databases: Full database backups, transaction logs, and schema definitions.
  • User Data: All shared drives, user home directories, and critical endpoint data.
  • Configuration Files: Network device configurations, firewall rules, cloud configurations (Infrastructure as Code where applicable).

3.3. Backup Storage Locations (3-2-1 Rule)

Adherence to the 3-2-1 backup rule is mandatory:

  • 3 Copies of Data: Maintain at least three copies of your data (the original and two backups).
  • 2 Different Media Types: Store backups on at least two different types of storage media (e.g., local disk, tape, cloud storage).
  • 1 Offsite Copy: Keep at least one copy of the backup data in an offsite location.

Primary Storage: On-premises SAN/NAS, local disks.

Secondary Storage:

  • Nearline Storage: Separate on-premises storage array or dedicated backup server.
  • Offsite Storage (Cloud): Encrypted backups stored in a geographically distinct cloud region (e.g., AWS S3 Glacier, Azure Blob Storage). This serves as the primary disaster recovery site for data.
  • Offsite Storage (Physical - for critical archives): Secure, fireproof vault for long-term archival tapes or hard drives, if applicable.

3.4. Retention Policies

| Data Type/System | On-site Retention | Off-site Retention | Archival Retention |

| :--------------- | :---------------- | :----------------- | :----------------- |

| Mission-Critical | 7 days (daily incrementals, weekly fulls) | 30 days (daily incrementals, weekly fulls) | 7 years (monthly/quarterly snapshots) |

| Business-Critical | 7 days (daily incrementals, weekly fulls) | 14 days (daily incrementals, weekly fulls) | 3 years (monthly snapshots) |

| Business-Support | 3 days (daily incrementals, weekly fulls) | 7 days (weekly fulls) | 1 year (quarterly snapshots) |

Encryption: All backup data, both in transit and at rest, must be encrypted using industry-standard protocols (e.g., AES-256).

Integrity Checks: Regular automated checks will be performed on backup sets to verify their integrity and recoverability.


4. Failover and Recovery Procedures

This section details the step-by-step procedures for failover to redundant systems and the subsequent recovery of services.

4.1. Disaster Declaration and Initial Assessment

  1. Detection: Incident detection via monitoring systems, user reports, or external alerts.
  2. Declaration: Designated DRP Coordinator (e.g., Head of IT, CIO) declares a disaster based on predefined criteria (e.g., loss of critical service for > RTO, major data breach).
  3. Severity Assessment: Assess the scope and impact of the disaster to determine which systems are affected and the required recovery strategy.
  4. Team Activation: Activate the Disaster Recovery Team (DRT) and relevant sub-teams.

4.2. Recovery Site Strategy

  • Hot Site (Primary DR Strategy): Utilize cloud-based replication (e.g., AWS CloudFormation, Azure Site Recovery, VMware SRM) to maintain near real-time copies of critical systems in a separate geographical region. This allows for rapid failover with minimal downtime.
  • Warm Site (Secondary DR Strategy): For less critical systems, maintain standby hardware and network infrastructure in a secondary data center or cloud region, requiring some configuration and data restoration post-disaster.
  • Cold Site (Fallback): In extreme cases, a cold site provides basic infrastructure (space, power, cooling) where equipment and data would need to be procured and installed. This is a last resort.

4.3. Failover Procedures (High-Level)

  1. Isolate Affected Systems: Disconnect compromised systems to prevent further damage or propagation.
  2. Activate DR Environment: Initiate automated failover scripts or manual procedures to bring up replicated systems in the recovery site.

* Verify network connectivity to the DR environment.

* Bring up virtual machines/containers in the correct boot order.

* Restore database backups and apply transaction logs to achieve RPO.

  1. DNS Updates: Update DNS records to point to the IP addresses of the services running in the recovery site.
  2. Service Verification: Thoroughly test all critical applications and services in the DR environment.

* Connectivity tests.

* Application functionality tests.

* User acceptance testing (UAT) with a small group of business users.

  1. User Redirection: Once verified, redirect all users to the recovered services.

4.4. Failback Procedures (High-Level)

  1. Restore Primary Site: Repair or rebuild the primary data center/environment.
  2. Synchronize Data: Replicate data from the DR environment back to the restored primary site. This must be carefully managed to avoid data loss.
  3. Scheduled Failback: Plan a controlled failback during a low-impact window.
  4. DNS Updates: Update DNS records back to the primary site.
  5. Service Verification: Verify all services in the primary site.
  6. Deactivate DR Environment: Once failback is complete and stable, decommission the DR environment or revert it to standby mode.

Detailed Runbooks: Comprehensive, step-by-step runbooks will be developed and maintained for each critical system's failover and failback procedures. These will include dependencies, contact information, and troubleshooting steps.


5. Communication Plan

Effective communication is paramount during a disaster to manage expectations, coordinate efforts, and maintain confidence among stakeholders.

5.1. Crisis Communication Team

  • DRP Coordinator: Overall lead, primary decision-maker.
  • IT Lead: Technical recovery efforts.
  • Business Operations Lead: Business impact, operational continuity.
  • Communications Lead: External and internal communications.
  • Legal/Compliance Lead: Regulatory and legal implications.

5.2. Communication Channels

  • Primary: Dedicated crisis communication platform (e.g., Slack channel, Microsoft Teams, emergency notification system), conference bridge.
  • Secondary: Email (using a separate, resilient email service if primary is affected), SMS, pre-defined call trees.
  • External: Company website (static page for updates), social media (controlled messaging), press releases.

5.3. Communication Cadence and Content

| Audience | Frequency | Content |

| :---------------- | :------------ | :--------------------------------------------------------------------------------------------------------------- |

| DR Team | Hourly/As Needed | Status updates, assigned tasks, technical challenges, next steps. |

| Executive Mgmt. | 2-4 Hours | High-level status, business impact, projected recovery times, critical decisions required. |

| Employees | 4-8 Hours | Operational status, impact on work, instructions for remote work/alternative sites, HR guidance. |

| Customers | 4-12 Hours | Service status, impact on services, estimated resolution time, alternative contact methods. (Tailored by tier of customer) |

| Vendors/Partners | As Needed | Impact on shared services, coordination of recovery efforts, supply chain implications. |

| Regulators | As Required | Compliance with reporting requirements, nature of the incident, steps taken for recovery. (Legal team guidance) |

Pre-approved Templates: Standardized communication templates will be prepared in advance for various scenarios to ensure consistent and timely messaging.

Contact Lists: Up-to-date contact lists for all internal and external stakeholders will be maintained and stored securely in both digital (off-site) and physical (hard copy) formats.


6. Testing Schedules and Maintenance

Regular testing and ongoing maintenance are crucial to ensure the DRP remains effective, up-to-date, and aligned with evolving business needs and technology.

6.1. Testing Objectives

  • Validate the effectiveness of backup and recovery procedures.
  • Identify gaps, weaknesses, and single points of failure in the plan.
  • Familiarize DR team members with their roles and responsibilities.
  • Verify RTO/RPO targets can be met.
  • Confirm the accuracy of documentation and runbooks.

6.2. Testing Types and Frequency

  • Desktop Walkthroughs (Annual): Review the DRP document, runbooks, and procedures with the DR team to identify logical flaws or outdated information.
  • Component Testing (Quarterly): Test individual components of the DR plan, such as restoring a single server from backup, testing network connectivity to the DR site, or verifying specific application functionality.
  • Full Simulation (Bi-Annual): Conduct a comprehensive end-to-end test, simulating a major disaster scenario. This involves failing over critical systems to the DR site, operating from there for a period, and potentially performing a failback.

* Unannounced Drills: Periodically conduct unannounced drills to test the team's readiness and the robustness of alert mechanisms.

  • Data Restoration Drills (Monthly): Select random files/databases and attempt a full restoration to verify backup integrity and restore procedures.

6.3. Post-Test Review and Update Process

  1. Debriefing: Immediately after each test, conduct a debriefing session with all participating teams.
  2. Gap Analysis: Document all issues encountered, deviations from the plan, and areas for improvement.
  3. Action Plan: Develop a detailed action plan with assigned owners and deadlines for addressing identified gaps.
  4. DRP Update: Revise the DRP document, runbooks, and associated procedures based on lessons learned from the test.
  5. Communication: Communicate updates and changes to all relevant stakeholders.

6.4. Plan Maintenance

  • Annual Review: The entire DRP will be formally reviewed and approved by senior management annually, or whenever significant changes occur (e.g., new systems, major architectural changes, organizational restructuring).
  • Documentation Updates: All DRP documentation, including contact lists, system inventories, network diagrams, and runbooks, will be kept current.
  • Technology Updates: Monitor and update DR technologies and services to align with industry best practices and evolving threats.
  • Training: Provide regular training to DR team members on their roles, responsibilities, and the latest DR procedures.

7. Roles and Responsibilities

A clearly defined organizational structure and assigned responsibilities are essential for effective disaster recovery.

  • Executive Sponsor: Provides overall governance, budget, and strategic direction. Approves the DRP.
  • DRP Coordinator: Manages the DRP lifecycle, leads disaster declaration, coordinates recovery efforts, and chairs the DR team.
  • Disaster Recovery Team (DRT):

* Infrastructure Team: Server, network, storage, virtualization recovery.

* Application Team: Application restoration, configuration, and testing.

* Database Team: Database recovery, consistency checks.

* Security Team: Incident response, threat mitigation, security monitoring in DR.

* End-User Support Team: User assistance, endpoint recovery.

  • Business Unit Representatives: Provide business context, assist with UAT, and manage business process recovery.

8. Conclusion

This Disaster Recovery Plan provides a robust framework for responding to and recovering from disruptive events. Its success hinges on continuous improvement through regular testing, diligent maintenance, and clear communication. By adhering to this plan, the organization can significantly enhance its resilience and ensure

gemini Output

This document outlines the comprehensive Disaster Recovery Plan (DRP) for [Organization Name], designed to ensure the swift and effective recovery of critical IT systems and business operations in the event of a disaster. It covers RTO/RPO targets, backup strategies, failover procedures, communication protocols, and a robust testing schedule to minimize downtime and data loss.


Disaster Recovery Plan - [Organization Name]

Document Version: 1.0

Date: October 26, 2023

Prepared By: [Your Name/Department]

Approved By: [Approving Authority/Stakeholder]


1. Executive Summary

This Disaster Recovery Plan (DRP) provides a structured approach for [Organization Name] to respond to and recover from disruptive events that could impact our critical IT infrastructure and business operations. The primary objective is to restore essential services within defined Recovery Time Objectives (RTOs) and minimize data loss within defined Recovery Point Objectives (RPOs), thereby ensuring business continuity, protecting assets, and maintaining stakeholder confidence. This plan encompasses strategies for data backup, system failover, communication protocols, and regular testing to ensure readiness.

2. Introduction

2.1 Purpose

The purpose of this DRP is to:

  • Minimize the impact of disruptions on critical business functions.
  • Provide clear, actionable procedures for responding to various disaster scenarios.
  • Ensure the timely recovery of critical IT systems and data.
  • Establish communication protocols for internal and external stakeholders during a crisis.
  • Define roles and responsibilities for all personnel involved in disaster recovery efforts.
  • Protect the organization's reputation, financial stability, and legal compliance.

2.2 Scope

This DRP covers all critical IT infrastructure, applications, data, and associated business processes essential for [Organization Name]'s operations. This includes:

  • Primary data center facilities (on-premise and/or cloud infrastructure).
  • Key business applications (e.g., ERP, CRM, financial systems, communication platforms).
  • Critical data repositories (databases, file shares, cloud storage).
  • Network infrastructure (LAN, WAN, VPN, internet connectivity).
  • User workstations and remote access capabilities.
  • Personnel and communication channels.

2.3 Objectives

Upon activation, the DRP aims to achieve the following objectives:

  • Safety: Ensure the safety of all personnel.
  • Notification: Promptly notify relevant personnel and stakeholders.
  • Containment: Limit the damage and scope of the disaster.
  • Recovery: Restore critical business functions and IT services within predefined RTOs and RPOs.
  • Communication: Maintain effective communication with employees, customers, vendors, and other stakeholders.
  • Resumption: Re-establish normal business operations.
  • Review: Conduct post-incident reviews to identify lessons learned and improve the plan.

2.4 Assumptions

This DRP is based on the following assumptions:

  • Key personnel are available and trained on their DRP responsibilities.
  • Off-site backups are recoverable and accessible.
  • Necessary vendor support and third-party services are available.
  • Recovery site infrastructure (if applicable) is maintained and ready for activation.
  • All critical assets, applications, and data have been identified and documented.

3. Roles and Responsibilities

Effective disaster recovery relies on clearly defined roles and responsibilities.

3.1 Disaster Recovery Coordinator

  • Role: Overall leadership and coordination of DRP activation and execution.
  • Responsibilities:

* Declare a disaster and activate the DRP.

* Authorize recovery efforts and resource allocation.

* Act as the primary point of contact for executive management.

* Oversee communication with all stakeholders.

* Lead post-incident review and plan updates.

3.2 Incident Response Team (IRT)

  • Role: Initial assessment, containment, and notification.
  • Responsibilities:

* Detect and assess the nature and scope of the incident.

* Initiate incident response procedures.

* Notify the DR Coordinator and relevant recovery teams.

* Document all incident details and actions taken.

3.3 Recovery Teams (Led by Team Leads)

3.3.1 IT Infrastructure Recovery Team

  • Focus: Network, servers, storage, virtualization platforms.
  • Responsibilities: Restore core infrastructure services.

3.3.2 Application Recovery Team

  • Focus: Business-critical applications and databases.
  • Responsibilities: Restore, configure, and validate application functionality and data integrity.

3.3.3 Data Recovery Team

  • Focus: Data restoration from backups, data synchronization.
  • Responsibilities: Ensure data integrity and availability across systems.

3.3.4 Communications Team

  • Focus: Internal and external communication.
  • Responsibilities: Execute the communication plan, manage media relations, and keep stakeholders informed.

3.3.5 Facilities and Logistics Team

  • Focus: Physical site recovery, alternative workspace, equipment procurement.
  • Responsibilities: Secure and prepare recovery locations, manage physical resources.

4. Business Impact Analysis (BIA) Summary & Critical Systems

A BIA identifies critical business functions and their supporting IT systems, quantifying the impact of potential disruptions.

4.1 Critical Business Functions & Dependencies

The following table summarizes critical business functions and their primary IT system dependencies.

| Business Function | Supporting IT System(s) | Impact if Unavailable (High/Medium/Low) |

| :---------------------- | :---------------------- | :-------------------------------------- |

| Order Processing | ERP System | High |

| Financial Transactions | Financial System, ERP | High |

| Customer Support | CRM System, VoIP | High |

| Employee Communications | Email, Collaboration SW | Medium |

| Website/E-commerce | Web Servers, Database | High |

| Data Analytics | Data Warehouse | Medium |

| HR & Payroll | HRIS System | Medium |

4.2 Critical Systems Categorization

Systems are categorized based on their criticality to business operations:

  • Tier 0 (Mission Critical): Immediate and severe impact on core business functions; requires fastest recovery.
  • Tier 1 (Business Critical): Significant impact on business functions; requires rapid recovery.
  • Tier 2 (Business Important): Moderate impact; can tolerate longer downtime.
  • Tier 3 (Support/Non-Critical): Minimal impact; can be recovered last.

5. Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO)

RTO and RPO define the acceptable timeframes for recovery.

5.1 Definitions

  • Recovery Time Objective (RTO): The maximum tolerable duration of time that a computer system, network, or application can be down after a disaster or failure.
  • Recovery Point Objective (RPO): The maximum tolerable period in which data might be lost from an IT service due to a major incident.

5.2 RTO/RPO Targets for Critical Systems

| System/Application | Criticality Tier | RTO (Time to Restore Service) | RPO (Max Data Loss) |

| :---------------------- | :--------------- | :---------------------------- | :------------------ |

| ERP System | Tier 0 | 4 hours | 15 minutes |

| Financial System | Tier 0 | 4 hours | 15 minutes |

| CRM System | Tier 1 | 8 hours | 1 hour |

| E-commerce Platform | Tier 0 | 4 hours | 30 minutes |

| Email & Collaboration | Tier 1 | 12 hours | 4 hours |

| Core Database Servers | Tier 0 | 2 hours | 5 minutes |

| File Servers | Tier 1 | 12 hours | 4 hours |

| DNS/Directory Services | Tier 0 | 2 hours | 0 minutes |

6. Disaster Scenario Triggers

The DRP will be activated upon the occurrence of any event that significantly disrupts or threatens critical business operations or IT systems. Examples include:

  • Natural Disasters: Earthquake, flood, hurricane, severe weather, fire.
  • Technological Failures: Major hardware failure (server, storage), widespread network outage, power grid failure, critical software corruption.
  • Cybersecurity Incidents: Ransomware attack, data breach, denial-of-service (DoS) attack, system compromise.
  • Human Error: Accidental deletion of critical data, misconfiguration leading to widespread outage.
  • Infrastructure Failure: Data center failure, significant utility outage.

7. Incident Response and Activation Procedures

7.1 Detection and Assessment

  1. Detection: Incident detected by monitoring systems, user reports, or IRT personnel.
  2. Initial Assessment: IRT evaluates the incident's nature, scope, and potential impact on critical systems and business functions.
  3. Severity Classification: Incident classified based on its potential to cause significant disruption (e.g., Minor, Major, Catastrophic).

7.2 Declaration of Disaster

  1. Notification: IRT notifies the DR Coordinator immediately for Major or Catastrophic incidents.
  2. DR Coordinator Decision: Based on the assessment, the DR Coordinator, in consultation with executive management, decides whether to declare a disaster and activate the DRP.
  3. Formal Declaration: A formal declaration is made, triggering the execution of this plan.

7.3 Activation Steps

  1. Assemble Teams: DR Coordinator activates and convenes relevant recovery teams (IT Infrastructure, Application, Data, Communications, etc.).
  2. Establish Command Center: Identify and activate a primary (or alternate) command center for coordination.
  3. Initial Briefing: DR Coordinator briefs all team leads on the situation, confirmed impact, and immediate priorities.
  4. Secure Primary Site (if applicable): If the primary site is compromised, ensure physical safety and prevent further damage.
  5. Assess Recovery Site Readiness: Verify the readiness of the designated recovery site(s) (e.g., secondary data center, cloud region).
  6. Initiate Communication Plan: Begin internal and external communications as per Section 10.

8. Backup and Data Recovery Strategies

A robust backup strategy is fundamental to achieving RPO targets and enabling data recovery.

8.1 Data Classification

All data is classified based on its criticality, sensitivity, and regulatory requirements (e.g., Public, Internal, Confidential, Restricted). This dictates backup frequency, retention, and encryption.

8.2 Backup Types and Frequency

  • Full Backups: Complete copy of all data. Performed weekly for all critical systems.
  • Differential Backups: Copy of all data changed since the last full backup. Performed daily for all critical systems.
  • Incremental Backups: Copy of all data changed since the last full or incremental backup. Performed hourly for Tier 0/1 systems.
  • Database Transaction Logs: Continuously backed up or replicated for Tier 0 databases to achieve near-zero RPO.

8.3 Backup Storage and Retention

  • On-site Storage: Short-term backups (e.g., last 24-48 hours) are stored on-site for quick recovery of minor incidents.
  • Off-site Storage: Critical backups are replicated or transported to a secure off-site location (e.g., secondary data center, cloud storage) at least daily.
  • Cloud Storage: Utilize cloud providers (e.g., AWS S3, Azure Blob Storage) for highly durable, geographically redundant off-site storage.
  • Retention Policies:

* Daily Backups: Retained for 7 days.

* Weekly Backups: Retained for 4 weeks.

* Monthly Backups: Retained for 12 months.

* Annual Backups: Retained for 7 years (or as required by compliance).

8.4 Data Encryption

All backups, both in-transit and at-rest, are encrypted using AES-256 encryption to ensure data confidentiality.

8.5 Restoration Procedures

  1. Identify Data/System for Restoration: Determine the specific data or system requiring recovery.
  2. Locate Backup: Identify the appropriate backup set (full, differential, incremental) and recovery point.
  3. Restore to Target: Restore data/system to the designated recovery environment or original location.
  4. Verify Integrity: Perform data integrity checks and application functionality tests post-restoration.
  5. Documentation: Document the restoration process, including any challenges and resolutions.

9. System Recovery / Failover Procedures

These procedures detail the steps to restore IT services to operational status at the recovery site.

9.1 Infrastructure Recovery

  1. Network Infrastructure:

* Activate recovery site network equipment (routers, switches, firewalls).

* Update DNS records to point to recovery site IP addresses (if applicable).

* Establish VPN connectivity for remote users and site-to-site communication.

* Verify network connectivity and throughput.

  1. Virtualization Platform:

* Activate hypervisor hosts at the recovery site.

* Restore/replicate virtual machine images.

* Ensure resource allocation (CPU, RAM, storage) meets requirements.

gemini Output

Disaster Recovery Plan

Document Version: 1.0

Date: October 26, 2023

Prepared For: [Customer Name/Organization]

Prepared By: PantheraHive


1. Introduction and Purpose

This Disaster Recovery Plan (DRP) outlines the procedures, resources, and strategies required to restore critical business functions and IT systems in the event of a disruptive incident. The primary goal of this DRP is to minimize downtime, prevent significant data loss, ensure business continuity, and facilitate a swift and orderly recovery from various disaster scenarios. This plan aims to safeguard essential operations, data, and infrastructure, thereby protecting the organization's reputation, financial stability, and ability to serve its customers.

2. Scope

This DRP covers the recovery of critical IT infrastructure, applications, and data essential for the sustained operation of [Organization Name]'s core business functions. It encompasses:

  • All production servers (physical and virtual)
  • Network infrastructure (routers, switches, firewalls)
  • Critical business applications (e.g., ERP, CRM, Financial Systems, E-commerce Platforms)
  • Databases (SQL, NoSQL, Data Warehouses)
  • Core data repositories (file shares, document management systems)
  • Cloud-based services integrated into the primary infrastructure
  • Personnel and communication protocols required for disaster response and recovery.

3. Roles and Responsibilities

A dedicated Disaster Recovery Team (DRT) is established with clear roles and responsibilities to ensure an organized and efficient response.

  • DR Coordinator (Overall Lead):

* Declares a disaster and initiates the DRP.

* Oversees all recovery efforts.

* Primary liaison for executive management and external communications.

* Ensures adherence to RTO/RPO targets.

  • Technical Leads (Infrastructure, Network, Database, Application):

* Leads specific technical recovery tasks.

* Manages restoration of servers, network components, databases, and applications.

* Coordinates with vendors for technical support.

  • Business Operations Lead:

* Assesses business impact and prioritizes application recovery from a business perspective.

* Coordinates user acceptance testing (UAT) post-recovery.

* Manages essential non-IT business functions during recovery.

  • Communication Lead:

* Manages all internal and external communications during a disaster.

* Maintains emergency contact lists and communication templates.

* Coordinates with the DR Coordinator on official statements.

  • Security Lead:

* Ensures security protocols are maintained during recovery.

* Monitors for security breaches or vulnerabilities during and after recovery.

* Conducts post-incident security review.

Emergency Contact List: (Refer to Appendix A for full list)

  • DR Coordinator: [Name, Phone, Email]
  • Technical Leads: [Names, Phones, Emails]
  • Business Operations Lead: [Name, Phone, Email]
  • Communication Lead: [Name, Phone, Email]
  • Security Lead: [Name, Phone, Email]

4. Disaster Declaration Criteria

A disaster is formally declared when an incident significantly impairs or halts critical business operations and cannot be resolved within predefined service level agreements (SLAs) through standard operational procedures. Authority to declare a disaster rests primarily with the DR Coordinator, in consultation with executive management.

Examples of Disaster Scenarios:

  • Natural Disasters: Major fire, flood, earthquake, prolonged power outage affecting primary data center.
  • Cyber Attacks: Ransomware, widespread data corruption, denial-of-service (DoS) attacks causing prolonged outage.
  • Technical Failures: Catastrophic hardware failure (e.g., data center wide storage array failure), widespread network outage, major application software corruption.
  • Human Error: Accidental deletion or misconfiguration leading to widespread system failure.
  • Infrastructure Failure: Prolonged utility outage (power, internet) at the primary site.

5. Application and System Prioritization

Critical applications and systems are categorized based on their business impact and dependency to guide recovery efforts.

| Tier | Priority | Description | Example Applications |

| :--- | :------- | :------------------------------------------------------------------------------------------------------ | :-------------------------------------------------------------- |

| 0 | Critical | Essential for immediate business operations; immediate and severe impact if unavailable. | Core ERP, Primary E-commerce, Financial Transaction Systems, CRM |

| 1 | High | Important for daily operations; significant business impact if unavailable for extended periods. | Email, Collaboration Tools, Internal Web Applications, HRIS |

| 2 | Medium | Necessary for supporting business processes; moderate impact if unavailable for several days. | Development/Test Environments, Secondary Data Analytics |

| 3 | Low | Non-essential for immediate operations; minimal impact if unavailable for a week or more. | Archival Systems, Non-critical Reporting |

6. RTO and RPO Targets

Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are defined for each application tier, dictating the maximum allowable downtime and data loss, respectively.

  • RTO (Recovery Time Objective): The maximum tolerable duration for restoring business functions after a disaster.
  • RPO (Recovery Point Objective): The maximum tolerable amount of data loss, measured in time, from the point of disaster to the last valid data backup.

| Tier | RTO Target | RPO Target |

| :--- | :------------------ | :----------------- |

| 0 | < 4 Hours | < 1 Hour |

| 1 | 4 - 24 Hours | 1 - 4 Hours |

| 2 | 24 - 72 Hours | 4 - 24 Hours |

| 3 | > 72 Hours / Best Effort | > 24 Hours / Best Effort |

7. Recovery Site Strategy

[Organization Name] utilizes a hybrid cloud-based Disaster Recovery as a Service (DRaaS) strategy for its primary recovery site, augmented by off-site physical backups for long-term retention.

  • Primary DR Site: [Cloud Provider, e.g., Azure Site Recovery / AWS CloudEndure] in [Geographic Region, e.g., US East 2].

* Configuration: Replicated virtual machines, databases, and network configurations are continuously synchronized from the primary data center to the cloud provider's infrastructure.

* Network: Dedicated VPN tunnels and ExpressRoute/Direct Connect ensure secure and high-bandwidth connectivity. DNS records are pre-configured for rapid failover.

* Capacity: The cloud environment is provisioned with sufficient compute, storage, and network resources to support Tier 0 and Tier 1 applications at full production capacity during a disaster.

  • Off-site Physical Backups: Encrypted tape or disk backups are stored at a secure, third-party vaulting service located at least [e.g., 50 miles] from the primary data center. These are used for long-term archival and catastrophic data center loss scenarios.

8. Backup Strategies

A multi-layered backup strategy ensures data integrity and availability across various recovery scenarios.

  • What to Backup:

* All critical databases (transaction logs, full backups).

* Application data and configuration files.

* Operating system images and virtual machine snapshots.

* Network device configurations.

* Critical user data (file shares, home directories).

  • How to Backup:

* Tier 0/1 Applications & Data: Continuous Data Protection (CDP) or near-CDP via replication to DRaaS platform. Transaction log backups every 15 minutes, full database backups daily.

* Tier 2/3 Applications & Data: Daily incremental backups, weekly full backups.

* System Configurations: Nightly configuration backups for network devices, firewalls, and critical servers.

  • Where to Backup (3-2-1 Rule):

* 3 Copies of Data: Primary copy, DRaaS replication, Off-site backup.

* 2 Different Media: Disk (primary storage, DRaaS) and Tape/Cloud Object Storage (off-site).

* 1 Off-site Copy: DRaaS in a separate region, physical media at a secure third-party vault.

  • Frequency:

* Continuous/Near-Continuous: Tier 0/1 application data, databases.

* Daily: Full system backups, other critical data.

* Weekly/Monthly: Full backups of less critical systems, long-term archives.

  • Retention Policies:

* Short-term: Daily backups retained for 7-14 days.

* Mid-term: Weekly full backups retained for 4-8 weeks.

* Long-term: Monthly full backups retained for 1 year, annual backups retained for 7 years (or as per regulatory requirements).

  • Encryption: All backups are encrypted at rest and in transit using AES-256 encryption.
  • Backup Verification:

* Regular (quarterly) testing of backup restoration procedures for a sample set of critical systems and data.

* Automated integrity checks on all backup sets.

9. Failover Procedures

These procedures detail the steps to activate the recovery site and restore operations.

9.1. Pre-Disaster Readiness:

  1. Replication Monitoring: Continuously monitor replication status between primary and DR sites.
  2. Configuration Synchronization: Ensure network configurations, security policies, and application settings are mirrored or documented for the DR environment.
  3. DR Site Readiness Checks: Regular automated checks of DR site resources (compute, storage, network paths).

9.2. Disaster Declaration & Initial Response:

  1. Incident Detection: Alert triggered by monitoring systems or manual report.
  2. Disaster Declaration: DR Coordinator declares a disaster based on predefined criteria.
  3. DRT Activation: All DRT members are notified via emergency communication channels (SMS, dedicated calling tree).
  4. Initial Assessment: DRT convenes to assess the disaster's scope, impact, and estimated duration.

9.3. Failover Activation (Tier 0 & 1 Systems):

  1. Primary Site Isolation (if applicable): If the primary site is compromised but partially operational, isolate it from the network to prevent further data corruption or security breaches.
  2. DRaaS Orchestration Initiation:

* Log into the [Cloud Provider] DRaaS portal.

* Initiate the pre-defined recovery plan for Tier 0 applications.

* This typically involves:

* Provisioning/starting replicated VMs/instances.

* Restoring databases to the latest RPO.

* Configuring network security groups and virtual networks.

  1. Network Reconfiguration:

* Update DNS records (e.g., A records, CNAMEs) to point to the DR site's IP addresses/load balancers. TTLs should be set low for rapid propagation.

* Activate VPN tunnels/Direct Connect to the DR site for internal users/partners.

* Configure external firewalls/load balancers to direct traffic to the DR site.

  1. Application Startup & Configuration:

* Start applications in the prioritized order (Tier 0 first, then Tier 1).

* Verify application services are running and accessible.

* Perform any necessary post-recovery configuration adjustments (e.g., connection strings, API endpoints).

  1. Data Synchronization Verification: Confirm that data restored is consistent and up-to-date according to RPO targets.
  2. Internal Access & Testing:

*

disaster_recovery_plan.md
Download as Markdown
Copy all content
Full output as text
Download ZIP
IDE-ready project ZIP
Copy share link
Permanent URL for this run
Get Embed Code
Embed this result on any website
Print / Save PDF
Use browser print dialog
"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react' import ReactDOM from 'react-dom/client' import App from './App' import './index.css' ReactDOM.createRoot(document.getElementById('root')!).render( ) "); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react' import './App.css' function App(){ return(

"+slugTitle(pn)+"

Built with PantheraHive BOS

) } export default App "); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e} .app{min-height:100vh;display:flex;flex-direction:column} .app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px} h1{font-size:2.5rem;font-weight:700} "); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` ## Open in IDE Open the project folder in VS Code or WebStorm. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "build": "vue-tsc -b && vite build", "preview": "vite preview" }, "dependencies": { "vue": "^3.5.13", "vue-router": "^4.4.5", "pinia": "^2.3.0", "axios": "^1.7.9" }, "devDependencies": { "@vitejs/plugin-vue": "^5.2.1", "typescript": "~5.7.3", "vite": "^6.0.5", "vue-tsc": "^2.2.0" } } '); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite' import vue from '@vitejs/plugin-vue' import { resolve } from 'path' export default defineConfig({ plugins: [vue()], resolve: { alias: { '@': resolve(__dirname,'src') } } }) "); zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]} '); zip.file(folder+"tsconfig.app.json",'{ "compilerOptions":{ "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"], "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true, "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue", "strict":true,"paths":{"@/*":["./src/*"]} }, "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"] } '); zip.file(folder+"env.d.ts","/// "); zip.file(folder+"index.html"," "+slugTitle(pn)+"
"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue' import { createPinia } from 'pinia' import App from './App.vue' import './assets/main.css' const app = createApp(App) app.use(createPinia()) app.mount('#app') "); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue"," "); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547} "); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` Open in VS Code or WebStorm. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "scripts": { "ng": "ng", "start": "ng serve", "build": "ng build", "test": "ng test" }, "dependencies": { "@angular/animations": "^19.0.0", "@angular/common": "^19.0.0", "@angular/compiler": "^19.0.0", "@angular/core": "^19.0.0", "@angular/forms": "^19.0.0", "@angular/platform-browser": "^19.0.0", "@angular/platform-browser-dynamic": "^19.0.0", "@angular/router": "^19.0.0", "rxjs": "~7.8.0", "tslib": "^2.3.0", "zone.js": "~0.15.0" }, "devDependencies": { "@angular-devkit/build-angular": "^19.0.0", "@angular/cli": "^19.0.0", "@angular/compiler-cli": "^19.0.0", "typescript": "~5.6.0" } } '); zip.file(folder+"angular.json",'{ "$schema": "./node_modules/@angular/cli/lib/config/schema.json", "version": 1, "newProjectRoot": "projects", "projects": { "'+pn+'": { "projectType": "application", "root": "", "sourceRoot": "src", "prefix": "app", "architect": { "build": { "builder": "@angular-devkit/build-angular:application", "options": { "outputPath": "dist/'+pn+'", "index": "src/index.html", "browser": "src/main.ts", "tsConfig": "tsconfig.app.json", "styles": ["src/styles.css"], "scripts": [] } }, "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"} } } } } '); zip.file(folder+"tsconfig.json",'{ "compileOnSave": false, "compilerOptions": {"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]}, "references":[{"path":"./tsconfig.app.json"}] } '); zip.file(folder+"tsconfig.app.json",'{ "extends":"./tsconfig.json", "compilerOptions":{"outDir":"./dist/out-tsc","types":[]}, "files":["src/main.ts"], "include":["src/**/*.d.ts"] } '); zip.file(folder+"src/index.html"," "+slugTitle(pn)+" "); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser'; import { appConfig } from './app/app.config'; import { AppComponent } from './app/app.component'; bootstrapApplication(AppComponent, appConfig) .catch(err => console.error(err)); "); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; } body { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; } "); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core'; import { RouterOutlet } from '@angular/router'; @Component({ selector: 'app-root', standalone: true, imports: [RouterOutlet], templateUrl: './app.component.html', styleUrl: './app.component.css' }) export class AppComponent { title = '"+pn+"'; } "); zip.file(folder+"src/app/app.component.html","

"+slugTitle(pn)+"

Built with PantheraHive BOS

"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1} "); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core'; import { provideRouter } from '@angular/router'; import { routes } from './app.routes'; export const appConfig: ApplicationConfig = { providers: [ provideZoneChangeDetection({ eventCoalescing: true }), provideRouter(routes) ] }; "); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router'; export const routes: Routes = []; "); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install ng serve # or: npm start ``` ## Build ```bash ng build ``` Open in VS Code with Angular Language Service extension. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}