Disaster Recovery Plan



1. Introduction and Purpose

This Disaster Recovery Plan (DRP) outlines the procedures and strategies to ensure the rapid recovery and continuity of critical business operations and IT systems in the event of a disaster. The primary goal is to minimize downtime, prevent data loss, and restore services within defined Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs), thereby reducing the overall impact on the business.

2. Scope

This DRP covers all critical IT infrastructure, applications, data, and associated business processes essential for core operations. This includes, but is not limited to:

  • Primary data center infrastructure (servers, storage, networking).
  • Key business applications (e.g., ERP, CRM, financial systems, customer-facing portals).
  • Databases supporting critical applications.
  • Network connectivity and security infrastructure.
  • End-user computing environments (as applicable for critical personnel).
  • Communication systems (email, collaboration tools).

3. Objectives

The objectives of this DRP are to:

  • Minimize Downtime: Restore critical systems and applications within defined RTOs.
  • Prevent Data Loss: Limit data loss to within defined RPOs.
  • Ensure Business Continuity: Enable essential business functions to resume operations quickly.
  • Provide Clear Guidance: Establish clear, actionable procedures for disaster response and recovery teams.
  • Protect Assets: Safeguard data, hardware, and intellectual property.
  • Maintain Compliance: Adhere to relevant regulatory and legal requirements.
  • Facilitate Communication: Ensure timely and accurate communication to stakeholders.

4. Key Definitions

  • Disaster Recovery (DR): The process of restoring an organization's IT infrastructure and operations after a disruptive event.
  • Business Continuity (BC): The ability of an organization to maintain essential business functions during and after a disaster.
  • Recovery Time Objective (RTO): The maximum acceptable duration of time for a business process or system to be unavailable following a disaster.
  • Recovery Point Objective (RPO): The maximum acceptable amount of data loss measured in time (e.g., 1 hour of data loss).
  • Business Impact Analysis (BIA): A process that identifies critical business functions and the impact that a disruption to these functions might have.
  • Failover: The process of switching to a redundant or standby system upon the failure or abnormal termination of the previously active system.
  • Failback: The process of restoring systems and operations to the primary data center after a disaster has been resolved and the primary site is operational again.

5. Disaster Scenarios and Triggers

This DRP is designed to address a range of potential disaster scenarios, including:

  • Natural Disasters: Earthquakes, floods, severe storms, fires.
  • Technological Failures: Major hardware failures, power outages, network outages, software corruption, data center cooling failure.
  • Cybersecurity Incidents: Ransomware attacks, data breaches, denial-of-service (DoS) attacks.
  • Human Error: Accidental deletion of critical data, misconfiguration.
  • Supply Chain Disruptions: Unavailability of critical components or services.

DRP Activation Triggers:

  • Declaration by Executive Management.
  • Prolonged outage of a critical system exceeding its defined RTO.
  • Major data loss event.
  • Physical damage to primary data center facilities.
  • Confirmed cybersecurity incident impacting critical systems.
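The RTO-breach trigger above lends itself to automation in monitoring. A minimal sketch, assuming illustrative system names and thresholds (actual values come from the BIA in Section 6):

```python
from datetime import datetime, timedelta

# Illustrative RTO thresholds in hours; real values come from the BIA table.
RTO_HOURS = {"erp": 4, "crm": 8, "customer_portal": 2}

def rto_breached(system: str, outage_start: datetime, now: datetime) -> bool:
    """Return True when an ongoing outage has exceeded the system's RTO,
    which is one of the DRP activation triggers."""
    return now - outage_start > timedelta(hours=RTO_HOURS[system])
```

In practice this check would run inside the monitoring platform and page the DR Coordinator rather than activate the plan automatically; activation remains a human decision.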

6. Business Impact Analysis (BIA) Summary & RTO/RPO Targets

Based on the latest BIA, the following critical systems and their respective RTO/RPO targets have been identified:

| Critical Application/System | Business Function | Impact of Downtime | RTO (Hours) | RPO (Minutes) | Recovery Tier |
| :-------------------------- | :---------------- | :------------------ | :---------- | :------------ | :----------- |
| ERP System | Order Processing, Finance, Inventory | High financial loss, customer dissatisfaction, regulatory non-compliance | 4 | 15 | Tier 1 |
| CRM System | Sales, Customer Support | Loss of sales opportunities, degraded customer service | 8 | 60 | Tier 2 |
| Customer Portal/Website | Online Sales, Information Access | High revenue loss, reputational damage | 2 | 0 | Tier 1 |
| Database Servers (Prod) | Data Storage for ERP, CRM | Data corruption, application failure | 4 | 15 | Tier 1 |
| Email System | Internal/External Communication | Operational paralysis, communication breakdown | 12 | 120 | Tier 3 |
| File Shares/Collaboration | Document Management, Teamwork | Productivity loss, project delays | 24 | 240 | Tier 3 |
| Network Infrastructure | Connectivity for all systems | Complete operational halt | 2 | N/A | Tier 1 |

Recovery Tiers:

  • Tier 1 (Critical): RTO ≤ 4 hours, RPO ≤ 1 hour. Requires active-passive or active-active replication.
  • Tier 2 (Important): RTO ≤ 8 hours, RPO ≤ 4 hours. Requires frequent backups and replication.
  • Tier 3 (Supporting): RTO ≤ 24 hours, RPO ≤ 24 hours. Requires daily backups.
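The tier thresholds can be expressed as a simple classification rule applied when new systems are onboarded. A sketch (the function and inclusive bounds are illustrative, chosen so boundary values such as the ERP's 4-hour RTO fall in their stated tier):

```python
def recovery_tier(rto_hours: float, rpo_hours: float) -> int:
    """Assign the most critical tier whose RTO/RPO bounds cover the
    system's targets, per the tier definitions above."""
    if rto_hours <= 4 and rpo_hours <= 1:
        return 1  # Critical: active-passive or active-active replication
    if rto_hours <= 8 and rpo_hours <= 4:
        return 2  # Important: frequent backups and replication
    return 3      # Supporting: daily backups
```

For example, the ERP targets (RTO 4 h, RPO 15 min) classify as Tier 1, while the Email System (RTO 12 h, RPO 2 h) classifies as Tier 3, matching the BIA table.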

7. Disaster Recovery Team and Roles

A dedicated Disaster Recovery Team will be responsible for executing this plan. Each role has specific responsibilities during a disaster.

| Role | Primary Responsibilities | Backup/Alternate |
| :--------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------- |
| DR Coordinator (CIO/IT Director) | Overall command and control, DRP activation/deactivation, communication with executive management and external stakeholders. | Head of Operations |
| Infrastructure Lead | Network, server, and storage recovery; failover/failback procedures; infrastructure verification. | Senior Network Engineer |
| Applications Lead | Application recovery, database restoration, application configuration, functional testing, data integrity verification. | Senior Application Developer |
| Data Recovery Specialist | Data restoration from backups, database recovery, data synchronization, data integrity checks. | Database Administrator |
| Communications Lead | Internal/external communication execution, status updates, media liaison. | HR Director |
| Security Lead | Security monitoring during recovery, incident response coordination, access control, vulnerability management. | Senior Security Analyst |
| Business Unit Liaisons | Represent specific business units, assist with application testing, confirm business functionality, communicate specific business impacts. | Department Managers |

Contact List: A detailed contact list for all DR team members, including primary and alternate numbers (mobile, home), and email addresses, will be maintained in a secure, offsite location (e.g., encrypted USB, cloud storage).

8. Backup and Recovery Strategies

Our strategy focuses on a multi-layered approach to data protection and system recovery.

8.1 Data Backup Strategy

  • Frequency:

* Critical Data (Tier 1): Continuous Data Protection (CDP) or hourly snapshots/transaction log shipping.

* Important Data (Tier 2): Incremental backups every 4 hours, full backup daily.

* Supporting Data (Tier 3): Daily incremental backups, weekly full backup.

  • Retention:

* Daily backups: 30 days

* Weekly full backups: 12 weeks

* Monthly full backups: 1 year

* Yearly full backups: 7 years (for regulatory compliance)

  • Storage Locations:

* On-Premise: Local storage for immediate recovery (disk-to-disk).

* Offsite: Encrypted backups replicated to a geographically separate cloud storage provider (e.g., AWS S3, Azure Blob Storage) or a secondary data center.

* Immutable Backups: Critical data backups are stored in an immutable format to protect against ransomware and accidental deletion.

  • Backup Verification: Regular test restores (monthly) to ensure data integrity and recoverability.
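The retention schedule above can be enforced with a pruning routine. A simplified sketch, assuming backups are labeled by type (in practice the backup product's lifecycle policies would normally handle this):

```python
from datetime import date, timedelta

def keep_backup(kind: str, taken: date, today: date) -> bool:
    """Apply the retention policy from Section 8.1: dailies for 30 days,
    weekly fulls for 12 weeks, monthly fulls for 1 year, yearly fulls
    for 7 years (regulatory compliance)."""
    limits = {
        "daily": timedelta(days=30),
        "weekly": timedelta(weeks=12),
        "monthly": timedelta(days=365),
        "yearly": timedelta(days=7 * 365),
    }
    return today - taken <= limits[kind]
```

A backup older than its limit is eligible for deletion; immutable copies are excluded from pruning until their lock period expires.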

8.2 System Replication Strategy

  • Virtual Machines (VMs):

* Tier 1 Systems: Synchronous or asynchronous replication to a warm/hot standby environment in a secondary data center or cloud region. This ensures near-zero data loss (low RPO) and rapid failover (low RTO).

* Tier 2 Systems: Asynchronous replication or regular VM snapshots replicated offsite.

  • Databases:

* Tier 1 Databases: Database-specific replication technologies (e.g., SQL Server AlwaysOn Availability Groups, Oracle Data Guard) configured for high availability and disaster recovery.

* Tier 2 Databases: Log shipping or snapshot replication.

  • Configuration Backups: Regular backups of network device configurations, firewall rules, hypervisor configurations, and application settings.
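Replication health should be monitored against the RPO: if replica lag approaches the target, the achievable recovery point is at risk. A monitoring sketch (the warning fraction is an illustrative assumption):

```python
def replication_status(lag_seconds: float, rpo_seconds: float,
                       warn_fraction: float = 0.8) -> str:
    """Classify current replication lag relative to the system's RPO."""
    if lag_seconds > rpo_seconds:
        return "breach"   # failing over now would lose more data than the RPO allows
    if lag_seconds > warn_fraction * rpo_seconds:
        return "warning"  # lag is close to the RPO target; investigate
    return "ok"

# Tier 1 example: a 15-minute RPO is 900 seconds.
```

This kind of check applies to asynchronous replication; synchronous replication holds lag at effectively zero at the cost of write latency.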

9. Failover Procedures

The failover process is designed to be systematic and controlled, executed by the DR Team.

9.1 Pre-Disaster Preparedness

  • Maintain up-to-date documentation (network diagrams, IP schemes, application dependencies, recovery runbooks).
  • Regularly test backup and replication mechanisms.
  • Ensure DR site infrastructure is patched and ready.
  • Review and update DR team contact information.

9.2 DRP Activation and Assessment

  1. Incident Detection: Automatic alerts or manual reporting of a disaster event.
  2. Initial Assessment: DR Coordinator, with Infrastructure Lead, assesses the scope and severity of the disaster, confirming if DRP activation is necessary.
  3. DRP Activation: DR Coordinator declares a disaster, formally activates the DRP, and notifies the DR Team and Executive Management.
  4. Team Mobilization: DR Team members are contacted and instructed to report to a designated command center (physical or virtual).

9.3 Step-by-Step Failover Process

(This section will contain detailed, system-specific runbooks. Below is a high-level overview.)

  1. Isolate Primary Site (if compromised): Disconnect primary network segments to prevent further damage (e.g., ransomware spread).
  2. Establish DR Site Connectivity:

* Activate DR site network infrastructure (firewalls, routers, VPNs).

* Update DNS records to point to DR site IP addresses (TTL should be low for critical systems).

* Verify external and internal network connectivity to the DR site.

  3. Recover Core Infrastructure:

* Restore/activate critical directory services (e.g., Active Directory) if not replicated.

* Bring up shared storage and compute resources at the DR site.

  4. Restore/Failover Databases (Tier 1 first):

* Initiate database failover procedures for replicated databases (e.g., promote replica to primary).

* For systems reliant on backups, restore the latest viable backup to the DR database servers.

* Verify database integrity and data consistency.

  5. Restore/Failover Applications (Tier 1 first):

* Activate replicated application servers or restore applications from VM backups.

* Configure applications to connect to the DR database instances.

* Perform application-specific configuration adjustments (e.g., licensing, external integrations).

  6. Functional Testing and Verification:

* DR Team and Business Unit Liaisons perform comprehensive functional testing of all recovered applications.

* Verify data integrity, user access, and system performance.

* Ensure external integrations (e.g., payment gateways, shipping APIs) are functioning.

  7. User Access Restoration:

* Notify end-users of system availability.

* Provide instructions for accessing applications from the DR site (e.g., new URLs, VPN instructions).
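The ordered phases above lend themselves to a runbook driver that executes each step and halts on failure so the DR Team can intervene rather than continue in an unknown state. A minimal sketch with stubbed actions (the system-specific runbooks would supply real implementations):

```python
def run_failover(steps, log=print):
    """Execute failover phases in order; abort on the first failure."""
    for name, action in steps:
        log(f"starting: {name}")
        if not action():
            log(f"FAILED: {name} -- halting failover for manual intervention")
            return False
        log(f"done: {name}")
    return True

# Stubbed phases mirroring the high-level failover sequence above.
steps = [
    ("isolate primary site",          lambda: True),
    ("establish DR site connectivity", lambda: True),
    ("recover core infrastructure",    lambda: True),
    ("fail over Tier 1 databases",     lambda: True),
    ("fail over Tier 1 applications",  lambda: True),
    ("functional testing",             lambda: True),
    ("restore user access",            lambda: True),
]
```

Ordering matters: databases are promoted before applications are pointed at them, and user access is restored only after functional testing passes.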

10. Failback Procedures

Once the primary site is restored and deemed stable, a controlled failback process will be initiated to return operations to the primary data center.

  1. Assess Primary Site Readiness:

* Infrastructure Lead confirms the primary site is fully repaired, stable, and ready to resume operations.

* All necessary hardware, power, cooling, and network components are validated.

* Security systems are fully operational.

  2. Plan Failback Window:

* Coordinate with business units to schedule a failback window that minimizes disruption, typically during off-peak hours.

* Communicate the plan to all stakeholders.

  3. Data Synchronization:

* Establish replication from the DR site back to the primary site.

* Synchronize all changes made during the disaster operation from the DR site to the primary site, ensuring no data loss during the transition.

  4. Pre-Failback Verification:

* Verify that the primary site's systems are up-to-date with the DR site's data.

* Perform pre-failback health checks on primary systems.

  5. Execute Failback:

* Switch user traffic from the DR site to the primary site (e.g., update DNS, reconfigure load balancers).

* Initiate application and database failback procedures.

  6. Post-Failback Verification:

* Perform comprehensive functional and performance testing on the primary site.

* Monitor systems closely for stability and performance.

  7. Deactivate DR Site: Once the primary site is fully operational and stable, the DR site resources can be scaled down or deactivated.
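The pre-failback verification step can include a coarse consistency check between the DR and primary copies of a dataset. A simplified sketch using an order-independent fingerprint (a real check would use database-native tools such as row counts, checksum utilities, or replication status views):

```python
import hashlib

def dataset_fingerprint(rows) -> str:
    """Order-independent SHA-256 fingerprint of a dataset, for a coarse
    DR-vs-primary consistency check before failback."""
    digest = hashlib.sha256()
    for row in sorted(map(str, rows)):
        digest.update(row.encode())
    return digest.hexdigest()

def in_sync(primary_rows, dr_rows) -> bool:
    """True when both sites hold the same data, regardless of row order."""
    return dataset_fingerprint(primary_rows) == dataset_fingerprint(dr_rows)
```

A mismatch blocks the failback window until synchronization from the DR site completes, preserving changes made during disaster operation.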

11. Communication Plan

Effective communication is critical during a disaster.

11.1 Internal Communication

  • DR Team:

* Initial Notification: SMS, dedicated chat channel (e.g., Microsoft Teams, Slack), phone calls.

* Regular Updates: Scheduled conference calls, shared status dashboards.

  • Employees:

* Notification of Outage: Email (from an external provider if internal email is down), SMS, company intranet (if accessible), dedicated status page.

* Status Updates: Regular updates on recovery progress, estimated time to restoration, and any actions required of staff.

gemini Output

Disaster Recovery Plan

Document Title: [Organization Name] Disaster Recovery Plan

Version: 1.0

Date: October 26, 2023

Author: PantheraHive AI Assistant

Approver: [DR Coordinator / Senior Leadership]


1. Executive Summary

This Disaster Recovery Plan (DRP) outlines the strategies, procedures, and responsibilities necessary to ensure the swift and effective recovery of critical IT systems and data following a disruptive event. The primary objective is to minimize downtime, data loss, and operational impact, enabling [Organization Name] to maintain essential business functions and restore full operations within predefined Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs). This plan addresses various disaster scenarios, from minor system failures to major data center outages, and provides a structured approach for incident response, recovery, and communication.

2. Introduction

2.1. Purpose

The purpose of this Disaster Recovery Plan is to:

  • Provide a clear, actionable framework for responding to and recovering from IT-related disasters.
  • Define roles, responsibilities, and procedures for the Disaster Recovery Team.
  • Ensure the availability and integrity of critical IT systems and data.
  • Minimize the financial, reputational, and operational impact of disruptive events.
  • Establish communication protocols for internal and external stakeholders during a disaster.
  • Support business continuity by restoring essential IT services within acceptable timeframes.

2.2. Scope

This DRP covers all critical IT infrastructure, applications, and data hosted within [Organization Name]'s primary data centers, cloud environments, and remote offices. It encompasses the recovery of:

  • Core business applications (e.g., ERP, CRM, financial systems)
  • Database systems
  • Network infrastructure (LAN, WAN, VPN)
  • Server infrastructure (physical and virtual)
  • Storage systems
  • End-user computing environments (as applicable)
  • Cloud-based services and integrations

Out-of-scope for this document are detailed Business Continuity Plans (BCP) for non-IT-related business processes, though this DRP serves as a critical component supporting the overall BCP.

2.3. Objectives

Upon activation, this DRP aims to achieve the following:

  • Restore critical business functions and IT services within defined RTOs.
  • Recover data with minimal loss, adhering to defined RPOs.
  • Ensure the safety and well-being of personnel.
  • Maintain accurate and timely communication with all stakeholders.
  • Restore operations to the primary production environment efficiently and securely.

3. Key Definitions

  • Disaster Recovery (DR): The process of restoring data and IT infrastructure after a natural or human-induced disaster.
  • Business Continuity Plan (BCP): A comprehensive plan to ensure that critical business functions continue during and after a disaster.
  • Recovery Time Objective (RTO): The maximum tolerable duration of time that a computer, system, network, or application can be down after a failure or disaster.
  • Recovery Point Objective (RPO): The maximum tolerable amount of data that can be lost from an IT service due to a major incident.
  • Critical System: An IT system or application whose unavailability would result in unacceptable disruption to business operations.
  • Failover: The process of switching to a redundant or standby system upon the failure or abnormal termination of the previously active system.
  • Failback: The process of restoring operations to the primary production environment after a failover and successful recovery of the primary site.

4. Roles and Responsibilities

Effective disaster recovery requires a dedicated team with clear roles.

4.1. Disaster Recovery Coordinator (DRC) - [Name/Title]

  • Overall responsibility for the DRP.
  • Declares a disaster and authorizes DRP activation.
  • Manages and oversees the DR team.
  • Liaises with senior management and external parties.
  • Approves DRP updates and testing schedules.

4.2. Incident Response Lead - [Name/Title]

  • First point of contact for incident detection and initial assessment.
  • Coordinates immediate response actions.
  • Escalates incidents to the DRC when disaster declaration is warranted.

4.3. Communication Lead - [Name/Title]

  • Manages all internal and external communications during a disaster.
  • Maintains contact lists and communication channels.
  • Prepares and distributes status updates.

4.4. Technical Recovery Teams (Team Leads - [Name/Title])

  • Infrastructure Team: Servers, virtualization, storage, physical facilities.
  • Network Team: Connectivity, firewalls, VPNs, DNS.
  • Database Team: Database restoration, replication, integrity.
  • Application Team: Application deployment, configuration, testing.
  • Security Team: Data integrity, access control, threat monitoring during recovery.

5. Business Impact Analysis (BIA) Summary & Recovery Objectives

Based on the Business Impact Analysis, the following systems have been identified as critical, with their corresponding RTO and RPO targets:

| Critical System/Application | Business Function Supported | RTO (Recovery Time Objective) | RPO (Recovery Point Objective) | Dependencies (e.g., Database, Network) |

| :-------------------------- | :-------------------------- | :---------------------------- | :----------------------------- | :------------------------------------- |

| ERP System (e.g., SAP/Oracle) | Core Business Operations, Finance, Supply Chain | 4 hours | 1 hour | Database, Application Servers, Network |

| CRM System (e.g., Salesforce) | Customer Management, Sales, Marketing | 8 hours | 4 hours | Database, Web Servers, Network |

| Primary Database Servers | All critical applications | 2 hours | 15 minutes | Storage, Network, Power |

| Email Services (e.g., Exchange/O365) | Internal & External Communication | 12 hours | 1 hour | Directory Services, Network |

| File Servers / Collaboration | Document Storage, Team Collaboration | 12 hours | 4 hours | Storage, Network |

| Web Server Farm | Public-facing websites, APIs | 6 hours | 4 hours | Database, Network, Load Balancers |

Note: The RTO and RPO targets are maximum thresholds. The aim is always to restore services as quickly as possible.

6. Disaster Scenario Identification

This plan is designed to address a range of potential disruptive events, including but not limited to:

  • Natural Disasters: Fires, floods, earthquakes, severe weather, power outages.
  • Cyber Attacks: Ransomware, data breaches, denial-of-service (DoS) attacks.
  • Hardware Failures: Major server, storage, or network equipment failures.
  • Software Failures: Critical application or operating system corruption.
  • Human Error: Accidental data deletion, misconfigurations, or operational mistakes.
  • Utility Failure: Prolonged power outages, communication line disruptions.

7. Incident Response and Activation Procedures

7.1. Detection and Notification

  • Monitoring Systems: Utilize network monitoring tools, server health checks, and application performance management (APM) to detect anomalies.
  • Alerts: Automated alerts via email, SMS, or ticketing systems notify the Incident Response Lead.
  • Manual Reporting: Personnel can report incidents to the IT Help Desk.

7.2. Assessment and Declaration

  1. Initial Assessment (Incident Response Lead): Evaluate the incident's scope, impact, and potential for escalation.
  2. Escalation: If the incident is deemed a potential disaster (e.g., critical system outage exceeding RTO thresholds, data loss, widespread service disruption), the Incident Response Lead escalates to the DRC.
  3. Disaster Declaration (DRC): The DRC, in consultation with senior leadership, declares a disaster and authorizes the activation of this DRP. This decision is based on:

* Severity of impact on critical business functions.

* Estimated duration of disruption.

* Potential for data loss.

* Inability to resolve the incident using standard operational procedures.

7.3. Plan Activation

Upon disaster declaration:

  1. DR Team Assembly: The DRC activates the DR team via emergency contact methods (e.g., conference bridge, dedicated communication channel).
  2. Command Center: Establish a temporary command center (physical or virtual) for coordination.
  3. Initial Briefing: DRC briefs the DR team on the situation, known impacts, and initial recovery objectives.
  4. Communication Plan Activation: The Communication Lead initiates internal and external communication protocols.

8. Recovery Strategies

8.1. Data Backup and Restoration Strategies

  • Critical Data Identification: All data supporting systems identified in Section 5 is critical.
  • Backup Types:

* Full Backups: Weekly, performed on [Day of Week], stored off-site.

* Differential Backups: Daily, performed on [Time], stored on-site and replicated off-site.

* Incremental Backups: Hourly/Continuous, for very critical data (e.g., transaction logs), replicated to DR site.

  • Backup Frequencies:

* Databases: Transaction logs every 15 minutes, full backup daily.

* Application Servers: Daily.

* File Servers: Hourly with snapshots, daily full backup.

  • Backup Storage Locations:

* On-site: Primary storage for quick recovery of recent data.

* Off-site: Encrypted, air-gapped storage at a secure third-party vault for long-term retention and disaster recovery.

* Cloud: Encrypted storage in [Cloud Provider, e.g., AWS S3, Azure Blob] with geo-redundancy.

  • Retention Policies:

* Daily backups: 30 days.

* Weekly full backups: 90 days.

* Monthly full backups: 1 year.

* Yearly full backups: 7 years (or as per regulatory requirements).

  • Encryption and Security: All backups are encrypted at rest and in transit using AES-256. Access is restricted to authorized personnel only.
  • Restoration Procedures:

1. Identify required data/system.

2. Locate appropriate backup set (on-site, off-site, cloud).

3. Verify integrity of backup.

4. Restore to target system/location.

5. Validate restored data/system functionality.

8.2. System and Application Recovery Strategies

  • Primary Data Center: [Location of Primary Data Center]
  • Secondary/DR Site: [Location of DR Site] - This is a Warm Site strategy.

* Infrastructure: The DR site at [Location] contains pre-provisioned hardware (servers, storage, network) and critical software licenses. Virtual machines (VMs) for critical systems are replicated in near real-time from the primary site.

* Data Replication: Critical databases and application data are asynchronously replicated to the DR site.

* Network Connectivity: Dedicated VPN tunnels and redundant internet links ensure connectivity between primary and DR sites.

  • Cloud-based DR: For select non-critical systems or burst capacity, cloud-based DR (e.g., using [Cloud Provider] Disaster Recovery as a Service) is utilized.
  • Virtualization Strategies: Extensive use of virtualization (e.g., VMware, Hyper-V) enables rapid provisioning and recovery of server instances at the DR site. VM snapshots and replication are key components.
  • Database Replication:

* Synchronous Replication: For databases requiring zero RPO (e.g., financial transaction logs), synchronous replication to a secondary database instance within the primary data center or a very low-latency DR site.

* Asynchronous Replication: For most critical databases, asynchronous replication (e.g., SQL Server AlwaysOn Availability Groups, Oracle Data Guard) to the warm DR site.

8.3. Network Recovery Strategies

  • Redundancy: Redundant ISPs, firewalls, and core switches at both primary and DR sites.
  • VPN Access: Pre-configured VPN access for remote users and DR team members to the DR site network.
  • DNS Updates: Automated or manual DNS updates (e.g., changing A records, CNAMEs) to redirect traffic to the DR site's IP addresses. TTL (Time-to-Live) values for critical DNS records are set low (e.g., 5 minutes) to facilitate quick propagation.

8.4. Hardware and Infrastructure Recovery

  • Vendor Agreements: Maintain active maintenance contracts and support agreements with all critical hardware vendors (servers, storage, network equipment) for rapid replacement or repair.
  • Spare Parts: Keep a minimal inventory of critical spare parts (e.g., network cards, power supplies) on-site.
  • Inventory Management: Maintain an up-to-date inventory of all hardware, including serial numbers, warranty information, and vendor contact details (see Appendix C).

9. Failover and Failback Procedures

9.1. Failover Procedures (General Steps)

Detailed, system-specific runbooks for failover are maintained in Appendix D. The general steps are:

  1. Pre-Failover Checks:

* Verify disaster declaration and authorization.

* Confirm DR site readiness (power, network, basic infrastructure).

* Ensure all data replication to DR site is up-to-date (last RPO check).

* Notify stakeholders of impending failover.

  1. Isolate Primary Site (if applicable): Disconnect primary site from network to prevent data corruption or split-brain scenarios
gemini Output

Disaster Recovery Plan (DRP)

Executive Summary

This document outlines the Disaster Recovery Plan (DRP) for [Your Organization Name], designed to ensure the rapid and effective recovery of critical IT systems and data in the event of a disruptive incident. The primary objective is to minimize downtime, prevent data loss, and maintain business continuity, thereby protecting our operations, reputation, and customer trust. This plan details recovery objectives, strategies for data backup and restoration, failover procedures, communication protocols, and a robust testing schedule to ensure readiness.

1. Introduction

1.1 Purpose

The purpose of this Disaster Recovery Plan (DRP) is to provide a structured, actionable framework for responding to and recovering from disruptive events that impact critical IT infrastructure and services. It aims to restore essential business functions and data within defined recovery time and point objectives (RTO/RPO).

1.2 Scope

This DRP covers all critical IT systems, applications, data, and associated infrastructure supporting core business operations at [Your Organization Name]'s primary data center and key operational sites. It addresses various disaster scenarios, including natural disasters, major equipment failures, cyberattacks, and widespread power outages.

1.3 Objectives

  • Minimize business disruption and financial losses following a disaster.
  • Ensure the safety and well-being of personnel.
  • Restore critical IT services and data within established RTO/RPO targets.
  • Maintain data integrity and prevent significant data loss.
  • Facilitate effective communication with internal and external stakeholders.
  • Comply with relevant regulatory requirements and industry best practices.

1.4 Key Definitions

  • Disaster Recovery (DR): The process, policies, and procedures related to preparing for recovery or continuation of technology infrastructure critical to an organization after a natural or human-induced disaster.
  • Business Continuity Plan (BCP): A comprehensive plan that ensures an organization can continue to operate during and after a disaster. DR is a component of BCP.
  • Recovery Time Objective (RTO): The maximum tolerable duration of time that a system or application can be down after a disaster before significant damage is incurred.
  • Recovery Point Objective (RPO): The maximum tolerable amount of data that can be lost from a system or application due to a disaster.
  • Critical System: An IT system or application whose unavailability would cause unacceptable business impact.
  • Failover: The process of switching to a redundant or standby system upon the failure or abnormal termination of the previously active system.
  • Failback: The process of restoring operations to the primary system after it has been repaired or recovered from a disaster.

2. Recovery Objectives (RTO/RPO)

Based on the Business Impact Analysis (BIA) conducted, the following RTO and RPO targets have been established for critical systems. These targets reflect the maximum acceptable downtime and data loss to avoid severe business impact.

2.1 RTO/RPO Targets for Critical Systems

| System/Application | Description | RTO (Time) | RPO (Data Loss) | Priority |

| :--------------------------------- | :--------------------------------------------- | :--------- | :-------------- | :------- |

| CRM System (e.g., Salesforce) | Customer relationship management | 4 hours | 1 hour | Critical |

| ERP System (e.g., SAP) | Enterprise resource planning | 8 hours | 2 hours | Critical |

| Financial Accounting System | General Ledger, Accounts Payable/Receivable | 4 hours | 1 hour | Critical |

| E-commerce Platform | Online sales and customer transactions | 2 hours | 15 minutes | Critical |

| Email & Collaboration (e.g., O365) | Internal/External communication, document sharing | 6 hours | 2 hours | High |

| Database Servers (Core) | Primary data storage for critical applications | 4 hours | 1 hour | Critical |

| File Servers (Shared Drives) | General user files and shared documents | 12 hours | 4 hours | Medium |

| Active Directory Services | User authentication and network services | 6 hours | 2 hours | High |

| DNS Services | Domain Name Resolution | 2 hours | 0 hours | Critical |

Note: RTO and RPO targets are reviewed and updated annually or after significant system changes.

3. Disaster Recovery Team and Roles

A dedicated Disaster Recovery Team is essential for effective plan execution. Each member has specific responsibilities during a disaster.

3.1 DR Team Structure

  • DR Coordinator (Overall Lead): [Name/Role, e.g., CIO/IT Director]
    * Responsibilities: Declare disaster, activate plan, oversee all recovery efforts, make executive decisions, manage communication with senior leadership.
  • Technical Lead (Infrastructure/Systems): [Name/Role, e.g., Senior Systems Administrator]
    * Responsibilities: Lead technical assessment; manage server/network recovery, data restoration, and application deployment.
  • Network & Security Lead: [Name/Role, e.g., Network Engineer/Security Manager]
    * Responsibilities: Restore network connectivity, configure firewalls/VPNs, implement security measures at the recovery site, monitor for threats.
  • Application Lead: [Name/Role, e.g., Senior Application Developer/Manager]
    * Responsibilities: Oversee application-specific recovery, configuration, and testing; liaise with business users.
  • Data Lead: [Name/Role, e.g., Database Administrator/Data Architect]
    * Responsibilities: Manage database recovery and data integrity checks; ensure RPO targets are met.
  • Communications Lead: [Name/Role, e.g., Head of Communications/HR Manager]
    * Responsibilities: Manage internal and external communications, draft and disseminate updates, coordinate with media (if necessary).
  • Logistics & Administration Lead: [Name/Role, e.g., Office Manager/Facilities Manager]
    * Responsibilities: Secure recovery-site resources (power, connectivity), manage supplies, coordinate personnel needs.

3.2 Key Contact Information (Emergency Contacts)

| Role | Name | Primary Phone | Secondary Phone | Email |
| :--------------------------- | :-------- | :------------ | :-------------- | :--------------------- |
| DR Coordinator | [Name] | [Number] | [Number] | [Email] |
| Technical Lead | [Name] | [Number] | [Number] | [Email] |
| Network & Security Lead | [Name] | [Number] | [Number] | [Email] |
| Application Lead | [Name] | [Number] | [Number] | [Email] |
| Data Lead | [Name] | [Number] | [Number] | [Email] |
| Communications Lead | [Name] | [Number] | [Number] | [Email] |
| Logistics & Administration Lead | [Name] | [Number] | [Number] | [Email] |
| **External Contacts** | | | | |
| Cloud Provider Support | [Provider] | [Number] | | |
| Internet Service Provider | [ISP] | [Number] | | |
| Hardware Vendor Support | [Vendor] | [Number] | | |
| Emergency Services (Local) | 911 / [Local] | | | |

Note: A full, regularly updated contact list will be maintained as an appendix to this plan.

4. Backup and Data Recovery Strategies

Robust backup strategies are fundamental to meeting RPO targets and ensuring data availability.

4.1 Data Classification

Data is classified based on criticality, sensitivity, and regulatory requirements (e.g., Public, Internal, Confidential, Restricted). This classification dictates backup frequency, retention, and encryption levels.

4.2 Backup Types and Frequency

  • Critical Systems (CRM, ERP, Financial, E-commerce Databases):

* Full Backup: Weekly (e.g., Sunday night).

* Differential Backup: Daily (Monday-Saturday nights).

* Transaction Log Backups: Every 15 minutes (for databases requiring near-zero RPO).

  • High Priority Systems (Email, Active Directory, Core Applications):

* Full Backup: Weekly.

* Incremental Backup: Daily.

  • Medium Priority Systems (File Servers, Non-critical Applications):

* Full Backup: Bi-weekly.

* Incremental Backup: Daily.
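The critical-system schedule above can be expressed as a small helper that a backup orchestration script might consult each night. A sketch, assuming Python's weekday convention (0 = Monday through 6 = Sunday); the function name is illustrative:

```python
def backup_type(weekday: int) -> str:
    """Backup type for a critical system on a given night, per the
    schedule above: full backup on Sunday night, differential backup
    Monday-Saturday. (Transaction log backups run every 15 minutes
    regardless of the nightly job type.)"""
    if not 0 <= weekday <= 6:
        raise ValueError("weekday must be 0 (Monday) through 6 (Sunday)")
    return "full" if weekday == 6 else "differential"
```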

4.3 Backup Retention Policies

  • Critical Data:
    * Daily backups: 30 days.
    * Weekly full backups: 90 days.
    * Monthly full backups: 1 year.
    * Yearly full backups: 7 years (or as per regulatory requirements).
  • High Priority Data:
    * Daily backups: 14 days.
    * Weekly full backups: 60 days.
    * Monthly full backups: 6 months.
  • Medium Priority Data:
    * Daily backups: 7 days.
    * Weekly full backups: 30 days.
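A pruning job can derive expiry directly from the policy rather than hard-coding dates. A sketch for the critical-data tier (the `RETENTION` mapping mirrors the figures above; the function and key names are illustrative):

```python
from datetime import datetime, timedelta

# Critical-data retention windows from the policy above.
RETENTION = {
    "daily":   timedelta(days=30),
    "weekly":  timedelta(days=90),
    "monthly": timedelta(days=365),
    "yearly":  timedelta(days=365 * 7),
}

def expired(backup_kind: str, created: datetime, now: datetime) -> bool:
    """True if a backup of the given kind has outlived its retention
    window and is eligible for deletion by the pruning job."""
    return now - created > RETENTION[backup_kind]
```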

4.4 Backup Storage Locations

  • On-site Storage: Short-term backups for immediate recovery of minor incidents (e.g., local disk arrays, NAS).
  • Off-site Storage: Encrypted backups stored at a secure, geographically separate location. This is the primary source for disaster recovery. [Specify location, e.g., Iron Mountain, Cloud Provider Data Center in a different region].
  • Cloud Storage: Leveraging [e.g., AWS S3, Azure Blob Storage, Google Cloud Storage] for immutable, highly durable, and geographically redundant storage of critical backups.

4.5 Backup Verification Procedures

  • Daily Monitoring: Automated alerts for backup job failures.
  • Weekly Verification: Randomly select and restore a small subset of files/databases to a test environment to confirm integrity and recoverability.
  • Quarterly Full Restore Test: Perform a full system restore for a critical application to a segregated test environment.
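Automated integrity checks typically compare a digest recorded at backup time against a fresh read of the backup file. A minimal stdlib sketch (function names are illustrative, not part of any specific backup product):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a backup file in chunks and return its SHA-256 hex digest,
    so that large archives are hashed without loading them into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected_digest: str) -> bool:
    """Compare the file's current digest against the one recorded
    when the backup was taken; a mismatch indicates corruption."""
    return sha256_of(path) == expected_digest
```

Digest verification confirms the backup file is intact; only an actual restore test (as in the weekly and quarterly procedures above) confirms it is recoverable.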

4.6 Data Encryption

All backups, both in transit and at rest, are encrypted using [e.g., AES-256 encryption]. Encryption keys are managed securely and stored separately from the encrypted data.

5. Disaster Detection and Activation Procedures

Swift detection and activation are crucial for minimizing disaster impact.

5.1 Detection Mechanisms

  • Automated Monitoring Systems: [e.g., Nagios, SolarWinds, Datadog] for server health, network performance, application availability, and unusual activity.
  • Security Information and Event Management (SIEM): [e.g., Splunk, ELK Stack] for detecting security incidents, unauthorized access, or malware outbreaks.
  • Environmental Sensors: For temperature, humidity, and power fluctuations in data centers.
  • Vendor Notifications: Alerts from cloud providers or managed service providers.
  • Manual Reporting: Employees reporting widespread service outages or physical damage.
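At bottom, automated monitoring reduces to probes. A basic TCP reachability check in Python (real platforms such as those listed above layer richer health checks, alert routing, and escalation on top of probes like this; the function name is illustrative):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Basic liveness probe: can we open a TCP connection to the
    service within the timeout? Failure triggers an alert upstream."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```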

5.2 Criteria for Disaster Declaration

A disaster is declared if any of the following conditions are met:

  • Unavailability of one or more critical systems exceeding RTO.
  • Significant data loss impacting critical business processes.
  • Physical damage to the primary data center or critical infrastructure making it unusable.
  • Widespread network outage affecting core business operations.
  • Major cybersecurity incident (e.g., ransomware, data breach) requiring full system rebuild.
  • Declaration by emergency services that the primary facility is unsafe.

5.3 Activation Steps

  1. Initial Assessment (0-1 hour):
     * DR Coordinator receives notification of the incident.
     * DR Coordinator, Technical Lead, and Network & Security Lead perform a rapid assessment of the incident's scope and impact.
     * Confirm whether the incident meets the disaster declaration criteria.
  2. Declare Disaster & Activate Team (1-2 hours):
     * DR Coordinator formally declares a disaster.
     * Activate the full DR Team via emergency communication channels (SMS, dedicated conference bridge).
     * Notify senior management.
  3. Convene DR Command Center:
     * Establish a virtual (e.g., Microsoft Teams, Zoom) or physical (off-site location) command center.
     * DR Coordinator officially initiates the DRP.

6. Failover and Recovery Procedures

This section details the step-by-step procedures for recovering IT operations at the designated recovery site.

6.1 Recovery Site Strategy

[Your Organization Name] utilizes a Warm Site strategy for critical applications and data, supplemented by Cloud-based Recovery for specific services.

  • Warm Site: A dedicated facility with pre-installed hardware, networking, and basic infrastructure, requiring data restoration and application configuration. This is located at [Recovery Site Location, e.g., 123 Disaster Recovery Blvd, Anytown, State].
  • Cloud Recovery: Leveraging [e.g., AWS/Azure/GCP] services for rapid provisioning of compute and storage resources, particularly for highly critical, virtualized workloads.

6.2 Phase 1: Damage Assessment & Initial Response (0-2 hours from disaster declaration)

  1. Safety First: Ensure personnel safety. Do not enter unsafe areas.

2.
