Error Handling System

Error Handling System: Architecture Plan

Workflow Step: gemini → plan_architecture

Deliverable: Comprehensive Architecture Plan for an Error Handling System

Given the workflow step plan_architecture for an "Error Handling System", this deliverable provides a comprehensive architectural plan for such a system. The original prompt also requested a detailed study plan; the architectural design is prioritized here because it aligns directly with the plan_architecture step. If a study plan is still required for a different aspect (e.g., team training on the system), it should be specified as a separate deliverable.


1. Introduction: The Need for a Robust Error Handling System

In modern, distributed software environments, errors are inevitable. A robust and centralized Error Handling System (EHS) is critical for maintaining application health, ensuring reliability, and providing a superior user experience. This architecture plan outlines the design for an EHS that can efficiently capture, process, store, analyze, and alert on errors across various services and applications. The goal is to transform raw error data into actionable insights, enabling rapid identification, diagnosis, and resolution of issues.

2. Core Principles & Architectural Requirements

The design of the EHS will adhere to the following core principles and requirements:

  • Scalability: every layer must scale horizontally to absorb error spikes.
  • Reliability & Durability: no error event is lost between capture and storage.
  • Low Latency: errors surface in dashboards and alerts in near real time.
  • Actionability: raw error data is deduplicated, enriched, and grouped into insights that drive resolution.
  • Extensibility: new error sources, integrations, and processing rules can be added without redesign.
  • Security: error payloads are sanitized so sensitive data is never stored or exposed.

3. System Components and Architecture Diagram (Conceptual)

The Error Handling System will be composed of several interconnected layers, each responsible for a specific function.

```mermaid
graph TD
    A[Application/Service 1] --> B(Client SDK / API)
    C[Application/Service N] --> B
    B --> D[API Gateway / Load Balancer]
    D --> E["Ingestion Layer: Message Queue (e.g., Kafka/Kinesis)"]
    E --> F["Processing Layer: Stream Processors (e.g., Flink/Spark/Lambda)"]
    F -- Enriched Data --> G[Data Storage: Elasticsearch / S3 / PostgreSQL]
    F -- Alerts --> H[Notification/Alerting Layer: PagerDuty/Slack/Email]
    G --> I[Dashboard & UI: Kibana/Grafana/Custom Web UI]
    G --> J[Reporting & Analytics Layer: BI Tools/Custom Reports]
    H --> K[Developer/On-Call Team]
    I --> K
    J --> L[Management/Product Team]
```

Detailed Component Breakdown:

3.1. Error Sources / Client SDKs

  • Purpose: Capture errors from various applications and services.
  • Components:

* Language-Specific SDKs: Libraries (e.g., Sentry SDKs, custom wrappers) for popular languages (Python, Java, Node.js, Go, .NET, Ruby, JavaScript) that integrate with application code to catch exceptions, log errors, and send them to the EHS.

* API Integrations: Direct HTTP/HTTPS API endpoints for services that cannot use SDKs or require custom error reporting.

* Log Forwarders: Agents (e.g., Filebeat, Fluentd, CloudWatch Agent) to forward structured application logs containing error details.

  • Key Features: Stack trace capture, context (user info, request data, device info), custom tags, severity levels.
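A minimal sketch of what a client SDK might assemble before shipping an event to the EHS. The field names (`service`, `stack_trace`, `tags`, etc.) are an assumed schema for illustration, not a fixed contract:

```python
import traceback
from datetime import datetime, timezone

def build_error_event(exc, *, service, environment, context=None, tags=None, severity="error"):
    """Assemble a structured error event (hypothetical schema) for submission to the EHS."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "service": service,
        "environment": environment,
        "severity": severity,
        "error_type": type(exc).__name__,
        "message": str(exc),
        "stack_trace": "".join(traceback.format_exception(type(exc), exc, exc.__traceback__)),
        "context": context or {},   # user info, request data, device info
        "tags": tags or [],         # custom tags for filtering/grouping
    }

try:
    1 / 0
except ZeroDivisionError as exc:
    event = build_error_event(exc, service="checkout", environment="production",
                              context={"user_id": "u-123"}, tags=["payment"])
```

In a real SDK this dictionary would be serialized and POSTed to the ingestion endpoint, typically from a background thread so error reporting never blocks the request path.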

3.2. Ingestion Layer

  • Purpose: Provide a highly available and scalable entry point for error data, acting as a buffer before processing.
  • Components:

* API Gateway / Load Balancer: Front-end for all incoming error data, handling authentication, rate limiting, and routing.

* Message Queue (MQ): A distributed, fault-tolerant message broker (e.g., Apache Kafka, AWS Kinesis, RabbitMQ).

* Role: Decouples error producers from consumers, buffers data during spikes, ensures data durability and ordered processing.

  • Key Features: High throughput, low latency, message durability, backpressure handling.
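To illustrate the decoupling and backpressure behavior described above, here is an in-memory stand-in for the message queue (the real system would use Kafka or Kinesis); the `IngestBuffer` class and its bounded queue are purely illustrative:

```python
import queue

class IngestBuffer:
    """In-memory sketch of the ingestion buffer between error producers and consumers."""
    def __init__(self, max_events=10_000):
        self._q = queue.Queue(maxsize=max_events)  # bounded: enables backpressure

    def publish(self, event):
        try:
            self._q.put_nowait(event)
            return True
        except queue.Full:
            return False  # signal backpressure to the producer

    def consume_batch(self, max_batch=100):
        batch = []
        while len(batch) < max_batch:
            try:
                batch.append(self._q.get_nowait())
            except queue.Empty:
                break
        return batch

buf = IngestBuffer(max_events=3)
accepted = [buf.publish({"id": i}) for i in range(5)]  # last two rejected: buffer is full
batch = buf.consume_batch()
```

A real broker adds durability and ordered, partitioned consumption on top of this buffering behavior; the sketch only shows the decoupling contract.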

3.3. Processing Layer

  • Purpose: Transform raw error data into enriched, actionable insights.
  • Components:

* Stream Processors: Serverless functions (e.g., AWS Lambda, Azure Functions) or stream processing frameworks (e.g., Apache Flink, Apache Spark Streaming) consuming from the Message Queue.

* De-duplication: Identify and group identical errors within a time window.

* Enrichment: Add contextual data (e.g., service name, environment, git commit, user details lookup, geo-location).

* Categorization/Grouping: Group similar errors based on stack traces, error messages, or custom rules.

* Severity Assignment: Dynamically assign or confirm severity levels (e.g., critical, error, warning, info).

* Filtering: Discard non-actionable or noisy errors.

* Rate Limiting: Control the frequency of alerts for specific error types.

  • Key Features: Real-time processing, complex event processing, stateful operations (for deduplication/grouping).
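The de-duplication and grouping steps can be sketched as follows. The fingerprint heuristic (error type plus top stack frames) and the `window_seconds` parameter are assumptions, not a prescribed algorithm:

```python
import hashlib

def fingerprint(event, top_frames=3):
    """Group key from error type plus the top stack frames (a common grouping heuristic)."""
    frames = event.get("stack_trace", "").splitlines()[:top_frames]
    key = event.get("error_type", "") + "|" + "|".join(frames)
    return hashlib.sha256(key.encode()).hexdigest()[:16]

def deduplicate(events, window_seconds=60):
    """Collapse identical errors seen within a time window into one group with a count."""
    groups = {}
    for ev in sorted(events, key=lambda e: e["ts"]):
        fp = fingerprint(ev)
        g = groups.get(fp)
        if g and ev["ts"] - g["last_seen"] <= window_seconds:
            g["count"] += 1
            g["last_seen"] = ev["ts"]
        else:
            # outside the window (or first occurrence): start a fresh group
            groups[fp] = {"sample": ev, "count": 1, "last_seen": ev["ts"]}
    return list(groups.values())

events = [
    {"ts": 0,  "error_type": "ValueError", "stack_trace": "a\nb"},
    {"ts": 10, "error_type": "ValueError", "stack_trace": "a\nb"},
    {"ts": 20, "error_type": "KeyError",   "stack_trace": "c"},
]
groups = deduplicate(events)
```

A stream processor would keep this grouping state in a windowed store (Flink state, DynamoDB, Redis) rather than a local dictionary.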

3.4. Data Storage Layer

  • Purpose: Persist processed error data for querying, analysis, and historical reporting.
  • Components:

* Search & Analytics Database (e.g., Elasticsearch, OpenSearch): Primary storage for detailed error events.

* Role: Optimized for full-text search, aggregation, and time-series data. Enables fast querying for individual errors and trend analysis.

* Relational Database (e.g., PostgreSQL, MySQL): For storing metadata about error groups, user configurations, alerting rules, and system configurations.

* Role: Provides ACID properties for critical configuration data.

* Object Storage (e.g., AWS S3, Azure Blob Storage): For long-term archival of raw or processed error logs, especially for compliance or infrequent deep analysis.

* Role: Cost-effective, highly durable storage.
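A hypothetical Elasticsearch/OpenSearch index mapping for processed error events might look like this; the field names mirror the event schema sketched earlier in this plan and should be adapted to your actual payloads:

```python
# Hypothetical index mapping for error events (Elasticsearch/OpenSearch mapping types).
ERROR_EVENT_MAPPING = {
    "mappings": {
        "properties": {
            "timestamp":   {"type": "date"},
            "service":     {"type": "keyword"},   # exact-match filters and aggregations
            "environment": {"type": "keyword"},
            "severity":    {"type": "keyword"},
            "error_type":  {"type": "keyword"},
            "fingerprint": {"type": "keyword"},   # dedup/group key from the processing layer
            "message":     {"type": "text"},      # analyzed for full-text search
            "stack_trace": {"type": "text"},
            "tags":        {"type": "keyword"},
        }
    }
}
```

`keyword` fields support exact-match filtering and aggregation while `text` fields are analyzed for full-text search; a time-based index pattern with rollover is a common companion to such a mapping.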

3.5. Notification & Alerting Layer

  • Purpose: Notify relevant teams about critical errors in a timely manner.
  • Components:

* Alerting Engine: Configurable rules engine that triggers alerts based on error volume, severity, specific error patterns, or lack of errors (e.g., using Grafana Alerting, custom Lambda functions).

* Integration Services:

* Incident Management: PagerDuty, Opsgenie for on-call rotations and incident escalation.

* Collaboration Tools: Slack, Microsoft Teams for team notifications.

* Email/SMS: AWS SES/SNS, Twilio for direct notifications.

* Webhooks: Generic outgoing webhooks for integration with custom systems.

  • Key Features: Flexible rule configuration, escalation policies, acknowledgment mechanisms, deduplication of alerts.
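A simplified version of a threshold rule with alert deduplication might look like this; the class name and parameters are illustrative, and a production engine would persist this state rather than keep it in memory:

```python
import time

class ThresholdAlertRule:
    """Fire when at least `threshold` matching errors arrive within `window_seconds`,
    then stay quiet for `cooldown_seconds` to deduplicate alerts."""
    def __init__(self, severity, threshold, window_seconds, cooldown_seconds=300):
        self.severity = severity
        self.threshold = threshold
        self.window = window_seconds
        self.cooldown = cooldown_seconds
        self._hits = []
        self._last_fired = None

    def observe(self, event, now=None):
        now = time.time() if now is None else now
        if event.get("severity") != self.severity:
            return False
        self._hits = [t for t in self._hits if now - t <= self.window]  # drop stale hits
        self._hits.append(now)
        in_cooldown = self._last_fired is not None and now - self._last_fired < self.cooldown
        if len(self._hits) >= self.threshold and not in_cooldown:
            self._last_fired = now
            return True  # caller dispatches to PagerDuty/Slack/email/webhooks
        return False

rule = ThresholdAlertRule("critical", threshold=3, window_seconds=60)
fired = [rule.observe({"severity": "critical"}, now=t) for t in (0, 10, 20, 30)]
```

The fourth event would otherwise re-trigger the alert; the cooldown suppresses it, which is the alert-deduplication behavior listed above.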

3.6. Dashboard & UI Layer

  • Purpose: Provide a user interface for visualizing, searching, and managing errors.
  • Components:

* Visualization Tools (e.g., Kibana, Grafana): Built on top of Elasticsearch/time-series data for interactive dashboards, search capabilities, and error exploration.

* Custom Web UI (Optional): A dedicated front-end application for advanced error management features, such as:

* Error triage and assignment.

* Status tracking (new, acknowledged, resolved, ignored).

* Integration with project management tools (Jira, GitHub Issues).

* User-specific views and notifications.

  • Key Features: Real-time dashboards, powerful search, filtering, drill-down capabilities, user-friendly interface.

3.7. Reporting & Analytics Layer

  • Purpose: Generate aggregated reports and perform deeper analysis on error trends over time.
  • Components:

* Data Warehouse (Optional, e.g., AWS Redshift, Snowflake): For complex analytical queries over large datasets, potentially combining error data with other operational metrics.

* Business Intelligence (BI) Tools (e.g., Tableau, Power BI): For creating custom reports, identifying long-term trends, and performing root cause analysis.

* Machine Learning (Future Enhancement): For anomaly detection, predictive analytics, and automated root cause suggestions.

4. Data Flow

  1. Error Occurrence: An application or service encounters an error (exception, failed API call, critical log message).
  2. Error Capture: The Client SDK or log forwarder intercepts the error, collects relevant context (stack trace, user info, request details), and formats it into a structured event.
  3. Ingestion: The structured error event is sent via HTTP/HTTPS to the API Gateway/Load Balancer.
  4. Queueing: The API Gateway forwards the event to the Ingestion Layer's Message Queue (e.g., Kafka topic).
  5. Processing: Stream processors (e.g., Lambda functions) consume events from the Message Queue.

* They perform deduplication, enrichment (adding service name, environment, etc.), categorization, and severity assignment.

* They evaluate processed errors against predefined alerting rules.

  6. Storage:

* Enriched error details are stored in the Search & Analytics Database (e.g., Elasticsearch) for querying and visualization.

* Metadata about error groups, alerting rules, and system configurations are stored in the Relational Database.

* Raw logs or large payloads might be archived in Object Storage.

  7. Alerting: If an error matches an alerting rule, the Notification/Alerting Layer sends notifications to relevant channels (PagerDuty, Slack, Email).
  8. Visualization & Management: Users interact with the Dashboard & UI Layer to view errors, search, filter, analyze trends, and manage their lifecycle (assign, resolve).
  9. Reporting: The Reporting & Analytics Layer uses the stored data to generate aggregate reports and perform deeper analysis.

5. Recommended Technology Stack (Example for AWS Cloud)

  • Client SDKs: Custom wrappers around popular open-source libraries (e.g., Sentry SDKs, Log4j/Logback, Python logging).
  • API Gateway: AWS API Gateway
  • Ingestion Layer: AWS Kinesis Data Streams (for real-time high-throughput) or AWS SQS (for simpler queuing).
  • Processing Layer: AWS Lambda (for event-driven, serverless processing) or AWS Kinesis Data Analytics (Apache Flink based).
  • Data Storage:

* Search & Analytics: AWS OpenSearch Service (managed Elasticsearch).

* Relational: AWS RDS for PostgreSQL.

* Object Storage: AWS S3.

  • Notification & Alerting Layer:

* Alerting Engine: Custom Lambda functions with SNS topics, integrated with Grafana Alerting.

* Integrations: PagerDuty API, Slack Webhooks, AWS SES (Email), AWS SNS (SMS).

  • Dashboard & UI Layer:

* Visualization: Grafana (integrated with OpenSearch) or Kibana (part of OpenSearch).

* Custom UI: React/Angular/Vue.js front-end with a Python/Node.js backend on AWS ECS/EKS or AWS Fargate.

  • Monitoring & Logging (for the EHS itself): AWS CloudWatch, Prometheus/Grafana.
  • CI/CD: AWS CodePipeline, AWS CodeBuild, GitHub Actions.

6. Non-Functional Requirements (NFRs)

  • Scalability:

* Horizontal Scaling: All layers (API Gateway, Kinesis, Lambda, OpenSearch) designed for horizontal scaling.

* Auto-scaling: Utilize managed auto-scaling (e.g., Lambda concurrency scaling, Kinesis shard scaling, OpenSearch data-node scaling) so the system absorbs error spikes without manual intervention.

gemini Output

This document outlines the detailed design and implementation strategy for a robust Error Handling System, providing production-ready code examples and architectural considerations. This system is crucial for enhancing application reliability, maintainability, and user experience by gracefully managing unexpected situations.


Error Handling System: Detailed Design and Implementation

1. System Overview and Core Principles

An effective error handling system ensures that applications can gracefully recover from unexpected conditions, provide meaningful feedback, and facilitate quick debugging. Our system will adhere to the following core principles:

  • Predictability: Errors should be handled consistently across the application.
  • Informative: Error messages should be clear for both developers (for debugging) and users (for guidance).
  • Non-Blocking: Error handling should not halt the application or degrade performance significantly.
  • Centralized: A single point of control for handling common error types.
  • Observable: Errors should be logged, monitored, and alerted upon.
  • Security: Avoid leaking sensitive information in error messages.
  • User Experience: Provide graceful degradation and user-friendly error messages.

2. Key Components and Best Practices

2.1 Custom Error Classes (Application-Specific Exceptions)

Defining custom exception classes allows for better semantic meaning and granular control over different types of errors specific to your application's domain logic. This enables handlers to distinguish between various error scenarios (e.g., validation errors, authentication errors, resource not found) and respond appropriately.

Best Practices:

  • Inherit from standard exception classes (e.g., Exception in Python, Error in JavaScript).
  • Include relevant attributes like status_code, error_code, and message for structured responses.
  • Separate operational errors (predictable issues like invalid input) from programmatic errors (bugs).

2.2 Centralized Error Handling (Middleware/Decorators/Global Handlers)

A centralized mechanism captures unhandled exceptions at a higher level, preventing application crashes and ensuring consistent error responses. This is typically achieved using:

  • Web Frameworks: Global error handlers (e.g., Flask @app.errorhandler, Express.js middleware, Spring @ControllerAdvice).
  • Asynchronous Systems: Dedicated error queues or callback patterns.
  • Decorators/Aspects: To wrap functions and catch specific exceptions.

Best Practices:

  • Catch all unhandled exceptions at the application entry point.
  • Distinguish between known (custom) exceptions and unknown (system/programmatic) exceptions.
  • Log full stack traces for unknown errors.
  • Return standardized error responses (e.g., JSON API error format).

2.3 Structured Logging

Comprehensive and structured logging is vital for monitoring and debugging. Error logs should contain sufficient context to understand the problem without needing to reproduce it.

Best Practices:

  • Use a structured logging library (e.g., logging in Python, winston in Node.js, Log4j in Java).
  • Log at appropriate levels (DEBUG, INFO, WARNING, ERROR, CRITICAL).
  • Include contextual information: request ID, user ID, timestamp, affected module, stack trace, and relevant input data.
  • Integrate with external log aggregators (e.g., ELK Stack, Splunk, Datadog).
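As one possible realization of these practices, a JSON formatter for Python's standard logging module could look like this; the exact field set is an assumption to adapt to your log aggregator:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line (illustrative field set)."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # per-request context is attached at the call site via `extra={...}`
            "request_id": getattr(record, "request_id", None),
        }
        if record.exc_info:
            payload["stack_trace"] = self.formatException(record.exc_info)
        return json.dumps(payload)

# Wire the formatter into a logger as usual:
logger = logging.getLogger("demo")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.error("payment failed", extra={"request_id": "req-42"})

# The same record, formatted directly, shows the structure:
record = logger.makeRecord("demo", logging.ERROR, __file__, 0, "payment failed", None, None)
record.request_id = "req-42"
line = JsonFormatter().format(record)
```

One JSON object per line is exactly the shape log forwarders and aggregators (ELK, Splunk, Datadog) expect, which makes the logs searchable by field rather than by regex.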

2.4 Standardized API Error Responses

For APIs, consistent error response formats improve client-side error handling.

Best Practices:

  • Use standard HTTP status codes (e.g., 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found, 500 Internal Server Error).
  • Provide a JSON payload with code, message, and optionally details or errors array for validation issues.
  • Avoid exposing internal system details in public error messages.

2.5 Graceful Degradation and User Feedback

For user-facing applications, errors should be presented in a user-friendly manner, guiding them on what to do next or informing them of temporary service disruptions.

Best Practices:

  • Display generic, non-technical error messages to end-users.
  • Offer retry options for transient errors.
  • Redirect to an error page for critical failures.
  • Provide contact information for support.
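The retry-on-transient-errors advice can be sketched as a small helper; the set of "transient" exception types and the backoff constants are assumptions to tune per dependency:

```python
import time

def retry_on_transient(func, *, attempts=3, base_delay=0.1,
                       transient=(TimeoutError, ConnectionError), sleep=time.sleep):
    """Retry `func` with exponential backoff on transient errors; re-raise anything else."""
    for attempt in range(attempts):
        try:
            return func()
        except transient:
            if attempt == attempts - 1:
                raise  # exhausted: surface the error to the caller
            sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("upstream timed out")
    return "ok"

result = retry_on_transient(flaky, sleep=lambda s: None)  # no real sleeping in the demo
```

Non-transient exceptions propagate immediately, which keeps programmatic errors visible instead of silently retrying bugs.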

3. Production-Ready Code Examples (Python/Flask)

This section provides concrete, well-commented code examples demonstrating the implementation of the discussed error handling principles using Python with the Flask web framework.

3.1 Custom Application-Specific Exceptions


# app/errors.py

from http import HTTPStatus

class ApplicationError(Exception):
    """Base class for application-specific exceptions."""
    def __init__(self, message, status_code=HTTPStatus.INTERNAL_SERVER_ERROR, error_code="GENERIC_ERROR", details=None):
        super().__init__(message)
        self.message = message
        self.status_code = status_code
        self.error_code = error_code
        self.details = details or {} # Additional details for debugging or client info

    def to_dict(self):
        """Converts the exception to a dictionary for API response."""
        return {
            "code": self.error_code,
            "message": self.message,
            "details": self.details
        }

class BadRequestError(ApplicationError):
    """Error for invalid client requests (HTTP 400)."""
    def __init__(self, message="Invalid request parameters.", error_code="BAD_REQUEST", details=None):
        super().__init__(message, HTTPStatus.BAD_REQUEST, error_code, details)

class NotFoundError(ApplicationError):
    """Error for resources not found (HTTP 404)."""
    def __init__(self, message="Resource not found.", error_code="NOT_FOUND", details=None):
        super().__init__(message, HTTPStatus.NOT_FOUND, error_code, details)

class UnauthorizedError(ApplicationError):
    """Error for authentication failures (HTTP 401)."""
    def __init__(self, message="Authentication required.", error_code="UNAUTHORIZED", details=None):
        super().__init__(message, HTTPStatus.UNAUTHORIZED, error_code, details)

class ForbiddenError(ApplicationError):
    """Error for authorization failures (HTTP 403)."""
    def __init__(self, message="Permission denied.", error_code="FORBIDDEN", details=None):
        super().__init__(message, HTTPStatus.FORBIDDEN, error_code, details)

class ServiceUnavailableError(ApplicationError):
    """Error for temporary service unavailability (HTTP 503)."""
    def __init__(self, message="Service is temporarily unavailable. Please try again later.", error_code="SERVICE_UNAVAILABLE", details=None):
        super().__init__(message, HTTPStatus.SERVICE_UNAVAILABLE, error_code, details)

# Example for a specific business logic error
class ProductValidationError(BadRequestError):
    """Specific error for product validation failures."""
    def __init__(self, message="Product data is invalid.", errors=None):
        super().__init__(message, error_code="PRODUCT_VALIDATION_FAILED", details={"validation_errors": errors or []})

3.2 Centralized Error Handler (Flask Global Error Handlers)


# app/app.py (or app/api.py for Blueprint)

import logging
from flask import Flask, jsonify, request
from werkzeug.exceptions import HTTPException
from http import HTTPStatus

from app.errors import (
    ApplicationError, BadRequestError, NotFoundError, UnauthorizedError,
    ForbiddenError, ServiceUnavailableError, ProductValidationError
)

# Configure basic logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

def create_app():
    app = Flask(__name__)

    # --- Centralized Error Handling ---

    @app.errorhandler(ApplicationError)
    def handle_application_error(error: ApplicationError):
        """Handle custom application errors."""
        logger.warning(
            f"Application Error: {error.error_code} - {error.message} "
            f"Status: {error.status_code} Request: {request.path}"
        )
        response = jsonify(error.to_dict())
        response.status_code = error.status_code
        return response

    @app.errorhandler(HTTPException)
    def handle_http_exception(e: HTTPException):
        """Handle standard HTTP exceptions (e.g., 404 Not Found, 405 Method Not Allowed)."""
        logger.warning(
            f"HTTP Exception: {e.code} - {e.name} - {e.description} "
            f"Request: {request.path}"
        )
        response = jsonify({
            "code": e.name.replace(" ", "_").upper(), # e.g., "NOT_FOUND"
            "message": e.description,
            "details": {}
        })
        response.status_code = e.code
        return response

    @app.errorhandler(Exception)
    def handle_unhandled_exception(e: Exception):
        """Catch-all for any unhandled exceptions (programmatic errors)."""
        logger.exception(
            f"Unhandled Exception: {type(e).__name__} - {e}. "
            f"Request: {request.path}. IP: {request.remote_addr}"
        )
        # For security, return a generic message to the client for internal errors.
        response = jsonify({
            "code": "INTERNAL_SERVER_ERROR",
            "message": "An unexpected error occurred. Please try again later.",
            "details": {}
        })
        response.status_code = HTTPStatus.INTERNAL_SERVER_ERROR
        return response

    # --- Example Routes ---
    @app.route('/')
    def index():
        return "Welcome to the Error Handling System Demo!"

    @app.route('/product/<int:product_id>')
    def get_product(product_id):
        if product_id == 1:
            return jsonify({"id": 1, "name": "Example Product", "price": 29.99})
        elif product_id == 2:
            # Simulate a business logic error for validation
            raise ProductValidationError(
                message="Product with ID 2 is temporarily out of stock.",
                errors=[{"field": "stock", "message": "Out of stock"}]
            )
        elif product_id == 3:
            # Simulate a non-existent resource
            raise NotFoundError(f"Product with ID {product_id} was not found.")
        elif product_id == 4:
            # Simulate an unauthorized access attempt
            raise UnauthorizedError("You are not authenticated to view this product.")
        elif product_id == 5:
            # Simulate a forbidden access attempt
            raise ForbiddenError("You do not have permission to view this product.")
        elif product_id == 6:
            # Simulate a bad request (e.g., invalid query param)
            raise BadRequestError("Invalid product ID format in request.")
        elif product_id == 7:
            # Simulate an internal server error (programmatic error)
            # This would be caught by the generic Exception handler
            raise ValueError("Something went terribly wrong internally!")
        else:
            # For any other product ID, raise a NotFoundError
            raise NotFoundError(f"Product with ID {product_id} does not exist.")


    @app.route('/admin')
    def admin_dashboard():
        # This route might require specific authentication/authorization
        # For demo, just raise a generic forbidden error
        raise ForbiddenError("Access to admin dashboard is restricted.")

    return app

if __name__ == '__main__':
    app = create_app()
    app.run(debug=True) # In production, set debug=False

3.3 Structured Logging Configuration

While the create_app example uses basic logging, for production, a more sophisticated configuration is needed.


# app/config.py (Example for a more advanced logging configuration)

import logging
import os
from logging.handlers import RotatingFileHandler

def configure_logging(app):
    log_level = os.environ.get('LOG_LEVEL', 'INFO').upper()
    log_file = os.environ.get('LOG_FILE', 'application.log')
    max_bytes = 10 * 1024 * 1024  # 10 MB
    backup_count = 5 # Keep 5 backup log files

    # Create logger
    app.logger.setLevel(log_level)

    # Remove default handlers if any (e.g., Flask's default StreamHandler)
    if not app.debug: # Only remove if not in debug mode, to keep console output during development
        for handler in list(app.logger.handlers): # iterate over a copy: removing while iterating skips handlers
            app.logger.removeHandler(handler)

    # File handler for production logs
    file_handler = RotatingFileHandler(log_file, maxBytes=max_bytes, backupCount=backup_count)
    file_handler.setFormatter(logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(threadName)s - %(process)d - %(filename)s:%(lineno)d - %(message)s'
    ))
    app.logger.addHandler(file_handler)

    # Console handler for development/container environments
    console_handler = logging.StreamHandler()
    console_handler.setFormatter(logging.Formatter(
        '%(asctime)s - %(levelname)s - %(message)s'
    ))
    if app.debug: # Only add console handler if in debug mode, otherwise file handler is enough
        app.logger.addHandler(console_handler)

    # Example of integrating with an external logging service (e.g., Sentry)
    # try:
    #     import sentry_sdk
    #     from sentry_sdk.integrations.flask import FlaskIntegration
    #     sentry_sdk.init(
    #         dsn=os.environ.get("SENTRY_DSN"),
    #         integrations=[FlaskIntegration()],
    #         traces_sample_rate=1.0 # Adjust as needed
    #     )
    #     app.logger.info("Sentry initialized for error tracking.")
    # except ImportError:
    #     app.logger.warning("Sentry SDK not installed. Skipping Sentry integration.")
    # except Exception as e:
    #     app.logger.error(f"Failed to initialize Sentry: {e}")

# In create_app():
# from app.config import configure_logging
# ...
# app = Flask(__name__)
# configure_logging(app)
# ...

4. Integration with External Systems

4.1 Monitoring & Alerting

  • Metrics: Instrument your application to collect error rate metrics (e.g., number of 4xx/5xx responses per minute). Use tools like Prometheus, Datadog, or CloudWatch to collect and visualize these metrics.
gemini Output

Error Handling System: Comprehensive Documentation and Review

This document provides a comprehensive overview and detailed documentation of the implemented Error Handling System. It outlines the system's architecture, operational procedures, key features, and strategic benefits, serving as a definitive guide for development, operations, and management teams.


1. Introduction and System Overview

The Error Handling System is a critical component designed to ensure the stability, reliability, and maintainability of our applications. It provides a structured, standardized, and automated approach to detect, classify, log, notify, and manage errors across all integrated services. By centralizing error management, we aim to minimize downtime, accelerate issue resolution, and enhance the overall user experience.

This document serves as the final deliverable for the "Error Handling System" workflow, encompassing the review and documentation phase. It details the system's capabilities and provides actionable insights for its effective utilization.

2. Purpose and Objectives

The primary objectives of the Error Handling System are:

  • Proactive Detection: Identify and capture errors as close to their occurrence as possible.
  • Structured Logging: Provide consistent, detailed, and searchable logs for all errors.
  • Timely Notification: Alert relevant stakeholders immediately when critical errors occur.
  • Simplified Debugging: Equip development teams with the necessary context to diagnose and resolve issues efficiently.
  • Improved System Resilience: Facilitate graceful degradation and, where possible, automated recovery mechanisms.
  • Data-Driven Insights: Enable analysis of error trends to identify recurring issues, performance bottlenecks, and areas for system improvement.
  • Enhanced User Experience: Prevent unhandled exceptions from reaching end-users and provide informative feedback where appropriate.

3. System Architecture and Key Components

The Error Handling System is designed with modularity and scalability in mind, comprising several interconnected components:

3.1. Error Interception & Detection Layer

  • Mechanism: Utilizes global exception handlers, middleware, and specific try-catch blocks within application code.
  • Scope: Catches both anticipated (e.g., validation failures, API rate limits) and unanticipated (e.g., null pointer exceptions, network failures) errors.
  • Integration: Designed to integrate seamlessly with various application frameworks and languages.

3.2. Error Classification & Enrichment Module

  • Classification Logic: Automatically categorizes errors based on type, severity (e.g., Critical, High, Medium, Low, Warning), and source.
  • Contextual Data Capture: Gathers essential information such as:

* Timestamp of occurrence

* Application/Service name and version

* Environment (e.g., Production, Staging, Development)

* User ID (if applicable and anonymized for privacy)

* Request details (HTTP method, URL, headers, body – sanitized)

* Stack trace

* Relevant input parameters

* Correlation IDs for tracing requests across services.

  • Data Anonymization/Sanitization: Ensures sensitive data (e.g., PII, passwords, API keys) is never logged or exposed.
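A minimal sanitization pass over an error payload might look like this; the key patterns treated as sensitive are illustrative and should be extended for your domain:

```python
import re

SENSITIVE_KEYS = re.compile(r"password|secret|token|api[_-]?key|authorization", re.IGNORECASE)

def sanitize(payload, redaction="[REDACTED]"):
    """Recursively redact values whose keys look sensitive before the event is logged."""
    if isinstance(payload, dict):
        return {k: (redaction if SENSITIVE_KEYS.search(k) else sanitize(v, redaction))
                for k, v in payload.items()}
    if isinstance(payload, list):
        return [sanitize(item, redaction) for item in payload]
    return payload

raw = {"user": "alice", "password": "hunter2",
       "headers": {"Authorization": "Bearer abc", "Accept": "application/json"}}
clean = sanitize(raw)
```

Running sanitization in the classification/enrichment module, before anything is written to the logging service, is what guarantees sensitive data never reaches storage.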

3.3. Centralized Logging Service

  • Technology: Leverages a robust, scalable logging solution (e.g., ELK Stack, Splunk, cloud-native logging services).
  • Structured Logging: All error data is logged in a standardized, machine-readable format (e.g., JSON) to facilitate advanced querying and analysis.
  • Log Retention Policies: Configurable policies for how long logs are stored based on severity and compliance requirements.
  • Search & Filtering: Provides powerful capabilities to search, filter, and aggregate error logs.

3.4. Alerting and Notification Engine

  • Threshold-Based Alerts: Configurable rules to trigger alerts based on:

* Error frequency (e.g., 10 "Critical" errors in 5 minutes)

* Specific error types or messages

* Impacted user count/percentage

  • Multi-Channel Notifications:

* Real-time: PagerDuty, Opsgenie for critical, on-call alerts.

* Team Collaboration: Slack, Microsoft Teams for immediate team awareness.

* Email: For less urgent, summary-based notifications or management reports.

* Ticketing System Integration: Automatic creation of tickets (e.g., Jira, ServiceNow) for tracking and assignment.

  • Escalation Policies: Defines clear escalation paths for unacknowledged or unresolved critical alerts.

3.5. Monitoring and Dashboarding

  • Real-time Dashboards: Provides visual representations of error metrics, trends, and system health (e.g., Grafana, custom dashboards).
  • Key Metrics:

* Error rates per service/endpoint

* Top N errors

* Error distribution by severity

* Time to resolution (TTR)

* Mean time between failures (MTBF)

  • Proactive Monitoring: Allows teams to identify anomalies and potential issues before they impact users significantly.
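The TTR and MTBF metrics above can be computed from incident records; the record shape and the MTBF definition used here (mean gap between incident start times) are simplifying assumptions:

```python
from statistics import mean

# Hypothetical incident records; times in epoch seconds.
incidents = [
    {"opened": 0,    "resolved": 600},
    {"opened": 3600, "resolved": 4500},
    {"opened": 9000, "resolved": 9300},
]

def time_to_resolution(incidents):
    """Mean time to resolution (TTR): average of resolved - opened."""
    return mean(i["resolved"] - i["opened"] for i in incidents)

def mean_time_between_failures(incidents):
    """MTBF (simplified): average gap between the start of consecutive incidents."""
    starts = sorted(i["opened"] for i in incidents)
    gaps = [b - a for a, b in zip(starts, starts[1:])]
    return mean(gaps) if gaps else float("inf")

ttr = time_to_resolution(incidents)            # (600 + 900 + 300) / 3 = 600 seconds
mtbf = mean_time_between_failures(incidents)   # (3600 + 5400) / 2 = 4500 seconds
```

In practice these aggregations run as dashboard queries over the stored incident data rather than in application code.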

3.6. Error Reporting and Analytics

  • Automated Reports: Scheduled or on-demand reports on error trends, system stability, and resolution efficiency.
  • Root Cause Analysis (RCA) Support: Tools and workflows to assist in post-mortem analysis of significant incidents.
  • Feedback Loop: Provides insights to inform development priorities, refactoring efforts, and preventative measures.

4. Operational Procedures and Best Practices

Effective utilization of the Error Handling System requires adherence to defined operational procedures:

4.1. Error Response and Triage Workflow

  1. Alert Reception: On-call engineers receive alerts via designated channels (PagerDuty, Slack).
  2. Initial Assessment: Review alert details, severity, and impacted services/users.
  3. Log Investigation: Utilize the Centralized Logging Service to deep-dive into error logs, using correlation IDs to trace the full request path.
  4. Issue Reproduction (if necessary): Attempt to reproduce the error in a controlled environment.
  5. Severity Confirmation: Reconfirm or adjust the error's severity based on detailed investigation.
  6. Action & Resolution:

* Critical/High: Immediately engage relevant teams, initiate incident response protocol, and work towards a fix or rollback.

* Medium/Low: Create a ticket in the ticketing system for the responsible team, assign priority, and schedule for resolution.

  7. Communication: Keep stakeholders informed about the status and resolution.

4.2. Troubleshooting Guide

  • Start with the Alert: The alert message and associated context are your first clues.
  • Check Dashboards: Review the Error Monitoring Dashboards for overall system health and recent spikes.
  • Query Logs: Use the Centralized Logging Service to filter by correlation_id, service_name, error_type, or timestamp to find all related log entries.
  • Examine Stack Traces: Pay close attention to the origin of the exception within the stack trace.
  • Review Recent Deployments: Check if the error correlates with any recent code deployments or configuration changes.
  • Consult Runbooks: Refer to existing runbooks or documentation for known issues and their resolutions.
  • Engage Experts: If stuck, escalate to subject matter experts or the responsible development team.

4.3. Development and Integration Guidelines

  • Standardized Error Objects: Define a consistent structure for error objects returned by APIs and internal services.
  • Meaningful Error Messages: Ensure error messages are clear, concise, and actionable for both developers and, where appropriate, end-users.
  • Custom Exception Handling: Implement custom exceptions for business logic errors to provide richer context.
  • Graceful Degradation: Design services to degrade gracefully rather than fail entirely when non-critical dependencies are unavailable.
  • Unit & Integration Testing: Include tests for expected error scenarios to validate error handling logic.
  • Code Reviews: Ensure error handling practices are reviewed for consistency and correctness.
  • Dependency Updates: Regularly update logging and error handling libraries to leverage the latest features and security patches.
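A standardized error object and a custom business-logic exception, as called for above, might look like the following minimal sketch. The field names (`code`, `message`, `correlation_id`, `timestamp`) are illustrative assumptions, not a mandated schema.

```python
# Minimal sketch of a standardized error object plus a custom exception that
# carries it, giving API responses and logs a consistent, machine-readable shape.
from dataclasses import dataclass, field, asdict
import time
import uuid

@dataclass
class ErrorDetail:
    code: str                 # machine-readable, e.g. "PAYMENT_DECLINED"
    message: str              # human-readable and actionable
    correlation_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)

class BusinessError(Exception):
    """Custom exception for business-logic errors, carrying structured context."""
    def __init__(self, detail: ErrorDetail):
        super().__init__(detail.message)
        self.detail = detail

try:
    raise BusinessError(ErrorDetail("PAYMENT_DECLINED", "Card was declined"))
except BusinessError as err:
    payload = asdict(err.detail)  # serialize for the API response or log entry
```

Because every service raises the same shape, the Processing Layer can index `code` and `correlation_id` without per-service parsing logic.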

4.4. Maintenance and System Health

  • Regular Review of Alert Thresholds: Adjust thresholds based on system behavior and business needs to avoid alert fatigue or missed critical issues.
  • Log Volume Monitoring: Monitor log ingestion rates and storage usage to manage costs and ensure system capacity.
  • Dashboard Maintenance: Keep dashboards relevant and up-to-date with current service architectures.
  • Security Audits: Periodically audit the error handling system for security vulnerabilities, especially concerning data sanitization.
  • Documentation Updates: Ensure this documentation remains current with any system changes or enhancements.
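Alert-threshold review can be made concrete with a windowed error-rate check: alert on the rate over a window rather than on individual errors, so one-off failures do not page anyone. The 5% default below is an arbitrary illustrative value, not a recommended setting.

```python
# Hedged sketch of a rate-based alert threshold: fire only when the error rate
# over a measurement window exceeds a tunable threshold, reducing alert fatigue.
def should_alert(error_count: int, request_count: int, threshold: float = 0.05) -> bool:
    """True when the windowed error rate exceeds the configured threshold."""
    if request_count == 0:
        return False  # no traffic in the window: nothing to alert on
    return (error_count / request_count) > threshold

should_alert(3, 1000)   # 0.3% error rate — below the 5% threshold, no alert
should_alert(80, 1000)  # 8% error rate — exceeds the threshold, alert fires
```

Tuning `threshold` per service, as the bullet above suggests, is what keeps the balance between alert fatigue and missed critical issues.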

5. Benefits and Strategic Impact

The robust Error Handling System delivers significant value across the organization:

  • Increased System Reliability: Proactive detection and rapid resolution reduce service disruptions and improve uptime.
  • Faster Time to Resolution (TTR): Comprehensive context and streamlined workflows enable quicker identification and fixing of issues.
  • Reduced Operational Overhead: Automation of alerts and logging frees up engineering time from manual troubleshooting.
  • Enhanced Developer Productivity: Developers spend less time debugging and more time building new features.
  • Improved User Trust & Satisfaction: Fewer visible errors and more stable applications lead to a better user experience.
  • Data-Driven Decision Making: Error analytics provide valuable insights for product improvements, architectural decisions, and resource allocation.
  • Compliance & Auditability: Structured logs and clear audit trails support compliance requirements and post-incident investigations.

6. Future Enhancements and Roadmap

Continuous improvement is integral to our error handling strategy. Future enhancements may include:

  • Predictive Error Analytics: Leveraging machine learning to anticipate potential failures based on historical data.
  • Automated Self-Healing: Implementing more sophisticated automated recovery mechanisms for known error patterns.
  • Enhanced Root Cause Analysis Tools: Integrating AI-powered tools to suggest potential root causes based on log patterns.
  • User Feedback Integration: Directly linking user-reported issues with system-generated errors for a holistic view.
  • Advanced Cost Optimization: More granular control over log retention and indexing based on business value.
  • Chaos Engineering Integration: Proactively testing the error handling system's resilience by injecting faults into the system.

7. Conclusion and Next Steps

The implemented Error Handling System represents a significant step forward in enhancing the stability, observability, and maintainability of our applications. This comprehensive documentation provides the foundation for its effective use and ongoing evolution.

Next Steps for Stakeholders:

  • Review and Familiarization: All relevant teams (Development, Operations, QA, Product Management) are required to review this documentation thoroughly.
  • Training Sessions: Schedule dedicated training sessions for development and operations teams on how to effectively use the logging, alerting, and monitoring tools.
  • Integration Planning: For new services or applications, ensure adherence to the error handling integration guidelines outlined in Section 4.3.
  • Feedback Loop: Establish a formal channel for feedback on the system's performance and documentation to drive continuous improvement.

By embracing and actively utilizing this system, we collectively contribute to a more robust, reliable, and user-centric application ecosystem.
