
API Rate Limiter: Architecture Plan & Implementation Study Guide

This document outlines a comprehensive architecture plan for a robust and scalable API Rate Limiter, followed by a detailed study plan to guide its understanding and implementation. This deliverable is designed to provide a clear technical foundation and a practical learning roadmap for developing a high-performance rate limiting solution.


1. API Rate Limiter: Architecture Plan

1.1 Introduction

An API Rate Limiter is a critical component in modern web services, designed to control the rate at which users or clients can access an API. Its primary goals are:

*   Preventing abuse and denial-of-service attacks.
*   Ensuring fair usage of shared resources across clients.
*   Protecting backend services from overload and maintaining system stability.
*   Controlling infrastructure costs.

1.2 Core Requirements

1.2.1 Functional Requirements

*   Identify clients by IP address, API key, or user ID, and apply configurable limits per client, endpoint, and HTTP method.
*   Reject requests that exceed a limit with HTTP `429 Too Many Requests` and informative rate limit headers.
*   Support dynamic updates to rate limiting rules without service restarts.

1.2.2 Non-Functional Requirements

*   Low latency: the rate limit check must add minimal overhead to every request.
*   Scalability and high availability: counters must stay consistent across many rate limiter instances.
*   Fault tolerance: a data store outage should degrade gracefully (fail open or fail closed, by policy) rather than take down the API.

1.3 High-Level Architecture

The API Rate Limiter will be implemented as a distributed system, likely deployed as an API Gateway plugin, a sidecar proxy, or a dedicated microservice.

**Components:**

1.  **API Gateway/Reverse Proxy:** Intercepts incoming requests before they reach the backend services. This is the ideal location for the rate limiting logic.
2.  **Rate Limiter Logic Module:** The core component that applies rate limiting algorithms, queries the data store, and decides whether to allow or deny a request.
3.  **Distributed Data Store:** A high-performance, in-memory data store (e.g., Redis) to maintain rate limiting counters and timestamps across multiple instances of the Rate Limiter Logic Module.
4.  **Configuration Service:** Manages rate limiting rules and policies, allowing for dynamic updates.
5.  **Monitoring & Logging:** Collects metrics on throttled requests, latency, and system health.

1.4 Detailed Component Design

1.4.1 Request Interception / API Gateway Integration

*   **Placement:** The rate limiter should be positioned at the edge of the system, typically within an API Gateway (e.g., Nginx, Envoy, Kong, AWS API Gateway, Azure API Management) or as a dedicated proxy.
*   **Identification:** For each incoming request, identify the client based on:
    *   **IP Address:** Source IP of the request.
    *   **API Key/Token:** Extracted from headers (e.g., `Authorization`, `X-API-Key`).
    *   **User ID:** Extracted from authentication tokens (e.g., JWT claims).
*   **Metadata Extraction:** Extract relevant request metadata:
    *   Endpoint path (`/api/v1/users`)
    *   HTTP Method (`GET`, `POST`)
    *   Custom headers
*   **Pre-Processing:** Normalize identifiers (e.g., hash IP addresses, validate API keys).
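The normalization step above can be sketched in plain Python. This is a hedged illustration (helper names are hypothetical): hashing keeps raw IP addresses out of the data store, and the key format matches the scheme used in 1.4.2.

```python
import hashlib

def normalize_client_id(ip_address: str) -> str:
    """Hash the source IP so raw addresses are not stored in the data store."""
    digest = hashlib.sha256(ip_address.encode("utf-8")).hexdigest()
    return digest[:16]  # a short prefix is enough to distinguish clients

def build_rate_limit_key(client_id: str, path: str, method: str) -> str:
    """Combine the normalized identifier with request metadata into one key."""
    return f"ip:{client_id}:endpoint:{path}:method:{method}"

key = build_rate_limit_key(normalize_client_id("192.168.1.1"), "/api/v1/users", "GET")
```

The same helper would accept an API key or user ID in place of the IP; only the key prefix would change.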

1.4.2 Rate Limiting Logic Module

This module is responsible for applying the chosen algorithm and interacting with the data store.

*   **Key Generation:** Based on the policy, generate a unique key for the data store (e.g., `ip:<ip_address>:endpoint:<path>:method:<method>`, `user:<user_id>`).
*   **Algorithm Selection:**
    *   **Fixed Window:** Simplest. Track requests within a fixed time window.
        *   *Pros:* Easy to implement, low memory.
        *   *Cons:* "Burst problem" at window boundaries.
    *   **Sliding Window Log:** Stores timestamps of all requests.
        *   *Pros:* Most accurate.
        *   *Cons:* High memory usage, performance degrades with high limits.
    *   **Sliding Window Counter:** Approximation. Combines current window count with a weighted average of the previous window.
        *   *Pros:* Good balance of accuracy and memory efficiency.
        *   *Cons:* Still an approximation.
    *   **Token Bucket:** Client consumes "tokens" for each request. Tokens are refilled at a fixed rate.
        *   *Pros:* Allows bursts up to bucket capacity, smooths out traffic.
        *   *Cons:* Slightly more complex to implement.
    *   **Leaky Bucket:** Requests are added to a queue (bucket) and processed at a fixed rate.
        *   *Pros:* Smooths out traffic, effective for queueing.
        *   *Cons:* Introduces latency for queued requests.
*   **Atomic Operations:** Crucial for distributed counters in the data store (e.g., Redis `INCR`, `EXPIRE`, Lua scripting for complex logic).
*   **Decision Logic:** Based on the algorithm's outcome, return `ALLOW` or `DENY`.
*   **Header Generation:** For `DENY` decisions, generate `429 Too Many Requests` status and rate limit headers.
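As an illustration of the Token Bucket algorithm listed above, here is a single-process sketch (illustrative only; in the distributed design the token count and last-refill timestamp would live in Redis behind an atomic Lua script):

```python
import time

class TokenBucket:
    """Token Bucket sketch: tokens refill at a fixed rate; each request
    consumes one token; bursts up to `capacity` are allowed."""

    def __init__(self, capacity, refill_rate, now=None):
        self.capacity = capacity            # maximum burst size
        self.refill_rate = refill_rate      # tokens added per second
        self.tokens = float(capacity)       # bucket starts full
        self.last_refill = time.monotonic() if now is None else now

    def allow_request(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 5-request bursts, 1 request/second sustained rate:
bucket = TokenBucket(capacity=5, refill_rate=1.0, now=100.0)
```

With these parameters, five requests at the same instant are allowed (the burst), the sixth is rejected, and one more is admitted for each second that passes.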

1.4.3 Distributed Data Store (e.g., Redis)

*   **Purpose:** Store and retrieve rate limiting counters, timestamps, and bucket states across all instances of the Rate Limiter Logic Module.
*   **Data Structure Examples:**
    *   **Fixed Window/Sliding Window Counter:** Hash maps or simple key-value pairs (`key:count`, `key:timestamp`). Redis `INCR` and `EXPIRE` commands are ideal.
    *   **Sliding Window Log:** Redis `ZSET` (sorted set) to store timestamps, allowing efficient range queries and removal of old entries.
    *   **Token Bucket/Leaky Bucket:** Redis `HSET` to store `tokens_available` and `last_refill_time` for a given client key. Lua scripts can implement the logic atomically.
*   **Persistence:** Redis can be configured for persistence (RDB/AOF) for durability, though for rate limiting, an in-memory setup with replication is often sufficient as temporary loss of counts is acceptable for short periods.
*   **High Availability:** Redis Cluster or Sentinel for failover and sharding.
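The ZSET pattern for the Sliding Window Log stores one timestamp per request: prune old entries, count what remains, then add the new entry. A plain-Python emulation of that logic (in Redis this would be `ZREMRANGEBYSCORE`, `ZCARD`, and `ZADD` in a pipeline):

```python
import bisect

class SlidingWindowLog:
    """In-memory stand-in for the Redis ZSET pattern: one score (timestamp)
    per request; old entries are pruned before counting."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = []  # kept sorted, like ZSET scores

    def allow_request(self, now):
        # ZREMRANGEBYSCORE key -inf (now - window): drop expired entries
        cutoff = now - self.window
        del self.timestamps[:bisect.bisect_right(self.timestamps, cutoff)]
        # ZCARD key: reject if the window is already at the limit
        if len(self.timestamps) >= self.limit:
            return False
        # ZADD key now now: record this request
        bisect.insort(self.timestamps, now)
        return True

log = SlidingWindowLog(limit=3, window_seconds=60.0)
```

This makes the memory trade-off concrete: every allowed request leaves an entry in the structure for a full window.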

1.4.4 Configuration Management

*   **Source:** Configuration can be stored in a centralized configuration service (e.g., Consul, etcd, AWS AppConfig), a database, or YAML/JSON files.
*   **Schema:** Define a clear schema for rate limiting rules:
    
*   **Dynamic Updates:** The Rate Limiter Logic Module should be able to fetch and apply configuration updates without requiring restarts. This can be achieved by polling the configuration service or receiving push notifications.
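The document does not prescribe a concrete schema; as an illustration, a rule definition in YAML might look like this (all field names are hypothetical):

```yaml
rules:
  - name: login_attempts
    match:
      endpoint: /api/v1/login
      method: POST
    identify_by: ip          # ip | api_key | user_id
    algorithm: fixed_window  # fixed_window | sliding_window | token_bucket
    limit: 5
    window_seconds: 60
  - name: default_read
    match:
      endpoint: /api/v1/*
      method: GET
    identify_by: api_key
    algorithm: token_bucket
    limit: 100
    window_seconds: 60
```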

1.4.5 Monitoring & Alerting

  • Metrics:

* Total requests processed.

* Throttled requests (by policy, by client ID).

* Latency of rate limiter checks.

* Data store performance (read/write latency, connections).

* Error rates.

  • Tools: Prometheus for metrics collection, Grafana for visualization, Alertmanager for alerts.
  • Logs: Detailed logs for throttled requests, configuration changes, and errors. Use a centralized logging solution (e.g., ELK stack, Splunk, Datadog).
  • Alerts: Configure alerts for:

* High throttling rates for legitimate traffic.

* Rate limiter service errors or latency spikes.

* Data store connectivity or performance issues.

1.5 Deployment Considerations

  • Scalability: Deploy multiple instances of the Rate Limiter Logic Module behind a load balancer. The distributed data store (e.g., Redis Cluster) handles state synchronization.
  • High Availability: Use redundant instances for all components. Implement health checks and automatic failover mechanisms.
  • Deployment Strategy:

* Sidecar: Deploy as a sidecar proxy alongside each backend service instance (e.g., using Envoy and Istio in Kubernetes).

* API Gateway Plugin: Integrate directly into an existing API Gateway (e.g., Nginx Lua module, Kong plugin).

* Dedicated Microservice: A standalone service that backend APIs call *before* processing requests (introduces an extra hop).

* Library: Embed the logic directly into application code (least recommended for distributed systems due to state management complexity).

  • Containerization: Use Docker and Kubernetes for easy deployment, scaling, and management.

1.6 API Design / Integration Points

  • HTTP Headers: The standard way to communicate rate limit information.

* X-RateLimit-Limit: The maximum number of requests allowed in the current window.

* X-RateLimit-Remaining: The number of requests remaining in the current window.

* X-RateLimit-Reset: The UTC epoch seconds when the current rate limit window resets.

* Retry-After: For 429 responses, indicates how long the client should wait before making another request.

  • Error Response: For throttled requests, return HTTP 429 Too Many Requests with a clear message and Retry-After header.
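Building on the header list above, a small response-header helper might look like this (a sketch; `reset_epoch` is the UTC epoch second when the current window resets):

```python
def rate_limit_headers(limit, remaining, reset_epoch, retry_after=None):
    """Assemble the standard rate limit headers for an HTTP response."""
    headers = {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(max(0, remaining)),
        "X-RateLimit-Reset": str(reset_epoch),
    }
    if retry_after is not None:  # only set on 429 responses
        headers["Retry-After"] = str(retry_after)
    return headers

# A throttled (429) response advertises when the client may retry:
throttled = rate_limit_headers(limit=100, remaining=0, reset_epoch=1700000060, retry_after=60)
```

Including these headers on every response, not just throttled ones, lets well-behaved clients pace themselves before hitting the limit.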

1.7 Security Considerations

  • Bypass Prevention: Ensure the rate limiter cannot be easily bypassed (e.g., by spoofing IP addresses or using different API keys rapidly).
  • API Key/Token Security: Store and transmit API keys securely (e.g., environment variables, secret management services).
  • Configuration Access: Restrict access to rate limiter configuration.
  • Data Store Security: Secure Redis instances with strong passwords, network segmentation, and encryption.

1.8 Technology Stack Recommendations

  • Language: Go (for performance and concurrency), Python (for rapid development), Java (for enterprise environments).
  • API Gateway: Nginx, Envoy, Kong, AWS API Gateway.
  • Data Store: Redis (primary recommendation due to performance, atomic operations, and data structures).
  • Configuration: Consul, etcd, Kubernetes ConfigMaps.
  • Monitoring: Prometheus, Grafana.
  • Logging: ELK Stack (Elasticsearch, Logstash, Kibana).
  • Deployment: Docker, Kubernetes.

1.9 Trade-offs and Considerations

  • Algorithm Choice: Fixed Window is simple but prone to bursts. Sliding Window Log is accurate but resource-intensive. Sliding Window Counter offers a good balance. Token Bucket provides burst tolerance. The choice depends on specific use cases and acceptable accuracy vs. resource trade-offs.
  • Consistency vs. Performance: Eventual consistency for rate limit counters is generally acceptable. Strict consistency would introduce significant latency.
  • Deployment Location: API Gateway integration is generally preferred for ease of deployment and minimal impact on backend services.
  • Complexity: A highly configurable and distributed rate limiter can be complex to build and maintain. Start simple and add complexity as needed.

2. Study Plan for API Rate Limiter


API Rate Limiter - Code Generation

This deliverable provides a comprehensive, production-ready code implementation for an API Rate Limiter. The solution is designed for robustness, scalability, and ease of integration into existing Python-based microservices, leveraging Redis for distributed rate limiting.


1. Introduction to API Rate Limiting

API Rate Limiting is a crucial mechanism used to control the number of requests a client can make to an API within a given timeframe. It serves several vital purposes:

  • Preventing Abuse and DDoS Attacks: Limits the impact of malicious users or bots attempting to overload your servers.
  • Ensuring Fair Usage: Prevents a single user or small group of users from monopolizing server resources, ensuring availability for all legitimate users.
  • Controlling Costs: For cloud-based services, limiting requests can help manage infrastructure costs by preventing excessive resource consumption.
  • Maintaining System Stability: Protects backend services from being overwhelmed, leading to better performance and reliability.
  • Monetization: Can be used to enforce different service tiers (e.g., free tier with lower limits, premium tier with higher limits).

This implementation focuses on a Fixed Window Counter algorithm, which is straightforward to understand and implement, especially with a powerful in-memory data store like Redis.


2. Key Rate Limiting Algorithms (Brief Overview)

While our implementation will focus on the Fixed Window Counter, it's beneficial to understand other common algorithms:

  • Fixed Window Counter:

* Divides time into fixed-size windows (e.g., 1 minute).

* Counts requests within each window.

* Pros: Simple to implement, low memory usage.

* Cons: Can suffer from "bursty" traffic at the window edges (e.g., a user making N requests just before the window resets and N requests just after).

  • Sliding Window Log:

* Stores a timestamp for every request made by a user.

* When a new request arrives, it counts how many timestamps fall within the last 'N' seconds.

* Pros: Highly accurate, handles bursty traffic well.

* Cons: High memory usage for storing all timestamps, especially for high-volume APIs.

  • Sliding Window Counter:

* A hybrid approach that addresses the burstiness of Fixed Window without the memory overhead of Sliding Window Log.

* Combines the current window's count with a weighted count from the previous window.

* Pros: Good balance of accuracy and efficiency.

* Cons: More complex to implement than Fixed Window.

  • Token Bucket:

* Clients accumulate "tokens" at a fixed rate. Each request consumes a token.

* If the bucket is empty, requests are denied until new tokens are generated.

* Pros: Smooths out traffic, allows for bursts up to the bucket size.

* Cons: Requires careful parameter tuning (fill rate, bucket size).

  • Leaky Bucket:

* Requests are put into a queue (the bucket) and processed at a fixed rate (the leak rate).

* If the bucket overflows, new requests are dropped.

* Pros: Smooths out traffic, ensures a constant output rate.

* Cons: Introduces latency due to queuing, can drop legitimate requests if the bucket overflows.
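The Sliding Window Counter's weighted estimate described above can be written out directly (a sketch; `elapsed_fraction` is how far we are into the current window, between 0 and 1):

```python
def sliding_window_estimate(prev_count, curr_count, elapsed_fraction):
    """Estimate requests in the sliding window: the previous window's count
    is weighted by how much of it still overlaps the sliding window."""
    return prev_count * (1.0 - elapsed_fraction) + curr_count

# 25% into the current window, 75% of the previous window still counts:
# 100 * 0.75 + 20 = 95 estimated requests in the sliding window.
estimate = sliding_window_estimate(prev_count=100, curr_count=20, elapsed_fraction=0.25)
```

The request is allowed if `estimate` is below the configured limit; the approximation error comes from assuming requests were evenly spread across the previous window.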


3. Technology Stack Choice and Implementation Strategy

For this implementation, we've chosen:

  • Language: Python
  • Web Framework: Flask (lightweight and widely used for APIs)
  • Data Store: Redis (for its speed, atomic operations, and suitability for distributed rate limiting counters)
  • Algorithm: Fixed Window Counter (with a focus on atomic Redis operations for accuracy in concurrent environments).

Implementation Strategy:

  1. Rate Limiter Class: A dedicated Python class (RateLimiter) will encapsulate the rate limiting logic, interacting directly with Redis.
  2. Flask Decorator: A Flask decorator will be created to easily apply rate limits to specific API endpoints.
  3. Redis for Counters: Redis INCR and EXPIRE commands will be used atomically to manage request counts within a window.
  4. Client Identification: Rate limits will be applied per client, identified by their IP address (though this can be extended to API keys, user IDs, etc.).

4. Code Implementation

This section provides the complete, runnable code for the API Rate Limiter.

4.1. Project Structure


.
├── docker-compose.yml
├── Dockerfile
├── requirements.txt
├── app.py
└── rate_limiter.py

4.2. requirements.txt

This file lists the Python dependencies.


Flask==2.3.2
redis==4.5.5
gunicorn==20.1.0

4.3. docker-compose.yml

This file simplifies setting up both the Redis server and the Flask application within Docker containers.


version: '3.8'

services:
  redis:
    image: redis:7.0-alpine
    container_name: api-rate-limiter-redis
    ports:
      - "6379:6379"
    command: redis-server --appendonly yes # Persist data
    volumes:
      - redis_data:/data # Volume for Redis data persistence

  api:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: api-rate-limiter-app
    ports:
      - "5000:5000"
    environment:
      # Use the service name 'redis' as the hostname for the Redis server
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_DB: 0
    depends_on:
      - redis
    volumes:
      - .:/app # Mount current directory into container for live code changes (optional for production)
    # Command to run the Flask application inside the container
    command: gunicorn --bind 0.0.0.0:5000 app:app
    # For development, you might use:
    # command: python -m flask run --host=0.0.0.0 --port=5000

volumes:
  redis_data:

4.4. Dockerfile (for the api service)

This Dockerfile builds the image for our Flask application.


# Use an official Python runtime as a parent image
FROM python:3.9-slim-buster

# Set the working directory in the container
WORKDIR /app

# Install system dependencies for gunicorn (if needed, typically not for slim-buster)
# RUN apt-get update && apt-get install -y --no-install-recommends \
#     build-essential \
#     && rm -rf /var/lib/apt/lists/*

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Expose port 5000 for the Flask application
EXPOSE 5000

# The command to run the application is specified in docker-compose.yml
# CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]

4.5. rate_limiter.py

This file contains the core RateLimiter class and the Flask decorator.


import redis
import time
import os
from functools import wraps
from flask import request, jsonify, current_app

class RateLimiter:
    """
    A class to implement a distributed API Rate Limiter using Redis.
    It uses the Fixed Window Counter algorithm.
    """

    def __init__(self, host='localhost', port=6379, db=0):
        """
        Initializes the RateLimiter with Redis connection details.
        """
        try:
            self.redis_client = redis.StrictRedis(
                host=host,
                port=port,
                db=db,
                decode_responses=True # Decode Redis responses to Python strings
            )
            self.redis_client.ping() # Test connection
            print(f"Connected to Redis at {host}:{port}/{db}")
        except redis.exceptions.ConnectionError as e:
            print(f"Could not connect to Redis: {e}")
            self.redis_client = None # Set to None to indicate connection failure

    def _get_key(self, client_id, name):
        """
        Generates a unique Redis key for a given client and rate limit name.
        e.g., "rate_limit:ip:192.168.1.1:login_attempt"
        """
        return f"rate_limit:{client_id}:{name}"

    def allow_request(self, client_id: str, limit: int, window_size_seconds: int, name: str = "default") -> bool:
        """
        Checks if a request from a client is allowed based on the defined rate limit.
        Uses the Fixed Window Counter algorithm with Redis atomic operations.

        :param client_id: Unique identifier for the client (e.g., IP address, API key).
        :param limit: Maximum number of requests allowed within the window.
        :param window_size_seconds: The size of the time window in seconds.
        :param name: A unique name for this specific rate limit (e.g., "login", "data_fetch").
        :return: True if the request is allowed, False otherwise.
        """
        if not self.redis_client:
            # If Redis connection failed, decide on a fallback strategy.
            # For production, you might want to log this and potentially allow all requests
            # or deny all requests based on your risk tolerance.
            current_app.logger.warning("Redis connection not available. Allowing request by default.")
            return True # Fallback: allow requests if Redis is down (can be configured)

        key = self._get_key(client_id, name)
        current_time = int(time.time())

        # Using Redis Pipeline for atomic execution of multiple commands
        pipe = self.redis_client.pipeline()

        # 1. Increment the counter for the current window
        # This is atomic and safe for concurrent requests.
        pipe.incr(key)

        # 2. Set the expiry for the key if it's new (or hasn't been set yet for the current window)
        # `EXPIRE` returns 1 if the timeout was set, 0 if it was not (key already had an expiry).
        # We only want to set expiry if it's a new key for the window.
        # This is crucial for the fixed window: the counter for a window should expire at the end of that window.
        pipe.expire(key, window_size_seconds, nx=True) # nx=True ensures it only sets expiry if it doesn't exist

        # Execute all commands in the pipeline atomically
        count, _ = pipe.execute() # We only need the count, the expire result isn't directly used here

        # Check if the request count exceeds the limit
        if count > limit:
            current_app.logger.info(f"Rate limit exceeded for client {client_id} on {name}: {count}/{limit}")
            return False
        else:
            current_app.logger.debug(f"Request allowed for client {client_id} on {name}: {count}/{limit}")
            return True

    def get_remaining_requests(self, client_id: str, limit: int, name: str = "default") -> int:
        """
        Gets the number of remaining requests for a client within the current window.
        """
        if not self.redis_client:
            return limit # Fallback: if Redis is down, assume all requests are remaining

        key = self._get_key(client_id, name)
        current_count = int(self.redis_client.get(key) or 0)
        return max(0, limit - current_count)

    def get_time_to_reset(self, client_id: str, name: str = "default") -> int:
        """
        Gets the time in seconds until the current rate limit window resets.
        """
        if not self.redis_client:
            return 0 # Fallback

        key = self._get_key(client_id, name)
        # TTL returns -2 if key does not exist, -1 if key exists but has no expire
        ttl = self.redis_client.ttl(key)
        return max(0, ttl)


def rate_limit_decorator(limit: int, window_size_seconds: int, name: str = "default"):
    """
    Decorator for Flask routes to apply rate limiting.
    """
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            # Get the RateLimiter instance from Flask's current_app context
            # It's assumed the RateLimiter is initialized and stored in app.config or similar
            rate_limiter_instance = current_app.config.get('RATE_LIMITER_INSTANCE')
            if not rate_limiter_instance:
                current_app.logger.error("RateLimiter instance not found in Flask app config. Check initialization.")
                return jsonify({"message": "Internal server error: Rate Limiter not configured."}), 500

            # Identify the client. Here we use the source IP address; this can
            # be extended to API keys or user IDs as described in Section 3.
            client_id = f"ip:{request.remote_addr}"

            if not rate_limiter_instance.allow_request(client_id, limit, window_size_seconds, name):
                retry_after = rate_limiter_instance.get_time_to_reset(client_id, name)
                response = jsonify({
                    "message": "Rate limit exceeded. Please try again later.",
                    "retry_after_seconds": retry_after
                })
                response.status_code = 429
                response.headers["Retry-After"] = str(retry_after)
                response.headers["X-RateLimit-Limit"] = str(limit)
                response.headers["X-RateLimit-Remaining"] = "0"
                return response

            return f(*args, **kwargs)
        return wrapper
    return decorator

API Rate Limiter: Comprehensive Overview and Implementation Strategy

This document provides a comprehensive overview of API Rate Limiting, outlining its importance, key concepts, design considerations, implementation strategies, and best practices. This information is critical for ensuring the stability, security, and fair usage of your API services.


1. Executive Summary

API Rate Limiting is a fundamental mechanism used to control the number of requests a client can make to an API within a defined timeframe. By imposing limits, organizations can protect their backend services from abuse, ensure fair resource allocation, maintain system stability, and manage operational costs. This document details the strategic benefits and technical considerations for implementing a robust API rate limiting solution.


2. Why Implement API Rate Limiting?

Implementing API rate limiting offers several critical advantages for any API provider:

  • Security & Abuse Prevention:

* DDoS/Brute Force Protection: Mitigates denial-of-service attacks by preventing a single client or IP from overwhelming the server with excessive requests.

* Credential Stuffing Prevention: Slows down attempts to validate stolen credentials by limiting login attempts.

* Data Scraping Prevention: Makes it harder for malicious actors to rapidly extract large volumes of data.

  • System Stability & Performance:

* Resource Protection: Prevents a few clients from monopolizing server resources (CPU, memory, database connections), ensuring availability for all legitimate users.

* Consistent User Experience: Maintains predictable API response times and overall service quality by preventing overload.

* Prevents Cascading Failures: Localizes issues by preventing an overwhelmed service from impacting other dependent services.

  • Cost Management:

* Infrastructure Cost Control: Reduces the need for over-provisioning infrastructure to handle unpredictable spikes in traffic.

* Bandwidth Cost Reduction: Limits excessive data transfer that can incur significant charges.

  • Fair Usage & Monetization:

* Equitable Resource Distribution: Ensures all legitimate users receive a fair share of API resources.

* Tiered Service Levels: Enables the creation of different service tiers (e.g., Free, Basic, Premium) with varying rate limits, supporting monetization strategies.

* Discourages Misuse: Promotes responsible API consumption by discouraging clients from making unnecessary or inefficient requests.


3. Key Concepts and Terminology

Understanding the following terms is crucial for designing and discussing API rate limiting:

  • Rate Limit: The maximum number of requests allowed within a specific time window (e.g., 100 requests per minute).
  • Quota: A broader limit, often over a longer period (e.g., 10,000 requests per day) or for specific resources.
  • Throttling: The act of intentionally slowing down or rejecting requests when a rate limit is exceeded.
  • Burst Rate: The maximum number of requests allowed in a very short period (e.g., 10 requests within 1 second), allowing for temporary spikes above the sustained rate.
  • Sustained Rate: The average number of requests that can be handled over a longer period.
  • Grace Period: A short interval during which requests might still be processed even after a limit is technically reached, often before outright rejection.
  • Retry-After Header: An HTTP response header (e.g., Retry-After: 60) indicating how many seconds a client should wait before making another request after being rate-limited.
  • HTTP 429 Too Many Requests: The standard HTTP status code returned when a client exceeds the rate limit.

4. Common Rate Limiting Algorithms

Different algorithms offer varying trade-offs in terms of accuracy, fairness, and complexity.

  • Fixed Window Counter:

* Description: Divides time into fixed-size windows (e.g., 60 seconds). Each request increments a counter. If the counter exceeds the limit within the window, requests are rejected.

* Pros: Simple to implement, low overhead.

* Cons: Can suffer from "bursty" traffic at the window edges (e.g., 100 requests at the end of window 1 and 100 requests at the beginning of window 2, effectively 200 requests in a very short time).

  • Sliding Window Log:

* Description: Stores a timestamp for every request made by a client. When a new request arrives, it counts the number of timestamps within the last N seconds.

* Pros: Very accurate, no "edge problem."

* Cons: High memory consumption, especially for high request volumes, as it stores individual timestamps.

  • Sliding Window Counter:

* Description: A hybrid approach. It combines the current fixed window's count with a weighted count from the previous window to estimate the current rate.

* Pros: Good balance of accuracy and memory efficiency.

* Cons: Slightly more complex than fixed window, less precise than sliding window log.

  • Token Bucket:

* Description: A "bucket" holds tokens. Tokens are added at a fixed rate. Each request consumes one token. If the bucket is empty, the request is rejected or queued.

* Pros: Allows for bursts of traffic (up to the bucket size) while maintaining a steady average rate.

* Cons: Requires careful tuning of bucket size and refill rate.

  • Leaky Bucket:

* Description: Similar to a bucket with a hole in the bottom. Requests are added to the bucket (queue). Requests "leak out" at a constant rate. If the bucket overflows, new requests are rejected.

* Pros: Smooths out bursty traffic, ensures a constant output rate.

* Cons: Requests might experience latency if the bucket is full.
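The leaky bucket described above can be sketched as a fixed-capacity queue drained at a constant rate (a single-process illustration; time is passed in explicitly to keep the example deterministic):

```python
from collections import deque

class LeakyBucket:
    """Requests queue in the bucket and leak out at `leak_rate` per second.
    Arrivals that would overflow the bucket are rejected."""

    def __init__(self, capacity, leak_rate, now=0.0):
        self.capacity = capacity
        self.leak_rate = leak_rate
        self.queue = deque()
        self.last_leak = now

    def _leak(self, now):
        leaked = int((now - self.last_leak) * self.leak_rate)
        if leaked > 0:
            for _ in range(min(leaked, len(self.queue))):
                self.queue.popleft()          # processed at the constant leak rate
            self.last_leak = now

    def add_request(self, request, now):
        self._leak(now)
        if len(self.queue) >= self.capacity:  # bucket overflow -> drop
            return False
        self.queue.append(request)
        return True
```

This makes both trade-offs visible: output is perfectly smooth, but accepted requests wait in the queue, and arrivals during a full bucket are dropped.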


5. Design Considerations for API Rate Limiter (Actionable Decisions)

When designing your API rate limiting solution, consider the following key aspects:

  • 5.1. Scope of Limiting:

* Global: A single limit for all requests across the entire API. (Least flexible)

* Per User/Client: Limits tied to a specific API key, OAuth token, or user ID. (Most common and effective)

* Per IP Address: Limits based on the source IP. Useful for unauthenticated endpoints but vulnerable to NAT/proxy issues.

* Per Endpoint: Different limits for different API endpoints (e.g., /login might have a stricter limit than /data).

* Per Resource: Limits specific actions on specific resources.

* Recommendation: A combination of Per User/Client and Per Endpoint is generally most robust, with Per IP as a fallback for unauthenticated access.

  • 5.2. Granularity of Limits:

* Time Window: Define the duration (e.g., per second, per minute, per hour, per day).

* Request Type: Differentiate between read (GET) and write (POST/PUT/DELETE) operations, often applying stricter limits to write operations.

* Recommendation: Start with per-minute or per-hour limits, and refine based on usage patterns and endpoint sensitivity.

  • 5.3. Client Identification:

* API Key: Most common for programmatic access.

* OAuth Token: For authenticated users.

* User ID: For internal APIs or specific user-based limits.

* IP Address: For unauthenticated public APIs or as a secondary layer.

* Recommendation: Prioritize authenticated identifiers (API Key, OAuth Token, User ID) for accuracy and user-specific control.

  • 5.4. Enforcement Point:

* API Gateway/Proxy (e.g., Nginx, Envoy, AWS API Gateway, Azure API Management, Apigee):

* Pros: Decoupled from application logic, centralized management, high performance.

* Cons: Can be less granular (harder to apply business logic-specific limits).

* Application Layer (In-App):

* Pros: Highly granular, allows for complex business logic, can differentiate between request types within an endpoint.

* Cons: Adds overhead to application code, needs to be consistently implemented across all services.

* Load Balancer:

* Pros: Very early enforcement, good for initial DDoS protection.

* Cons: Limited intelligence, usually IP-based.

* Recommendation: A multi-layered approach is often best: API Gateway for initial, broad limits, and application layer for fine-grained, business-logic-driven limits.

  • 5.5. State Storage:

* Distributed Cache (e.g., Redis, Memcached):

* Pros: High performance, low latency, scalable, suitable for shared state across multiple API instances.

* Cons: Requires careful management of cache consistency and eviction policies.

* Database:

* Pros: Persistent, highly consistent.

* Cons: Slower, higher overhead for high-volume requests.

* In-Memory (Single Instance):

* Pros: Fastest.

* Cons: Not suitable for distributed systems, state is lost on restart.

* Recommendation: Use a distributed cache like Redis for storing rate limit counters due to its speed and support for atomic operations.

  • 5.6. Response to Exceeding Limits:

* HTTP Status Code: Always return HTTP 429 Too Many Requests.

* Headers: Include X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset (or Retry-After) in responses to inform clients.

* Error Body: Provide a clear, human-readable error message in the response body explaining the issue and suggesting next steps.

* Recommendation: Adhere to HTTP standards (429, Retry-After) and provide informative headers/body.

  • 5.7. Edge Cases & Advanced Scenarios:

* Distributed Systems: Ensure counters are atomic and synchronized across all API instances.

* Clock Skew: Account for potential time differences between servers if using time-based algorithms.

* Burst Tolerance: Design algorithms (like Token Bucket) that allow for short bursts without rejecting all requests.

* Whitelisting: Allow certain trusted clients (e.g., internal tools, partners) to bypass or have higher limits.

* Overriding Limits: Provide mechanisms for administrators to temporarily adjust limits for specific clients or situations.

  • 5.8. Monitoring & Alerting:

* Metrics: Track rate limit hits, rejections, and remaining limits.

* Dashboards: Visualize API usage patterns and rate limit effectiveness.

* Alerts: Set up alerts for sustained high rejection rates or potential abuse.

* Recommendation: Integrate rate limit metrics into your existing monitoring stack for proactive management.


6. Implementation Strategies

The choice of implementation strategy depends on your existing infrastructure, scale, and specific requirements.

  • 6.1. API Gateway/Reverse Proxy Based:

* Approach: Configure rate limiting rules directly on your API Gateway or reverse proxy.

* Examples:

* Nginx/Nginx Plus: Use limit_req_zone and limit_req directives.

* Envoy Proxy: Configure HTTP rate limit filters.

* Cloud API Gateways (AWS API Gateway, Azure API Management, Google Cloud Apigee): Leverage their built-in rate limiting features.

* Actionable: Identify your current API Gateway solution and consult its documentation for native rate limiting capabilities. This is often the quickest and most performant initial approach.
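For Nginx, the two directives named above fit together as follows (zone name, rates, and backend are illustrative values, not prescribed ones):

```nginx
# Shared-memory zone keyed by client IP: 10 MB of state, 10 req/s steady rate.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    location /api/ {
        # Permit bursts of up to 20 requests; "nodelay" serves burst
        # requests immediately instead of pacing them out.
        limit_req zone=api_limit burst=20 nodelay;
        # Return 429 instead of the default 503 when throttling.
        limit_req_status 429;
        proxy_pass http://backend;
    }
}
```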

  • 6.2. Application-Level Libraries/Frameworks:

* Approach: Implement rate limiting logic within your application code using dedicated libraries or custom solutions.

* Examples:

* Python: Flask-Limiter, django-ratelimit.

* Node.js: express-rate-limit, rate-limiter-flexible.

* Java: Guava's RateLimiter, custom interceptors.

* Actionable: For fine-grained control or limits based on complex business logic, integrate a suitable library into your application framework. This requires careful testing and deployment.
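To illustrate the application-level pattern, here is a sketch of what such libraries do internally: a decorator that counts calls per key within a fixed window and rejects once the limit is hit. The names are illustrative, not any library's real API.

```python
import functools
import time

def rate_limited(limit, window_seconds, key_func):
    """Decorator sketch: count calls per (key, window) and raise once
    the per-window limit is exceeded."""
    counters = {}

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            window = int(time.time() // window_seconds)
            key = (key_func(*args, **kwargs), window)
            counters[key] = counters.get(key, 0) + 1
            if counters[key] > limit:
                raise RuntimeError("429 Too Many Requests")
            return func(*args, **kwargs)
        return wrapper
    return decorator
```

Real libraries add a pluggable storage backend (so the counters live in Redis, not process memory) and framework integration for returning a proper 429 response.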

  • 6.3. Dedicated Rate Limiting Service:

* Approach: Deploy a separate, specialized service (often using Redis or another distributed cache) that all API instances consult before processing requests.

* Examples: A custom microservice, or an open-source solution such as Envoy's ratelimit service backed by Redis.

* Actionable: Consider this for large-scale, highly distributed systems where a centralized, high-performance rate limiting service is critical and can be shared across multiple APIs.
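A dedicated service often uses a sliding-window log, typically held in a Redis sorted set (`ZADD` to record a request, `ZREMRANGEBYSCORE` to expire old entries, `ZCARD` to count) so every API instance sees the same state. A self-contained sketch of that logic, with an in-memory list standing in for the sorted set:

```python
import bisect

class SlidingWindowLog:
    """Keep per-client request timestamps and count those still inside
    the window. The sorted list stands in for a Redis sorted set."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.logs = {}  # client_id -> sorted list of request timestamps

    def allow(self, client_id, now):
        log = self.logs.setdefault(client_id, [])
        # Drop timestamps that have fallen out of the window.
        cutoff = now - self.window
        del log[:bisect.bisect_right(log, cutoff)]
        if len(log) >= self.limit:
            return False
        log.append(now)
        return True
```

Unlike a fixed window, this never admits a double-sized burst straddling a window boundary, at the cost of storing one entry per request.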


7. Best Practices for API Rate Limiting

To ensure an effective and user-friendly rate limiting experience:

  • 7.1. Communicate Limits Clearly:

* Actionable: Document your rate limits prominently in your API documentation, including specific limits per endpoint, identification methods, and how exceeding limits is handled.

  • 7.2. Provide Informative Headers:

* Actionable: Always include X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset (or Retry-After) in *all* API responses, not just when limits are hit. This allows clients to proactively manage their requests.

  • 7.3. Use Retry-After Appropriately:

* Actionable: When returning HTTP 429, always include the Retry-After header to guide clients on when they can safely retry.

  • 7.4. Encourage Exponential Backoff:

* Actionable: Recommend and document that clients implement exponential backoff (ideally with jitter) when retrying throttled requests, rather than retrying immediately in a tight loop.
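The client-side retry behavior recommended above can be sketched as exponential backoff with full jitter: each retry waits a random amount between 0 and min(cap, base * 2^attempt) seconds. The default values here are illustrative, not prescribed.

```python
import random

def backoff_delays(attempts, base=1.0, cap=60.0, rng=random.random):
    """Return the wait (in seconds) before each retry attempt.
    Full jitter: uniform in [0, min(cap, base * 2**attempt))."""
    return [rng() * min(cap, base * (2 ** i)) for i in range(attempts)]
```

Jitter matters because many clients throttled at the same moment would otherwise all retry in lockstep, producing synchronized load spikes.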
