API Rate Limiter

API Rate Limiter: Architecture Plan

This document outlines a detailed architecture plan for an API Rate Limiter system. The goal is to design a robust, scalable, and efficient solution that protects backend services from abuse, ensures fair usage, and maintains service availability.


1. Introduction and Objectives

An API Rate Limiter is a critical component in any modern web service architecture. It controls the rate at which clients can make requests to an API, preventing resource exhaustion, mitigating DDoS attacks, and ensuring a fair distribution of service capacity among users.

Key Objectives:

  • Prevent abuse and misuse: stop malicious actors from overwhelming the system or scraping data excessively.
  • Ensure fair usage: distribute API capacity equitably among all consumers.
  • Maintain system stability: protect backend services from overload, improving uptime and performance.
  • Control costs: reduce infrastructure spend driven by excessive request volume.
  • Enhance security: mitigate DoS attacks and brute-force attempts.

2. Core Requirements

The API Rate Limiter must fulfill the following core requirements:

  • Accuracy: Enforce configured limits correctly, including across distributed instances.
  • Low Latency: Add minimal overhead to each request.
  • Scalability: Handle high request volumes and scale horizontally.
  • Fault Tolerance: Degrade gracefully (fail-open or fail-closed) when dependencies fail.
  • Flexibility: Support limits per client, per endpoint, and per method, with dynamically updatable policies.
  • Observability: Expose metrics and return clear client-facing responses (HTTP 429 with informative headers).

3. High-Level Architecture

The API Rate Limiter will typically sit as a proxy or gateway in front of the actual API services. All incoming API requests will first pass through the rate limiter.

+----------------+       +-------------------+       +---------------------+
|    Client      | ----> | API Gateway/Proxy | ----> | API Rate Limiter    |
| (Web/Mobile)   |       | (e.g., Nginx,     |       | (Dedicated Service) |
+----------------+       |   API Gateway)    |       +----------+----------+
                         +---------+---------+                  |
                                   |                            | (Rate Limit Check)
                                   |                            |
                                   |                            V
                                   |                  +---------------------+
                                   |                  | Distributed Cache   |
                                   |                  | (e.g., Redis)       |
                                   |                  | - Rate Counters     |
                                   |                  | - Policy Config     |
                                   +------------------+---------------------+
                                   |
                                   | (Allowed Request)
                                   V
                         +---------------------------+
                         |    Backend API Service    |
                         | (e.g., Microservices, DB) |
                         +---------------------------+

Components:

  1. API Gateway/Proxy: The entry point for all client requests. It forwards requests to the API Rate Limiter and can also handle other concerns like authentication, authorization, and routing.
  2. API Rate Limiter Service: A dedicated service responsible for evaluating rate limits for each incoming request. It interacts with a distributed cache to maintain and update rate counters.
  3. Distributed Cache: Stores the current state of rate limit counters and potentially rate limit policies. Essential for achieving consistency and scalability in a distributed environment.
  4. Backend API Services: The actual business logic services that process requests once they have passed the rate limiter.

4. Detailed Design Considerations

4.1. Placement in Request Flow

The rate limiter should be placed as early as possible in the request processing chain to shed load before it reaches more resource-intensive backend services.

  • Option 1: Edge Proxy/Gateway (Recommended): Integrate rate limiting logic directly into an API Gateway (e.g., AWS API Gateway, Kong, Apigee) or a reverse proxy (e.g., Nginx, Envoy). This is highly efficient as it intercepts requests before they hit any backend service.
  • Option 2: Standalone Service: Deploy a dedicated rate limiting service that acts as a proxy, forwarding requests to backend services after verification. This offers more flexibility and isolation but adds an extra network hop.
  • Option 3: In-Application Middleware: Implement rate limiting logic within each backend application. This can lead to inconsistent behavior and higher operational overhead, generally not recommended for large-scale systems.

4.2. Rate Limiting Algorithms

Several algorithms can be used, each with pros and cons:

  • Fixed Window Counter:

* Description: Divides time into fixed-size windows (e.g., 1 minute). Each request increments a counter. If the counter exceeds the limit within the window, requests are rejected.

* Pros: Simple to implement, low memory usage.

* Cons: Can allow a "burst" of requests at the window boundaries (e.g., 2N requests in 2 consecutive windows if requests come at the very end of one window and the very beginning of the next).

* Use Case: Basic rate limiting where burstiness at window edges is acceptable.

  • Sliding Window Log:

* Description: Stores a timestamp for each request made by a client. When a new request arrives, it removes all timestamps older than the current window and counts the remaining timestamps.

* Pros: Very accurate, no "burst" problem at window boundaries.

* Cons: High memory usage (stores all timestamps), computationally expensive for high request rates.

* Use Case: Highly accurate rate limiting where memory and CPU are less of a concern.

  • Sliding Window Counter:

* Description: A hybrid approach. It uses two fixed windows: the current window and the previous window. It estimates the count for the current sliding window by taking a weighted average of the previous window's count and the current window's count.

* Pros: More accurate than Fixed Window, less memory than Sliding Window Log, good balance of accuracy and efficiency.

* Cons: Still an approximation, not perfectly accurate.

* Use Case: General-purpose, highly recommended for most scenarios due to its balance.

  • Token Bucket:

* Description: A "bucket" holds a fixed number of "tokens." Tokens are added to the bucket at a constant rate. Each request consumes one token. If the bucket is empty, requests are rejected.

* Pros: Allows for bursts of requests up to the bucket size, smooths out traffic.

* Cons: More complex to implement, stateful (requires managing token count and refill rate).

* Use Case: APIs that need to allow occasional bursts without exceeding an average rate.

  • Leaky Bucket:

* Description: Requests are added to a queue (the bucket) and processed at a constant rate. If the queue is full, new requests are rejected.

* Pros: Smooths out bursty traffic, ensures a steady output rate.

* Cons: Can introduce latency if the queue is long, queue size needs careful tuning.

* Use Case: When the goal is to enforce a very steady processing rate on the backend.

Recommendation:

For a general-purpose, scalable rate limiter, a combination of Sliding Window Counter (for its balance of accuracy and performance) and Token Bucket (for allowing controlled bursts) is often ideal. The choice can be configurable per policy.
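To make the Token Bucket mechanics concrete, here is a minimal single-process sketch (in-memory only and purely illustrative; a distributed deployment would keep this state in Redis, as discussed below):

```python
import time

class TokenBucket:
    """Minimal in-memory token bucket: tokens refill at a fixed rate, up to capacity."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate      # tokens added per second
        self.tokens = capacity              # start full, permitting an initial burst
        self.last_refill = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# A capacity of 5 permits a 5-request burst, then sustains ~1 request/second.
bucket = TokenBucket(capacity=5, refill_rate=1.0)
```

The two tuning knobs map directly to the trade-off noted above: `capacity` bounds the burst, `refill_rate` sets the sustained average rate.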

4.3. Data Storage for Counters and Policies

A distributed, high-performance data store is crucial for maintaining rate limit states across multiple instances of the rate limiter.

  • Redis (Recommended):

* Pros: In-memory data store, extremely fast reads/writes, supports atomic operations (INCR, EXPIRE), ideal for counters and timestamps. Pub/Sub for configuration updates. High availability with Redis Cluster or Sentinel.

* Cons: Can be memory-intensive for very large numbers of unique keys/clients.

* Implementation: Use Redis INCR for fixed window counters, ZADD and ZREMRANGEBYSCORE for sliding window log timestamps, or a combination of INCR and EXPIRE for sliding window counter logic (e.g., storing current_window_counter and previous_window_counter with their respective expiry times).

  • Memcached:

* Pros: Simple, fast key-value store.

* Cons: Supports atomic increment/decrement, but lacks the richer data structures (sorted sets, Lua scripting) needed for sliding-window logic; no persistence; less feature-rich than Redis. Generally less suitable.

  • Database (e.g., PostgreSQL, Cassandra):

* Pros: Persistence, strong consistency (for relational DBs).

* Cons: Much higher latency for read/write operations compared to in-memory stores; not suitable for high-frequency counter updates. Can be used for storing rate limit policies, but not for real-time counters.

Recommendation: Redis is the de-facto standard for rate limiting state management due to its speed, atomic operations, and data structures.
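As a concrete illustration of the INCR + EXPIRE pattern mentioned above, a fixed-window check can be written against any client exposing redis-py's pipeline interface (the function and key scheme here are illustrative, not prescribed by this plan):

```python
import time

def fixed_window_allow(r, client_id: str, limit: int, window: int) -> bool:
    """Fixed-window check using atomic INCR plus a TTL for cleanup.

    `r` is a redis-py client (or compatible). The key embeds the window's
    start timestamp, so old windows simply expire instead of needing resets.
    """
    window_start = int(time.time()) // window * window
    key = f"rl:{client_id}:{window_start}"
    pipe = r.pipeline()
    pipe.incr(key)                # atomic: safe under concurrent limiter instances
    pipe.expire(key, window * 2)  # keep briefly past the window, then auto-delete
    count, _ = pipe.execute()
    return count <= limit
```

Because INCR is atomic in Redis, concurrent limiter instances never lose updates, which is the property that makes this pattern safe in a distributed deployment.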

4.4. Distributed System Challenges and Consistency

  • Race Conditions: Multiple rate limiter instances might try to update the same counter concurrently. Redis's atomic operations (INCR, SETNX, Lua scripts) are essential to prevent race conditions and ensure accurate counts.
  • Eventual Consistency: While Redis provides strong consistency for single operations, a distributed Redis cluster might have eventual consistency for specific scenarios. For rate limiting, minor eventual consistency issues (e.g., allowing one extra request during a cluster failover) are usually acceptable compared to the performance overhead of strict global consistency.
  • Synchronization of Policies: Rate limit policies (e.g., "100 req/min for user type A") need to be consistent across all rate limiter instances.

* Approach 1 (Recommended): Store policies in a persistent database (e.g., PostgreSQL) or a configuration service (e.g., Consul, Etcd). Rate limiter instances fetch policies on startup and periodically refresh them.

* Approach 2: Use Redis Pub/Sub for policy updates (less robust for initial sync).

4.5. Scalability and Performance

  • Horizontal Scaling: The API Rate Limiter service itself should be stateless (or near-stateless, offloading state to Redis) to allow for easy horizontal scaling by adding more instances behind a load balancer.
  • Redis Cluster: For high-throughput scenarios, deploy Redis in a clustered mode to distribute data and read/write load across multiple nodes.
  • Caching: Cache frequently accessed rate limit policies locally within the rate limiter service to reduce database lookups.
  • Optimized Data Structures: Use Redis data structures efficiently (e.g., STRING for simple counters, ZSET for sliding window logs).
  • Minimal Logic: Keep the rate limiting logic as lean and fast as possible.
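The local policy cache mentioned above can be a simple read-through cache with a TTL; a minimal sketch (class and field names are illustrative):

```python
import time

class PolicyCache:
    """Tiny read-through TTL cache for rate limit policies."""

    def __init__(self, loader, ttl_seconds: float = 30.0):
        self.loader = loader            # callable: policy_id -> policy dict
        self.ttl = ttl_seconds
        self._entries = {}              # policy_id -> (expires_at, policy)

    def get(self, policy_id: str):
        now = time.monotonic()
        entry = self._entries.get(policy_id)
        if entry and entry[0] > now:
            return entry[1]             # fresh: skip the backing-store lookup
        policy = self.loader(policy_id)
        self._entries[policy_id] = (now + self.ttl, policy)
        return policy
```

The TTL bounds how stale a policy can be, trading a short propagation delay for far fewer lookups against the configuration store.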

4.6. Reliability and Fault Tolerance

  • Redundant Rate Limiter Instances: Deploy multiple instances of the rate limiter service behind a load balancer. If one instance fails, others can take over.
  • Redis High Availability: Use Redis Sentinel for automatic failover in a master-replica setup or Redis Cluster for distributed high availability.
  • Graceful Degradation:

* Fail-Open: If the rate limiter service or Redis becomes unavailable, allow requests to pass through (with a warning/log) to prevent a complete service outage. This prioritizes availability over strict rate limiting.

* Fail-Closed: If the rate limiter fails, block all requests. This prioritizes protection over availability.

* Recommendation: For most user-facing APIs, Fail-Open with robust monitoring is preferred to avoid complete outages. Critical internal APIs might opt for Fail-Closed.

  • Circuit Breakers: Implement circuit breakers between the API Gateway and the Rate Limiter, and between the Rate Limiter and Redis, to prevent cascading failures.
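The fail-open behavior described above can be implemented as a thin guard around the limiter check; a sketch (whether to fail open or closed remains a per-API policy decision, as noted):

```python
import logging

def check_with_fail_open(limiter_check, client_id: str) -> bool:
    """Call the limiter; if it errors (e.g., Redis is down), allow and log.

    limiter_check: a callable client_id -> bool (True = request allowed).
    """
    try:
        return limiter_check(client_id)
    except Exception:
        # Fail-open: prioritize availability; surface the failure via logs/metrics.
        logging.exception("rate limiter unavailable; failing open")
        return True
```

A fail-closed variant would return False in the except branch instead; pairing either with alerting keeps the degraded state visible.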

4.7. Configuration and Policy Management

  • Policy Definition: Define rate limits based on:

* Client Identifier: IP address, API Key, User ID (from JWT/session token).

* Endpoint: Specific API paths (e.g., /api/v1/users vs. /api/v1/reports).

* HTTP Method: GET, POST, PUT, DELETE.

* Request Headers/Query Parameters: Custom attributes.

  • Configuration Store: Store rate limit policies in a centralized, persistent store:

* Database (e.g., PostgreSQL): Good for complex policies, versioning, and audit trails.

* Configuration Service (e.g., Consul, Etcd, Kubernetes ConfigMaps): Distributed, real-time updates.

* YAML/JSON Files: Simple for smaller systems, but requires redeployment for updates.

  • Dynamic Updates: The rate limiter service should be able to dynamically load and apply updated policies without requiring a restart. This can be achieved through periodic polling of the configuration store or by subscribing to change events.
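Putting these dimensions together, a stored policy record might look like the following (all field names are hypothetical; this plan does not prescribe a schema):

```python
# Hypothetical policy record, as it might be stored in PostgreSQL (as a row)
# or in a configuration service (as JSON).
policy = {
    "id": "free-tier-reports",
    "match": {
        "client_type": "api_key",         # client identifier dimension
        "endpoint": "/api/v1/reports",    # endpoint dimension
        "method": "GET",                  # HTTP method dimension
    },
    "limit": 100,                         # max requests per window
    "window_seconds": 60,
    "algorithm": "sliding_window_counter",
}
```

Keeping the matching dimensions in a nested `match` object makes it straightforward to add new dimensions (e.g., a header attribute) without changing the record's top-level shape.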

4.8. Response Handling

When a request is rate-limited, the API Rate Limiter should respond with appropriate HTTP status codes and headers.

  • HTTP Status Code: 429 Too Many Requests.
  • Headers (the 429 status is defined in RFC 6585; the X-RateLimit-* headers are a widely used convention rather than part of that RFC):

* X-RateLimit-Limit: The maximum number of requests allowed in the current window.

* X-RateLimit-Remaining: The number of requests remaining in the current window.

* X-RateLimit-Reset: The UTC epoch seconds when the current rate limit window resets.

* Retry-After: The number of seconds the client should wait before making another request.
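A 429 response following these conventions might be assembled like this (the helper itself is illustrative):

```python
import time

def rate_limit_response(limit: int, window_reset_epoch: int) -> tuple[int, dict]:
    """Builds the status code and headers for a rate-limited request."""
    retry_after = max(1, window_reset_epoch - int(time.time()))
    headers = {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": "0",          # the request was denied
        "X-RateLimit-Reset": str(window_reset_epoch),
        "Retry-After": str(retry_after),
    }
    return 429, headers
```

Well-behaved clients read Retry-After and back off, which reduces retry storms against an already-limited endpoint.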

4.9. Monitoring and Alerting

Comprehensive monitoring is essential to understand the rate limiter's effectiveness and health.

  • Metrics:

* Total requests processed.

* Requests allowed vs. requests denied.

* Latency added by the rate limiter.

* Error rates from the rate limiter itself.

* Redis connection health, memory usage, hit/miss ratio.

* CPU/memory usage of rate limiter instances.

  • Tools: Prometheus for metrics collection, Grafana for dashboards, ELK Stack (Elasticsearch, Logstash, Kibana) for logs.
  • Alerting: Configure alerts for:

* High rate of denied requests (indicates potential attack or misconfigured limits).

* Rate limiter service errors or unhealthiness.

* Redis connection issues or high latency.

* Resource exhaustion (CPU, memory) on rate limiter instances.

4.10. Security Considerations

  • IP Spoofing: Ensure the rate limiter correctly identifies the client's true IP address, especially when behind load balancers (e.g., using X-Forwarded-For header).
  • Bypass Attempts: Guard against clients trying to bypass the rate limiter by rotating IP addresses or using different API keys if not tied to a strong user identity.
  • Internal vs. External APIs: Apply different, often stricter, limits for external-facing APIs compared to internal ones.
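Resolving the true client IP behind proxies, as mentioned above, requires trusting only hops you control; a sketch (the helper and its `trusted_proxies` parameter are illustrative):

```python
def client_ip(headers: dict, remote_addr: str, trusted_proxies: int = 1) -> str:
    """Resolve the real client IP behind a known number of proxy hops.

    Left-most X-Forwarded-For entries are client-controlled and spoofable,
    so take the entry appended by the last trusted hop instead.
    """
    xff = headers.get("X-Forwarded-For", "")
    if not xff:
        return remote_addr
    hops = [h.strip() for h in xff.split(",")]
    # With N trusted proxies, the client IP is the N-th entry from the right.
    return hops[-trusted_proxies] if len(hops) >= trusted_proxies else hops[0]
```

If the request did not arrive through your own proxy layer, the header should be ignored entirely and `remote_addr` used as-is.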

5. Technology Stack Recommendations

  • API Gateway/Proxy:

* Nginx/Nginx Plus: Powerful, highly performant reverse proxy with built-in rate limiting modules (can handle basic fixed-window). Can be extended with Lua for more complex logic.

* Envoy Proxy: Cloud-native, highly extensible proxy often used in service mesh architectures. Excellent for advanced routing, load balancing, and can integrate with external rate limiting services (e.g., Envoy's ratelimit service).

* Kong/Tyk/Apigee: Full-lifecycle API management platforms with built-in, configurable rate limiting plugins and centralized policy management.


The following output details the generation and implementation of an API Rate Limiter, a critical component for robust and scalable API services. This deliverable includes a comprehensive overview, design considerations, production-ready code examples, and guidance for testing and deployment.


API Rate Limiter: Code Generation & Implementation

1. Introduction to API Rate Limiting

An API Rate Limiter controls the number of requests a user or client can make to an API within a defined timeframe. It's an essential mechanism for maintaining API stability, preventing abuse, and ensuring fair resource allocation.

Why it's important:

  • Preventing Abuse and DDoS Attacks: Limits the impact of malicious actors attempting to overwhelm the API with excessive requests.
  • Ensuring Fair Usage: Prevents a single user or small group from monopolizing API resources, ensuring availability for all legitimate users.
  • Cost Control: For APIs that incur costs per request, rate limiting helps manage and predict expenses.
  • System Stability: Protects backend services from being overloaded, preventing crashes and performance degradation.
  • Monetization: Can be used to enforce different access tiers (e.g., free vs. premium with higher limits).

2. Core Rate Limiting Algorithms

Several algorithms can be used to implement rate limiting, each with its own characteristics:

  • Fixed Window Counter: The simplest approach. Requests are counted within a fixed time window (e.g., 60 seconds). If the count exceeds the limit, further requests are blocked until the next window starts.

* Pros: Simple to implement.

* Cons: Can suffer from a "bursty" problem at the window edges, where a client can make a full burst of requests at the end of one window and another full burst at the beginning of the next, effectively doubling the rate in a short period.

  • Sliding Window Log: Stores a timestamp for each request. When a new request arrives, it counts the number of timestamps within the current window (e.g., last 60 seconds).

* Pros: Highly accurate, smooth rate limiting.

* Cons: Can be memory-intensive, especially for high request volumes, as it stores individual timestamps.

  • Sliding Window Counter: A hybrid approach. It combines the fixed window counter with a weighted average of the previous window's count. This mitigates the "bursty" problem of the fixed window counter while being more memory-efficient than the sliding window log.

* Pros: Good balance of accuracy and efficiency, mitigates edge-case bursts.

* Cons: Slightly more complex than fixed window.

  • Token Bucket: A "bucket" holds tokens. Requests consume tokens. If the bucket is empty, requests are denied. Tokens are added to the bucket at a fixed rate, up to a maximum capacity.

* Pros: Allows for bursts up to the bucket capacity, smooths out traffic.

* Cons: Requires careful tuning of bucket size and refill rate.

  • Leaky Bucket: Requests are added to a queue (the "bucket"). Requests "leak" out of the bucket at a constant rate. If the bucket overflows, new requests are dropped.

* Pros: Smooths out traffic, fixed output rate.

* Cons: Can introduce latency if the queue fills up; doesn't handle bursts well without dropping requests.

Chosen Implementation: Sliding Window Counter with Redis

For this deliverable, we will implement a Sliding Window Counter using Redis. This approach offers an excellent balance of accuracy, efficiency, and distributed system compatibility, making it suitable for most production environments. Redis provides atomic operations and high performance, which are crucial for a reliable rate limiter.
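The weighting at the heart of this algorithm can be illustrated with a quick calculation (60-second window, 25 seconds into the current fixed window; the counts are made-up example values):

```python
window = 60
elapsed = 25            # seconds elapsed in the current fixed window
previous_count = 84     # requests counted in the previous fixed window
current_count = 36      # requests counted so far in the current fixed window

# The previous window still covers (window - elapsed) / window of the
# sliding window, so its count is weighted by that fraction.
weight = (window - elapsed) / window          # 35/60
estimated = current_count + previous_count * weight
print(round(estimated))  # prints 85
```

If the policy limit were 100 requests per minute, this request would be allowed; at a limit of 80 it would be rejected.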

3. Design Considerations for Production Systems

Before diving into the code, consider these factors for a robust API rate limiter:

  • Distributed vs. Single-Node: For microservices or horizontally scaled applications, the rate limiter must be distributed. Redis is an excellent choice for this, as it acts as a central, shared store for rate limit counts.
  • Storage Mechanism:

* In-memory: Simple for single-node applications but doesn't scale.

* Redis: High-performance, supports atomic operations, ideal for distributed systems.

* Database: Can be used but often too slow for high-throughput rate limiting.

  • Burst Handling: How aggressively should bursts be allowed? Token Bucket is good for this. Sliding Window Counter also helps mitigate extreme bursts compared to Fixed Window.
  • Granularity:

* Per User: Based on user_id, API key, or authentication token.

* Per IP Address: Common for unauthenticated requests.

* Per Endpoint: Different limits for different API endpoints (e.g., /login vs. /data).

* Combined: E.g., per user, per endpoint.

  • Error Handling and Response: When a request is rate-limited, the API should return a 429 Too Many Requests HTTP status code and ideally include a Retry-After header indicating when the client can retry.
  • Concurrency: The chosen storage mechanism must handle concurrent updates safely (Redis atomic operations are key here).
  • Configuration: Rate limits (e.g., 100 requests per 60 seconds) should be easily configurable and potentially dynamic.
  • Monitoring: Track allowed vs. blocked requests, identify patterns of abuse, and monitor the performance of the rate limiter itself.

4. Code Implementation: Python with Redis (Sliding Window Counter)

This implementation uses Python and Redis to create a distributed Sliding Window Counter rate limiter.

Prerequisites

  1. Redis Server: Ensure a Redis server is running and accessible.

* Installation (Ubuntu): sudo apt update && sudo apt install redis-server

* Installation (Docker): docker run --name my-redis -p 6379:6379 -d redis

  2. Python Libraries: Install redis-py.

* pip install redis

Configuration Example


# config.py
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
REDIS_DB = 0

# Default rate limits (can be overridden per client/endpoint)
DEFAULT_LIMIT = 100  # Number of requests
DEFAULT_WINDOW_SECONDS = 60 # Time window in seconds

RateLimiter Class Implementation


import time
import redis
import math

class RateLimiter:
    """
    Implements a distributed API Rate Limiter using the Sliding Window Counter algorithm
    with Redis as the backend.

    This approach mitigates the "bursty" problem of the Fixed Window counter by
    considering a weighted count from the previous window.

    Attributes:
        redis_client (redis.Redis): The Redis client instance.
        limit (int): The maximum number of requests allowed within the window.
        window_seconds (int): The duration of the sliding window in seconds.
        prefix (str): A prefix for Redis keys to avoid collisions.
    """

    def __init__(self, redis_client: redis.Redis, limit: int, window_seconds: int, prefix: str = "rate_limit"):
        """
        Initializes the RateLimiter.

        Args:
            redis_client: An initialized Redis client instance.
            limit: The maximum number of requests allowed within the window.
            window_seconds: The duration of the sliding window in seconds.
            prefix: A prefix for Redis keys to avoid collisions.
        """
        if not isinstance(redis_client, redis.Redis):
            raise TypeError("redis_client must be an instance of redis.Redis")
        if not isinstance(limit, int) or limit <= 0:
            raise ValueError("limit must be a positive integer")
        if not isinstance(window_seconds, int) or window_seconds <= 0:
            raise ValueError("window_seconds must be a positive integer")

        self.redis_client = redis_client
        self.limit = limit
        self.window_seconds = window_seconds
        self.prefix = prefix

    def _get_key(self, client_id: str) -> str:
        """Generates a unique Redis key for a given client."""
        return f"{self.prefix}:{client_id}"

    def _get_current_window_start(self, current_time: float) -> int:
        """Calculates the start timestamp of the current fixed window."""
        return math.floor(current_time / self.window_seconds) * self.window_seconds

    def _get_previous_window_start(self, current_time: float) -> int:
        """Calculates the start timestamp of the previous fixed window."""
        return self._get_current_window_start(current_time) - self.window_seconds

    def allow_request(self, client_id: str) -> tuple[bool, int]:
        """
        Determines if a request from the given client_id should be allowed.

        Uses the Sliding Window Counter algorithm:
        1. Calculates the current time and the start times of the current and previous windows.
        2. Retrieves the counts for the current and previous windows from Redis.
        3. Calculates a weighted count from the previous window based on the overlap
           with the current sliding window.
        4. Sums the current window's count and the weighted previous window's count.
        5. If the total count is within the limit, increments the current window's count
           and allows the request.
        6. Sets an expiry for the current window's key to ensure cleanup.

        Args:
            client_id: A unique identifier for the client (e.g., user ID, API key, IP address).

        Returns:
            A tuple (bool, int):
                - True if the request is allowed, False otherwise.
                - The number of seconds to wait until the next window starts if blocked,
                  or 0 if allowed.
        """
        current_time = time.time()

        # Start timestamps of the current and previous fixed windows.
        current_window_start = self._get_current_window_start(current_time)
        previous_window_start = self._get_previous_window_start(current_time)

        # Per-window keys: old windows simply expire instead of needing resets.
        current_key = f"{self._get_key(client_id)}:{current_window_start}"
        previous_key = f"{self._get_key(client_id)}:{previous_window_start}"

        # Fetch both counters in a single round trip.
        current_count, previous_count = self.redis_client.mget(current_key, previous_key)
        current_count = int(current_count or 0)
        previous_count = int(previous_count or 0)

        # Weight the previous window by the fraction of it that still overlaps
        # the sliding window. Example: window_seconds=60, current_time=65 ->
        # 5s have elapsed in the current fixed window, so 55/60 of the previous
        # window's requests still count toward the sliding window.
        elapsed_in_current = current_time - current_window_start
        previous_weight = 1.0 - (elapsed_in_current / self.window_seconds)

        estimated_count = current_count + previous_count * previous_weight

        if estimated_count >= self.limit:
            # Blocked: report how long until the current fixed window rolls over.
            retry_after = math.ceil(current_window_start + self.window_seconds - current_time)
            return False, max(retry_after, 1)

        # Allowed: atomically increment the current window's counter and refresh
        # its expiry so stale per-window keys clean themselves up.
        pipe = self.redis_client.pipeline()
        pipe.incr(current_key)
        pipe.expire(current_key, self.window_seconds * 2)
        pipe.execute()
        return True, 0

API Rate Limiter: Comprehensive Overview and Implementation Guide

This document provides a detailed, professional overview of API Rate Limiters, covering their importance, common strategies, implementation considerations, and best practices. This information is designed to serve as a foundational guide for understanding and effectively deploying API rate limiting mechanisms.


1. Introduction: Understanding API Rate Limiting

An API Rate Limiter is a mechanism that controls the number of requests a client can make to an API within a given timeframe. Its primary purpose is to ensure the stability, reliability, and security of API services by preventing abuse, resource exhaustion, and denial-of-service (DoS) attacks.

Key Objectives:

  • Prevent Abuse & Misuse: Stop malicious actors from overwhelming the system or scraping data excessively.
  • Ensure Fair Usage: Distribute API resources equitably among all consumers.
  • Maintain System Stability: Protect backend services from being overloaded, leading to improved uptime and performance.
  • Cost Control: Reduce infrastructure costs associated with handling an excessive volume of requests.
  • Enhance Security: Mitigate certain types of DoS attacks and brute-force attempts.

2. Core Concepts & Terminology

  • Rate Limit: The maximum number of requests allowed within a specific time window (e.g., 100 requests per minute).
  • Quota: A broader limit, often applied over longer periods or for specific resources (e.g., 10,000 requests per month for a premium tier).
  • Burst Limit: Allows a temporary spike in requests above the steady rate, useful for accommodating sudden, legitimate traffic increases without immediately penalizing the client.
  • Throttling: The act of intentionally slowing down or rejecting requests that exceed defined limits.
  • Grace Period: A short window after a rate limit is hit, where requests might still be processed before full rejection, often combined with a slight delay.
  • Backoff Strategy: A client-side approach where, upon receiving a rate limit error, the client waits for an increasing amount of time before retrying the request.

3. Benefits of Implementing an API Rate Limiter

Implementing a robust API Rate Limiter provides significant advantages for both API providers and consumers:

  • For API Providers:

* Increased Reliability: Prevents server overload, ensuring consistent API performance.

* Enhanced Security: Mitigates common attack vectors like DoS, DDoS, and brute-force attacks.

* Fair Resource Allocation: Ensures that no single client monopolizes system resources.

* Operational Cost Reduction: Lowers infrastructure costs by managing traffic spikes and preventing unnecessary resource scaling.

* Improved User Experience: Consistent performance for legitimate users.

* Monetization Opportunities: Enables tiered service offerings based on different rate limits.

  • For API Consumers:

* Predictable Performance: Consistent API response times when operating within limits.

* Clear Expectations: Understandable rules for API usage.

* Robust Error Handling: Clear error codes and headers for efficient client-side management.


4. Common Rate Limiting Strategies/Algorithms

Different algorithms offer varying trade-offs in terms of accuracy, memory usage, and how they handle bursts.

  1. Fixed Window Counter:

* How it works: A simple counter is maintained for each client within a fixed time window (e.g., 60 seconds). All requests within that window increment the counter. Once the window expires, the counter resets.

* Pros: Easy to implement, low memory usage.

* Cons: "Burstiness" at window edges. A client can make N requests at the very end of one window and N requests at the very beginning of the next, effectively making 2N requests in a short period (e.g., 200 requests in 2 seconds for a 100 req/min limit).

* Best for: Simple, less critical applications where edge-case burstiness is acceptable.

  2. Sliding Log:

* How it works: For each client, a timestamp of every request is stored in a sorted log (e.g., a Redis sorted set). When a new request arrives, all timestamps older than the current time minus the window duration are removed. The number of remaining timestamps is then compared against the limit.

* Pros: Highly accurate, no edge-case burstiness.

* Cons: High memory usage (stores every request timestamp), computationally intensive for large logs.

* Best for: Scenarios requiring high accuracy and strict enforcement, where memory is not a major constraint.

  3. Sliding Window Counter:

* How it works: Combines aspects of Fixed Window and Sliding Log. It divides the time into fixed windows but estimates the request count for the current window by using a weighted average of the current window's count and the previous window's count, based on how far into the current window we are.

* Pros: Offers a good balance between accuracy and memory efficiency. Reduces the fixed window's burstiness issue without storing every timestamp.

* Cons: Still an approximation, not as precise as Sliding Log.

* Best for: Most general-purpose API rate limiting where a good balance of accuracy and performance is needed.

  4. Token Bucket:

* How it works: Imagine a bucket with a fixed capacity for "tokens." Tokens are added to the bucket at a constant rate. Each API request consumes one token. If a request arrives and the bucket is empty, the request is rate-limited. The bucket capacity allows for bursts (requests can consume multiple tokens quickly if available).

* Pros: Allows for controlled bursts, simple to understand and implement, smooths out traffic.

* Cons: Can be challenging to tune bucket capacity and refill rate effectively.

* Best for: APIs that need to handle occasional bursts of traffic while maintaining a steady average rate.
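A compact token-bucket sketch (one bucket shown; a real limiter keeps one per client, and timestamps would come from the clock rather than being passed in):

```python
class TokenBucket:
    """Hold up to `capacity` tokens, refilled at `refill_rate` tokens/second.
    Each request spends one token; an empty bucket rejects the request."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity
        self.rate = refill_rate
        self.tokens = float(capacity)  # start full: bursts allowed immediately
        self.last = None               # timestamp of the previous call

    def allow(self, now: float) -> bool:
        if self.last is not None:
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The two tuning knobs mentioned above map directly onto the constructor: `capacity` bounds the burst size, `refill_rate` sets the sustained average rate.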

  5. Leaky Bucket:

* How it works: Conceptually the inverse of the Token Bucket: requests are "poured into" the bucket and "leak out" (are processed) at a constant rate. If the bucket is full, new requests are dropped. This smooths bursts of requests into a steady stream.

* Pros: Excellent for smoothing out traffic, prevents resource exhaustion by ensuring a constant output rate.

* Cons: Does not allow for bursts (requests are queued or dropped), can introduce latency for bursts.

* Best for: Ensuring a very consistent processing rate for backend services, preventing overload.
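Modeled as a meter (the variant that drops rather than queues), the leaky bucket is a small inversion of the token bucket sketch above; names are again illustrative:

```python
class LeakyBucket:
    """Requests raise the water `level`; it drains at `leak_rate` units/second.
    A full bucket drops the incoming request."""

    def __init__(self, capacity: float, leak_rate: float):
        self.capacity = capacity
        self.rate = leak_rate
        self.level = 0.0   # start empty
        self.last = None   # timestamp of the previous call

    def allow(self, now: float) -> bool:
        if self.last is not None:
            # Drain proportionally to elapsed time, never below empty.
            self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + 1.0 > self.capacity:
            return False  # bucket full: drop the request
        self.level += 1.0
        return True
```

The queueing variant would enqueue overflow instead of returning False, trading dropped requests for added latency, which is the trade-off noted in the cons above.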


5. Implementation Considerations

When designing or choosing a rate limiting solution, consider the following:

  • Granularity:

* Per User/Client: Based on API key, OAuth token, user ID.

* Per IP Address: Simple, but can be problematic for shared IPs (NAT, proxies) or dynamic IPs.

* Per Endpoint: Different limits for different API endpoints (e.g., read vs. write operations).

* Global: A total limit across all clients.

  • Distributed Systems: In a microservices architecture, rate limiting needs to be consistent across all instances. This typically requires a centralized store (e.g., Redis, Cassandra) for counters/logs.
  • Edge vs. Application Layer:

* Edge (API Gateway/Load Balancer): Often the first line of defense (e.g., NGINX, Envoy, AWS API Gateway). Good for basic, high-volume limits.

* Application Layer: Provides finer-grained control, allowing logic based on user roles, subscription tiers, or specific request parameters. Often used in conjunction with edge-level limiting.

  • Storage Mechanism:

* In-Memory: Fastest, but not suitable for distributed systems or persistent limits.

* Redis: Excellent for distributed rate limiting due to its speed, atomic operations, and data structures (counters, sorted sets).

* Database: Slower, generally not recommended for high-volume rate limiting.

  • Error Handling:

* HTTP Status Code: Use 429 Too Many Requests.

* Response Headers: Include X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset (see section 9).

* Clear Error Messages: Inform the client why the request was rejected and when they can retry.

  • Monitoring & Alerting: Track rate limit hits, blocked requests, and overall API traffic. Set up alerts for unusual patterns.
  • Bypass Mechanisms: Allow internal tools, trusted partners, or specific administrative requests to bypass limits.
  • Caching: Rate limit checks can be computationally intensive. Cache decisions for a short period where appropriate.
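For the distributed case, a common pattern is a fixed-window counter keyed per client and window in Redis, using `INCR` plus `EXPIRE` (ideally made atomic with a pipeline or Lua script). The sketch below shows the pattern against a tiny in-memory stand-in so it is self-contained; `FakeRedis`, the key format, and the function names are illustrative, not a real client API:

```python
class FakeRedis:
    """Minimal stand-in for the two Redis commands this pattern needs.
    A real deployment would call redis-py's incr()/expire() instead."""

    def __init__(self):
        self.store = {}  # key -> (value, expires_at or None)

    def incr(self, key, now):
        value, expires = self.store.get(key, (0, None))
        if expires is not None and now >= expires:
            value, expires = 0, None  # key expired: start over
        value += 1
        self.store[key] = (value, expires)
        return value

    def expire(self, key, seconds, now):
        value, _ = self.store.get(key, (0, None))
        self.store[key] = (value, now + seconds)

def check_rate_limit(r, client_id, limit, window, now):
    """Fixed-window check shared by every app instance via the central store."""
    key = f"ratelimit:{client_id}:{int(now // window)}"
    count = r.incr(key, now)
    if count == 1:
        # First request in this window: arrange for the counter to expire
        # so stale windows don't accumulate in the store.
        r.expire(key, window, now)
    return count <= limit
```

Because every instance increments the same key, the limit holds across the whole fleet; the remaining subtlety (the non-atomic `incr`/`expire` pair) is why Lua scripts or `SET ... EX NX` variants are often preferred in production.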

6. Key Metrics to Monitor

Effective monitoring is crucial for fine-tuning and managing your rate limiting strategy.

  • Rate Limit Hits: Number of times clients hit a rate limit.
  • Blocked Requests: Total number of requests rejected by the rate limiter.
  • Requests Per Client/IP: Track usage patterns to identify potential abuse or inform tier adjustments.
  • Average Request Latency: Ensure rate limiting doesn't introduce undue overhead.
  • Resource Utilization: Monitor CPU, memory, and network usage of your rate limiting components.
  • Error Rates (429s): Track the percentage of requests resulting in a 429 status code.

7. Best Practices for API Rate Limiting

  • Start Simple, Iterate: Begin with a basic strategy (e.g., Fixed Window or Sliding Window Counter) and refine based on observed traffic patterns and business needs.
  • Communicate Clearly: Document your rate limits prominently in your API documentation.
  • Provide Informative Responses: Use 429 Too Many Requests and include relevant X-RateLimit-* headers.
  • Encourage Client-Side Backoff: Advise clients to implement exponential backoff and jitter when encountering 429 errors.
  • Differentiate Limits: Apply different limits based on authentication status, user roles, subscription tiers, or endpoint sensitivity.
  • Consider Bursts: Allow for reasonable burst capacity to accommodate legitimate traffic spikes.
  • Monitor and Tune: Regularly review your rate limits, monitor their impact, and adjust as necessary.
  • Test Thoroughly: Simulate various traffic scenarios, including legitimate usage and abuse attempts, to validate your rate limiter's effectiveness.
  • Avoid Hardcoding: Make rate limit configurations easily adjustable without code deployments.
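The client-side backoff advice above can be sketched as a "full jitter" schedule: each retry waits a random duration between zero and an exponentially growing (but capped) ceiling. The function name and defaults are illustrative:

```python
import random

def backoff_delays(base=1.0, cap=60.0, attempts=5, rng=random.random):
    """Full-jitter exponential backoff: retry i waits a uniform random
    duration in [0, min(cap, base * 2**i)] seconds."""
    return [rng() * min(cap, base * (2 ** i)) for i in range(attempts)]
```

A client would sleep for each delay in turn after receiving a 429, preferring the server's `Retry-After` value when one is present; the jitter keeps many throttled clients from retrying in lockstep.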

8. Example Use Cases

  • Public APIs: Limiting unauthenticated users to prevent scraping or denial-of-service.
  • Social Media Platforms: Limiting the number of posts, likes, or comments per user within a timeframe.
  • E-commerce APIs: Limiting product searches or order creation requests to prevent inventory scraping or order flooding.
  • Financial Services APIs: Strict limits on transaction requests to enhance security and prevent fraud.
  • IoT Devices: Managing the data reporting frequency from connected devices.
  • Search Engines: Limiting query rates to protect backend search infrastructure.

9. API Response Headers for Rate Limiting

When a client makes a request, the API should include specific headers to inform them about their current rate limit status, even if the request was successful. When a limit is exceeded, these headers become critical.

  • X-RateLimit-Limit: The maximum number of requests permitted in the current rate limit window.

* Example: X-RateLimit-Limit: 60

  • X-RateLimit-Remaining: The number of requests remaining in the current rate limit window.

* Example: X-RateLimit-Remaining: 55

  • X-RateLimit-Reset: The time (in UTC epoch seconds or a specific datetime string) at which the current rate limit window resets and X-RateLimit-Remaining will be reset to X-RateLimit-Limit.

* Example (epoch seconds): X-RateLimit-Reset: 1678886400

* Example (datetime): X-RateLimit-Reset: 2023-03-15T12:00:00Z

  • Retry-After (for 429 responses): Indicates how long the user agent should wait before making a follow-up request. This is typically sent only when the 429 Too Many Requests status code is returned, and can be expressed in seconds or as an HTTP-date.

* Example (seconds): Retry-After: 60 (wait 60 seconds)

* Example (HTTP-date): Retry-After: Wed, 21 Oct 2023 07:28:00 GMT
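Assembling these headers is straightforward; the helper below is an illustrative sketch (the function name and the choice of epoch seconds for the reset value are assumptions, matching the examples above):

```python
def rate_limit_headers(limit, remaining, reset_epoch, retry_after=None):
    """Build the informational rate-limit headers for a response.
    Retry-After is included only when the request was actually rejected."""
    headers = {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(max(0, remaining)),  # never negative
        "X-RateLimit-Reset": str(int(reset_epoch)),
    }
    if retry_after is not None:
        headers["Retry-After"] = str(int(retry_after))
    return headers
```

On a successful request the first three headers are attached as-is; on a 429 the same helper is called with `retry_after` set, so clients always see a consistent view of their remaining quota.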


10. Conclusion and Next Steps

API Rate Limiters are an indispensable component of any robust API ecosystem. By carefully selecting an appropriate strategy, considering implementation details, and continuously monitoring performance, you can ensure the stability, security, and scalability of your API services.

Recommended Next Steps:

  1. Define Requirements: Clearly outline the specific rate limits needed for different API endpoints, user types, and access tiers.
  2. Algorithm Selection: Choose the most suitable rate limiting algorithm(s) based on your accuracy, memory, and burst-handling requirements.
  3. Technology Stack Evaluation: Identify the best tools and technologies (e.g., API Gateway, Redis) for implementing your chosen strategy.
  4. Proof of Concept: Implement a small-scale proof of concept to validate the chosen approach.
  5. Integration & Testing: Integrate the rate limiter into your API architecture and conduct thorough load and stress testing.
  6. Documentation & Communication: Update your API documentation with clear rate limit policies and educate your API consumers.
  7. Monitor & Optimize: Continuously monitor the performance and effectiveness of your rate limits, making adjustments as needed.

This comprehensive guide provides a solid foundation for understanding and implementing effective API Rate Limiters. For specific implementation details or architecture discussions, please consult with your technical team.
