
API Rate Limiter: Architectural Plan

This document outlines a comprehensive architectural plan for an API Rate Limiter system. The goal is to design a robust, scalable, and efficient solution that protects backend services from abuse, ensures fair usage, and maintains system stability under varying load conditions.


1. Introduction & Overview

An API Rate Limiter is a critical component in modern microservice architectures and public-facing APIs. Its primary function is to control the rate at which clients can send requests to an API within a defined time window. This prevents denial-of-service (DoS) attacks, brute-force attempts, and ensures fair resource allocation among all consumers, thereby safeguarding the stability and performance of the backend infrastructure.

This architectural plan details the components, algorithms, integration strategies, and operational considerations necessary to build a high-performance, distributed API Rate Limiter.

2. Core Requirements & Objectives

The API Rate Limiter must satisfy the following requirements and objectives:

  • Accuracy: Enforce configured limits reliably, without significant overshoot at window boundaries.
  • Low Latency: Add minimal overhead to the request path, since every API call passes through the limiter.
  • Scalability: Handle high request volumes across many distributed enforcement points.
  • High Availability: Continue limiting (or fail open/closed per policy) when individual components are unavailable.
  • Configurability: Support per-client, per-endpoint, and tiered limits that can change without redeployment.
  • Observability: Emit metrics and logs for allowed/denied requests and limit utilization.

3. Architectural Goals

The architecture should keep the Rate Limiter Service stateless, with all counter state held in a shared store, so that service instances can scale horizontally. Enforcement should remain decoupled from backend business logic so that limits can evolve independently of the services they protect.

4. High-Level System Architecture

The API Rate Limiter will be implemented as a distributed service, typically deployed either as an API Gateway plugin/middleware or a standalone microservice that API Gateways or applications can query.

4.1. Key Components

  1. Rate Limiter Enforcement Point (RLEP):

* This is the component that intercepts incoming API requests.

* It could be an API Gateway (e.g., NGINX, Envoy, AWS API Gateway), a load balancer, or application-level middleware.

* Its responsibility is to extract relevant client identifiers (e.g., API Key, IP, User ID) and forward the rate limit check request.

* It will enforce the decision received from the Rate Limiter Service.

  2. Rate Limiter Service (RLS):

* A dedicated microservice responsible for applying rate limiting logic.

* It receives rate limit check requests from RLEPs.

* It queries and updates the Rate Limit State Store.

* It returns a decision (ALLOW/DENY) along with relevant headers (e.g., X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset).

  3. Rate Limit State Store:

* A high-performance, distributed key-value store (e.g., Redis) used to maintain the current state of all rate limits (e.g., counters, timestamps, token counts).

* Crucial for consistency across multiple instances of the Rate Limiter Service.

  4. Configuration Store:

* A mechanism to store and retrieve rate limit rules (e.g., max_requests_per_minute, window_size).

* Could be a database, configuration service (e.g., Consul, etcd), or static files.

  5. Monitoring & Alerting:

* Tools to collect metrics (e.g., requests allowed, requests denied, latency), logs, and trigger alerts on anomalies.

4.2. Data Flow

  1. Client Request: A client sends an API request to the API Gateway/Load Balancer.
  2. Request Interception: The RLEP intercepts the request.
  3. Identifier Extraction: RLEP extracts client identifiers (e.g., API Key, User ID, IP address) and the target endpoint.
  4. Rate Limit Check Request: RLEP sends a request to the Rate Limiter Service (RLS) with the extracted identifiers and endpoint.
  5. RLS Logic:

* RLS retrieves the relevant rate limit rule from the Configuration Store based on the identifiers/endpoint.

* RLS queries the Rate Limit State Store (e.g., Redis) for the current state (counter, timestamp) for that client/rule.

* RLS applies the chosen rate limiting algorithm (e.g., Sliding Window Counter).

* RLS updates the state in the Rate Limit State Store.

  6. Decision & Headers: RLS returns a decision (ALLOW/DENY) and appropriate X-RateLimit-* headers to the RLEP.
  7. Enforcement:

* If ALLOWED, RLEP forwards the request to the backend service.

* If DENIED, RLEP immediately returns an HTTP 429 Too Many Requests response to the client with the rate limit headers.

  8. Logging & Metrics: Both RLEP and RLS emit logs and metrics for observability.
The data flow is summarized in the following Mermaid diagram:
graph TD
    A[Client] -->|API Request| B(API Gateway / Load Balancer - RLEP);
    B -->|Extract Identifiers, Endpoint| C{Rate Limiter Service - RLS};
    C -->|Get Rule| D[Configuration Store];
    C -->|Read/Write State| E["Rate Limit State Store (e.g., Redis)"];
    E --> C;
    D --> C;
    C -->|Decision (ALLOW/DENY) & Headers| B;
    B -->|If ALLOWED| F[Backend Service];
    F -->|Response| B;
    B -->|API Response| A;
    B -->|If DENIED (429 Too Many Requests)| A;
    B -->|Logs/Metrics| G[Monitoring & Alerting];
    C -->|Logs/Metrics| G;

5. Detailed Design & Algorithm Selection

5.1. Common Rate Limiting Algorithms

  • Fixed Window Counter:

* Divides time into fixed windows (e.g., 1 minute).

* A counter increments for each request within the window. If the counter exceeds the limit, requests are denied until the next window.

* Pros: Simple to implement, low memory usage.

* Cons: Prone to "bursty" traffic at the start/end of a window, potentially allowing double the rate at window boundaries.

  • Sliding Window Log:

* Stores a timestamp for every request.

* When a new request arrives, it counts requests within the last N seconds (the window) by iterating through the stored timestamps.

* Pros: Highly accurate, handles bursts well.

* Cons: High memory usage (stores every request's timestamp), computationally expensive for large request volumes.

  • Token Bucket:

* A bucket holds "tokens." Tokens are added at a fixed rate.

* Each request consumes one token. If no tokens are available, the request is denied.

* Pros: Allows for some burstiness (up to the bucket size), smooths out traffic, simple to implement with a single counter and timestamp.

* Cons: Can be complex to tune refill_rate and bucket_size.

  • Leaky Bucket:

* Requests are added to a queue (the bucket).

* Requests "leak" out of the bucket at a constant rate.

* If the bucket is full, new requests are denied.

* Pros: Smooths out traffic, good for controlling egress rate.

* Cons: New requests might be delayed, complex to implement truly distributed queues.

  • Sliding Window Counter (Recommended):

* This algorithm combines the best aspects of Fixed Window and Sliding Window Log while mitigating their drawbacks.

* It uses two fixed windows: the current window and the previous window.

* When a request arrives, it computes a weighted sum of the two counts: the current window's count plus the previous window's count scaled by the fraction of the previous window that still falls inside the sliding window.

* Pros: More accurate than Fixed Window, less memory/computationally intensive than Sliding Window Log, effectively mitigates the "burst at window boundary" issue.

* Cons: Slightly more complex than Fixed Window.
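To make the weighting concrete, here is a minimal in-memory sketch of the Sliding Window Counter check (illustrative only; the function name and the plain-dict state store are assumptions, and a production version would keep the two counters in Redis):

```python
import time

def sliding_window_allow(counters, key, limit, window, now=None):
    """Sliding Window Counter check using two fixed-window counters.

    `counters` maps (key, window_start) -> request count. A weighted
    portion of the previous window's count approximates the requests
    still inside the sliding window.
    """
    now = time.time() if now is None else now
    current_start = int(now // window) * window
    previous_start = current_start - window

    current_count = counters.get((key, current_start), 0)
    previous_count = counters.get((key, previous_start), 0)

    # Fraction of the previous window that still overlaps the sliding window.
    overlap = 1.0 - (now - current_start) / window
    weighted = current_count + previous_count * overlap

    if weighted < limit:
        counters[(key, current_start)] = current_count + 1
        return True
    return False
```

At a window boundary the previous window weighs in fully (overlap = 1.0) and fades linearly to zero as the current window elapses, which is what suppresses the boundary-burst problem.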

5.2. Recommended Algorithm(s) and Justification

We recommend a hybrid approach, primarily using the Sliding Window Counter algorithm due to its balance of accuracy, efficiency, and burst handling capabilities. For scenarios requiring strict smoothing (e.g., outbound queues), a Token Bucket might be considered as an alternative or supplementary mechanism.

Justification for Sliding Window Counter:

  • Accuracy: Provides a more accurate representation of the request rate over a sliding period compared to Fixed Window.
  • Efficiency: Uses fixed-size counters (two per key) rather than storing individual request timestamps, making it memory-efficient and fast for Redis operations.
  • Burst Tolerance: Reduces the impact of bursts occurring at window boundaries, which is a significant flaw of the Fixed Window approach.

5.3. Storage Mechanism

Redis is the recommended choice for the Rate Limit State Store due to its:

  • In-memory Performance: Extremely fast read/write operations.
  • Atomic Operations: Supports atomic increments (INCR), EXPIRE for time-to-live, and Lua scripting for complex multi-command operations, which are crucial for consistent rate limiting logic.
  • Distributed Nature: Can be deployed in a clustered mode (Redis Cluster) for high availability and horizontal scalability.
  • TTL Support: Keys can be set with an expiration time, automatically cleaning up old rate limit states.

Redis Data Structure for Sliding Window Counter:

Each rate limit key (e.g., user:123:api_calls) will store two counters and their associated window start times.

A Redis Hash or multiple INCR commands with EXPIRE could be used.

A more robust and atomic approach would be to use a Lua script to execute the logic:

  1. Get current timestamp.
  2. Calculate current window and previous window start times.
  3. Get counts for current and previous windows.
  4. Calculate weighted average.
  5. If within limit, increment current window counter and set/update its expiry.
  6. Return decision and updated counters.

5.4. Key Generation

The rate limit key must uniquely identify the entity being limited and the specific rule.

Key format examples:

  • ip:192.168.1.1:minute
  • user:john_doe:api_endpoint_X:hour
  • api_key:abcdef123:global:day

The key should be deterministic and generated consistently by the RLEP or RLS based on the configured rules.
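A tiny helper along these lines (hypothetical function name; the colon-delimited layout matches the examples above) keeps key generation deterministic:

```python
def make_rate_limit_key(*parts: str) -> str:
    """Joins key components into a deterministic, colon-delimited Redis key.

    Rejects empty components and embedded colons so that keys such as
    'ip:192.168.1.1:minute' parse unambiguously.
    """
    for part in parts:
        if not part or ":" in part:
            raise ValueError(f"invalid key component: {part!r}")
    return ":".join(parts)
```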

5.5. Distributed System Considerations

  • Clock Skew: Relying on each instance's local system clock for window boundaries is problematic in distributed systems. All instances should derive timestamps from a single source of truth, for example the Redis TIME command, or at minimum from NTP-synchronized Unix timestamps.
  • Race Conditions: Atomic operations (e.g., Redis INCR or Lua scripts) are essential to prevent race conditions when multiple RLS instances try to update the same rate limit counter concurrently.
  • Consistency: Redis Cluster replicates asynchronously, so a failover can lose the most recent counter updates. This weaker guarantee is generally acceptable for rate limiting; stronger consistency would add latency to every check.
  • Leader Election: Not strictly required for the RLS itself if Redis handles the state, but if RLS needs to maintain its own synchronized state, a leader election mechanism might be considered.

6. Integration Points

6.1. API Gateway Integration (Recommended)

Integrating the Rate Limiter at the API Gateway level offers several advantages:

  • Early Enforcement: Requests are denied before reaching backend services, saving compute resources.
  • Centralized Control: All APIs can leverage the same rate limiting logic.
  • Protocol Agnostic: Can apply limits regardless of the backend service's implementation language.
  • Examples:

* NGINX: Use ngx_http_limit_req_module (fixed window, less flexible) or custom Lua scripts with lua-nginx-module to interact with the RLS.

* Envoy Proxy: Leverage its native rate limiting filter, which can communicate with a gRPC-based RLS.

* Cloud API Gateways (AWS API Gateway, Azure API Management): Often have built-in rate limiting, but for complex or custom logic, they can be configured to call an external RLS.

6.2. Application Middleware Integration

For internal APIs or specific application-level limits, the Rate Limiter can be integrated as middleware within the application framework (e.g., Express.js, Spring Boot, Flask, Gin).

  • Pros: Fine-grained control, access to application-specific context (e.g., user roles, subscription tiers).
  • Cons: Duplication of logic across applications, higher resource consumption on backend services, harder to manage centrally.

Hybrid Approach: A common strategy is to use API Gateway for global/public-facing limits and application middleware for more specific, business-logic-driven limits.

7. Scalability


API Rate Limiter: Production-Ready Code Implementation

This deliverable provides a comprehensive and detailed implementation for API Rate Limiting, focusing on robust and scalable solutions suitable for production environments. We will explore two popular and effective strategies: the Sliding Window Counter and the Token Bucket algorithm, both leveraging Redis for distributed and high-performance rate limiting.

1. Introduction to API Rate Limiting

API rate limiting is a critical component for managing API usage, protecting backend services, and ensuring fair resource allocation. It prevents abuse, denial-of-service (DoS) attacks, and uncontrolled consumption of resources by restricting the number of requests a user or client can make within a specified timeframe.

Key Benefits:

  • System Stability: Prevents server overload by throttling excessive requests.
  • Resource Protection: Safeguards databases, external services, and CPU cycles.
  • Fair Usage: Ensures all legitimate users have access to the API.
  • Security: Mitigates certain types of DoS attacks and brute-force attempts.
  • Cost Control: Helps manage costs associated with resource usage in cloud environments.

2. Rate Limiting Strategies

We will implement two distinct strategies, each with its own advantages:

2.1. Sliding Window Counter (using Redis Sorted Sets)

The Sorted-Set-based Sliding Window Counter (strictly speaking, a sliding window log, since it stores a timestamp per request) offers a more accurate approach than the simple Fixed Window Counter by mitigating the "burst" problem at window edges. It works by:

  1. Storing timestamps of each request made by a client within a Redis Sorted Set.
  2. When a new request arrives, it removes all timestamps older than the current window from the Sorted Set.
  3. It then counts the number of remaining timestamps (which represent requests within the current window).
  4. If the count is within the allowed limit, the request is permitted, and its timestamp is added to the Sorted Set.

This method provides a relatively precise and robust rate limiting mechanism, especially when implemented with Redis's efficient Sorted Set operations.
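The four steps above can be sketched with an in-memory stand-in for the Sorted Set (a plain sorted list; the class and method names are illustrative and are not part of the implementation in section 4):

```python
import bisect

class SlidingWindowLog:
    """In-memory analogue of the Redis Sorted Set approach (illustrative)."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window_seconds = window_seconds
        self.timestamps = {}  # client -> sorted list of request times

    def allow(self, client: str, now: float) -> bool:
        log = self.timestamps.setdefault(client, [])
        # Analogue of ZREMRANGEBYSCORE: drop entries older than the window.
        cutoff = now - self.window_seconds
        del log[:bisect.bisect_right(log, cutoff)]
        # Analogue of ZCARD: count requests still inside the window.
        if len(log) >= self.limit:
            return False
        # Analogue of ZADD: record this request's timestamp.
        bisect.insort(log, now)
        return True
```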

2.2. Token Bucket (using Redis Hashes or Keys with Lua Scripting)

The Token Bucket algorithm is excellent for allowing short bursts of requests while maintaining a steady average rate. It works conceptually like this:

  1. A "bucket" has a maximum capacity.
  2. "Tokens" are added to the bucket at a fixed refill_rate.
  3. Each API request consumes one token.
  4. If the bucket has tokens, the request is allowed, and a token is removed.
  5. If the bucket is empty, the request is denied.

This approach is good for systems that need to handle occasional traffic spikes without being overly restrictive on the average rate. We will implement this using Redis to store the current number of tokens and the last refill timestamp.

3. Implementation Details and Prerequisites

To run the provided code, you will need:

  • Python 3.x: The code is written in Python.
  • Redis Server: A running Redis instance. You can install it locally or use a Docker container.
  • redis-py library: The Python client for Redis. Install via pip install redis.

Core Concepts with Redis:

  • Sliding Window Counter:

* Uses ZADD to add request timestamps to a Sorted Set.

* Uses ZREMRANGEBYSCORE to remove old timestamps.

* Uses ZCARD to get the count of requests within the window.

* MULTI/EXEC (transactions) or Lua scripting can be used to ensure atomicity for multiple Redis commands.

  • Token Bucket:

* Uses a Redis key (e.g., a hash or simple string) to store the current number of tokens and the last refill timestamp.

* INCRBYFLOAT (or hash fields updated atomically from a Lua script) can be used to update token counts.

* Lua scripting is highly recommended for atomicity and complex logic involving multiple reads and writes to Redis keys.

4. Code Implementation

Below are the Python classes for implementing both rate limiting strategies using Redis.


import time
import uuid
from typing import Optional

import redis

# --- Configuration ---
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
REDIS_DB = 0

class RateLimiter:
    """
    Base class for API Rate Limiters.
    """
    def __init__(self, redis_client: redis.Redis):
        self.redis_client = redis_client

    def _get_key(self, client_id: str, resource: str) -> str:
        """Generates a unique Redis key for the given client and resource."""
        return f"rate_limit:{client_id}:{resource}"

    def allow_request(self, client_id: str, resource: str) -> bool:
        """
        Determines if a request is allowed based on the implemented rate limiting strategy.
        Must be implemented by subclasses.
        """
        raise NotImplementedError("Subclasses must implement allow_request method.")

    def get_status(self, client_id: str, resource: str) -> dict:
        """
        Provides status information about the rate limit for a given client and resource.
        Must be implemented by subclasses.
        """
        raise NotImplementedError("Subclasses must implement get_status method.")


class SlidingWindowRateLimiter(RateLimiter):
    """
    Implements a Sliding Window Counter rate limiting strategy using Redis Sorted Sets.
    This strategy is more accurate than fixed window and prevents burst issues at window edges.
    """
    def __init__(self, redis_client: redis.Redis, limit: int, window_size_seconds: int):
        """
        Initializes the SlidingWindowRateLimiter.

        Args:
            redis_client: An initialized Redis client instance.
            limit: The maximum number of requests allowed within the window_size_seconds.
            window_size_seconds: The duration of the sliding window in seconds.
        """
        super().__init__(redis_client)
        if not (limit > 0 and window_size_seconds > 0):
            raise ValueError("Limit and window_size_seconds must be positive integers.")
        self.limit = limit
        self.window_size_seconds = window_size_seconds
        # Lua script for atomic operations
        self._lua_script = """
            local key = KEYS[1]
            local current_time = tonumber(ARGV[1])
            local window_size = tonumber(ARGV[2])
            local limit = tonumber(ARGV[3])
            local request_id = ARGV[4]

            local trim_time = current_time - window_size

            -- Remove old entries outside the window
            redis.call('ZREMRANGEBYSCORE', key, '-inf', trim_time)

            -- Add the current request's timestamp (score) and a unique ID (member)
            redis.call('ZADD', key, current_time, request_id)

            -- Set expiration for the key to clean up old data
            redis.call('EXPIRE', key, window_size * 2) -- Set expiry to twice the window size for buffer

            -- Get the current count of requests in the window
            local count = redis.call('ZCARD', key)

            -- Return 1 if allowed, 0 if denied, and the current count.
            -- On denial, remove the just-added member atomically so the
            -- rejected request does not count against the window.
            if count <= limit then
                return {1, count}
            else
                redis.call('ZREM', key, request_id)
                return {0, count}
            end
        """
        self._lua_sha = self.redis_client.script_load(self._lua_script)

    def allow_request(self, client_id: str, resource: str) -> bool:
        """
        Checks if a request is allowed based on the sliding window counter.

        Args:
            client_id: A unique identifier for the client (e.g., user ID, IP address, API key).
            resource: The specific API resource being accessed (e.g., "/api/v1/users").

        Returns:
            True if the request is allowed, False otherwise.
        """
        key = self._get_key(client_id, resource)
        current_time = time.time()
        request_id = str(uuid.uuid4()) # Unique ID for this specific request

        # Execute the Lua script atomically
        result = self.redis_client.evalsha(
            self._lua_sha,
            1, # Number of keys
            key,
            current_time,
            self.window_size_seconds,
            self.limit,
            request_id
        )
        
        is_allowed = bool(result[0])
        current_count = result[1]
        
        # Defensive cleanup: if the request was denied, remove its timestamp so
        # it does not count against the window (ZREM is a no-op if the member
        # is already gone).
        if not is_allowed:
            self.redis_client.zrem(key, request_id)

        return is_allowed

    def get_status(self, client_id: str, resource: str) -> dict:
        """
        Provides status information for the sliding window rate limit.

        Args:
            client_id: A unique identifier for the client.
            resource: The specific API resource.

        Returns:
            A dictionary containing 'allowed', 'limit', 'remaining', 'window_size_seconds'.
        """
        key = self._get_key(client_id, resource)
        current_time = time.time()
        
        # We need to re-run part of the logic to get an accurate count without adding a new request
        # This is for status only, so we don't need atomicity with ZADD
        trim_time = current_time - self.window_size_seconds
        self.redis_client.zremrangebyscore(key, '-inf', trim_time)
        
        current_count = self.redis_client.zcard(key)
        remaining = max(0, self.limit - current_count)
        
        return {
            "allowed": current_count < self.limit,
            "limit": self.limit,
            "remaining": remaining,
            "window_size_seconds": self.window_size_seconds,
            "current_count": current_count
        }


class TokenBucketRateLimiter(RateLimiter):
    """
    Implements a Token Bucket rate limiting strategy using Redis.
    This strategy allows for bursts up to the bucket capacity while maintaining an average refill rate.
    """
    def __init__(self, redis_client: redis.Redis, capacity: float, refill_rate_per_second: float):
        """
        Initializes the TokenBucketRateLimiter.

        Args:
            redis_client: An initialized Redis client instance.
            capacity: The maximum number of tokens the bucket can hold.
            refill_rate_per_second: The rate at which tokens are added to the bucket per second.
        """
        super().__init__(redis_client)
        if not (capacity > 0 and refill_rate_per_second > 0):
            raise ValueError("Capacity and refill_rate_per_second must be positive.")
        self.capacity = float(capacity)
        self.refill_rate_per_second = float(refill_rate_per_second)
        
        # Lua script for atomic token bucket operations
        # This script ensures that checking tokens, refilling, and consuming are atomic.
        self._lua_script = """
            local key = KEYS[1]
            local capacity = tonumber(ARGV[1])
            local refill_rate = tonumber(ARGV[2])
            local current_time = tonumber(ARGV[3])
            local requested_tokens = tonumber(ARGV[4])

            -- Get current tokens and last refill time from Redis hash
            local bucket_info = redis.call('HMGET', key, 'tokens', 'last_refill_time')
            local current_tokens = tonumber(bucket_info[1])
            local last_refill_time = tonumber(bucket_info[2])

            if not current_tokens then
                -- Initialize bucket if it doesn't exist
                current_tokens = capacity
                last_refill_time = current_time
            end

            -- Calculate tokens to add based on time elapsed
            local time_elapsed = current_time - last_refill_time
            local tokens_to_add = time_elapsed * refill_rate

            -- Refill the bucket, capping at capacity
            current_tokens = math.min(capacity, current_tokens + tokens_to_add)

            local allowed = 0
            if current_tokens >= requested_tokens then
                -- Consume tokens if available
                current_tokens = current_tokens - requested_tokens
                allowed = 1
            end

            -- Update bucket state in Redis
            redis.call('HMSET', key, 'tokens', current_tokens, 'last_refill_time', current_time)
            
            -- Set expiration for the key to clean up old data if not used
            -- Set expiry to a reasonable multiple of the time it takes to refill the bucket
            local expiry_seconds = math.ceil(capacity / refill_rate) * 2
            redis.call('EXPIRE', key, expiry_seconds)

            return {allowed, current_tokens}
        """
        self._lua_sha = self.redis_client.script_load(self._lua_script)

    def allow_request(self, client_id: str, resource: str, tokens_needed: int = 1) -> bool:
        """
        Checks if a request is allowed based on the token bucket.

        Args:
            client_id: A unique identifier for the client.
            resource: The specific API resource.
            tokens_needed: The number of tokens required for this request (default is 1).

        Returns:
            True if the request is allowed, False otherwise.
        """
        key = self._get_key(client_id, resource)
        current_time = time.time()

        result = self.redis_client.evalsha(
            self._lua_sha,
            1, # Number of keys
            key,
            self.capacity,
            self.refill_rate_per_second,
            current_time,
            tokens_needed
        )

        return bool(result[0])

    def get_status(self, client_id: str, resource: str) -> dict:
        """
        Provides status information for the token bucket rate limit.

        Args:
            client_id: A unique identifier for the client.
            resource: The specific API resource.

        Returns:
            A dictionary containing 'allowed', 'capacity', 'refill_rate_per_second',
            'current_tokens', and 'last_refill_time'.
            'allowed' is set to True if there's at least 1 token available.
        """
        key = self._get_key(client_id, resource)
        current_time = time.time()

        # Re-run the refill calculation read-only so the reported token
        # count reflects time elapsed since the last consumption.
        bucket_info = self.redis_client.hmget(key, 'tokens', 'last_refill_time')
        if bucket_info[0] is None:
            current_tokens = self.capacity
            last_refill_time = current_time
        else:
            current_tokens = float(bucket_info[0])
            last_refill_time = float(bucket_info[1])

        time_elapsed = current_time - last_refill_time
        current_tokens = min(self.capacity, current_tokens + time_elapsed * self.refill_rate_per_second)

        return {
            "allowed": current_tokens >= 1,
            "capacity": self.capacity,
            "refill_rate_per_second": self.refill_rate_per_second,
            "current_tokens": current_tokens,
            "last_refill_time": last_refill_time
        }

API Rate Limiter: Comprehensive Overview and Implementation Guide

This document provides a detailed, professional overview of API Rate Limiters, outlining their critical role in modern API ecosystems, common implementation strategies, and best practices. This output serves as a foundational deliverable for understanding, designing, and deploying robust rate limiting mechanisms.


1. Introduction to API Rate Limiting

An API Rate Limiter is a mechanism that controls the number of requests an API client can make within a defined timeframe. Its primary purpose is to prevent abuse, ensure fair resource allocation, maintain system stability, and protect against various forms of malicious activity and overload.

2. Why API Rate Limiting is Crucial

Implementing API rate limiting is not merely a best practice; it's a fundamental requirement for any production-grade API. Its importance stems from several critical factors:

  • System Stability and Reliability: Prevents a single client or a surge of requests from overwhelming the API servers, database, or other backend services, thus ensuring consistent performance for all users.
  • Resource Protection: Safeguards expensive computational resources (CPU, memory, network bandwidth, database connections) from being exhausted by excessive requests.
  • Security and Abuse Prevention:

* DDoS Attacks (Denial of Service): Mitigates the impact of volumetric attacks designed to flood the API with traffic.

* Brute-Force Attacks: Prevents attackers from repeatedly guessing credentials, API keys, or other sensitive information.

* Scraping: Deters automated bots from rapidly extracting large amounts of data.

  • Cost Management: For cloud-based services where resource usage translates directly to costs, rate limiting helps control infrastructure expenses by preventing uncontrolled scaling due to excessive, non-legitimate traffic.
  • Fair Usage and Quality of Service (QoS): Ensures that all legitimate users receive a fair share of API resources, preventing a few heavy users from degrading the experience for others. It can also be used to enforce different service tiers (e.g., free vs. premium access).
  • Operational Control: Provides a mechanism for API providers to manage traffic flow, introduce new features, or perform maintenance without unexpected outages.

3. How API Rate Limiting Works (General Mechanism)

At its core, an API rate limiter operates by tracking the number of requests made by a specific client (identified by IP address, API key, user ID, etc.) over a given time window.

  1. Request Interception: Every incoming API request is first intercepted by the rate limiting component.
  2. Client Identification: The client making the request is identified (e.g., via IP address, API key in headers, JWT token).
  3. Counter/State Management: The system checks the current state (e.g., request count, last request timestamp, token balance) associated with that client. This state is typically stored in a fast, distributed data store like Redis.
  4. Policy Enforcement: Based on the configured rate limiting policy (e.g., 100 requests per minute), the system decides:

* If the request is within the allowed limit, it's permitted to proceed, and the client's state is updated.

* If the request exceeds the limit, it's blocked.

  5. Response to Exceeded Limit: When a request is blocked, the API typically returns an HTTP 429 Too Many Requests status code, often accompanied by Retry-After headers indicating when the client can safely retry.

4. Common Rate Limiting Algorithms

Different algorithms offer varying trade-offs in terms of accuracy, resource usage, and fairness.

4.1. Fixed Window Counter

  • Concept: Divides time into fixed-size windows (e.g., 60 seconds). Each window has a counter for each client. When a request comes in, the counter for the current window is incremented. If the counter exceeds the limit, the request is rejected.
  • Pros: Simple to implement, low memory footprint.
  • Cons:

* Bursting Problem: A client can make N requests at the very end of one window and N requests at the very beginning of the next window, effectively making 2N requests in a short period around the window boundary.

* Can be unfair if requests are concentrated at the start of a window, potentially blocking legitimate requests later.

  • Use Cases: Simple rate limiting where strict fairness at window boundaries is not critical.
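As a point of reference, a Fixed Window Counter fits in a few lines (an illustrative in-memory sketch; a real deployment would keep the counters in a shared store such as Redis):

```python
import time

class FixedWindowCounter:
    """Minimal in-memory Fixed Window Counter (illustrative only)."""

    def __init__(self, limit: int, window_seconds: int):
        self.limit = limit
        self.window_seconds = window_seconds
        self.counts = {}  # (client, window index) -> request count

    def allow(self, client: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        window_index = int(now // self.window_seconds)
        key = (client, window_index)
        count = self.counts.get(key, 0)
        if count >= self.limit:
            return False  # limit reached for this fixed window
        self.counts[key] = count + 1
        return True
```

Note how the counter resets abruptly at each window boundary, which is exactly the source of the bursting problem described above.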

4.2. Sliding Window Log

  • Concept: For each client, maintains a timestamped log of all successful requests. When a new request arrives, it removes all timestamps older than the current time minus the window duration. If the number of remaining timestamps (requests in the current window) exceeds the limit, the request is rejected.
  • Pros: Highly accurate and fair, as it considers the exact timestamps of requests. No "bursting problem" at window boundaries.
  • Cons: High memory usage, as it needs to store a log of timestamps for each client. Can be computationally intensive for large numbers of requests.
  • Use Cases: Scenarios requiring high precision and strict adherence to limits, often for premium tiers or critical APIs.

4.3. Sliding Window Counter

  • Concept: A hybrid approach that addresses the bursting problem of Fixed Window Counter without the high memory cost of Sliding Window Log. It uses two fixed windows: the current window and the previous window.

* When a request arrives, it calculates the "weighted" count from the previous window and adds it to the current window's count.

* The weight is the fraction of the previous window that still overlaps the sliding window (i.e., 1 minus the elapsed fraction of the current window).

Effective Count = (Requests in Current Window) + (Requests in Previous Window × Overlap Fraction)

  • Pros: More accurate than Fixed Window Counter, less memory-intensive than Sliding Window Log. Good balance between accuracy and resource usage.
  • Cons: Still allows for some level of bursting, though less severe than Fixed Window. Slightly more complex to implement.
  • Use Cases: General-purpose rate limiting where a good balance of fairness and efficiency is desired.

4.4. Token Bucket

  • Concept: Imagine a bucket with a fixed capacity that holds "tokens." Tokens are added to the bucket at a constant rate. Each API request consumes one token. If a request arrives and the bucket is empty, the request is rejected. If tokens are available, one is consumed, and the request proceeds.
  • Pros:

* Allows for short bursts of requests (up to the bucket capacity) without rejecting them, which is good for user experience.

* Limits the sustained rate to the token refill rate.

* Simple to understand and implement.

  • Cons: Requires careful tuning of bucket size and refill rate.
  • Use Cases: Ideal for APIs that need to handle occasional traffic spikes gracefully while maintaining a steady average rate, common for user-facing applications.
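
A single-process Token Bucket sketch with lazy refill follows (no background timer is needed; tokens are topped up from elapsed time on each check, and all names are illustrative):

```python
import time

class TokenBucket:
    """Token Bucket: tokens refill at `rate` per second up to `capacity`."""

    def __init__(self, capacity, rate, now=None):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)  # start full: bursts allowed immediately
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Lazily refill based on time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The bucket capacity bounds the burst size, while the refill rate bounds the sustained average rate, which is exactly the tuning trade-off noted in the cons above.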

4.5. Leaky Bucket

  • Concept: Similar to a physical bucket with a hole at the bottom, where requests are water droplets. Requests enter the bucket and "leak out" (are processed) at a constant, fixed rate. If the bucket is full, incoming requests overflow and are rejected.
  • Pros: Smooths out bursty traffic into a steady stream of processing. Guarantees a constant output rate.
  • Cons:

* If the processing rate is slow, the queue can grow, leading to increased latency.

* Does not allow for bursts of requests beyond the "leak" rate.

  • Use Cases: Often used for systems where a constant processing rate is critical, such as message queues, rather than direct API request limiting.
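
For completeness, a sketch of the "meter" variant of the Leaky Bucket, which rejects on overflow instead of queueing (the queueing variant described above would instead enqueue requests and drain them at the leak rate; names here are illustrative):

```python
class LeakyBucket:
    """Leaky Bucket as a meter: the level drains at `leak_rate` units per
    second, each request adds one unit, and overflow means rejection."""

    def __init__(self, capacity, leak_rate, now=0.0):
        self.capacity = capacity
        self.leak_rate = leak_rate
        self.level = 0.0
        self.last = now

    def allow(self, now):
        # Drain the bucket for the time elapsed since the last request.
        self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + 1.0 > self.capacity:
            return False  # bucket would overflow
        self.level += 1.0
        return True
```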

5. Key Benefits of API Rate Limiting

  • Enhanced Security: Protection against DDoS, brute-force, and scraping attacks.
  • Improved Reliability: Prevents server overload and ensures consistent API availability.
  • Optimized Resource Utilization: Efficient use of infrastructure, leading to cost savings.
  • Fair Access: Equitable distribution of API resources among diverse user bases.
  • Predictable Performance: Helps maintain a stable and predictable performance profile for the API.

6. Challenges and Considerations

  • Granularity: What entity defines a "client"? IP address (can be shared/spoofed), API key (best for authenticated access), user ID (most accurate for logged-in users).
  • Distributed Systems: In a microservices architecture or cloud environment, counters must be synchronized across multiple instances. A centralized, fast data store (e.g., Redis, memcached) is essential.
  • Policy Complexity: Different endpoints might require different limits (e.g., GET /data vs. POST /upload). Premium users might have higher limits.
  • Edge Cases: What if a legitimate user suddenly makes many requests? How to handle retries?
  • Monitoring and Alerting: Crucial to track rate limit breaches and system health.
  • User Experience: Clear error messages (429 Too Many Requests) and Retry-After headers are vital for client applications to handle limits gracefully.
  • Bypassing: Attackers might try to rotate IP addresses or use botnets. Advanced rate limiting might involve behavioral analysis.
  • Stateless vs. Stateful: Stateful rate limiting (like Token Bucket or Sliding Window Log) requires storing client state, which adds complexity but offers more precise control.
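
The "Distributed Systems" point above, i.e. synchronizing counters through a fast shared store, commonly reduces to an INCR-plus-EXPIRE pattern on a per-window key. The sketch below substitutes a tiny in-memory stand-in for those two commands so it runs standalone; in production the same two calls would go through a real client such as redis-py against Redis:

```python
import time

class FakeRedis:
    """In-memory stand-in for the two Redis commands used here (INCR, EXPIRE).
    Illustrative only; swap in a real Redis client in production."""

    def __init__(self):
        self.store = {}  # key -> (value, expires_at or None)

    def _live(self, key, now):
        v = self.store.get(key)
        if v and (v[1] is None or v[1] > now):
            return v
        self.store.pop(key, None)  # expired or absent
        return None

    def incr(self, key, now=None):
        now = time.time() if now is None else now
        v = self._live(key, now)
        value = (v[0] if v else 0) + 1
        self.store[key] = (value, v[1] if v else None)
        return value

    def expire(self, key, seconds, now=None):
        now = time.time() if now is None else now
        v = self._live(key, now)
        if v:
            self.store[key] = (v[0], now + seconds)

def check_rate_limit(client, key, limit, window, now=None):
    """Fixed-window check shared by all instances: INCR the per-window key,
    set its TTL on first use, reject once the count exceeds the limit."""
    now = time.time() if now is None else now
    bucket = f"rl:{key}:{int(now // window)}"
    count = client.incr(bucket, now=now)
    if count == 1:
        client.expire(bucket, window, now=now)
    return count <= limit
```

Because INCR is atomic in Redis, every instance sees a consistent count without application-level locking; the TTL keeps stale window keys from accumulating.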

7. Implementation Strategies

Rate limiting can be implemented at various layers of your application stack:

7.1. Application-Level Rate Limiting

  • Description: Implemented directly within your API backend code (e.g., using a library in Node.js, Python, Java).
  • Pros: Fine-grained control, custom logic, easy to integrate with application-specific user IDs or business logic.
  • Cons:

* Adds overhead to your application servers.

* Requires distributed state management if your application scales horizontally (e.g., using Redis for counters).

* Requires careful implementation to avoid introducing bugs or performance bottlenecks.

  • Tools/Libraries: express-rate-limit (Node.js), Flask-Limiter (Python), Spring Cloud Gateway's RateLimiter (Java).
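
A bare-bones application-level decorator in the spirit of the libraries above can be sketched as follows (single process only; the names are invented for illustration and do not match any real library's API):

```python
import time
from functools import wraps

class RateLimitExceeded(Exception):
    """Raised when a caller exceeds its limit; a web framework would map
    this to an HTTP 429 response."""

def rate_limit(limit, window_seconds, key_func):
    """Fixed-window, per-key limiter as a decorator. Counters live in process
    memory, so a horizontally scaled service would need shared state instead."""
    counters = {}  # (key, window_index) -> count

    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            window = int(time.time() // window_seconds)
            bucket = (key_func(*args, **kwargs), window)
            counters[bucket] = counters.get(bucket, 0) + 1
            if counters[bucket] > limit:
                raise RateLimitExceeded("429 Too Many Requests")
            return fn(*args, **kwargs)
        return wrapper
    return decorator
```

The in-process `counters` dict is precisely the limitation called out above: once the application scales horizontally, each instance counts independently, which is why shared storage such as Redis becomes necessary.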

7.2. API Gateway / Reverse Proxy Rate Limiting

  • Description: Implemented at the API Gateway or reverse proxy layer (e.g., Nginx, Envoy, Kong, Apigee). This layer sits in front of your backend services.
  • Pros:

* Decoupled: Offloads rate limiting logic from your application code.

* Centralized: A single point of control for all APIs.

* Performance: Often highly optimized for performance.

* Scalability: Gateways are designed to handle high traffic volumes.

  • Cons: Less granular control than application-level for very specific business logic.
  • Examples: Nginx limit_req, Envoy Proxy rate_limit filter, Kong API Gateway plugins.

7.3. Cloud Provider Services

  • Description: Many cloud providers offer managed rate limiting services, often integrated with their API Gateway or WAF (Web Application Firewall) offerings.
  • Pros:

* Managed Service: No infrastructure to manage.

* Scalability and Reliability: Built for high availability and performance.

* Integration: Seamlessly integrates with other cloud services.

* Advanced Features: Often include WAF capabilities, bot detection, and behavioral analysis.

  • Cons: Vendor lock-in, potentially higher cost, less customization than self-managed solutions.
  • Examples: AWS API Gateway Throttling and Usage Plans, Google Cloud Endpoints, Azure API Management, Cloudflare Rate Limiting.

8. Key Metrics for Monitoring

To ensure your rate limiting strategy is effective and not causing unintended issues, monitor the following:

  • Total Requests: Overall API traffic.
  • Rate Limited Requests: Number of requests blocked by the rate limiter.
  • HTTP 429 Responses: Count of "Too Many Requests" responses.
  • Client-Specific Usage: Track request counts for individual clients/API keys.
  • Latency Impact: Ensure rate limiting itself isn't introducing significant latency.
  • Resource Utilization: Monitor CPU, memory, and network usage of your rate limiting component.
  • Error Rates: Sudden spikes in 429 errors could indicate an attack or a misconfigured client.

9. Best Practices

  • Start with Reasonable Defaults: Don't be overly aggressive initially; gather data and adjust.
  • Communicate Limits Clearly: Document your rate limits in your API documentation, including the algorithm used (if relevant), window sizes, and response headers (Retry-After).
  • Use 429 Too Many Requests: Adhere to the HTTP standard for rate limit responses.
  • Include Retry-After Header: Provide clients with a clear indication of when they can safely retry their request.
  • Implement Exponential Backoff on the Client Side: Advise clients to implement this strategy to avoid immediately hitting the limit again after a 429 response.
  • Different Limits for Different Endpoints/Actions: Critical or resource-intensive endpoints should have stricter limits.
  • Tiered Limits: Offer different rate limits based on subscription plans (e.g., free vs. premium).
  • Granularity Matters: Choose the right identifier for your clients (IP, API Key, User ID).
  • Centralized Storage for Distributed Systems: Use Redis or similar for shared state across multiple instances.
  • Monitor and Alert: Continuously monitor rate limit performance and set up alerts for high rejection rates or unusual patterns.
  • Consider Bursting: If user experience is critical, use algorithms like Token Bucket that allow for short bursts.
  • Test Thoroughly: Simulate high traffic scenarios and edge cases to ensure the rate limiter behaves as expected.
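
On the client side, the Retry-After and exponential-backoff advice above can be sketched as a delay schedule (a hypothetical helper, not taken from any specific HTTP client library):

```python
import random

def retry_delays(retry_after=None, max_retries=5, base=1.0, cap=60.0):
    """Wait times to use after a 429: honor Retry-After on the first retry
    when the server sends it, otherwise exponential backoff with full jitter
    (random delay in [0, min(cap, base * 2**attempt)])."""
    delays = []
    for attempt in range(max_retries):
        if retry_after is not None and attempt == 0:
            delays.append(float(retry_after))  # server said exactly when
        else:
            delays.append(random.uniform(0.0, min(cap, base * (2 ** attempt))))
    return delays
```

The jitter matters: without it, many clients that were throttled at the same moment would all retry at the same moment and be throttled again.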

10. Conclusion

API Rate Limiters are an indispensable component of any robust API architecture. By carefully selecting the appropriate algorithm, implementing it strategically, and continuously monitoring its performance, organizations can effectively protect their API infrastructure, ensure system stability, manage costs, and provide a fair and reliable experience for all users. This proactive measure is critical for the long-term health and success of your API ecosystem.

if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}