API Rate Limiter

This document details the professional output for the "API Rate Limiter" step, focusing on the generation of clean, well-commented, and production-ready code.


API Rate Limiter: Code Generation and Implementation Details

1. Introduction to API Rate Limiting

API rate limiting is a crucial mechanism for controlling the rate at which clients can make requests to an API within a given timeframe. It serves several vital purposes: preventing abuse (DDoS attacks, brute-force attempts, scraping), protecting backend services from overload, ensuring fair access for all consumers, and controlling infrastructure costs.

2. Chosen Rate Limiting Strategy: Token Bucket Algorithm

For this implementation, we have chosen the Token Bucket Algorithm. This strategy is widely used due to its flexibility and ability to handle bursts of requests while still enforcing an average rate limit.

How it Works:

Imagine a bucket that holds a certain number of "tokens."

  1. Bucket Capacity: The bucket has a maximum capacity, representing the maximum burst of requests allowed.
  2. Refill Rate: Tokens are added to the bucket at a constant rate (e.g., 1 token per second) up to its maximum capacity.
  3. Request Consumption: When a client makes a request, the system attempts to remove a token from the bucket.

* If a token is available, the request is allowed, and the token is consumed.

* If no tokens are available, the request is denied (or queued, depending on implementation).

Advantages of Token Bucket:

  • Allows short bursts of traffic up to the bucket's capacity while still enforcing a steady average rate.
  • Requires only constant state per client (a token count and a last-refill timestamp), making it memory-efficient.
  • Simple to implement and reason about.

3. Implementation Details

The provided code implements an in-memory Token Bucket Rate Limiter in Python. This implementation is suitable for a single-node application or as a foundational component for more complex distributed systems.

Key Characteristics:

  • In-memory, per-process state, suitable for single-node deployments.
  • Thread-safe: a threading.Lock guards all token operations.
  • Uses time.monotonic() so wall-clock adjustments cannot corrupt the refill calculation.
  • O(1) work per request: tokens are refilled lazily on each call rather than by a background timer.

4. Production-Ready Code

Below is the Python code for the TokenBucketRateLimiter class, complete with detailed comments and a usage example.

import time
import threading

class TokenBucketRateLimiter:
    """
    Implements the Token Bucket algorithm for API rate limiting.

    This rate limiter allows for a burst of requests up to 'capacity'
    and then enforces an average rate of 'refill_rate' tokens per second.

    Note: This in-memory implementation is suitable for single-process
    applications. For distributed systems, consider using a shared
    backend like Redis.
    """

    def __init__(self, capacity: int, refill_rate: float):
        """
        Initializes the TokenBucketRateLimiter.

        Args:
            capacity (int): The maximum number of tokens the bucket can hold.
                            This also defines the maximum burst size.
            refill_rate (float): The rate at which tokens are added to the bucket
                                 per second (e.g., 1.0 means 1 token per second).
        """
        if capacity <= 0:
            raise ValueError("Capacity must be a positive integer.")
        if refill_rate <= 0:
            raise ValueError("Refill rate must be a positive float.")

        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = capacity  # Start with a full bucket
        self.last_refill_time = time.monotonic() # Use monotonic clock for consistent time
        self.lock = threading.Lock() # For thread-safety in multi-threaded environments

    def _refill_tokens(self) -> None:
        """
        Internal method to refill tokens based on the elapsed time
        since the last refill.
        """
        now = time.monotonic()
        time_elapsed = now - self.last_refill_time
        
        # Calculate tokens to add and ensure it doesn't exceed capacity
        tokens_to_add = time_elapsed * self.refill_rate
        self.tokens = min(self.capacity, self.tokens + tokens_to_add)
        
        self.last_refill_time = now

    def allow_request(self, tokens_needed: int = 1) -> bool:
        """
        Checks if a request requiring 'tokens_needed' can be allowed.

        Args:
            tokens_needed (int): The number of tokens required for this request.
                                 Defaults to 1 for a typical API call.

        Returns:
            bool: True if the request is allowed, False otherwise.
        """
        if tokens_needed <= 0:
            raise ValueError("Tokens needed must be a positive integer.")

        with self.lock: # Acquire lock for thread-safe operations
            self._refill_tokens() # Always refill before checking

            if self.tokens >= tokens_needed:
                self.tokens -= tokens_needed
                return True
            else:
                return False

    def get_remaining_tokens(self) -> float:
        """
        Returns the current number of available tokens after a potential refill.
        """
        with self.lock:
            self._refill_tokens()
            return self.tokens

    def get_time_to_next_token(self) -> float:
        """
        Calculates the estimated time (in seconds) until the next token is available.
        Returns 0 if tokens are currently available.
        """
        with self.lock:
            self._refill_tokens()
            if self.tokens >= 1:
                return 0.0
            
            # How many tokens are missing to get to 1?
            missing_tokens = 1 - self.tokens 
            # How much time is needed to generate those missing tokens?
            return missing_tokens / self.refill_rate

# --- Usage Example ---
if __name__ == "__main__":
    print("--- Token Bucket Rate Limiter Demonstration ---")

    # Scenario 1: Basic Rate Limiting (5 requests/sec, burst up to 10)
    print("\nScenario 1: 5 requests/sec with burst capacity of 10")
    limiter = TokenBucketRateLimiter(capacity=10, refill_rate=5.0)

    print("Initial tokens:", limiter.get_remaining_tokens())

    # Burst of requests
    print("Attempting a burst of 12 requests:")
    for i in range(1, 13):
        if limiter.allow_request():
            print(f"  Request {i}: ALLOWED. Tokens left: {limiter.get_remaining_tokens():.2f}")
        else:
            print(f"  Request {i}: DENIED. Tokens left: {limiter.get_remaining_tokens():.2f}")
            print(f"  Time to next token: {limiter.get_time_to_next_token():.2f}s")
        time.sleep(0.05) # Small delay to simulate some processing

    # Wait for a bit and try again
    print("\nWaiting 1.5 seconds to refill...")
    time.sleep(1.5)
    print("Tokens after wait:", limiter.get_remaining_tokens())

    print("Attempting more requests after refill:")
    for i in range(1, 6):
        if limiter.allow_request():
            print(f"  Request {i}: ALLOWED. Tokens left: {limiter.get_remaining_tokens():.2f}")
        else:
            print(f"  Request {i}: DENIED. Tokens left: {limiter.get_remaining_tokens():.2f}")
        time.sleep(0.1) # Simulate requests coming in at 100ms intervals

    # Scenario 2: Different token costs
    print("\nScenario 2: Requests with different token costs (capacity=5, rate=1.0)")
    complex_limiter = TokenBucketRateLimiter(capacity=5, refill_rate=1.0)

    print("Initial tokens:", complex_limiter.get_remaining_tokens())
    if complex_limiter.allow_request(tokens_needed=3):
        print(f"  Complex Request (cost 3): ALLOWED. Tokens left: {complex_limiter.get_remaining_tokens():.2f}")
    else:
        print(f"  Complex Request (cost 3): DENIED. Tokens left: {complex_limiter.get_remaining_tokens():.2f}")

    if complex_limiter.allow_request(tokens_needed=3):
        print(f"  Complex Request (cost 3): ALLOWED. Tokens left: {complex_limiter.get_remaining_tokens():.2f}")
    else:
        print(f"  Complex Request (cost 3): DENIED. Tokens left: {complex_limiter.get_remaining_tokens():.2f}")
        print(f"  Time to next token: {complex_limiter.get_time_to_next_token():.2f}s")

    print("\nWaiting 3 seconds...")
    time.sleep(3)
    print("Tokens after wait:", complex_limiter.get_remaining_tokens())
    if complex_limiter.allow_request(tokens_needed=3):
        print(f"  Complex Request (cost 3): ALLOWED. Tokens left: {complex_limiter.get_remaining_tokens():.2f}")
    else:
        print(f"  Complex Request (cost 3): DENIED. Tokens left: {complex_limiter.get_remaining_tokens():.2f}")

API Rate Limiter: Comprehensive Study Plan

This document outlines a detailed and structured study plan to master the concepts, design, and implementation of API Rate Limiters. This plan is designed to provide a robust understanding, from fundamental algorithms to advanced distributed system considerations and production best practices.


1. Overview and Introduction

API Rate Limiting is a critical component in modern web services, essential for ensuring stability, preventing abuse, and managing resource consumption. By controlling the number of requests a user or client can make to an API within a given timeframe, rate limiters protect against DDoS attacks, brute-force attempts, and resource exhaustion, while also providing fair access to all legitimate users.

This study plan aims to equip you with the knowledge and practical skills required to design, implement, and operate effective API rate limiting systems.


2. Learning Objectives

Upon completion of this study plan, you will be able to:

  • Understand Core Concepts: Articulate the purpose, benefits, and necessity of API rate limiting in various contexts.
  • Master Rate Limiting Algorithms: Describe and differentiate between common rate limiting algorithms (Fixed Window Counter, Sliding Log, Sliding Window Counter, Leaky Bucket, Token Bucket), including their pros, cons, and appropriate use cases.
  • Design Distributed Systems: Comprehend the challenges associated with distributed rate limiting (e.g., consistency, race conditions, clock skew) and propose solutions for building scalable and fault-tolerant distributed rate limiters.
  • Implement Practical Solutions: Develop functional rate limiter implementations using various technologies and data stores (e.g., in-memory, Redis, API Gateways).
  • Evaluate Performance & Scalability: Analyze the performance characteristics and scalability implications of different rate limiting approaches.
  • Apply Best Practices: Identify and implement best practices for configuring, monitoring, and debugging rate limiters in production environments, including client-side considerations.
  • Make Informed Decisions: Choose the most suitable rate limiting strategy and algorithm based on specific business requirements, traffic patterns, and infrastructure constraints.

3. Weekly Study Plan

This 3-week plan is structured to progressively build your knowledge and practical skills. Each week includes theoretical learning, practical exercises, and specific topics to cover.

Week 1: Fundamentals & Basic Implementations

Focus: Core concepts, basic algorithms, single-server implementations, and an introduction to HTTP standards for rate limiting.

  • Learning Topics:

* Introduction to Rate Limiting: What it is, why it's needed (security, stability, cost control, fairness).

* HTTP Status Codes: Understanding 429 Too Many Requests.

* Rate Limiting Headers: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset.

* Basic Algorithms:

* Fixed Window Counter: Mechanism, advantages, disadvantages (bursting problem).

* Sliding Log: Mechanism, advantages (no bursting), disadvantages (memory usage for logs).

* Implementation Strategies (Single Server): In-memory data structures (HashMaps, Lists) for basic rate limiting.

* Rate Limiting Policies: User-based, IP-based, API Key-based.

  • Activities:

* Read introductory articles on API rate limiting.

* Study the HTTP 429 status code and associated headers.

* Hands-on: Implement a simple Fixed Window Counter rate limiter in your preferred programming language (e.g., Python, Node.js, Java). Test with various request patterns to observe its behavior, especially the bursting issue.

* Hands-on: Implement a basic Sliding Log rate limiter. Compare its memory footprint and accuracy with the Fixed Window Counter.

  • Estimated Time: 8-12 hours
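As a starting point for the Week 1 hands-on exercises, here is a minimal in-memory Fixed Window Counter sketch in Python (the class and parameter names are illustrative, not part of the deliverable's code above):

```python
import time

class FixedWindowCounter:
    """Minimal Fixed Window Counter: at most 'limit' requests per window."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window_seconds = window_seconds
        self.window_start = time.monotonic()
        self.count = 0

    def allow_request(self) -> bool:
        now = time.monotonic()
        # Reset the counter when the current fixed window has elapsed.
        if now - self.window_start >= self.window_seconds:
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False
```

Because the counter resets abruptly at window boundaries, a client can make `limit` requests at the end of one window and `limit` more at the start of the next — exactly the bursting issue the exercise asks you to observe.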

Week 2: Advanced Algorithms & Distributed Systems

Focus: More sophisticated algorithms, the complexities of distributed environments, and leveraging external data stores like Redis for distributed rate limiting.

  • Learning Topics:

* Advanced Algorithms:

* Sliding Window Counter: Mechanism, advantages (mitigates bursting), disadvantages (approximation).

* Leaky Bucket: Mechanism, advantages (smooth outflow), disadvantages (no burst capacity).

* Token Bucket: Mechanism, advantages (burst capacity, simple), disadvantages (can be complex to tune).

* Challenges of Distributed Rate Limiting:

* Consistency issues across multiple instances.

* Race conditions and concurrency.

* Clock skew in distributed environments.

* Network latency and its impact.

* Distributed Implementation using Redis:

* Redis data types for rate limiting (Strings, Hashes, Sorted Sets).

* Atomic operations with INCR, EXPIRE.

* Using Lua scripts for complex, atomic operations (e.g., Token Bucket logic).

* Eventual Consistency: How it applies to distributed rate limiters.

  • Activities:

* Deep dive into the mechanics and trade-offs of Sliding Window Counter, Leaky Bucket, and Token Bucket algorithms.

* Study Redis commands relevant to rate limiting (INCR, EXPIRE, ZADD, ZCOUNT, ZREMRANGEBYSCORE).

* Explore Redis Lua scripting for atomic rate limiting logic.

* Hands-on: Implement a Token Bucket or Sliding Window Counter algorithm using Redis as the backend store. Focus on ensuring atomicity using Lua scripts.

* Research case studies of distributed rate limiters (e.g., Uber, Stripe).

  • Estimated Time: 10-15 hours

Week 3: Production Considerations & Best Practices

Focus: Real-world deployment strategies, monitoring, testing, and advanced topics for building robust, production-grade rate limiters.

  • Learning Topics:

* Deployment Strategies:

* API Gateways: NGINX, AWS API Gateway, Kong, Apigee.

* Service Mesh: Istio, Envoy proxy.

* Application-level: Implementing within microservices.

* Hard vs. Soft Limits: Differentiating between strict and advisory limits.

* Burst Capacity: How to design for temporary traffic spikes.

* Client-Side Best Practices: Backoff and retry mechanisms, client-side rate limit awareness.

* Monitoring and Alerting: Key metrics to track (rate limited requests, remaining capacity), setting up alerts.

* Testing Rate Limiters: Unit tests, integration tests, load testing, edge case testing.

* Security Implications: Protecting against IP spoofing, DDoS attacks, and ensuring fairness.

* Advanced Topics: Dynamic rate limiting, tiered rate limits, global vs. local rate limits.

* Rate Limiting as a Service: Exploring commercial solutions and managed services.

  • Activities:

* Study NGINX rate limiting configurations.

* Research how major cloud providers (AWS, GCP, Azure) implement API Gateway rate limiting.

* Design a monitoring dashboard for a rate limiting system, identifying key metrics.

* Hands-on: Set up NGINX with basic rate limiting for a simple web application.

* Design Exercise: Propose a comprehensive, production-ready rate limiting architecture for a hypothetical high-traffic API (e.g., a social media feed API), justifying algorithm choices, deployment strategy, and monitoring plan.

* Review common pitfalls and best practices for rate limiter configuration.

  • Estimated Time: 8-12 hours
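As a starting point for the NGINX hands-on above, a minimal configuration fragment of the kind described (the zone name, rate, and upstream name are illustrative):

```nginx
# In the http {} context: track clients by IP in a 10 MB shared zone,
# allowing a sustained rate of 5 requests/second.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=5r/s;

server {
    location /api/ {
        # Permit bursts of up to 10 queued requests; 'nodelay' serves
        # burst requests immediately instead of pacing them out.
        limit_req zone=api_limit burst=10 nodelay;
        # Return 429 instead of the default 503 when the limit is hit.
        limit_req_status 429;
        proxy_pass http://backend;  # illustrative upstream
    }
}
```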

4. Recommended Resources

Leverage these resources for in-depth learning and practical application:

  • Articles & Blogs:

* Stripe Engineering Blog: "Scaling your API with rate limiters" (classic, foundational read).

* Uber Engineering Blog: "Designing a Distributed Rate Limiting System" (excellent for distributed challenges).

* Envoy Proxy Documentation: Specifically the rate limiting section.

* NGINX Documentation: "Limiting access with NGINX" and "Rate Limiting with NGINX and NGINX Plus".

* Redis Documentation: Relevant commands and Lua scripting examples.

* System Design Interview Resources: Search for "API Rate Limiter System Design" on platforms like ByteByteGo, Gaurav Sen, LeetCode System Design section, or YouTube channels dedicated to system design.

  • Books:

* "Designing Data-Intensive Applications" by Martin Kleppmann: Chapters on distributed systems, consistency, and fault tolerance are highly relevant.

* "System Design Interview – An Insider's Guide" by Alex Xu: Chapter on Rate Limiter.

  • Online Courses:

* Courses on System Design (e.g., on Coursera, Udemy, Educative.io) often include dedicated modules on rate limiting.

* Specific courses on Redis for advanced usage.

  • Tools & Technologies:

* Redis: Local installation or cloud service (e.g., Redis Cloud).

* NGINX: For gateway-level rate limiting.

* Your Preferred Programming Language: Python, Node.js, Java, Go, etc., for implementing algorithms.

* Postman/cURL: For testing API requests and observing rate limiting behavior.


5. Milestones

Achieving these milestones will demonstrate progressive mastery of the subject:

  • End of Week 1: Successfully implement and thoroughly test a basic in-memory Fixed Window Counter and Sliding Log rate limiter. Be able to explain their differences and limitations.
  • End of Week 2: Successfully implement a distributed rate limiter (e.g., Token Bucket or Sliding Window Counter) using Redis, demonstrating atomic operations via Lua scripts. Be able to discuss the challenges of distributed systems in this context.
  • End of Week 3: Develop a high-level architectural design document for a robust, scalable, and production-ready API rate limiting system for a given scenario. This should include algorithm choice, storage, deployment strategy (e.g., API Gateway, service mesh), monitoring plan, and justification for design decisions.

6. Assessment Strategies

Regular assessment ensures a solid grasp of the material and practical application:

  • Self-Assessment:

* Code Review: Regularly review your own code implementations against best practices and algorithm correctness.

* Concept Quizzes: Create flashcards or self-quizzes on algorithms, HTTP headers, and distributed system challenges.

* Debugging Exercises: Intentionally introduce bugs into your rate limiter implementations and practice debugging them.

  • Practical Implementation Challenges:

* Algorithm Implementation: Implement a specific rate limiting algorithm from scratch, given only its definition.

* Scenario-Based Design: Given a hypothetical API with specific traffic patterns and business requirements, propose a complete rate limiting solution.

* Performance Testing: Write scripts to simulate high traffic and observe how your rate limiters behave under load.

  • Knowledge Checks & Discussions:

* Algorithm Comparison: Be able to clearly articulate the pros, cons, and ideal use cases for each major rate limiting algorithm.

* Distributed System Challenges: Explain how you would address race conditions, consistency, and clock skew in a distributed rate limiter.

* Trade-off Analysis: Discuss the trade-offs involved in choosing different deployment strategies (e.g., application-level vs. API Gateway vs. service mesh).

* Whiteboard Sessions: Practice drawing and explaining your rate limiter designs as if in a system design interview.


This comprehensive study plan provides a structured path to becoming proficient in API Rate Limiter design and implementation. Consistent effort and practical application of the concepts will be key to your success.

5. Key Considerations for Production Deployment

While the provided code offers a solid foundation, a production-grade API rate limiter requires additional considerations:

  • Distributed Rate Limiting (Redis):

* Problem: The current implementation is in-memory and only works for a single process. If you have multiple API instances (load-balanced), each instance would have its own independent bucket, failing to enforce a global rate limit.

* Solution: Use a shared, high-performance data store like Redis. Redis's atomic operations (INCR, SETEX, Lua scripting) are ideal for implementing distributed rate limiters (e.g., using redis-py library). This ensures all API instances share the same bucket state.

  • Granularity of Limits:

* Requirement: Rate limits often need to be applied differently based on various factors.

* Examples:

* Per User/Client ID: Each authenticated user gets their own bucket.

* Per IP Address: Unauthenticated requests are limited by their source IP.

* Per Endpoint: Different API endpoints might have different rate limits (e.g., /api/read vs. /api/write).

* Per API Key: Specific API keys might have dedicated limits.

* Implementation: The TokenBucketRateLimiter class would need to be instantiated and managed per unique identifier (e.g., user_id, ip_address, api_key). A dictionary or a cache could map these identifiers to their respective limiter instances.

  • Error Handling (HTTP 429 Too Many Requests):

* Standard Practice: When a request is denied by the rate limiter, the API should respond with an HTTP 429 Too Many Requests status code.

* Headers: Include helpful headers for clients:

* Retry-After: Indicates how long the client should wait before making another request (in seconds or a specific date/time).

* X-RateLimit-Limit: The total number of requests allowed in the current window.

* X-RateLimit-Remaining: The number of requests remaining in the current window.

* X-RateLimit-Reset: The time at which the current rate limit window resets (e.g., Unix timestamp).

* Implementation: The API gateway or application layer handling the allow_request result would be responsible for constructing and sending this HTTP response.

  • Configuration and Management:

* Dynamic Configuration: Rate limits might need to be adjusted without redeploying the application. External configuration services (e.g., Consul, etcd, a database) or environment variables can be used.

* Admin Interface: For complex systems, an administrative interface to view and modify rate limits in real-time can be beneficial.

  • Monitoring and Alerting:

* Visibility: Monitor the number of requests allowed vs. denied by the rate limiter.

* Alerts: Set up alerts for high rates of denied requests, which could indicate a potential attack or a misconfigured client.

* Metrics: Expose metrics (e.g., Prometheus, Grafana) for rate_limit_allowed_total, rate_limit_denied_total, rate_limit_tokens_remaining, etc.

  • Edge Cases and Edge Computing:

* Consider where the rate limiter is best placed: API Gateway (e.g., Nginx, Envoy, AWS API Gateway), Load Balancer, or directly within the application. Placing it at the edge often reduces load on backend services.

* Handling clock skew in distributed systems is critical if using time-based limits without a centralized time source. time.monotonic() helps within a single process, but for distributed systems, relying on Redis's time or a central time authority is better.
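The "Granularity of Limits" point above — one bucket per user, IP address, or API key — can be sketched as a small registry that lazily creates a bucket per identifier. This is an illustrative sketch (the class names are our own); a production version would also need an eviction policy (e.g., TTL or LRU) so the bucket map cannot grow without bound:

```python
import time
import threading

class _Bucket:
    """Compact token bucket: same algorithm as the full class above."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Lazily refill based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

class PerKeyRateLimiter:
    """Maps each identifier (user id, IP, API key) to its own bucket."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.buckets = {}  # identifier -> _Bucket
        self.lock = threading.Lock()

    def allow_request(self, key: str) -> bool:
        with self.lock:
            bucket = self.buckets.setdefault(key, _Bucket(self.capacity, self.refill_rate))
            return bucket.allow()
```

Each identifier is throttled independently: exhausting one client's bucket does not affect any other client.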

6. Conclusion

This deliverable provides a robust, well-explained, and production-ready Python implementation of an API Rate Limiter using the Token Bucket algorithm. It includes a comprehensive code example and highlights crucial considerations for deploying such a system in a real-world, distributed environment. By addressing these points, you can ensure your API remains stable, secure, and performant under various load conditions.


API Rate Limiter: Comprehensive Overview and Implementation Guide

This document provides a detailed professional overview of API Rate Limiters, outlining their importance, common implementation strategies, and best practices. This information is crucial for maintaining the stability, security, and performance of your API ecosystem.


1. Introduction to API Rate Limiting

An API Rate Limiter is a mechanism designed to control the number of requests a client can make to an API within a defined timeframe. Its primary purpose is to regulate traffic, prevent abuse, ensure fair resource allocation, and maintain the overall stability and performance of the API infrastructure.

Key Objectives:

  • Prevent Abuse: Mitigate Distributed Denial of Service (DDoS) attacks, brute-force attempts, and excessive data scraping.
  • Ensure System Stability: Protect backend services from being overwhelmed by sudden spikes in traffic, preventing resource exhaustion and service outages.
  • Fair Usage: Distribute API resources equitably among all consumers, preventing a single user or application from monopolizing resources.
  • Cost Management: Control infrastructure costs by limiting the processing load on servers, databases, and other services.
  • Improve User Experience: Guarantee consistent and reliable service for legitimate users by preventing performance degradation.

2. Core Benefits of Implementing API Rate Limiting

Implementing a robust API rate limiting strategy yields significant advantages for both API providers and consumers:

  • Enhanced Security Posture: Directly combats malicious activities like credential stuffing, content scraping, and various forms of API abuse by restricting the rate at which these attacks can be performed.
  • Guaranteed Service Availability: Acts as a crucial line of defense against traffic surges, ensuring that your API remains responsive and available even under high load conditions.
  • Optimized Resource Utilization: Prevents any single client from consuming a disproportionate share of server resources, leading to more efficient and cost-effective operation.
  • Predictable Performance: By smoothing out traffic spikes, rate limiting helps maintain a consistent level of performance, which is vital for applications relying on your API.
  • Clear Usage Policies: Establishes transparent boundaries for API consumption, helping developers understand and adhere to acceptable usage patterns, reducing support queries related to unexpected behavior.
  • Facilitates Tiered Access: Enables the creation of different service tiers (e.g., free, premium, enterprise) with varying rate limits, allowing for flexible monetization and service differentiation.

3. Common Rate Limiting Algorithms

Several algorithms are used to implement API rate limiting, each with its own characteristics regarding accuracy, resource usage, and how it handles bursts of traffic.

3.1. Fixed Window Counter

  • Mechanism: Divides time into fixed-size windows (e.g., 60 seconds). Each request increments a counter within the current window. If the counter exceeds the limit, requests are rejected until the next window starts.
  • Pros: Simple to implement, low memory consumption.
  • Cons: Prone to "burstiness" at window edges. For example, a client could make N requests at the end of window 1 and N requests at the beginning of window 2, effectively making 2N requests in a very short period around the window transition.
  • Best For: Simple, less critical applications where edge-case burstiness is acceptable.

3.2. Sliding Window Log

  • Mechanism: Stores a timestamp for every request made by a client. When a new request arrives, it removes all timestamps older than the current window and counts the remaining timestamps. If the count exceeds the limit, the request is rejected.
  • Pros: Most accurate, handles bursts smoothly across window boundaries.
  • Cons: High memory consumption, as it needs to store timestamps for every request.
  • Best For: Scenarios requiring high accuracy and smooth rate enforcement, where memory is not a significant constraint.
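A minimal in-memory sketch of the Sliding Window Log mechanism just described (names are illustrative):

```python
import time
from collections import deque

class SlidingWindowLog:
    """Allows at most 'limit' requests in any rolling 'window_seconds' span,
    by keeping one timestamp per accepted request."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window_seconds = window_seconds
        self.log = deque()  # timestamps of accepted requests

    def allow_request(self) -> bool:
        now = time.monotonic()
        # Evict timestamps that have fallen out of the rolling window.
        while self.log and now - self.log[0] > self.window_seconds:
            self.log.popleft()
        if len(self.log) < self.limit:
            self.log.append(now)
            return True
        return False
```

The memory cost noted above is visible directly: the deque holds one entry per accepted request still inside the window.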

3.3. Sliding Window Counter

  • Mechanism: A hybrid approach. It uses two fixed windows: the current window and the previous window. It calculates the request count for the current window and a weighted average of the previous window's requests that fall into the "sliding" portion of the current window.
  • Pros: Good balance between accuracy and resource usage, mitigates the "burstiness" issue of the fixed window counter without the high memory cost of the sliding window log.
  • Cons: More complex to implement than fixed window.
  • Best For: General-purpose rate limiting where accuracy and efficiency are both important.
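The weighted-average calculation described above can be sketched as follows (an illustrative approximation; real implementations vary in exactly how they weight the previous window):

```python
import time

class SlidingWindowCounter:
    """Approximates a rolling window by weighting the previous fixed
    window's count by how much of it still overlaps the rolling window."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.current_start = time.monotonic()
        self.current_count = 0
        self.previous_count = 0

    def allow_request(self) -> bool:
        now = time.monotonic()
        elapsed = now - self.current_start
        # Roll windows forward if the current fixed window has ended.
        if elapsed >= self.window:
            # If more than two windows have passed, the old count is stale.
            self.previous_count = self.current_count if elapsed < 2 * self.window else 0
            self.current_start += self.window * (elapsed // self.window)
            self.current_count = 0
            elapsed = now - self.current_start
        # Fraction of the previous window still inside the rolling window.
        overlap = 1.0 - (elapsed / self.window)
        estimated = self.previous_count * overlap + self.current_count
        if estimated < self.limit:
            self.current_count += 1
            return True
        return False
```

Only two counters are stored regardless of traffic volume, which is why this approach avoids the memory cost of the Sliding Window Log at the price of an approximation.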

3.4. Token Bucket

  • Mechanism: A "bucket" holds "tokens." Tokens are added to the bucket at a fixed rate. Each request consumes one token. If a request arrives and the bucket is empty, it is rejected. The bucket has a maximum capacity, allowing for a certain amount of burst traffic (up to the bucket's capacity).
  • Pros: Allows for bursts of traffic up to the bucket capacity, then enforces a steady average rate. Simple to understand.
  • Cons: Can be complex to tune bucket size and refill rate effectively.
  • Best For: APIs that expect occasional bursts of traffic but need to maintain a consistent average request rate.

3.5. Leaky Bucket

  • Mechanism: Requests are added to a queue (the "bucket"). Requests "leak" out of the bucket (are processed) at a constant, fixed rate. If the bucket is full when a new request arrives, the request is rejected.
  • Pros: Smooths out bursty traffic, ensures a constant output rate, preventing backend services from being overwhelmed.
  • Cons: Can introduce latency if the bucket fills up, as requests must wait for their turn to "leak" out.
  • Best For: Systems where a steady processing rate is critical, such as message queues or resource-constrained services.
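A minimal sketch of the Leaky Bucket idea. Note this is the "meter" variant, which rejects overflowing requests immediately, rather than the queueing variant described above that delays them (names are illustrative):

```python
import time

class LeakyBucket:
    """Leaky Bucket as a meter: the 'water level' rises by one per request
    and drains at 'leak_rate' units per second; requests that would
    overflow 'capacity' are rejected."""

    def __init__(self, capacity: int, leak_rate: float):
        self.capacity = capacity
        self.leak_rate = leak_rate
        self.level = 0.0
        self.last_leak = time.monotonic()

    def allow_request(self) -> bool:
        now = time.monotonic()
        # Drain the bucket for the time elapsed since the last check.
        self.level = max(0.0, self.level - (now - self.last_leak) * self.leak_rate)
        self.last_leak = now
        if self.level + 1 <= self.capacity:
            self.level += 1
            return True
        return False
```

The drain rate bounds the long-run average throughput, which is what gives the backend its steady processing rate.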

4. Key Implementation Considerations

Successful API rate limiting requires careful planning and consideration of various architectural and operational factors.

  • Granularity of Limits:

* Per IP Address: Simplest, but problematic for users behind NAT or proxies.

* Per API Key/Client ID: Most common for public APIs, requires clients to authenticate.

* Per User/Account: Ideal for authenticated users, allows for personalized limits.

* Per Endpoint: Different limits for different API endpoints (e.g., GET /data vs. POST /upload).

* Combinations: Often, a tiered approach combining several granularities is most effective.

  • Distributed Systems Challenges:

* In a distributed environment (multiple API servers), rate limit counters must be synchronized across all instances.

* Solutions: Use a centralized data store like Redis (with atomic increment operations) or a distributed consensus system.

* Consistency vs. Performance: Decide on the acceptable trade-off. Strict consistency might introduce latency.

  • Client Communication (HTTP Headers):

* X-RateLimit-Limit: The maximum number of requests permitted in the current period.

* X-RateLimit-Remaining: The number of requests remaining in the current period.

* X-RateLimit-Reset: The time (in UTC epoch seconds or relative seconds) when the current rate limit window resets.

* HTTP 429 Too Many Requests: The standard HTTP status code returned when a client exceeds their rate limit. The response body should contain a human-readable message, and the Retry-After header can indicate how long the client should wait before retrying.

  • Error Handling and User Experience:

* Provide clear, informative error messages for 429 responses.

* Encourage clients to implement exponential backoff and jitter for retries to avoid overwhelming the API after a reset.

* Consider a "grace period" or "soft limit" where clients receive warnings before hard limits are enforced.

  • Monitoring and Alerting:

* Track key metrics: total requests, rate-limited requests, 429 responses, per-client usage.

* Set up alerts for unusual activity, sustained high 429 rates, or potential attacks.

* Use logs to identify problematic clients or unexpected traffic patterns.

  • Dynamic vs. Static Configuration:

* Static: Limits are hardcoded or configured at deployment. Simpler but less flexible.

* Dynamic: Limits can be adjusted in real-time without redeploying, often via a centralized configuration service. Essential for adapting to changing traffic patterns or responding to incidents.
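The client-communication headers listed above can be assembled in one helper; a minimal sketch (the function name and the rounding choices are our own):

```python
import math
from typing import Optional

def build_rate_limit_headers(limit: int, remaining: int, reset_epoch: float,
                             retry_after_seconds: Optional[float] = None) -> dict:
    """Builds the conventional rate-limit response headers.

    Pass retry_after_seconds=None for an allowed request; for a rejected
    request (HTTP 429), include Retry-After rounded up to whole seconds.
    """
    headers = {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(max(0, remaining)),
        "X-RateLimit-Reset": str(int(reset_epoch)),  # UTC epoch seconds
    }
    if retry_after_seconds is not None:
        # Round up so clients never retry before the limit actually resets.
        headers["Retry-After"] = str(max(1, math.ceil(retry_after_seconds)))
    return headers
```

The application layer (or API gateway) would attach these headers to every response and pair `Retry-After` with a 429 status when a request is rejected.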


5. Best Practices for Effective API Rate Limiting

To maximize the benefits of API rate limiting, consider these best practices:

  • Document Limits Clearly: Publish your rate limiting policies prominently in your API documentation, including specific limits, reset periods, and the meaning of X-RateLimit headers.
  • Communicate Changes: Notify API consumers in advance of any changes to rate limits or policies.
  • Implement Layered Security: Rate limiting is one layer of defense. Combine it with other security measures like authentication, authorization, input validation, and WAFs.
  • Test Thoroughly: Conduct load testing and stress testing to validate your rate limiter's effectiveness under various scenarios, including normal load, burst traffic, and malicious attacks.
  • Provide SDKs/Libraries: If possible, offer client-side SDKs that automatically handle 429 responses with intelligent retry logic (e.g., exponential backoff).
  • Monitor and Iterate: Continuously monitor API usage, analyze rate limit enforcement data, and adjust your policies as needed based on observed traffic patterns and business requirements.
  • Consider Burst Tolerance: Design limits with some tolerance for legitimate bursts, especially for interactive applications, using algorithms like Token Bucket or Sliding Window Counter.
  • Offer Overrides/Escalations: Have a process for legitimate high-volume users to request higher limits or temporary overrides.
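The Sliding Window Counter mentioned above approximates a true sliding window with O(1) memory by blending the previous and current fixed windows, weighted by overlap. A minimal sketch (class name and interface are illustrative):

```python
import time

class SlidingWindowCounter:
    """Sliding Window Counter: estimate the rate over the trailing window
    as (previous window count * overlap fraction) + current window count.
    """

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.prev_count = 0
        self.curr_count = 0
        self.curr_start = 0.0

    def allow(self, now=None):
        now = time.time() if now is None else now
        window_start = (now // self.window) * self.window
        if window_start != self.curr_start:
            # Roll windows forward; if more than one window elapsed,
            # the previous window's count is effectively zero.
            elapsed = window_start - self.curr_start
            self.prev_count = self.curr_count if elapsed == self.window else 0
            self.curr_count = 0
            self.curr_start = window_start
        # Weight the previous window by how much of it still overlaps
        # the trailing window ending at `now`.
        overlap = 1.0 - (now - window_start) / self.window
        estimated = self.prev_count * overlap + self.curr_count
        if estimated < self.limit:
            self.curr_count += 1
            return True
        return False

limiter = SlidingWindowCounter(limit=10, window_seconds=60)
allowed = sum(limiter.allow(now=t) for t in range(0, 60))  # 1 request/second
```

Unlike the Token Bucket shown earlier, this variant smooths the boundary burst problem of plain fixed windows while still needing only two counters per client.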

6. Actionable Takeaways for Your Organization

To successfully implement or refine your API Rate Limiting strategy, we recommend the following steps:

  1. Conduct an API Audit:

* Identify all public and internal APIs that require rate limiting.

* Analyze existing traffic patterns, typical usage, and potential abuse vectors for each API.

* Determine the business criticality of each API and its underlying resources.

  2. Define Rate Limiting Policies:

* For each API and/or endpoint, establish specific limits (e.g., 100 requests/minute, 1000 requests/hour).

* Decide on the granularity (per IP, per API key, per user, per endpoint).

* Consider different tiers for various user segments (e.g., free vs. paid, internal vs. external).

  3. Select Appropriate Algorithms & Architecture:

* Based on your requirements for accuracy, burst handling, and resource constraints, choose the most suitable rate limiting algorithm(s).

* Plan your architecture: will it be a dedicated service, an API Gateway feature, or integrated into each microservice?

* For distributed environments, select a robust shared storage solution (e.g., Redis Cluster).

  4. Plan Implementation & Integration:

* Technology Stack: Identify the tools and libraries compatible with your current tech stack for implementing rate limiting.

* API Gateway Integration: Leverage existing API Gateway capabilities (e.g., AWS API Gateway, Azure API Management, Kong, Nginx) for centralized rate limiting.

* Custom Implementation: If an API Gateway is not sufficient, design and build custom rate limiting logic into your services.

* Client-Side Guidance: Develop clear guidelines and potentially code examples for how API consumers should handle 429 responses.

  5. Develop Monitoring & Alerting:

* Integrate rate limiting metrics into your existing monitoring dashboards.

* Configure alerts for specific thresholds (e.g., 429 errors exceeding 5% of total requests, specific clients hitting limits repeatedly).

  6. Document and Communicate:

* Update your API documentation with comprehensive details on rate limits, error codes, and recommended retry strategies.

* Prepare internal and external communication plans for any significant changes to rate limiting policies.

  7. Test, Deploy, and Iterate:

* Thoroughly test the rate limiter under various load conditions.

* Deploy the solution in a staged manner if possible.

* Continuously monitor performance, gather feedback, and be prepared to adjust limits and configurations based on real-world usage.

