
API Rate Limiter Architecture Plan

Project Title: Scalable and Resilient API Rate Limiter

Date: October 26, 2023

Version: 1.0


1. Introduction

An API Rate Limiter is a critical component in modern microservices architectures and public-facing APIs. Its primary purpose is to control the rate at which clients can send requests to an API, preventing abuse, ensuring fair resource usage, and protecting the backend services from being overwhelmed. This document outlines a comprehensive architectural plan for designing and implementing a robust, scalable, and highly available API Rate Limiter.

Key Benefits:

  • Prevents abuse and protects backend services from being overwhelmed.
  • Ensures fair resource usage across all clients.
  • Maintains API stability and availability under load.

2. Core Requirements

The API Rate Limiter must make accurate allow/deny decisions under the configured rules (functional), while remaining scalable, highly available, and low-latency (non-functional).

3. Architectural Goals

The design prioritizes scalability, high availability, fault tolerance, and minimal added request latency.

4. High-Level Architecture Overview

The API Rate Limiter will be implemented as a distributed system, integrated primarily at the API Gateway layer. This allows for centralized enforcement before requests reach backend services.

Key Components:

  1. API Gateway/Proxy (Enforcement Point): Intercepts incoming requests, extracts client identifiers, and queries the Rate Limiting Service.
  2. Rate Limiting Service: A dedicated microservice responsible for applying rate limiting logic, interacting with the distributed cache, and making allow/deny decisions.
  3. Distributed Cache (State Storage): Stores the current state (counters, timestamps) for all active rate limits.
  4. Configuration Service: Manages and distributes rate limiting rules to the Rate Limiting Service instances.
  5. Monitoring & Alerting: Collects metrics, logs, and triggers alerts based on predefined thresholds.
Mermaid diagram:
graph TD
    A[Client] --> B(API Gateway / Load Balancer)
    B --> C{Rate Limiting Filter/Plugin}
    C --> D(Rate Limiting Service)
    D --> E(Distributed Cache - e.g., Redis Cluster)
    D --> F(Configuration Service - e.g., Consul/etcd)
    E -- Stores/Retrieves State --> D
    F -- Provides Rules --> D
    D -- Decision (Allow/Deny) --> C
    C -- Allowed --> G(Backend Services)
    C -- Denied (429) --> B
    B --> A
    D -- Logs/Metrics --> H(Monitoring & Alerting)
    C -- Logs/Metrics --> H
    G -- Logs/Metrics --> H

5. Detailed Component Design

5.1. Placement Strategy (Enforcement Point)

Recommendation: A hybrid approach, primarily at the API Gateway/Reverse Proxy, with a dedicated Rate Limiting Service for complex logic and state management.

  • API Gateway/Reverse Proxy (e.g., Envoy, Nginx with Lua, Kong):

* Pros: Centralized enforcement, minimal impact on backend services, can handle initial request filtering. High performance.

* Cons: Limited flexibility for complex logic directly within the gateway; requires external state management.

* Implementation: The gateway will have a lightweight filter or plugin that intercepts requests, extracts client identifiers, and makes a fast RPC call to the dedicated Rate Limiting Service.

  • Dedicated Rate Limiting Service:

* Pros: Decouples rate limiting logic from the gateway, allows for complex algorithms, easier to scale independently, technology-agnostic.

* Cons: Introduces an additional network hop and potential latency.

* Implementation: This service will encapsulate the core rate limiting logic, interacting with the distributed cache and configuration service. It can be scaled horizontally, independently of the gateway.


As a critical component of any robust API infrastructure, an API Rate Limiter regulates the number of requests a client can make within a specified timeframe. This prevents abuse, ensures fair usage, protects backend services from overload, and maintains the stability and availability of your API.

This deliverable provides a comprehensive, production-ready code implementation for an API Rate Limiter, focusing on the Token Bucket algorithm for its effectiveness in handling bursts and ensuring smooth operation. We will also discuss extending this to a distributed environment using Redis with a Sliding Window Counter approach.


1. Understanding API Rate Limiting Strategies

Before diving into the code, it's essential to understand the common strategies for rate limiting:

  • Fixed Window Counter: Divides time into fixed windows (e.g., 60 seconds). Each request increments a counter for the current window. If the counter exceeds the limit, requests are blocked.

* Pros: Simple to implement.

* Cons: Can suffer from a "bursty" problem at the window edges, where a client can make a full quota of requests at the very end of one window and another full quota at the very beginning of the next.

  • Sliding Window Log: Stores a timestamp for each request made by a client. To check if a request is allowed, it counts all timestamps within the current window (e.g., last 60 seconds).

* Pros: Very accurate, avoids the "bursty" problem of fixed windows.

* Cons: Can be memory-intensive as it stores individual request timestamps.

  • Sliding Window Counter (Hybrid): A more efficient variation that combines elements of Fixed Window and Sliding Window Log. It uses two fixed windows (current and previous) and weights their counts based on the overlap.

* Pros: Good balance of accuracy and efficiency, often used with Redis.

* Cons: Slightly more complex to implement than fixed window.

  • Token Bucket: Each client has a "bucket" that holds tokens. Tokens are added to the bucket at a fixed rate (e.g., 1 token per second) up to a maximum capacity. Each request consumes one token. If the bucket is empty, the request is denied.

* Pros: Excellent for handling bursts (clients can consume multiple tokens quickly if available), smooth request distribution, simple to understand and implement.

* Cons: Requires careful tuning of bucket capacity and fill rate.

  • Leaky Bucket: Similar to Token Bucket but with a queue. Requests are added to a queue, and processed at a fixed rate. If the queue overflows, requests are dropped.

* Pros: Smooths out bursty traffic, good for situations where consistent processing is more important than immediate request handling.

* Cons: Adds latency due to queuing and requires queue size management.

For this deliverable, we will provide a primary implementation using the Token Bucket algorithm due to its widespread adoption and effectiveness in managing burst traffic while maintaining a steady average rate. We will also discuss the Sliding Window Counter with Redis for distributed environments.
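Before the Token Bucket code, here is a minimal single-process sketch of the Sliding Window Counter; the class and parameter names are illustrative, and in a distributed deployment the two per-window counters would live in a shared store such as Redis rather than a local dict:

```python
import time

class SlidingWindowCounter:
    """Approximate sliding window: weights the previous fixed window's
    count by how much of it still overlaps the sliding window."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self._counts = {}  # identifier -> {window_index: request_count}

    def allow_request(self, identifier: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        cur = int(now // self.window)
        buckets = self._counts.setdefault(identifier, {})
        # Fraction of the sliding window still covered by the previous fixed window.
        prev_weight = 1.0 - (now % self.window) / self.window
        weighted = buckets.get(cur, 0) + buckets.get(cur - 1, 0) * prev_weight
        if weighted < self.limit:
            buckets[cur] = buckets.get(cur, 0) + 1
            # Windows older than the previous one can never matter again.
            for idx in [i for i in buckets if i < cur - 1]:
                del buckets[idx]
            return True
        return False
```

With a limit of 10 per 60 s, a client that exhausts window 0 still sees half of that count weighted in at 30 s into window 1, so only about 5 further requests pass, avoiding the fixed-window 2N edge burst.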


2. Token Bucket Rate Limiter (In-Memory Implementation)

This implementation provides a thread-safe, in-memory Token Bucket rate limiter suitable for a single application instance.

2.1. Core Concepts

  • Bucket Capacity (capacity): The maximum number of tokens an identifier (e.g., IP address, user ID) can accumulate. This defines the maximum burst capacity.
  • Fill Rate (fill_rate): The rate at which tokens are added back to the bucket, typically expressed in tokens per second. This determines the sustained average rate.
  • Last Refill Time (last_refill_time): The timestamp when the bucket was last refilled or a request was processed. Used to calculate how many tokens to add since then.
  • Tokens (tokens): The current number of tokens available in the bucket for a given identifier.

2.2. Python Code Implementation


import time
import threading
from collections import defaultdict

class TokenBucketRateLimiter:
    """
    Implements a thread-safe, in-memory Token Bucket rate limiting algorithm.

    This algorithm allows for bursts of requests up to the bucket's capacity,
    while ensuring a sustained average rate defined by the fill_rate.

    Attributes:
        capacity (int): The maximum number of tokens the bucket can hold.
                        Represents the maximum burst capacity.
        fill_rate (float): The rate at which tokens are added to the bucket per second.
                           Represents the sustained average rate.
        _buckets (dict): Stores the state of each identifier's bucket:
                         {'identifier': {'tokens': current_tokens, 'last_refill_time': timestamp}}
        _lock (threading.Lock): A lock to ensure thread-safety for bucket operations.
    """

    def __init__(self, capacity: int, fill_rate: float):
        """
        Initializes the TokenBucketRateLimiter.

        Args:
            capacity (int): The maximum number of tokens the bucket can hold.
                            (e.g., 100 for 100 requests burst)
            fill_rate (float): The rate at which tokens are added to the bucket per second.
                               (e.g., 10.0 for 10 requests/second sustained)
        """
        if capacity <= 0:
            raise ValueError("Capacity must be a positive integer.")
        if fill_rate <= 0:
            raise ValueError("Fill rate must be a positive number.")

        self.capacity = capacity
        self.fill_rate = fill_rate
        self._buckets = defaultdict(lambda: {'tokens': self.capacity, 'last_refill_time': time.time()})
        self._lock = threading.Lock()

    def _refill_tokens(self, identifier: str) -> None:
        """
        Refills the tokens for a given identifier based on elapsed time.

        Args:
            identifier (str): The unique identifier for the client (e.g., IP address, user ID).
        """
        current_time = time.time()
        bucket = self._buckets[identifier]
        
        # Calculate time elapsed since last refill
        time_elapsed = current_time - bucket['last_refill_time']
        
        # Calculate tokens to add
        tokens_to_add = time_elapsed * self.fill_rate
        
        # Add tokens, ensuring it doesn't exceed capacity
        bucket['tokens'] = min(self.capacity, bucket['tokens'] + tokens_to_add)
        bucket['last_refill_time'] = current_time

    def allow_request(self, identifier: str, cost: int = 1) -> bool:
        """
        Checks if a request for the given identifier is allowed.

        Args:
            identifier (str): The unique identifier for the client (e.g., IP address, user ID).
            cost (int): The number of tokens this request consumes. Defaults to 1.

        Returns:
            bool: True if the request is allowed, False otherwise.
        """
        if cost <= 0:
            raise ValueError("Request cost must be a positive integer.")

        with self._lock:
            self._refill_tokens(identifier)
            bucket = self._buckets[identifier]

            if bucket['tokens'] >= cost:
                bucket['tokens'] -= cost
                return True
            return False

    def get_remaining_tokens(self, identifier: str) -> float:
        """
        Gets the current number of remaining tokens for a given identifier.

        Args:
            identifier (str): The unique identifier for the client.

        Returns:
            float: The number of remaining tokens.
        """
        with self._lock:
            self._refill_tokens(identifier)
            return self._buckets[identifier]['tokens']

    def get_time_to_next_request(self, identifier: str, cost: int = 1) -> float:
        """
        Calculates the estimated time (in seconds) until a request would be allowed.

        Args:
            identifier (str): The unique identifier for the client.
            cost (int): The number of tokens this request would consume. Defaults to 1.

        Returns:
            float: The estimated time in seconds. Returns 0 if a request is currently allowed.
        """
        if cost <= 0:
            raise ValueError("Request cost must be a positive integer.")

        with self._lock:
            self._refill_tokens(identifier)
            bucket = self._buckets[identifier]
            
            if bucket['tokens'] >= cost:
                return 0.0 # Request can be made immediately

            # Calculate how many tokens are needed
            tokens_needed = cost - bucket['tokens']
            
            # Calculate time to generate those tokens
            # (tokens_needed / fill_rate)
            return tokens_needed / self.fill_rate

# Example Usage:
if __name__ == "__main__":
    # Allow 10 requests per second, with a burst capacity of 20 requests.
    rate_limiter = TokenBucketRateLimiter(capacity=20, fill_rate=10.0)

    print("--- Testing burst capacity ---")
    client_ip = "192.168.1.1"
    allowed_count = 0
    for i in range(25): # Try to send 25 requests
        if rate_limiter.allow_request(client_ip):
            allowed_count += 1
            print(f"[{i+1}] Request ALLOWED for {client_ip}. Remaining tokens: {rate_limiter.get_remaining_tokens(client_ip):.2f}")
        else:
            print(f"[{i+1}] Request DENIED for {client_ip}. Remaining tokens: {rate_limiter.get_remaining_tokens(client_ip):.2f}. Retry-After: {rate_limiter.get_time_to_next_request(client_ip):.2f}s")
            # Simulate waiting before retrying
            time.sleep(rate_limiter.get_time_to_next_request(client_ip) + 0.01) # Wait slightly longer
    print(f"Total allowed in burst test: {allowed_count}")

    print("\n--- Testing sustained rate ---")
    client_id = "user123"
    print(f"Allowing requests for {client_id} at a sustained rate (10/sec).")
    for i in range(15):
        if rate_limiter.allow_request(client_id):
            print(f"[{i+1}] Request ALLOWED for {client_id}. Remaining tokens: {rate_limiter.get_remaining_tokens(client_id):.2f}")
        else:
            print(f"[{i+1}] Request DENIED for {client_id}. Remaining tokens: {rate_limiter.get_remaining_tokens(client_id):.2f}. Retry-After: {rate_limiter.get_time_to_next_request(client_id):.2f}s")
        time.sleep(0.1) # Simulate requests coming in every 0.1 seconds (10 requests/sec)

    print("\n--- Testing different costs ---")
    client_api_key = "api-key-xyz"
    print(f"Initial tokens for {client_api_key}: {rate_limiter.get_remaining_tokens(client_api_key):.2f}")
    if rate_limiter.allow_request(client_api_key, cost=5):
        print(f"Request with cost 5 ALLOWED. Remaining tokens: {rate_limiter.get_remaining_tokens(client_api_key):.2f}")
    else:
        print(f"Request with cost 5 DENIED. Remaining tokens: {rate_limiter.get_remaining_tokens(client_api_key):.2f}")

    if rate_limiter.allow_request(client_api_key, cost=20):
        print(f"Request with cost 20 ALLOWED. Remaining tokens: {rate_limiter.get_remaining_tokens(client_api_key):.2f}")
    else:
        print(f"Request with cost 20 DENIED. Remaining tokens: {rate_limiter.get_remaining_tokens(client_api_key):.2f}")

2.3. Integration with a Web Framework (Example: Flask)

To integrate this rate limiter into a web framework such as Flask, check the limiter before each request is handled (for example in a decorator or a before_request hook) and return HTTP 429 with a Retry-After header when the check fails.
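A minimal sketch of that integration pattern, kept framework-agnostic so it stays self-contained; in Flask the denied branch would instead return `jsonify(...)` with status 429 and a Retry-After header, and the handler, limiter settings, and response dicts here are illustrative:

```python
import functools
import time

class TokenBucket:
    """Miniature stand-in for the TokenBucketRateLimiter defined above."""
    def __init__(self, capacity: int, fill_rate: float):
        self.capacity, self.fill_rate = capacity, fill_rate
        self.tokens, self.last = float(capacity), time.time()

    def allow(self, cost: int = 1) -> bool:
        now = time.time()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

limiter = TokenBucket(capacity=5, fill_rate=1.0)

def rate_limited(handler):
    """Deny the wrapped handler with a 429 once the bucket is empty."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        if not limiter.allow():
            # In Flask: return jsonify(error="rate limit exceeded"), 429
            return {"status": 429, "body": "Too Many Requests"}
        return handler(*args, **kwargs)
    return wrapper

@rate_limited
def get_resource():
    return {"status": 200, "body": "ok"}
```

In a real application the limiter would also key on a client identifier (IP address, API key) extracted from the request rather than using a single shared bucket.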


API Rate Limiter: Comprehensive Overview and Implementation Guide

This document provides a comprehensive, professional overview of API Rate Limiters, detailing their purpose, design considerations, common algorithms, and best practices for implementation and management. This information is critical for maintaining API stability, security, and fair usage.


1. Introduction to API Rate Limiting

An API Rate Limiter is a mechanism that controls the number of requests a client can make to an API within a given timeframe. It acts as a gatekeeper, preventing abuse, ensuring fair resource allocation, and maintaining the stability and performance of your services. Without effective rate limiting, an API can become vulnerable to denial-of-service (DoS) attacks, resource exhaustion, and unfair usage patterns that degrade the experience for all users.

2. Core Concepts and Terminology

To effectively discuss and implement API rate limiting, it's important to understand the following key terms:

  • Rate Limit: The maximum number of requests permitted within a specific time window (e.g., 100 requests per minute).
  • Quota: A larger, often longer-term, limit on resource consumption (e.g., 10,000 requests per day for a specific tier).
  • Throttling: The process of intentionally slowing down a client's requests or rejecting them once a rate limit is exceeded.
  • Burst: A temporary spike in requests that might exceed the average rate but is still within an acceptable, short-term threshold. Rate limiters often account for bursts.
  • Grace Period: A short period after a limit is hit where some requests might still be allowed, often used to prevent immediate hard blocking for minor overages.
  • Retry-After Header: An HTTP response header (RFC 7231) sent when a client is throttled, indicating how long the client should wait before making another request.
  • HTTP Status Codes:

* 429 Too Many Requests: The standard HTTP status code indicating that the user has sent too many requests in a given amount of time.

* 503 Service Unavailable: Can sometimes be used if the server is truly overloaded, but 429 is more specific for rate limiting.

3. Why Implement API Rate Limiting?

Implementing a robust API rate limiting strategy offers numerous benefits:

  • Prevent Abuse and Misuse: Protects against malicious activities like DoS attacks, brute-force attempts, and data scraping.
  • Ensure Fair Resource Allocation: Prevents a single client or a small group of clients from monopolizing server resources, ensuring all legitimate users receive a consistent quality of service.
  • Maintain System Stability and Performance: Reduces the load on backend servers, databases, and other infrastructure components, preventing overload and ensuring the API remains responsive.
  • Control Costs: Limits the consumption of expensive resources (e.g., database queries, third-party API calls) that are triggered by API requests.
  • Monetization and Tiered Services: Enables the creation of different service tiers (e.g., free, premium, enterprise) with varying rate limits, allowing for monetization strategies.
  • Improved User Experience: By preventing system crashes and slowdowns, rate limiting contributes to a more reliable and predictable API experience for legitimate users.
  • Operational Visibility: Provides valuable data on API usage patterns, helping identify popular endpoints, potential bottlenecks, and unusual activity.

4. Key Considerations for Design and Implementation

When designing and implementing an API rate limiter, several factors must be carefully considered:

  • Scope of Limiting:

* Global: Limit all requests to the entire API.

* Per User/Client: Limit requests based on an authenticated user ID or API key. This is the most common and flexible approach.

* Per IP Address: Limit requests from a specific IP address (useful for unauthenticated requests, but can be problematic with shared IPs or proxies).

* Per Endpoint: Apply different limits to specific API endpoints, as some operations are more resource-intensive than others.

* Per Region/Data Center: Distribute limits geographically.

  • Granularity:

* Time Window: What is the duration for counting requests (e.g., per second, per minute, per hour)?

* Request Type: Should GET, POST, PUT, DELETE requests be counted equally or differently?

  • Enforcement Point:

* API Gateway/Load Balancer: Ideal for early rejection of requests before they reach backend services, reducing load.

* Application Layer: More granular control, allowing for specific logic per endpoint or user, but consumes application resources.

* Hybrid: A combination of both, where a gateway handles basic limits and the application handles more complex, business-logic-driven limits.

  • State Management:

* Distributed System: If your API runs on multiple instances, the rate limiter must be distributed and share state (e.g., using Redis, Memcached, or a distributed database) to ensure consistent limits across all instances.

* Persistence: Should rate limits persist across restarts or be reset?

  • Response to Exceeding Limits:

* HTTP 429 Too Many Requests: Standard response.

* Headers: Include X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset to inform clients about their current status.

* Retry-After Header: Crucial for guiding clients on when to retry.

* Custom Error Messages: Provide clear, actionable messages.

  • Dynamic vs. Static Limits:

* Static: Hardcoded limits that rarely change.

* Dynamic: Limits that can be adjusted on the fly, potentially based on system load, user behavior, or service tier.

  • Logging and Monitoring:

* Crucial for understanding usage patterns, identifying abuse, and debugging.

* Integrate with existing monitoring systems for real-time alerts.

5. Common Rate Limiting Algorithms

Different algorithms offer various trade-offs in terms of accuracy, resource usage, and how they handle bursts.

5.1. Fixed Window Counter

  • How it works: Divides time into fixed-size windows (e.g., 60 seconds). Each incoming request increments a counter for the current window. If the counter exceeds the limit within that window, subsequent requests are blocked until the next window begins.
  • Pros: Simple to implement, low memory usage.
  • Cons:

* Burst Issue at Window Edges: A client can make N requests at the very end of one window and N requests at the very beginning of the next window, effectively sending 2N requests in a short period (e.g., 120 requests in 2 seconds for a 60 req/min limit).

* Inefficient for uneven traffic: Bursts are not handled gracefully.
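The counter logic above fits in a few lines; a single-process Python sketch (names illustrative, and in practice stale keys would need periodic cleanup) also makes the edge-burst problem easy to demonstrate:

```python
import time
from collections import defaultdict

class FixedWindowCounter:
    """Fixed-window limiter: at most `limit` requests per `window_seconds`."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        # (identifier, window_index) -> count; stale keys accumulate here.
        self._counters = defaultdict(int)

    def allow_request(self, identifier: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        key = (identifier, int(now // self.window))
        if self._counters[key] < self.limit:
            self._counters[key] += 1
            return True
        return False
```

A client that spends its full quota at t = 59 s gets a fresh quota at t = 60 s, which is exactly the 2N burst described above.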

5.2. Sliding Log

  • How it works: Stores a timestamp for every request made by a client. When a new request arrives, it removes all timestamps older than the current time minus the window duration. It then counts the remaining timestamps. If the count exceeds the limit, the request is denied.
  • Pros: Very accurate, handles bursts well, no "edge case" issue.
  • Cons:

* High Memory Usage: Requires storing a log of timestamps for each client, which can be significant for high-traffic APIs.

* High CPU Usage: Counting and pruning timestamps can be computationally intensive, especially for large logs.
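A sketch of the log variant, assuming only accepted requests are logged (some implementations also log denied ones); a deque makes pruning from the old end cheap:

```python
import time
from collections import defaultdict, deque

class SlidingWindowLog:
    """Exact sliding-window limiter; stores one timestamp per accepted request."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self._logs = defaultdict(deque)  # identifier -> accepted timestamps

    def allow_request(self, identifier: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        log = self._logs[identifier]
        # Drop timestamps that have fallen out of the sliding window.
        while log and log[0] <= now - self.window:
            log.popleft()
        if len(log) < self.limit:
            log.append(now)
            return True
        return False
```

The memory cost is visible directly: each identifier holds up to `limit` timestamps, which is why this variant is expensive for high limits across many clients.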

5.3. Sliding Window Counter

  • How it works: A hybrid approach that combines elements of Fixed Window and Sliding Log. It uses two fixed windows: the current window and the previous window. When a request arrives, it calculates an interpolated count based on the current window's count and a weighted portion of the previous window's count.

* Example: If the window is 60 seconds, and 30 seconds into the current window, the algorithm considers the current window's count plus 50% of the previous window's count.

  • Pros: Better burst handling than Fixed Window, less memory/CPU intensive than Sliding Log, good compromise.
  • Cons: Not perfectly accurate, can still allow slight overages during specific edge cases.

5.4. Leaky Bucket Algorithm

  • How it works: Models a bucket with a fixed capacity and a constant leak rate. Requests are "water drops" entering the bucket. If the bucket is full, new requests are dropped (denied). Requests are processed at a constant rate (the leak rate).
  • Pros: Smooths out request bursts into a steady output rate, good for protecting backend services from sudden spikes.
  • Cons:

* Queueing: Requests might be delayed if the bucket is not full but the leak rate is slower than the incoming rate.

* Complexity: More complex to implement than simple counters.

* No burst allowance: By design, it processes at a constant rate, so it doesn't inherently allow for bursts above the leak rate.
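The queue-based form described above delays requests; the equivalent admission-control form (sometimes called "leaky bucket as a meter") keeps only a water level that drains at the leak rate and drops on overflow, which is simpler to sketch. The names and the injectable `now` parameter are illustrative:

```python
import time

class LeakyBucketMeter:
    """Leaky bucket as a meter: `level` drains at leak_rate; overflow = deny."""

    def __init__(self, capacity: float, leak_rate: float):
        self.capacity = capacity    # maximum "water" the bucket can hold
        self.leak_rate = leak_rate  # units drained per second
        self.level = 0.0
        self.last = 0.0

    def allow_request(self, now: float = None) -> bool:
        now = time.time() if now is None else now
        # Drain whatever leaked out since the last check.
        self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + 1 <= self.capacity:
            self.level += 1
            return True
        return False
```

The constant-rate behaviour follows directly: once full, exactly one new request is admitted per `1 / leak_rate` seconds, regardless of how bursty the arrivals are.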

5.5. Token Bucket Algorithm

  • How it works: A bucket holds "tokens." Tokens are added to the bucket at a fixed rate. Each request consumes one token. If the bucket is empty, the request is denied. The bucket has a maximum capacity, limiting the number of tokens that can accumulate.
  • Pros:

* Allows for bursts: If the bucket has accumulated tokens, a client can make a burst of requests up to the bucket's capacity.

* Simpler to implement and understand than Leaky Bucket for many use cases.

* Flexible: Can be easily configured to allow different burst sizes and refill rates.

  • Cons:

* Requires careful tuning of bucket size and refill rate.

* State needs to be managed for each client.

6. Implementation Best Practices

  • Use a Dedicated Rate Limiting Service/Library: Don't reinvent the wheel. Leverage battle-tested solutions like:

* Redis: Excellent for distributed rate limiting due to its atomic operations (INCR, EXPIRE) and in-memory speed.

* API Gateways: Nginx, Envoy, Kong, AWS API Gateway, Google Apigee, Azure API Management all offer built-in rate limiting capabilities.

* Language-specific Libraries: Many programming languages have libraries for implementing various rate limiting algorithms.

  • Layered Approach: Combine rate limiting at the edge (API Gateway for initial broad limits) with application-level limiting for finer-grained control.
  • Clear Error Responses: When a client is rate-limited, return a 429 Too Many Requests status code along with informative headers:

* X-RateLimit-Limit: The total number of requests allowed in the current window.

* X-RateLimit-Remaining: The number of requests remaining in the current window.

* X-RateLimit-Reset: The time (in UTC epoch seconds) when the current rate limit window resets.

* Retry-After: The number of seconds the client should wait before making another request.

  • Client Communication: Document your rate limits clearly in your API documentation. Explain how clients should handle 429 responses and recommend exponential backoff strategies for retries.
  • Idempotency: Ensure that retrying a request after a rate limit will not cause unintended side effects if the original request was partially processed.
  • Graceful Degradation: Consider what happens when the rate limiting service itself becomes unavailable. Fail open (allow requests) or fail closed (block requests)? This depends on your security and availability requirements.
  • Avoid Single Points of Failure: Ensure your rate limiting infrastructure is highly available and scalable.
  • Testing: Rigorously test your rate limiter under various load conditions, including bursts and sustained high traffic, to confirm it behaves as expected.
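As a sketch of the header recommendations above, assuming a fixed window for the reset calculation (exact reset semantics vary by provider, and the X-RateLimit-* names are a widespread convention rather than a standard):

```python
import time

def rate_limit_headers(limit: int, remaining: int, window_seconds: int,
                       now: float = None) -> dict:
    """Build the informational rate-limit headers for a response.

    X-RateLimit-Reset is the UTC epoch second when the current
    fixed window rolls over.
    """
    now = time.time() if now is None else now
    reset = (int(now // window_seconds) + 1) * window_seconds
    headers = {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(max(0, remaining)),
        "X-RateLimit-Reset": str(reset),
    }
    if remaining <= 0:
        # Tell the throttled client exactly how long to back off.
        headers["Retry-After"] = str(reset - int(now))
    return headers
```

These headers would accompany every response, with Retry-After added only on 429s.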

7. Monitoring and Alerting

Effective monitoring is crucial for managing your API rate limiter:

  • Key Metrics to Monitor:

* Rate-limited requests: Count of 429 responses.

* Requests per second/minute/hour: Overall and per client/endpoint.

* Resource utilization: CPU, memory, network I/O of the rate limiting service and backend APIs.

* Queue length/latency: If using a queue-based algorithm like Leaky Bucket.

  • Alerting: Set up alerts for:

* Spikes in rate-limited requests, indicating potential abuse or misbehaving clients.

* Unexpected drops in API traffic (could indicate the rate limiter is too aggressive).

* High resource utilization on rate limiting infrastructure.

  • Logging: Log all rate-limiting events, including client ID, IP, endpoint, and the specific limit hit. This data is invaluable for debugging and security analysis.

8. Testing Strategies

  • Unit Tests: Verify the logic of your chosen rate limiting algorithm.
  • Integration Tests: Test the interaction between your API and the rate limiting component.
  • Load Testing: Simulate high traffic, including bursts and sustained requests above limits, to ensure the rate limiter performs correctly under stress and doesn't become a bottleneck itself.
  • Edge Case Testing: Specifically test scenarios like requests hitting exactly at the window boundary for fixed window algorithms, or emptying/filling the bucket for token/leaky bucket algorithms.
  • Client-Side Testing: Validate that client applications correctly handle 429 responses and implement Retry-After and exponential backoff.
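For the unit-test bullet, the key technique is an injectable clock so tests never sleep; a pytest-style sketch under that assumption (the miniature bucket mirrors the in-memory implementation earlier in this document, and could be adapted the same way):

```python
class TokenBucket:
    """Token bucket with an injectable clock for deterministic tests."""
    def __init__(self, capacity: int, fill_rate: float, clock):
        self.capacity, self.fill_rate, self.clock = capacity, fill_rate, clock
        self.tokens, self.last = float(capacity), clock()

    def allow(self, cost: int = 1) -> bool:
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

class FakeClock:
    """Manually advanced clock, so refill behaviour is tested without sleeping."""
    def __init__(self):
        self.t = 0.0
    def __call__(self) -> float:
        return self.t
    def advance(self, seconds: float) -> None:
        self.t += seconds

def test_burst_then_refill():
    clock = FakeClock()
    bucket = TokenBucket(capacity=2, fill_rate=1.0, clock=clock)
    assert bucket.allow() and bucket.allow()  # burst drains the bucket
    assert not bucket.allow()                 # empty -> denied
    clock.advance(1.0)                        # 1 s at 1 token/s
    assert bucket.allow()                     # one token refilled
```

The same pattern covers the edge cases listed above (window boundaries, exact emptying/filling) without flaky timing dependencies.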

9. Customer-Facing Aspects

  • Clear Documentation: Provide comprehensive API documentation detailing:

* The specific rate limits for different endpoints and user tiers.

* The HTTP headers used for conveying rate limit status (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, Retry-After).

* Recommended best practices for handling 429 responses (e.g., exponential backoff, respecting Retry-After).

* How to request higher limits if needed.

  • Developer Portal: Offer a dashboard or portal where developers can view their current usage against their limits.
  • Communication: Be proactive in communicating any changes to rate limits or policies.

10. Conclusion

API rate limiting is an indispensable component of any robust and scalable API ecosystem. By thoughtfully designing, implementing, and monitoring your rate limiting strategy, you can effectively protect your services from abuse, ensure fair resource distribution, maintain performance, and ultimately deliver a superior experience for your API consumers. It is a continuous process of tuning and adaptation to evolving usage patterns and security threats.

{"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]}, "references":[{"path":"./tsconfig.app.json"}] } '); zip.file(folder+"tsconfig.app.json",'{ "extends":"./tsconfig.json", "compilerOptions":{"outDir":"./dist/out-tsc","types":[]}, "files":["src/main.ts"], "include":["src/**/*.d.ts"] } '); zip.file(folder+"src/index.html"," "+slugTitle(pn)+" "); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser'; import { appConfig } from './app/app.config'; import { AppComponent } from './app/app.component'; bootstrapApplication(AppComponent, appConfig) .catch(err => console.error(err)); "); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; } body { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; } "); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core'; import { RouterOutlet } from '@angular/router'; @Component({ selector: 'app-root', standalone: true, imports: [RouterOutlet], templateUrl: './app.component.html', styleUrl: './app.component.css' }) export class AppComponent { title = '"+pn+"'; } "); zip.file(folder+"src/app/app.component.html","

"+slugTitle(pn)+"

Built with PantheraHive BOS

"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1} "); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core'; import { provideRouter } from '@angular/router'; import { routes } from './app.routes'; export const appConfig: ApplicationConfig = { providers: [ provideZoneChangeDetection({ eventCoalescing: true }), provideRouter(routes) ] }; "); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router'; export const routes: Routes = []; "); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install ng serve # or: npm start ``` ## Build ```bash ng build ``` Open in VS Code with Angular Language Service extension. 
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}