API Rate Limiter

API Rate Limiter: Comprehensive Code Generation & Implementation Strategy

This document provides a detailed, professional deliverable for an API Rate Limiter, covering core concepts, implementation strategies, and production-ready code examples in Python. It is designed to be directly actionable for integration into your API infrastructure.


1. Introduction to API Rate Limiting

An API Rate Limiter is a critical component in modern web service architectures, designed to control the number of requests a client can make to an API within a given timeframe. It acts as a gatekeeper, ensuring fair usage, preventing abuse, and maintaining the stability and availability of your services.

2. Importance of API Rate Limiters

Implementing a robust API Rate Limiter offers several significant benefits:

  • Abuse Prevention: Mitigates DoS/DDoS attacks, brute-force login attempts, spamming, and data scraping.
  • Fair Usage: Prevents any single client from monopolizing API resources, providing equitable access to all users.
  • System Stability: Protects backend services from overload and resource exhaustion (CPU, memory, database connections).
  • Cost Control: Caps infrastructure costs driven by excessive usage, especially with cloud-based services.
  • Policy Enforcement: Enables differentiated access tiers (e.g., free vs. premium, higher limits for partners).

3. Core Concepts & Chosen Algorithm: Sliding Window Log

Several algorithms exist for rate limiting, each with its own trade-offs. For this deliverable, we will focus on the Sliding Window Log algorithm, as it offers high accuracy and fairness, making it suitable for production environments.

3.1. Sliding Window Log Algorithm

The Sliding Window Log algorithm works by storing a timestamp for every request made by a client. When a new request arrives, it performs the following steps:

  1. Record Timestamp: Adds the current timestamp to a log associated with the client (e.g., identified by API key, IP address, or user ID).
  2. Filter Old Requests: Removes all timestamps from the log that fall outside the defined time window (i.e., older than current_time - window_duration).
  3. Count Requests: Counts the number of remaining timestamps in the log.
  4. Check Limit: If the count is less than or equal to the allowed limit, the request is permitted. Otherwise, it is denied.

Advantages:

  • High accuracy: the window slides continuously, so it avoids the boundary burstiness of fixed windows.
  • Fairness: every request is evaluated against the exact set of requests made in the preceding window.

Disadvantages:

  • Memory cost: a timestamp must be stored for every request within the window.
  • Processing cost: pruning and counting timestamps adds per-request overhead at high request volumes.

4. Implementation Strategy

Our implementation will provide two main components:

  1. In-Memory Rate Limiter: Suitable for single-instance applications, development, and testing. It uses Python's built-in data structures.
  2. Redis-Backed Rate Limiter: The recommended approach for production environments. It leverages Redis's Sorted Set data type, which is ideal for storing and querying timestamps efficiently across distributed services.

Both implementations will expose a simple allow_request method and will be demonstrated with a Flask API decorator for easy integration.
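As an illustration of that integration, here is a sketch of a Flask decorator. It assumes only a limiter object exposing the allow_request(client_id, limit, window_seconds) interface described in section 5; the route and helper names are otherwise illustrative.

```python
import functools

from flask import Flask, jsonify, request

app = Flask(__name__)

def rate_limited(limiter, limit, window_seconds):
    """Wrap a Flask view so it is checked against `limiter` before executing."""
    def decorator(view):
        @functools.wraps(view)
        def wrapper(*args, **kwargs):
            # The client IP is used here for simplicity; an API key or
            # authenticated user ID is usually a better identifier.
            client_id = request.remote_addr
            if not limiter.allow_request(client_id, limit, window_seconds):
                resp = jsonify(error="Rate limit exceeded. Try again later.")
                resp.status_code = 429
                resp.headers["Retry-After"] = str(window_seconds)
                return resp
            return view(*args, **kwargs)
        return wrapper
    return decorator
```

A view is then guarded with `@rate_limited(limiter, 100, 60)` placed below the `@app.route(...)` line, and denied requests receive a 429 with a Retry-After header.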

5. Generated Code: API Rate Limiter (Python)

We will provide clean, well-commented, and production-ready Python code.

5.1. In-Memory Rate Limiter (For Development & Single-Instance Applications)

This implementation is useful for local development, testing, or applications that do not require distributed rate limiting. It stores request logs in a dictionary in the application's memory.

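The code block for this section did not survive the export. A minimal sketch consistent with the explanation that follows (per-client deques of timestamps, pruned on each request) might look like this; the exact get_status return shape is an assumption:

```python
import collections
import time

class InMemoryRateLimiter:
    """Sliding Window Log rate limiter for a single process.

    Stores per-client request timestamps in a deque; not suitable for
    distributed deployments.
    """

    def __init__(self):
        # Maps client_id -> deque of request timestamps (oldest on the left).
        self._requests = {}

    def allow_request(self, client_id, limit, window_seconds):
        now = time.time()
        window_start = now - window_seconds
        timestamps = self._requests.setdefault(client_id, collections.deque())
        # Prune timestamps that have fallen out of the sliding window.
        while timestamps and timestamps[0] <= window_start:
            timestamps.popleft()
        if len(timestamps) < limit:
            timestamps.append(now)
            return True
        return False

    def get_status(self, client_id, limit, window_seconds):
        """Remaining quota and estimated seconds until the oldest entry expires."""
        now = time.time()
        timestamps = self._requests.get(client_id, collections.deque())
        in_window = [t for t in timestamps if t > now - window_seconds]
        reset_in = (in_window[0] + window_seconds - now) if in_window else 0.0
        return {"remaining": max(0, limit - len(in_window)),
                "reset_in_seconds": round(reset_in, 2)}
```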
Explanation of In-Memory Rate Limiter:

  • InMemoryRateLimiter Class: Manages the rate limiting logic.
  • _requests Dictionary: Stores a collections.deque (double-ended queue) for each client_id. A deque is efficient for appending to the right and popping from the left, which is ideal for managing timestamps in a sliding window.
  • allow_request(client_id, limit, window_seconds):
      • Gets the current time.
      • Initializes a deque for the client on their first request.
      • Pruning: removes from the left of the deque any timestamps older than current_time - window_seconds, so the deque contains only requests within the current window.
      • Limit Check: if the number of remaining timestamps in the deque is less than limit, the request is allowed and the current timestamp is appended; otherwise the request is denied.
  • get_status: Provides insight into the current state of the rate limit for a client, including remaining requests and estimated reset time.

5.2. Redis-Backed Rate Limiter (For Production & Distributed Systems)

For production environments, especially microservices or horizontally scaled applications, a distributed rate limiter is essential. Redis is an excellent choice due to its high performance and suitable data structures. We will use Redis Sorted Sets.

Prerequisites:

  • Redis Server: A running Redis instance accessible from your application.
  • redis-py library: Install via pip install redis.


API Rate Limiter: Comprehensive Study Plan

1. Introduction and Importance

API Rate Limiting is a critical component in the architecture of any robust and scalable API-driven system. It serves as a protective mechanism to control the number of requests a client can make to an API within a given timeframe. Without effective rate limiting, APIs are vulnerable to various issues, including:

  • Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) Attacks: Malicious actors can overwhelm an API with excessive requests, making it unavailable to legitimate users.
  • Resource Exhaustion: Uncontrolled requests can deplete server resources (CPU, memory, database connections), leading to performance degradation or system crashes.
  • Fair Usage: Ensures that no single client monopolizes API resources, providing equitable access to all users.
  • Cost Control: Prevents unexpected high infrastructure costs due to excessive API usage, especially with cloud-based services.
  • Abuse Prevention: Limits brute-force attacks on authentication endpoints, spamming, or data scraping.

This study plan is meticulously designed to equip software engineers, architects, and system designers with a deep theoretical understanding and practical implementation skills for various API Rate Limiting strategies.

2. Learning Objectives

Upon successful completion of this comprehensive study plan, you will be able to:

  • Knowledge & Understanding:

* Define API Rate Limiting, its core principles, and its critical role in system resilience and security.

* Identify and articulate the mechanisms, advantages, and disadvantages of various rate limiting algorithms, including Fixed Window, Sliding Log, Sliding Window Counter, Leaky Bucket, and Token Bucket.

* Differentiate between client-side and server-side rate limiting, and understand their respective applications.

* Explain the unique challenges and considerations involved in implementing rate limiting in distributed systems.

* Understand common HTTP headers and response codes associated with rate limiting (e.g., X-RateLimit-* headers, 429 Too Many Requests, Retry-After).

  • Design & Architecture:

* Design a rate limiting system tailored to specific business requirements (e.g., per user, per IP, per API key, per endpoint, tiered limits).

* Select the most appropriate rate limiting algorithm based on use case characteristics, traffic patterns, and system constraints (e.g., burst tolerance, memory usage, fairness).

* Architect a scalable and fault-tolerant distributed rate limiting solution, considering consistency models, data storage, and inter-service communication.

* Plan for edge cases, burst traffic handling, and graceful degradation during system overload.

  • Implementation & Integration:

* Implement foundational rate limiting algorithms using common programming languages (e.g., Python, Java, Go, Node.js) and data stores (e.g., Redis, in-memory caches).

* Integrate rate limiting functionality into existing API Gateways (e.g., Nginx, Envoy, Kong, AWS API Gateway) or directly within microservices.

* Configure and manage dynamic rate limiting rules and policies.

  • Evaluation & Optimization:

* Monitor the effectiveness of a rate limiting system using relevant metrics (e.g., blocked requests, allowed requests, latency, error rates).

* Test rate limiting implementations for correctness, performance, and resilience under various load conditions.

* Optimize rate limiting configurations to balance protection, user experience, and resource utilization.

* Troubleshoot common rate limiting issues and identify potential bypass mechanisms.

3. Weekly Schedule

This study plan is structured for a 4-week intensive learning period, assuming approximately 10-15 hours of dedicated study per week. This includes reading, watching videos, hands-on coding, and design exercises. Adjust the pace according to your individual learning style and availability.


Week 1: Fundamentals and Core Algorithms

  • Focus: Laying the groundwork for API Rate Limiting by understanding its necessity and exploring basic to intermediate algorithms.
  • Topics:

* Introduction: Definition, importance, common use cases (e.g., login attempts, search queries, data uploads), and the impact of not having rate limits.

* Basic Algorithms:

* Fixed Window Counter: Principles, implementation, advantages (simplicity), and disadvantages (burstiness at window edges).

* Sliding Log: Principles, implementation, advantages (accuracy), and disadvantages (high memory usage for large windows).

* Sliding Window Counter: Principles, implementation, advantages (hybrid approach, balances accuracy and memory), and disadvantages.

* Key Concepts: Rate limits (e.g., 100 requests/minute), burst limits, quotas, and different levels of application (e.g., per IP, per user, per API key).

  • Activities:

* Read introductory articles and blog posts on rate limiting.

* Watch foundational video explanations of the algorithms.

* Hands-on: Implement the Fixed Window Counter algorithm in your preferred programming language (e.g., Python, Node.js).

* Analysis: Compare and contrast Fixed Window, Sliding Log, and Sliding Window Counter, focusing on their trade-offs in terms of accuracy, memory, and burst handling.

  • Deliverable: A concise document (1-2 pages) summarizing the three algorithms studied, including their pseudocode/logic, pros, cons, and ideal use cases.

Week 2: Advanced Algorithms and Implementation Details

  • Focus: Deep dive into more sophisticated algorithms that offer better control over traffic shaping and practical considerations for implementation.
  • Topics:

* Advanced Algorithms:

* Leaky Bucket Algorithm: Principles (fixed output rate), implementation (queue-based), advantages (smooth traffic), and disadvantages (potential for dropped requests, no burst tolerance).

* Token Bucket Algorithm: Principles (tokens for requests), implementation (token generation rate), advantages (burst tolerance, flexibility), and disadvantages (more complex to tune).

* Data Storage: Options for storing rate limit state: in-memory, Redis (single instance), database-backed. Discuss their implications for scalability and consistency.

* Client Identification: Strategies for identifying clients (IP address, User ID, API Key, JWT claims) and handling anonymous users.

The Redis-backed implementation referenced in section 5.2:

```python
import time

import redis


class RedisRateLimiter:
    """
    A Redis-backed rate limiter using the Sliding Window Log algorithm.

    Ideal for distributed systems and production environments.
    Leverages Redis Sorted Sets for efficient timestamp management.
    """

    def __init__(self, redis_client: redis.Redis, key_prefix: str = "rate_limit:"):
        """
        Initializes the RedisRateLimiter.

        Args:
            redis_client (redis.Redis): An initialized Redis client instance.
            key_prefix (str): Prefix for Redis keys to avoid collisions.
        """
        self._redis = redis_client
        self._key_prefix = key_prefix

    def _get_key(self, client_id: str) -> str:
        """Constructs the Redis key for a given client_id."""
        return f"{self._key_prefix}{client_id}"

    def allow_request(self, client_id: str, limit: int, window_seconds: int) -> bool:
        """
        Checks if a request from the given client_id should be allowed based on
        the specified limit and time window using Redis.

        Args:
            client_id (str): A unique identifier for the client.
            limit (int): The maximum number of requests allowed within the window.
            window_seconds (int): The duration of the sliding window in seconds.

        Returns:
            bool: True if the request is allowed, False otherwise.
        """
        key = self._get_key(client_id)
        current_time_ms = int(time.time() * 1000)  # milliseconds for Sorted Set scores
        window_start_ms = current_time_ms - window_seconds * 1000

        # Use a Redis pipeline for atomicity and efficiency.
        with self._redis.pipeline() as pipe:
            # 1. Remove all requests older than the current window
            #    (ZREMRANGEBYSCORE key min max).
            pipe.zremrangebyscore(key, 0, window_start_ms)
            # 2. Record the current request (ZADD). Note: requests landing in
            #    the same millisecond share a member; append a nonce for strict
            #    accuracy under very high throughput.
            pipe.zadd(key, {str(current_time_ms): current_time_ms})
            # 3. Count the requests now in the window (ZCARD).
            pipe.zcard(key)
            # 4. Expire the key once the window has fully passed.
            pipe.expire(key, window_seconds)
            _, _, count, _ = pipe.execute()

        if count <= limit:
            return True
        # Over the limit: remove the timestamp just added so denied requests
        # do not count against the client.
        self._redis.zrem(key, str(current_time_ms))
        return False
```

API Rate Limiter: Comprehensive Review and Documentation

This document provides a comprehensive overview, detailed analysis, and actionable recommendations regarding API Rate Limiting. It is designed to serve as a definitive guide for understanding, implementing, and managing effective rate limiting strategies for your API ecosystem.


1. Introduction to API Rate Limiting

API Rate Limiting is a crucial mechanism used to control the number of requests a client can make to an API within a given time window. It acts as a protective layer, ensuring the stability, availability, and fair usage of your API resources.

Why is API Rate Limiting Essential?

  • Prevent Abuse and Misuse: Mitigates denial-of-service (DoS) attacks, brute-force attempts, and scraping.
  • Ensure Fair Usage: Prevents a single client from monopolizing server resources, ensuring all legitimate users receive adequate service.
  • Maintain System Stability: Protects backend services from being overloaded, preventing crashes and performance degradation.
  • Manage Costs: For cloud-based services, limiting requests can help control infrastructure costs associated with excessive resource consumption.
  • Enforce Business Policies: Allows for differentiated access tiers (e.g., free vs. premium, higher limits for partners).

2. Core Concepts and Mechanisms

At its heart, rate limiting involves tracking incoming requests and applying predefined rules to determine if a request should be allowed or denied.

  • Client Identification: Requests are typically identified by IP address, API key, user ID, or OAuth token.
  • Counters/Buckets: Mechanisms to track the number of requests or available "tokens" for a client over a period.
  • Time Windows: The duration over which requests are counted (e.g., per second, per minute, per hour).
  • Thresholds: The maximum number of requests allowed within a time window.
  • Response to Exceeding Limits: Typically an HTTP 429 Too Many Requests status code, often accompanied by Retry-After headers.

3. Common Rate Limiting Algorithms

Different algorithms offer varying trade-offs in terms of complexity, fairness, and performance.

3.1. Fixed Window Counter

  • How it Works: Divides time into fixed-size windows (e.g., 60 seconds). Each request increments a counter for the current window. If the counter exceeds the limit, further requests are blocked until the next window starts.
  • Pros: Simple to implement, low overhead.
  • Cons: Prone to "burstiness" at window edges. A client could make N requests at the very end of one window and another N at the very start of the next, effectively making 2N requests within a span of a few seconds straddling the boundary.
  • Example: 100 requests/minute. If a client makes 100 requests at 0:59 and 100 at 1:01, that is 200 requests spread across two windows, but also 200 requests within about two seconds.
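A minimal sketch of the fixed-window counter described above; the class and parameter names are illustrative, and the test-friendly `now` parameter stands in for the current wall-clock time:

```python
import time

class FixedWindowCounter:
    """Fixed-window counter: counts requests per client in aligned windows."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window_seconds = window_seconds
        self._counts = {}  # (client_id, window_index) -> request count

    def allow_request(self, client_id, now=None):
        now = time.time() if now is None else now
        # All timestamps in the same fixed window share one index.
        window = int(now // self.window_seconds)
        key = (client_id, window)
        count = self._counts.get(key, 0)
        if count >= self.limit:
            return False
        self._counts[key] = count + 1
        return True
```

Note how the last assertion in a quick test exposes the edge-burstiness: a request denied at second 59 is allowed again at second 60, because a fresh window has started.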

3.2. Sliding Window Log

  • How it Works: Stores a timestamp for each request made by a client. When a new request arrives, it counts how many timestamps fall within the current sliding window (e.g., the last 60 seconds). Older timestamps are discarded.
  • Pros: Highly accurate and fair, avoids the "burstiness" problem of fixed windows.
  • Cons: High memory consumption (stores all timestamps), computationally more expensive for large numbers of requests.
  • Example: 100 requests/minute. The system checks all request timestamps from the last 60 seconds. If there are 100 or more, the new request is denied.

3.3. Sliding Window Counter (Hybrid)

  • How it Works: A hybrid approach. It keeps a fixed-window counter for the current window and folds in the previous window's count, weighted by how much of the previous window still overlaps the sliding window. For example, if 30% of the current window has passed, the estimate is the full current window's count plus 70% of the previous window's count.
  • Pros: Balances accuracy with lower memory usage than sliding window log, mitigates the "burstiness" problem of fixed windows.
  • Cons: Still an approximation, not perfectly accurate.
  • Example: 100 requests/minute. At 0:30 (halfway through the current minute), the estimate is the current minute's count plus 50% of the previous minute's count; a new request is allowed only if that estimate is below 100.
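One common formulation of this estimate (the full current count plus the overlap-weighted previous count) can be sketched as a single function over a dict of fixed-window counters; all names here are illustrative:

```python
import time

def sliding_window_allow(counts, client_id, limit, window_seconds, now=None):
    """Approximate sliding-window check using two fixed-window counters.

    counts: dict mapping (client_id, window_index) -> request count.
    Records the request and returns True if the weighted estimate is under limit.
    """
    now = time.time() if now is None else now
    window = int(now // window_seconds)
    elapsed = (now % window_seconds) / window_seconds  # fraction of current window passed
    prev = counts.get((client_id, window - 1), 0)
    curr = counts.get((client_id, window), 0)
    # Previous window contributes only its still-overlapping fraction.
    estimate = prev * (1.0 - elapsed) + curr
    if estimate >= limit:
        return False
    counts[(client_id, window)] = curr + 1
    return True
```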

3.4. Token Bucket

  • How it Works: Conceptually, tokens are added to a "bucket" at a fixed rate. Each request consumes one token. If the bucket is empty, the request is denied. The bucket has a maximum capacity, allowing for bursts up to that capacity.
  • Pros: Allows for controlled bursts, smooths out traffic, simple to implement.
  • Cons: Can be difficult to choose optimal bucket size and refill rate.
  • Example: Bucket refills at 10 tokens/second, max capacity 100 tokens. A client can make 100 requests instantly (using all tokens), then must wait for tokens to refill.
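A lazy-refill token bucket along these lines might look like the sketch below; tokens are topped up on demand rather than by a background timer, and the `now` parameter is only for testability:

```python
import time

class TokenBucket:
    """Token bucket: refills at `rate` tokens/second up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)   # start full, allowing an initial burst
        self.last = time.monotonic()

    def allow_request(self, now=None):
        now = time.monotonic() if now is None else now
        # Lazily credit tokens accrued since the last check, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Starting the bucket full is a design choice: it lets a new client burst up to `capacity` immediately, which matches the example above.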

3.5. Leaky Bucket

  • How it Works: Requests are added to a queue (the bucket). Requests "leak" out of the bucket at a constant rate, processing them sequentially. If the bucket is full, new requests are dropped.
  • Pros: Smooths out bursty traffic, ensures a steady processing rate.
  • Cons: Can introduce latency if the bucket fills up, requests might be dropped even if the server isn't overloaded.
  • Example: Bucket can hold 50 requests, leaks 10 requests/second. If 60 requests arrive instantly, 10 are dropped, and the remaining 50 are processed at 10/second.
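A queue-based leaky bucket matching the mechanics above can be sketched as follows; the drain is modeled as an explicit `leak()` step (in practice driven by a timer or worker), and all names are illustrative:

```python
class LeakyBucket:
    """Leaky bucket: a bounded queue drained at a fixed rate.

    Incoming requests are queued if there is room, otherwise dropped;
    leak() processes up to `leak_rate` requests per invocation.
    """

    def __init__(self, capacity, leak_rate):
        self.capacity = capacity
        self.leak_rate = leak_rate
        self.queue = []

    def add_request(self, request):
        if len(self.queue) >= self.capacity:
            return False  # bucket full: drop the request
        self.queue.append(request)
        return True

    def leak(self):
        """Drain up to leak_rate queued requests; returns those processed."""
        processed = self.queue[:self.leak_rate]
        self.queue = self.queue[self.leak_rate:]
        return processed
```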

4. Design and Implementation Considerations

Implementing an effective API rate limiter requires careful planning across several dimensions.

4.1. Where to Implement

  • API Gateway/Edge Layer: Highly recommended. Offloads rate limiting logic from individual services, centralizes policy management, and protects all downstream services. Examples: AWS API Gateway, Azure API Management, Kong, Apigee, NGINX.
  • Service Layer (Middleware): Implemented within individual microservices or applications. Useful for fine-grained, service-specific limits or when an API Gateway is not used. Can lead to duplicated logic and management overhead if not carefully designed.

4.2. Granularity of Limits

  • Per IP Address: Simple, but problematic for NAT environments (multiple users share one IP) or VPNs.
  • Per API Key/OAuth Token: Most common and effective. Allows for user-specific or application-specific limits.
  • Per Authenticated User ID: Ideal for user-facing APIs where each user has a distinct identity.
  • Per Endpoint/Resource: Allows different limits for different API calls (e.g., GET /products might have higher limits than POST /orders).
  • Combined: Often, a combination (e.g., per API key, but also a lower global IP limit) offers the best protection.

4.3. Defining Rate Limits

  • Requests Per Second (RPS): Common for high-throughput APIs.
  • Requests Per Minute (RPM) / Per Hour (RPH): Suitable for less frequent operations.
  • Burst Limits: Maximum number of requests allowed instantly before throttling begins (often used with Token Bucket).
  • Concurrent Requests: Limiting the number of parallel active requests.

4.4. Response to Exceeding Limits

  • HTTP Status Code 429 Too Many Requests: Standard and expected.
  • Retry-After Header: Crucial for informing clients how long they should wait before retrying. Can be an absolute timestamp or a number of seconds.
  • Clear Error Message: Provide a human-readable message explaining the limit and how to resolve it (e.g., "Rate limit exceeded. Try again in 60 seconds.").
  • Logging and Monitoring: Log rate limit breaches for security analysis and operational insights.

4.5. Handling Distributed Systems

  • Centralized State: For distributed microservices, rate limit counters must be synchronized. Redis is a popular choice for storing and managing rate limit states across multiple instances.
  • Consistent Hashing: Distributing rate limit checks across multiple nodes efficiently.

4.6. Edge Cases and Special Considerations

  • Whitelisting/Exemptions: Allow specific IPs or API keys (e.g., internal services, critical partners) to bypass rate limits.
  • Graceful Degradation: Instead of hard blocking, consider returning cached data or a simplified response when under extreme load.
  • Client-Side Behavior: Educate clients on implementing exponential backoff and respecting Retry-After headers.
  • Monitoring and Alerting: Set up alerts for frequent rate limit breaches, which could indicate an attack or a misconfigured client.

5. Implementation Strategies (High-Level)

5.1. Using an API Gateway (Recommended)

  • Cloud-Native:

* AWS API Gateway: Built-in throttling and quotas configurable per stage, method, or API key.

* Azure API Management: Policies for rate limits, quotas, and concurrent call limits.

* Google Cloud Apigee: Advanced rate limiting, spike arrest, and quota management.

  • Self-Hosted/Open Source:

* Kong Gateway: Plugins for rate limiting (e.g., rate-limiting, rate-limiting-advanced).

* NGINX/NGINX Plus: limit_req and limit_conn directives provide robust rate limiting and connection limiting.

* Envoy Proxy: Comprehensive rate limit filters.

5.2. Custom Middleware

  • Language-Specific Frameworks: Many web frameworks (e.g., Node.js Express, Python Flask/Django, Java Spring Boot) have middleware or libraries for rate limiting.
  • Redis-backed: Use Redis to store counters/timestamps. This is highly scalable and performant for distributed systems.

* Example: Incrementing a key in Redis for each request, setting a TTL, and checking the count. For more advanced algorithms, Redis sorted sets can store timestamps for sliding window log.

  • In-Memory (Single Instance): Simple for single-instance applications but not suitable for scaling or distributed environments.
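The INCR-plus-TTL pattern mentioned above can be sketched as a small function. It only assumes an object exposing redis-py's `incr` and `expire` methods, so a stub client works for local testing; the key format is illustrative:

```python
def fixed_window_allow(client, client_id, limit, window_seconds):
    """Fixed-window check via Redis INCR + EXPIRE.

    `client` is any object exposing redis-py's incr/expire methods.
    """
    key = f"rate_limit:{client_id}"
    count = client.incr(key)  # atomically increment the window's counter
    if count == 1:
        # First request in the window: start the TTL so the key (and the
        # count) disappears when the window ends.
        client.expire(key, window_seconds)
    return count <= limit
```

Because INCR is atomic in Redis, this works unchanged across multiple application instances sharing one Redis, though it inherits the fixed window's edge-burstiness.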

5.3. Client-Side Considerations

  • Exponential Backoff: Clients should implement a strategy to progressively increase the wait time between retries after encountering a 429 error.
  • Respect Retry-After: Clients must parse and adhere to the Retry-After header provided by the server.
  • Circuit Breakers: Implement client-side circuit breakers to prevent continuous hammering of an overloaded API.
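A client-side retry loop combining the first two points, exponential backoff with jitter plus honoring Retry-After, might look like this; the `send` callable and its response shape (a `status_code` and a `headers` mapping, as in the requests library) are illustrative:

```python
import random
import time

def request_with_backoff(send, max_retries=5, base_delay=1.0):
    """Retry `send()` on 429 responses, honoring Retry-After when present."""
    for attempt in range(max_retries):
        resp = send()
        if resp.status_code != 429:
            return resp
        retry_after = resp.headers.get("Retry-After")
        if retry_after is not None:
            delay = float(retry_after)               # server-specified wait
        else:
            delay = base_delay * (2 ** attempt)      # 1s, 2s, 4s, ...
            delay += random.uniform(0, delay * 0.1)  # jitter avoids thundering herds
        time.sleep(delay)
    return resp  # still rate limited after max_retries attempts
```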

6. Best Practices

  1. Start with Sensible Defaults, then Iterate: Begin with reasonable rate limits based on expected usage, then monitor and adjust based on real-world traffic patterns and performance metrics.
  2. Document Clearly: Provide clear, comprehensive documentation for API consumers detailing your rate limiting policies, error responses, and recommended retry strategies.
  3. Provide Informative Error Messages: Ensure error responses are specific enough for developers to understand why their request was denied and how to proceed.
  4. Monitor and Alert: Continuously monitor rate limit breaches. High rates of 429 errors could indicate an attack, a buggy client, or that your limits are too restrictive.
  5. Educate Clients: Actively communicate best practices for interacting with your rate-limited API, including exponential backoff and respecting Retry-After headers.
  6. Offer Differentiated Tiers: Implement different rate limits for various user groups (e.g., free, premium, enterprise) to monetize your API or support different business needs.
  7. Consider Global vs. Local Limits: Decide if limits apply across all endpoints or per specific endpoint, balancing protection with usability.
  8. Avoid Single Points of Failure: Ensure your rate limiting infrastructure itself is highly available and scalable.

7. Actionable Recommendations

Based on this comprehensive review, we recommend the following steps for implementing and optimizing your API Rate Limiter:

  1. Leverage an API Gateway: Prioritize implementing rate limiting at the API Gateway layer (e.g., AWS API Gateway, Kong, NGINX) for centralized control, performance, and protection of downstream services.
  2. Adopt the Sliding Window Counter Algorithm: This algorithm offers a good balance between accuracy, fairness, and performance, mitigating the burstiness issues of fixed windows without the high memory cost of the sliding window log.
  3. Define Granular Limits: Implement rate limits primarily per API Key/OAuth Token and per Authenticated User ID. Supplement this with a lower per IP address limit as a secondary defense.
  4. Implement Retry-After Headers: Ensure all 429 responses include a Retry-After header to guide client retry logic effectively.
  5. Establish Centralized State with Redis: For distributed API services, utilize Redis to maintain a synchronized, high-performance state for your rate limit counters.
  6. Set Up Robust Monitoring and Alerting: Configure alerts for sustained periods of high 429 responses or unusual traffic patterns to quickly identify and address potential issues.
  7. Document API Rate Limits Thoroughly: Publish clear documentation on your developer portal detailing your rate limiting policies, including specific limits, error codes, and recommended client-side retry strategies.
  8. Regularly Review and Tune Limits: Conduct periodic reviews of your rate limit policies based on API usage patterns, system performance, and business requirements.

8. Conclusion

API Rate Limiting is an indispensable component of a robust and resilient API architecture. By carefully selecting the right algorithms, implementing them at the appropriate layer, and continuously monitoring their effectiveness, you can safeguard your API resources, ensure optimal performance, and provide a reliable experience for all your consumers. This detailed documentation serves as a foundation for achieving these critical objectives.

"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}
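The four steps of the Sliding Window Log algorithm described in section 3.1 can be sketched as a small in-memory limiter. This is a minimal illustration, not a production implementation: the class name, the keyword-only `now` parameter (injected for testability), and the choice to record a timestamp only for permitted requests (so denied requests do not consume quota) are assumptions of this sketch. A production deployment would typically back the log with a shared store such as Redis rather than process-local memory.

```python
import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional


class SlidingWindowLogRateLimiter:
    """Sliding Window Log rate limiter (in-memory sketch).

    Keeps a per-client deque of request timestamps. A request is
    allowed only if fewer than `limit` requests fall within the
    trailing `window_seconds` interval.
    """

    def __init__(self, limit: int, window_seconds: float) -> None:
        self.limit = limit
        self.window = window_seconds
        self._logs: Dict[str, Deque[float]] = defaultdict(deque)

    def allow_request(self, client_id: str, now: Optional[float] = None) -> bool:
        # `now` is injectable for testing; real callers omit it.
        now = time.monotonic() if now is None else now
        log = self._logs[client_id]

        # Step 2: filter out timestamps older than the window.
        cutoff = now - self.window
        while log and log[0] <= cutoff:
            log.popleft()

        # Steps 3-4: count remaining entries and check the limit.
        if len(log) < self.limit:
            # Step 1 (variant): record the timestamp only when the
            # request is permitted, so rejected requests are free.
            log.append(now)
            return True
        return False


# Example: 2 requests allowed per 1-second window for each client key.
limiter = SlidingWindowLogRateLimiter(limit=2, window_seconds=1.0)
```

Memory usage grows with the number of requests in flight per client, which is the accuracy-for-memory trade-off noted under the algorithm's disadvantages; algorithms such as the sliding window counter approximate the same behavior with constant memory per client.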