API Rate Limiter

API Rate Limiter: Production-Ready Implementation (Step 2 of 3)

1. Deliverable Overview

This document provides a comprehensive, detailed, and production-ready implementation for an API Rate Limiter. The solution utilizes Python with Flask as the web framework and Redis as the high-performance, in-memory data store for tracking request counts and timestamps.

The chosen algorithm is a Sliding Window Log, which offers high accuracy by tracking individual request timestamps within a defined window. This approach effectively mitigates the "burst" problem often seen with simple fixed-window counters at the window edges.

This output includes:

  • An overview of the key concepts behind API rate limiting.
  • The rationale for the chosen Sliding Window Log approach, with its advantages and trade-offs.
  • The prerequisites for running the code.
  • The full implementation of the Redis-backed rate limiter class.

2. Key Concepts of API Rate Limiting

API rate limiting is a crucial mechanism to control the number of requests a client can make to a server within a given timeframe. It serves several critical purposes:

  • Protecting backend services from abuse, DoS attacks, and accidental overload.
  • Ensuring fair usage of shared resources among all clients.
  • Controlling infrastructure costs driven by excessive consumption.
  • Maintaining service stability and predictable performance under load.

3. Chosen Approach: Sliding Window Log with Redis

3.1 Why this Approach?

The Sliding Window Log algorithm, implemented with Redis Sorted Sets, is an excellent choice for robust API rate limiting due to its accuracy and efficiency:

  • Accuracy: every request timestamp is tracked, so there is no burstiness at window boundaries.
  • Efficiency: the Redis Sorted Set operations involved (ZADD, ZREMRANGEBYSCORE, ZCARD) are fast and can be batched in a single pipeline.
  • Distribution: because the state lives in Redis, limits are enforced consistently across multiple application instances.

3.2 How it Works

  1. Request Arrival: When a request arrives, the current timestamp (in milliseconds or microseconds) is recorded.
  2. Window Definition: A window_size (e.g., 60 seconds) is defined.
  3. Cleanup Old Requests: All timestamps in the Redis Sorted Set for the given client key that are older than current_timestamp - window_size are removed. This ensures only requests within the current sliding window are considered.
  4. Count Remaining Requests: The number of remaining timestamps in the Sorted Set is counted.
  5. Check Limit: If the count is less than the limit, the request is allowed. The current timestamp is then added to the Sorted Set.
  6. Reject Request: If the count is equal to or exceeds the limit, the request is rejected with a 429 Too Many Requests HTTP status code.
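The six steps above can be sketched without Redis as a minimal single-process version (the Redis-backed class later in this document is the distributed equivalent of the same logic):

```python
import time
from collections import defaultdict, deque
from typing import Optional


class InMemorySlidingWindowLog:
    """Single-process Sliding Window Log: one timestamp deque per client."""

    def __init__(self, limit: int, window_size_seconds: float):
        self.limit = limit
        self.window = window_size_seconds
        self.requests = defaultdict(deque)  # client_id -> deque of timestamps

    def allow_request(self, client_id: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now       # step 1: record arrival time
        log = self.requests[client_id]
        while log and log[0] <= now - self.window:      # step 3: drop entries older than the window
            log.popleft()
        if len(log) < self.limit:                       # steps 4-5: count and compare to the limit
            log.append(now)                             # step 5: record the allowed request
            return True
        return False                                    # step 6: caller should answer 429
```

The optional `now` parameter exists only so the sliding behaviour can be exercised deterministically; in production the wall clock is used.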

3.3 Advantages

  • Highly accurate: the limit is enforced over a true sliding window, with no bursts at window boundaries.
  • Simple to reason about: the stored timestamps are exactly the requests being counted.
  • Naturally distributed: all application instances share the same Redis-backed state.

3.4 Disadvantages

  • Memory cost: one sorted-set entry is stored per request per client, which can be expensive for high-volume APIs.
  • More Redis work per request than a simple counter (ZREMRANGEBYSCORE, ZCARD, ZADD on every check).

4. Prerequisites

Before running the code, ensure you have the following installed:

  • Python 3.8 or newer.
  • A running Redis server (local or reachable over the network).
  • The required Python packages: pip install flask redis
5. Implementation Details (Code)

The solution consists of two main parts:
1.  A `RedisRateLimiter` class that encapsulates the core rate limiting logic using Redis.
2.  A Flask application demonstrating how to integrate this rate limiter using a custom decorator.

5.1 rate_limiter.py: Redis-backed Sliding Window Log Rate Limiter Class

This module contains the `RedisRateLimiter` class, which handles all interactions with Redis for rate limiting.


API Rate Limiter Architecture Plan

This document outlines a comprehensive architecture plan for implementing a robust, scalable, and highly available API Rate Limiter. This solution is designed to protect your backend services from abuse, ensure fair resource allocation, and maintain service stability under varying load conditions.


1. Introduction & Rationale

API rate limiting is a critical component for any public or internal API. It controls the number of requests a client can make to an API within a given timeframe.

Key Objectives:

  • Prevent Abuse & DDoS: Safeguard backend services from malicious attacks or accidental overloading.
  • Ensure Fair Usage: Distribute available resources equitably among all clients.
  • Cost Control: Limit excessive resource consumption by individual clients, especially for cloud-based services.
  • Maintain Service Stability: Prevent cascading failures by gracefully degrading service for over-limit clients.
  • Enable Tiered Access: Support different rate limits for various client types (e.g., free vs. premium users).

2. Functional & Non-Functional Requirements

2.1. Functional Requirements

  • Flexible Limiting: Support rate limits per IP address, authenticated user, API key, endpoint, and HTTP method.
  • Configurable Limits: Allow dynamic configuration of rate limits (e.g., 100 requests/minute, 10 requests/second).
  • Enforcement Actions: Block requests (HTTP 429 Too Many Requests) or throttle them.
  • Client Feedback: Communicate rate limit status to clients via HTTP headers (e.g., X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset).
  • Bursting: Allow for small, temporary bursts of requests above the sustained rate.
  • Exclusions: Ability to exempt specific IPs, users, or internal services from rate limits.

2.2. Non-Functional Requirements

  • Performance: Minimal latency overhead (target < 5ms).
  • Scalability: Horizontally scalable to handle millions of requests per second.
  • High Availability: No single point of failure; resilient to component outages.
  • Reliability: Accurate counting and consistent enforcement across distributed instances.
  • Observability: Comprehensive metrics, logging, and alerting for operational insights.
  • Configurability: Centralized and auditable configuration management.
  • Security: Protection against bypass attempts and internal vulnerabilities.

3. Core Components

The API Rate Limiter will be composed of the following key architectural components:

  1. Request Interceptor/Gateway: The entry point for all API requests. It's responsible for intercepting requests and forwarding them to the Rate Limit Service.
  2. Rate Limit Service: The core logic component that applies the chosen rate limiting algorithm, communicates with the distributed store, and decides whether to allow or deny a request.
  3. Distributed State Store: A high-performance, distributed key-value store used to maintain client request counters and timestamps.
  4. Configuration Management: A system for defining and managing rate limit rules (e.g., per-endpoint limits, global limits).

4. Rate Limiting Algorithms

Several algorithms exist, each with different trade-offs in terms of accuracy, resource usage, and handling of request bursts.

  • Fixed Window Counter: Simple to implement. Counts requests within a fixed time window. Prone to burst issues at window boundaries.
  • Sliding Window Log: Stores timestamps of all requests within a window. Highly accurate but memory-intensive for high request volumes.
  • Sliding Window Counter (Recommended): A hybrid approach. Uses a fixed window counter for the current window and estimates counts from the previous window. Offers a good balance of accuracy and efficiency, mitigating the burst problem of fixed windows.
  • Token Bucket: Clients consume "tokens" to make requests. Tokens are refilled at a fixed rate. Excellent for traffic shaping and allowing bursts up to the bucket capacity.
  • Leaky Bucket: Similar to Token Bucket but smooths out bursts by processing requests at a constant rate, queuing excess requests.

Recommendation:

We recommend starting with the Sliding Window Counter algorithm due to its balance of accuracy, efficiency, and robustness against burst traffic. For more advanced traffic shaping and predictable throughput, the Token Bucket algorithm can be considered as a future enhancement or for specific high-priority APIs.
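A minimal sketch of the recommended Sliding Window Counter: the estimate is the current fixed window's count plus the previous window's count weighted by how much of it still overlaps the sliding window. The `now` parameter is for deterministic illustration only:

```python
import time
from collections import defaultdict
from typing import Optional


class SlidingWindowCounter:
    """Estimate = prev_window_count * overlap_fraction + current_window_count."""

    def __init__(self, limit: int, window_seconds: int):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(int)  # (client_id, window_index) -> count

    def allow_request(self, client_id: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        idx = int(now // self.window)                 # current fixed window index
        elapsed = (now % self.window) / self.window   # fraction of current window elapsed
        prev = self.counts[(client_id, idx - 1)]
        curr = self.counts[(client_id, idx)]
        # Weighted estimate over the sliding window.
        estimated = prev * (1.0 - elapsed) + curr
        if estimated < self.limit:
            self.counts[(client_id, idx)] += 1
            return True
        return False
```

With a 60-second window and a request arriving 30 seconds in, this counts 50% of the previous window plus all of the current window, exactly as described above.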


5. High-Level Architecture Design

The proposed architecture leverages a distributed approach, placing the rate limiting logic close to the API gateway layer for optimal performance and integration.

5.1. Deployment Strategy

  • Gateway Layer Integration: The rate limiter will be integrated at the API Gateway level (e.g., Nginx, Envoy, AWS API Gateway, Azure API Management). This provides a centralized enforcement point before requests reach backend services.

* Pros: Centralized control, transparent to backend services, efficient for common limits.

* Cons: Can become a bottleneck if not scaled properly, potential for single point of failure if not highly available.

5.2. Data Storage

  • Redis Cluster: A highly performant, in-memory data store with atomic operations is ideal for managing rate limit counters.

* Pros: Extremely fast read/write operations, supports atomic increments, built-in expiration (TTL), excellent for distributed counters, high availability through Sentinel/Cluster.

* Cons: Requires careful management of memory, potential for data loss on hard crashes (though AOF persistence mitigates this).

5.3. Architecture Flow

  1. Client Request: A client sends a request to an API endpoint.
  2. API Gateway: The request first hits the API Gateway.
  3. Rate Limit Interceptor: The Gateway's rate limit module (or a dedicated sidecar/plugin) extracts client identifiers (e.g., IP, API Key, User ID) and endpoint information.
  4. Rate Limit Service Call: The interceptor makes a non-blocking call to the central Rate Limit Service.
  5. Rate Limit Service Logic:

* Retrieves the relevant rate limit rules from Configuration Management.

* Queries the Redis Cluster for the current count and timestamps associated with the client identifier.

* Applies the chosen rate limiting algorithm (e.g., Sliding Window Counter).

* Updates the count in Redis atomically.

* Determines if the request is allowed or denied.

  6. Gateway Action:

* Allowed: The Gateway forwards the request to the backend service and adds the X-RateLimit-* headers to the response.

* Denied: The Gateway rejects the request with 429 Too Many Requests, including a Retry-After header where possible.

rate_limiter.py:

```python
import time
import redis
from typing import Tuple


class RedisRateLimiter:
    """
    A Redis-backed API rate limiter using the Sliding Window Log algorithm.

    This algorithm stores timestamps of each request within a sorted set in
    Redis. When a new request comes in, it removes all timestamps older than
    the defined window and then checks whether the remaining count exceeds
    the limit.
    """

    def __init__(self, redis_client: redis.Redis, limit: int, window_size_seconds: int):
        """
        Initializes the RedisRateLimiter.

        Args:
            redis_client: An initialized Redis client instance.
            limit: The maximum number of requests allowed within the window.
            window_size_seconds: The duration of the sliding window in seconds.
        """
        if not isinstance(redis_client, redis.Redis):
            raise TypeError("redis_client must be an instance of redis.Redis")
        if not isinstance(limit, int) or limit <= 0:
            raise ValueError("limit must be a positive integer")
        if not isinstance(window_size_seconds, int) or window_size_seconds <= 0:
            raise ValueError("window_size_seconds must be a positive integer")

        self.redis_client = redis_client
        self.limit = limit
        self.window_size_seconds = window_size_seconds

    def _get_current_timestamp_ms(self) -> int:
        """Returns the current timestamp in milliseconds."""
        return int(time.time() * 1000)

    def _get_key(self, client_id: str) -> str:
        """Generates a Redis key for the given client_id."""
        return f"rate_limit:{client_id}"

    def allow_request(self, client_id: str) -> Tuple[bool, int, int, int]:
        """
        Checks if a request from the given client_id should be allowed.

        Args:
            client_id: A unique identifier for the client (e.g., IP address,
                user ID, API key).

        Returns:
            A tuple: (allowed, remaining, limit, reset_time_epoch_seconds)
            - allowed: True if the request is allowed, False otherwise.
            - remaining: Number of requests remaining in the current window.
            - limit: The maximum allowed requests for the window.
            - reset_time_epoch_seconds: The estimated time (epoch seconds) when
              the oldest request in the window would expire. This is an
              approximation for the sliding window log.
        """
        key = self._get_key(client_id)
        current_time_ms = self._get_current_timestamp_ms()

        # Calculate the start of the current sliding window; all timestamps
        # older than this must be removed.
        window_start_time_ms = current_time_ms - (self.window_size_seconds * 1000)

        # Use a Redis pipeline for atomicity and efficiency.
        pipe = self.redis_client.pipeline()

        # 1. Remove all requests older than the current window start.
        #    ZREMRANGEBYSCORE key min max removes all elements whose score lies
        #    between min and max (inclusive).
        pipe.zremrangebyscore(key, 0, window_start_time_ms)

        # 2. Count the requests remaining in the window. Note this count does
        #    NOT yet include the current request.
        pipe.zcard(key)

        # 3. Add the current request's timestamp to the sorted set. We use
        #    current_time_ms as both score and member for simplicity; two
        #    requests landing in the same millisecond collapse into one member,
        #    so a unique member (e.g., a UUID suffix) should be used where that
        #    matters. The timestamp is added even for rejected requests, so
        #    over-limit clients keep extending their own window.
        pipe.zadd(key, {str(current_time_ms): current_time_ms})

        # 4. Set an expiry on the key so the sets of idle clients are
        #    reclaimed. The expiry is slightly longer than the window size.
        pipe.expire(key, self.window_size_seconds + 5)  # +5 seconds buffer

        results = pipe.execute()

        # The count of requests within the window is the second pipeline result.
        current_requests_count = results[1]

        # Strictly less than: if `limit` requests are already in the window,
        # this one must be rejected (<= here would admit limit + 1 requests).
        allowed = current_requests_count < self.limit
        # The current request consumes one slot when allowed.
        remaining = max(0, self.limit - current_requests_count - 1)

        # For a sliding window log there is no single "reset" instant; a useful
        # approximation for X-RateLimit-Reset is the time when the oldest
        # request in the current window expires.
        oldest_request_score = self.redis_client.zrange(key, 0, 0, withscores=True)
        if oldest_request_score:
            # oldest_request_score looks like [('1678886400000', 1678886400000.0)]
            oldest_timestamp_ms = int(oldest_request_score[0][1])
            reset_time_epoch_seconds = int(
                (oldest_timestamp_ms + self.window_size_seconds * 1000) / 1000
            )
        else:
            # If no requests are in the window, reset is one window from now.
            reset_time_epoch_seconds = int(time.time() + self.window_size_seconds)

        return allowed, remaining, self.limit, reset_time_epoch_seconds

    def get_status(self, client_id: str) -> Tuple[int, int, int]:
        """
        Retrieves the current rate limit status for a client without counting
        a new request.

        Args:
            client_id: The unique identifier for the client.

        Returns:
            A tuple: (remaining, limit, reset_time_epoch_seconds)
        """
        key = self._get_key(client_id)
        current_time_ms = self._get_current_timestamp_ms()
        window_start_time_ms = current_time_ms - (self.window_size_seconds * 1000)

        # Remove old requests (cleanup) and then count.
        self.redis_client.zremrangebyscore(key, 0, window_start_time_ms)
        current_requests_count = self.redis_client.zcard(key)

        remaining = max(0, self.limit - current_requests_count)

        oldest_request_score = self.redis_client.zrange(key, 0, 0, withscores=True)
        if oldest_request_score:
            oldest_timestamp_ms = int(oldest_request_score[0][1])
            reset_time_epoch_seconds = int(
                (oldest_timestamp_ms + self.window_size_seconds * 1000) / 1000
            )
        else:
            reset_time_epoch_seconds = int(time.time() + self.window_size_seconds)

        return remaining, self.limit, reset_time_epoch_seconds
```

API Rate Limiter: Comprehensive Overview and Implementation Guide

This document provides a detailed, professional overview of API Rate Limiters, covering their importance, common algorithms, implementation considerations, and best practices. This deliverable is designed to equip you with a thorough understanding necessary for effective design, implementation, and consumption of rate-limited APIs.


1. Introduction to API Rate Limiting

An API Rate Limiter is a critical component in modern web service architectures that controls the number of requests a client can make to an API within a defined time window. Its primary purpose is to protect the API from various forms of abuse, ensure fair usage among all clients, and maintain the stability and performance of the underlying infrastructure.

2. Why Implement API Rate Limiting? (Benefits)

Implementing an API Rate Limiter offers a multitude of benefits for both API providers and consumers:

  • Protection Against Abuse and Attacks:

* Denial-of-Service (DoS) and Distributed DoS (DDoS) Attacks: Prevents malicious actors from overwhelming servers with an excessive number of requests, rendering the service unavailable.

* Brute-Force Attacks: Limits attempts to guess passwords, API keys, or other credentials.

* Scraping: Deters automated bots from excessively scraping data.

  • Ensuring Fair Usage and Resource Allocation:

* Prevents Resource Exhaustion: Guarantees that no single client can monopolize server resources (CPU, memory, network bandwidth, database connections).

* Fair Access: Ensures all legitimate users receive a consistent and reliable service experience.

* Tiered Service Levels: Enables differentiation of service tiers (e.g., free vs. premium, higher limits for paying customers).

  • Cost Management:

* Reduces operational costs associated with excessive resource consumption and scaling infrastructure unnecessarily.

  • API Stability and Performance:

* Maintains predictable response times and overall API reliability by preventing overload.

  • Fraud Prevention:

* Can limit the rate of specific transactions to prevent fraudulent activities.

3. Core Concepts and Terminology

  • Rate Limit: The maximum number of requests allowed within a specific time window (e.g., 100 requests per minute).
  • Quota: A broader term, often referring to a total number of requests allowed over a longer period (e.g., 10,000 requests per day) or for a specific resource.
  • Throttling: The process of intentionally slowing down the rate of requests, often by delaying responses or returning errors.
  • Burst: A temporary spike in requests that might exceed the sustained rate limit but is allowed for a short duration.
  • Client Identification: The method used to identify a unique client (e.g., IP address, API key, user ID, JWT token).
  • Grace Period/Retry-After: A period specified to the client indicating when they can retry a request after being rate-limited.

4. Common Rate Limiting Algorithms

Different algorithms offer varying trade-offs in terms of complexity, fairness, and burst handling.

4.1. Fixed Window Counter

  • How it works: Divides time into fixed-size windows (e.g., 1 minute). Each window has a counter. When a request comes, the counter for the current window increments. If the counter exceeds the limit, the request is blocked.
  • Pros: Simple to implement and understand.
  • Cons:

* "Burstiness" at Window Edges: A client can make N requests at the very end of one window and N requests at the very beginning of the next window, effectively making 2N requests in a very short period (e.g., 200 requests in 2 seconds if N=100/min). This can still overwhelm the server.

* Doesn't handle bursts well across window boundaries.
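A minimal sketch of the Fixed Window Counter makes the boundary weakness above concrete: N requests at the end of one window and N at the start of the next are all accepted. The `now` parameter is for deterministic illustration only:

```python
import time
from collections import defaultdict
from typing import Optional


class FixedWindowCounter:
    """One counter per (client, fixed window); counters reset at each boundary."""

    def __init__(self, limit: int, window_seconds: int):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(int)

    def allow_request(self, client_id: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        key = (client_id, int(now // self.window))  # which fixed window this falls in
        if self.counts[key] < self.limit:
            self.counts[key] += 1
            return True
        return False
```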

4.2. Sliding Window Log

  • How it works: For each client, it stores a timestamp of every request made. When a new request arrives, it removes all timestamps older than the current time minus the window duration. If the number of remaining timestamps (including the new one) exceeds the limit, the request is rejected.
  • Pros: Highly accurate and provides excellent control over the rate. No "burstiness" at window edges.
  • Cons:

* High Memory Usage: Requires storing a timestamp for every request for every client, which can be memory-intensive for high-volume APIs.

* Computationally more expensive due to timestamp management.

4.3. Sliding Window Counter (Hybrid)

  • How it works: A more practical approach combining aspects of Fixed Window and Sliding Window Log. It calculates the rate based on the current fixed window's count and a weighted average of the previous window's count.

* Example: For a 1-minute window, if a request comes 30 seconds into the current window, it considers 50% of the previous window's count and 50% of the current window's count.

  • Pros: A good balance between accuracy and memory efficiency. Smoother rate limiting than Fixed Window.
  • Cons: Still an approximation, not perfectly precise like Sliding Window Log.

4.4. Token Bucket

  • How it works: Imagine a bucket with a fixed capacity for "tokens." Tokens are added to the bucket at a constant rate. Each incoming request consumes one token. If the bucket is empty, the request is rejected (or queued).
  • Pros:

* Handles Bursts: Allows for bursts of requests up to the bucket's capacity.

* Smooths out the overall request rate.

* Simple to implement.

  • Cons: Can be challenging to tune the bucket size and refill rate perfectly.
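The bucket metaphor above translates directly into a continuous-refill sketch: tokens accumulate at `rate` per second up to `capacity`, and each request spends one. The `start`/`now` parameters exist only so the refill can be exercised deterministically:

```python
import time
from typing import Optional


class TokenBucket:
    """Tokens drip in at `rate` per second up to `capacity`; each request costs one."""

    def __init__(self, rate: float, capacity: float, start: Optional[float] = None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full, so an initial burst up to capacity is allowed
        self.last = time.monotonic() if start is None else start

    def allow_request(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Tuning, as noted above, is the hard part: `capacity` bounds the burst size while `rate` sets the sustained throughput.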

4.5. Leaky Bucket

  • How it works: Imagine a bucket that leaks at a constant rate. Requests arrive and are added to the bucket. If the bucket is full, new requests are discarded. Requests are processed (leak out) at a constant rate.
  • Pros: Enforces a perfectly smooth output rate, regardless of how bursty the input is.
  • Cons:

* Queueing: Can introduce latency if requests are queued.

* If the bucket is full, requests are lost, which might not be desirable for all applications.

* Less suitable for APIs where immediate feedback is crucial.

5. Key Considerations for Implementation

When designing and implementing an API Rate Limiter, consider the following:

  • Granularity of Limits:

* Global: A single limit across all API endpoints.

* Per Endpoint/Resource: Different limits for different APIs (e.g., GET /users might have a higher limit than POST /users).

* Per HTTP Method: Different limits for GET vs. POST.

  • Client Identification:

* IP Address: Easiest, but problematic for NAT/proxies (multiple users share one IP) or VPNs (one user can change IP).

* API Key/Access Token: Most robust for authenticated clients.

* User ID: Ideal for logged-in users.

* Hybrid: Combine IP for unauthenticated requests and API Key/User ID for authenticated ones.

  • Distributed Systems:

* If your API is deployed across multiple servers, the rate limiter must be distributed (e.g., using a shared Redis cache) to ensure consistent limits across all instances.

  • Rate Limit Policy:

* What happens when a limit is exceeded?

* Reject Request: Return a 429 Too Many Requests HTTP status code.

* Delay Request: Queue the request (Leaky Bucket), but this can add latency.

* Degrade Service: Return partial data or a simpler response.

  • Informative Responses (HTTP Headers):

* Communicate rate limit status to clients using standard HTTP headers:

* X-RateLimit-Limit: The total number of requests allowed in the current window.

* X-RateLimit-Remaining: The number of requests remaining in the current window.

* X-RateLimit-Reset: The time (usually Unix epoch seconds or UTC string) when the current window resets.

* Retry-After: (On 429 responses) The number of seconds to wait before making another request.

  • Edge Cases and Whitelisting:

* Consider whitelisting internal services, monitoring tools, or trusted partners from rate limits.

* Handle edge cases like invalid API keys or malformed requests (should they count towards the limit?).

  • Scalability and Performance of the Rate Limiter Itself:

* The rate limiter should not become a bottleneck. Using efficient data stores (like Redis) for counters is crucial.
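Two of the considerations above, hybrid client identification and informative headers, fit in a few lines. The helper names and the `X-API-Key` header are illustrative choices, not a fixed convention:

```python
from typing import Mapping, Optional


def client_identifier(headers: Mapping[str, str], remote_addr: str) -> str:
    """Hybrid identification: prefer the API key, fall back to the IP address."""
    api_key = headers.get("X-API-Key")
    return f"key:{api_key}" if api_key else f"ip:{remote_addr}"


def rate_limit_headers(limit: int, remaining: int, reset_epoch: int,
                       retry_after: Optional[int] = None) -> dict:
    """Build the standard informational headers; Retry-After only on 429 responses."""
    headers = {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(remaining),
        "X-RateLimit-Reset": str(reset_epoch),
    }
    if retry_after is not None:
        headers["Retry-After"] = str(retry_after)
    return headers
```

Prefixing the identifier ("key:", "ip:") keeps the two namespaces from colliding in the shared counter store.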

6. Best Practices for API Consumers

Clients consuming rate-limited APIs should adhere to the following best practices:

  • Respect 429 Too Many Requests: Do not immediately retry upon receiving a 429.
  • Implement Backoff and Retry Logic:

* Exponential Backoff: Wait an exponentially increasing amount of time between retries (e.g., 1s, 2s, 4s, 8s).

* Jitter: Add a small random delay to the backoff time to prevent all clients from retrying simultaneously at the same interval, creating a "thundering herd" problem.

  • Respect Retry-After Header: If provided, wait at least this many seconds before retrying.

  • Monitor Rate Limit Headers: Parse X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset to proactively manage request rates and avoid hitting limits.
  • Cache Responses: Cache API responses where appropriate to reduce the number of requests needed.
  • Batch Requests: If the API supports it, combine multiple operations into a single request to reduce the overall request count.
  • Use Webhooks (if available): Instead of polling an API frequently, subscribe to webhooks to receive real-time updates, significantly reducing request volume.
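The retry guidance above can be sketched as a small client helper: honor Retry-After when present, otherwise back off exponentially with full jitter. `sleep` and `rng` are injectable only so the sketch is testable without real waiting:

```python
import random
import time
from typing import Callable, Optional, Tuple


def retry_with_backoff(call: Callable[[], Tuple[int, Optional[float]]],
                       max_retries: int = 5, base_delay: float = 1.0,
                       sleep=time.sleep, rng=random.random) -> int:
    """Retry `call` on 429 responses with exponential backoff plus jitter.

    `call` returns (status_code, retry_after_seconds_or_None).
    """
    for attempt in range(max_retries):
        status, retry_after = call()
        if status != 429:
            return status
        if retry_after is not None:
            delay = retry_after                          # honor Retry-After exactly
        else:
            delay = base_delay * (2 ** attempt) * rng()  # 1s, 2s, 4s... scaled by jitter
        sleep(delay)
    return 429  # still rate-limited after all retries
```

Multiplying by a random factor (full jitter) spreads out retries from many clients, avoiding the thundering-herd effect described above.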

7. Monitoring and Alerting

Effective monitoring is crucial for a well-managed rate limiting system:

  • Key Metrics to Monitor:

* Number of 429 Too Many Requests responses.

* Number of requests blocked by the rate limiter.

* Rate limit usage per client/API key.

* Overall API request rate.

  • Alerting:

* Set up alerts for spikes in 429 errors or when specific clients consistently hit their limits.

* Alert on potential DoS/DDoS attempts detected by the rate limiter.

  • Logging:

* Log rate limiting events (who, when, what limit, what action taken) for auditing and analysis.

8. Conclusion

API Rate Limiters are an indispensable defense mechanism for protecting and optimizing API performance and availability. By carefully selecting an appropriate algorithm, considering implementation details, and guiding API consumers with clear best practices, you can build a robust and resilient API ecosystem. Proactive monitoring and adherence to these principles will ensure your API remains stable, secure, and available for all legitimate users.


\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}