
API Rate Limiter: Architectural Design Document

Date: October 26, 2023

Version: 1.0

Status: Draft

Prepared For: Customer Deliverable


1. Executive Summary

This document outlines a comprehensive architectural plan for an API Rate Limiter. The proposed solution aims to protect backend services from abuse, ensure fair usage, prevent resource exhaustion, and enhance system stability. We will explore various architectural patterns, identify key components, recommend suitable technologies, and detail the operational considerations for a robust, scalable, and highly available rate limiting system. The focus is on a centralized, distributed approach capable of handling high traffic volumes with minimal latency overhead.

2. Introduction & Objectives

2.1. Purpose of an API Rate Limiter

An API Rate Limiter controls the number of requests a client can make to an API within a defined time window. This mechanism is crucial for:

  • Preventing Abuse & DoS Attacks: Mitigating Distributed Denial of Service (DDoS) attacks and brute-force attempts.
  • Ensuring Fair Usage: Allocating resources equitably among consumers, preventing a single client from monopolizing services.
  • Cost Control: Managing infrastructure costs by preventing excessive usage that could scale up resources unnecessarily.
  • Maintaining Stability: Protecting backend services from being overloaded, leading to improved reliability and uptime.
  • Monetization & Tiering: Enabling different service levels (e.g., free tier vs. premium tier with higher limits).

2.2. Key Functional Requirements

The API Rate Limiter will support the following core functionalities:

  • Limit Enforcement: Restrict API requests based on predefined rules (e.g., requests per second, requests per minute).
  • Configurable Rules: Allow dynamic configuration of rate limits per client identifier, API endpoint, or API group.
  • Multiple Algorithms: Support various rate limiting algorithms (e.g., Fixed Window, Sliding Window Log, Sliding Window Counter, Token Bucket).
  • Client Identification: Identify clients using various attributes (e.g., IP address, API Key, User ID from a JWT).
  • Error Responses: Return standardized HTTP 429 (Too Many Requests) responses with Retry-After headers for rejected requests.
  • Whitelisting/Blacklisting: Ability to exempt specific clients from rate limits or block them entirely.
  • Metrics & Monitoring: Provide visibility into request rates, blocked requests, and system health.

2.3. Key Non-Functional Requirements

The solution must adhere to the following non-functional requirements:

  • Scalability: Horizontally scalable to handle increasing request volumes and client bases.
  • Low Latency: Introduce minimal overhead to API request processing (target < 5ms).
  • High Availability: Ensure continuous operation even with component failures.
  • Fault Tolerance: Gracefully handle failures in dependent services (e.g., cache, configuration).
  • Configurability & Flexibility: Allow for easy updates and adjustments to rate limiting rules without service downtime.
  • Security: Protect the rate limiting service itself from attacks and ensure secure communication.
  • Observability: Provide comprehensive logging, metrics, and tracing for debugging and performance analysis.

3. Architectural Approaches

Several approaches can be taken for implementing rate limiting, each with its trade-offs:

  • Client-Side Rate Limiting: Not recommended as it's easily bypassed and offers no real protection.
  • Application-Level Rate Limiting: Implementing rate limiting logic directly within each microservice.

* Pros: Granular control, easy for small deployments.

* Cons: Duplication of effort, inconsistent enforcement, difficult to manage across many services, poor for distributed counts.

  • Dedicated Service (Sidecar/Centralized): A standalone service responsible solely for rate limiting.

* Pros: Centralized logic, consistent enforcement, decoupled from application logic, easily scalable.

* Cons: Adds a network hop; a potential single point of failure if not designed for high availability.

  • API Gateway/Reverse Proxy Level Rate Limiting: Leveraging an existing API Gateway or reverse proxy to enforce limits.

* Pros: Centralized, often built-in capabilities, minimal impact on backend services, can handle initial request filtering.

* Cons: May require custom plugins for advanced algorithms or complex rules, can become a bottleneck if not scaled properly.

Recommended Approach: A centralized, distributed rate limiting service integrated with an API Gateway/Reverse Proxy. This combines the benefits of centralized control and enforcement at the edge with the flexibility and scalability of a dedicated service using a distributed cache.

4. Detailed Architecture Design

4.1. Core Components

The proposed architecture consists of the following key components:

  1. API Gateway / Reverse Proxy:

* Function: The entry point for all client requests. It intercepts requests, extracts relevant client identifiers (IP, API Key, User ID), and forwards them to the Rate Limiting Service for decision-making. If allowed, it proxies the request to the backend service; otherwise, it returns an HTTP 429 response.

* Examples: Nginx, Envoy Proxy, Kong, AWS API Gateway, Google Cloud Endpoints, Azure API Management.

  2. Rate Limiting Service (RLS):

* Function: The brain of the system. It receives request metadata from the API Gateway, applies the configured rate limiting algorithm, queries/updates the Distributed Cache, and returns an ALLOW/REJECT decision. This service should be stateless and horizontally scalable.

* Implementation: Can be a custom microservice (e.g., Go, Java, Python) or a specialized rate limiting component within the Gateway (e.g., Envoy's native rate limiter with an external Redis backend).

  3. Distributed Cache / Data Store:

* Function: Stores the real-time state for rate limiting (e.g., current counts, timestamps, token buckets). Crucial for ensuring consistent rate limiting across multiple instances of the Rate Limiting Service. Must support atomic operations.

* Recommendation: Redis is highly recommended due to its in-memory performance, atomic operations (e.g., INCR, EXPIRE, sorted sets), and high availability features (Redis Cluster, Sentinel).

  4. Configuration Service / Store:

* Function: Stores all rate limiting rules, policies, and client configurations (e.g., limits per endpoint, per client type, whitelisted IPs). This allows for dynamic updates without redeploying the RLS.

* Examples: Consul, etcd, Kubernetes ConfigMaps, dedicated database, or even simple YAML files managed via a GitOps pipeline.

  5. Monitoring & Alerting System:

* Function: Collects metrics from the API Gateway and Rate Limiting Service (e.g., total requests, allowed requests, blocked requests, RLS latency). Provides dashboards and triggers alerts on critical events (e.g., high block rate, RLS errors).

* Examples: Prometheus & Grafana, Datadog, New Relic.

  6. Logging System:

* Function: Captures detailed logs from all components for auditing, debugging, and post-incident analysis.

* Examples: ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, CloudWatch Logs, Google Cloud Logging.
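The INCR/EXPIRE counting pattern mentioned for the Distributed Cache component can be sketched as follows. This is a minimal, self-contained illustration: the in-memory store below only mimics those two Redis operations so the example runs without a server, and all names are illustrative.

```python
import time

class InMemoryCounterStore:
    """Stand-in for a Redis client: supports the INCR + EXPIRE pattern
    so the sketch runs without a server. A real deployment would use a
    Redis client with a pipeline or Lua script to keep the increment
    and expiry atomic."""
    def __init__(self):
        self._data = {}  # key -> [count, expires_at]

    def incr_with_expiry(self, key, ttl_seconds):
        now = time.time()
        entry = self._data.get(key)
        if entry is None or entry[1] <= now:
            self._data[key] = [1, now + ttl_seconds]  # fresh window
            return 1
        entry[0] += 1
        return entry[0]

def is_allowed(store, client_id, limit, window_seconds=60):
    """Fixed-window check: one counter per client per window; allow
    while the counter stays at or below the configured limit."""
    window = int(time.time() // window_seconds)
    key = f"ratelimit:{client_id}:{window}"
    return store.incr_with_expiry(key, window_seconds) <= limit
```

With Redis itself, the same logic is typically wrapped in a single Lua script or MULTI/EXEC pipeline so the increment and expiry cannot be interleaved by concurrent RLS instances.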

4.2. Data Flow & Workflow

The following sequence illustrates the typical flow of a client request through the API Rate Limiter:

  1. Client Request: A client sends an HTTP request to the API Gateway.
  2. Gateway Interception: The API Gateway intercepts the request.
  3. Identifier Extraction: The Gateway extracts relevant client identifiers (e.g., X-Forwarded-For IP, Authorization header for API Key/JWT, User-Agent).
  4. RLS Query: The Gateway sends a lightweight query to the Rate Limiting Service, including the client identifier, requested API path, and timestamp.
  5. RLS Decision Logic:

* The RLS retrieves the applicable rate limiting rules from the Configuration Service (e.g., "100 requests per minute for /api/v1/data for this client type").

* It queries the Distributed Cache (Redis) for the current state (e.g., current request count, last request timestamp for the client).

* It applies the configured rate limiting algorithm (e.g., increments a counter in Redis, checks if a token is available).

* It updates the state in Redis and determines if the request should be ALLOWED or REJECTED.

  6. Decision Return: The RLS returns the decision (ALLOW/REJECT) to the API Gateway, along with a Retry-After value if rejected.
  7. Gateway Action:

* If ALLOWED: The Gateway forwards the request to the appropriate backend service.

* If REJECTED: The Gateway immediately returns an HTTP 429 (Too Many Requests) response to the client, including the Retry-After header.

  8. Logging & Metrics: Both the Gateway and RLS log the request outcome and emit metrics for monitoring.

graph TD
    A[Client] -->|HTTP Request| B(API Gateway / Reverse Proxy)
    B -->|Extract ID, Path| C(Rate Limiting Service)
    C -->|Get Rules| D(Configuration Service)
    C -->|Query/Update State| E(Distributed Cache - Redis)
    E -->|State| C
    D -->|Rules| C
    C -->|ALLOW/REJECT, Retry-After| B
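The gateway-side portion of this workflow can be condensed into a small handler sketch. The header names, the StubLimiter placeholder, and the returned dictionaries are illustrative assumptions, not a prescribed interface.

```python
class StubLimiter:
    """Placeholder for the Rate Limiting Service client; a real one
    would perform the rule lookup and Redis update described above."""
    def __init__(self, allowed, retry_after=0):
        self.allowed, self.retry_after = allowed, retry_after

    def check(self, client_id, path):
        return self.allowed, self.retry_after

def handle_request(headers, path, limiter):
    # Step 3: identifier extraction (API key first, then source IP)
    client_id = headers.get("X-Api-Key") or headers.get("X-Forwarded-For", "anonymous")
    # Steps 4-6: ask the RLS for a decision on this client/path
    allowed, retry_after = limiter.check(client_id, path)
    if allowed:
        # Step 7: forward to the backend service
        return {"action": "forward", "client_id": client_id}
    # Step 7: reject with HTTP 429 and a Retry-After hint
    return {"action": "reject", "status": 429,
            "headers": {"Retry-After": str(retry_after)}}
```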
The companion documentation below outlines the critical aspects of API rate limiting: its benefits, common strategies, design considerations, and best practices.


API Rate Limiter: Comprehensive Documentation

1. Introduction to API Rate Limiting

An API Rate Limiter is a mechanism that controls the number of requests a client can make to an API within a specified time window. Its primary purpose is to protect the API infrastructure from abuse, ensure fair usage among consumers, and maintain service stability and availability.

2. Why API Rate Limiting is Essential

Implementing an API Rate Limiter provides numerous critical benefits:

  • Preventing Abuse and Misuse: Protects against malicious activities such as Denial-of-Service (DoS) attacks, brute-force attempts, and excessive data scraping.
  • Ensuring Fair Usage: Distributes available resources equitably among all API consumers, preventing a single client from monopolizing server capacity.
  • Maintaining Service Stability: Prevents server overload by limiting the incoming request volume, thereby ensuring consistent performance and availability for all users.
  • Cost Management: Reduces infrastructure costs associated with processing an unbounded number of requests, especially in cloud-based environments where resource usage is billed.
  • Security Enhancement: Acts as a first line of defense against certain types of attacks, complementing other security measures.
  • Revenue Protection: For commercial APIs, rate limits can be tied to different service tiers (e.g., free, premium), enabling differentiated service offerings.

3. Common Rate Limiting Algorithms and Strategies

Several algorithms can be employed to implement API rate limiting, each with its own advantages and trade-offs:

  • Fixed Window Counter:

* Mechanism: Divides time into fixed-size windows (e.g., 60 seconds). Each request increments a counter for the current window. If the counter exceeds the limit within the window, subsequent requests are rejected.

* Pros: Simple to implement and understand.

* Cons: Can suffer from a "bursty problem" at window edges, where clients can make a large number of requests at the end of one window and the beginning of the next, effectively doubling the rate within a short period.

* Use Case: Basic rate limiting where edge case bursts are acceptable.

  • Sliding Window Log:

* Mechanism: Stores a timestamp for every request made by a client. When a new request arrives, it counts how many timestamps fall within the current time window (e.g., the last 60 seconds). If this count exceeds the limit, the request is rejected. Old timestamps are periodically purged.

* Pros: Highly accurate, effectively smooths out traffic bursts.

* Cons: Requires storing a potentially large number of timestamps per client, which can be memory-intensive and computationally expensive for high-volume APIs.

* Use Case: Scenarios requiring high accuracy and smooth rate limiting, often for premium tiers or critical endpoints.

  • Sliding Window Counter:

* Mechanism: A hybrid approach. It uses a fixed window counter but also considers the rate from the previous window, weighted by how much of the current window has passed. For example, if 70% of the current window has passed, the counter for the current window is added to 30% of the previous window's counter.

* Pros: Offers a good balance between accuracy and resource efficiency compared to Sliding Window Log. Mitigates the fixed window "bursty problem" more effectively.

* Cons: Slightly more complex to implement than Fixed Window Counter.

* Use Case: A good general-purpose solution for many applications, offering better burst handling than fixed windows without the overhead of the sliding window log.

  • Token Bucket:

* Mechanism: A "bucket" with a fixed capacity is filled with "tokens" at a constant rate. Each request consumes one token. If a request arrives and the bucket is empty, the request is rejected or queued.

* Pros: Allows for bursts of requests (up to the bucket capacity) and is easy to implement.

* Cons: Requires careful tuning of bucket size and refill rate.

* Use Case: Ideal for scenarios where occasional bursts of traffic are expected and should be allowed, but sustained high rates need to be limited.

  • Leaky Bucket:

* Mechanism: Requests are added to a queue (the "bucket"). Requests "leak" out of the bucket at a constant rate, meaning they are processed at a steady pace. If the bucket is full, new requests are rejected.

* Pros: Smooths out traffic and ensures a steady processing rate. Prevents bursts from overwhelming downstream services.

* Cons: Introduces latency for requests when the bucket is partially full. Requests might be rejected even if the average rate is low, if a sudden burst fills the bucket.

* Use Case: Primarily for smoothing out request processing, ensuring downstream services receive a consistent load, rather than just limiting total requests.
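The weighted calculation behind the Sliding Window Counter can be written out directly; the 70%/30% example above corresponds to elapsed_fraction = 0.7. Function names here are illustrative.

```python
def sliding_window_estimate(prev_count, curr_count, elapsed_fraction):
    """Estimated requests in the sliding window: the previous window's
    count contributes in proportion to how much of it still overlaps
    the sliding window."""
    return curr_count + prev_count * (1.0 - elapsed_fraction)

def sliding_window_allowed(prev_count, curr_count, elapsed_fraction, limit):
    """Reject once the weighted estimate exceeds the configured limit."""
    return sliding_window_estimate(prev_count, curr_count, elapsed_fraction) <= limit
```

For example, with 40 requests in the previous window, 80 so far in the current one, and 70% of the window elapsed, the estimate is 80 + 0.3 × 40 = 92.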

4. Key Design Considerations for API Rate Limiters

When designing and implementing an API Rate Limiter, several factors must be carefully considered:

  • Granularity:

* Per User/Client ID: Limits requests based on authenticated user IDs or API keys.

* Per IP Address: Limits requests originating from a specific IP address. Useful for unauthenticated endpoints but susceptible to NAT/proxy issues.

* Per Endpoint: Different limits for different API endpoints (e.g., /read might have a higher limit than /write).

* Combined: Often, a combination (e.g., per user, falling back to per IP for unauthenticated requests) is most effective.

  • Distributed vs. Centralized:

* Centralized: A single, shared store (e.g., Redis) maintains all rate limiting counters. Essential for horizontally scaled API services to ensure consistent limits across all instances.

* Distributed: Each API instance manages its own rate limits. Simpler but ineffective for scaled services as limits are not globally enforced.

  • Error Handling and User Feedback:

* HTTP Status Code 429 Too Many Requests: The standard response for rate-limited requests.

* Retry-After Header: Should be included in the 429 response, indicating how long the client should wait before making another request.

* Rate Limit Headers: Provide clients with information about their current limit status (e.g., X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset).

  • Burst Handling:

* Determine if short, high-volume bursts should be allowed (e.g., using Token Bucket) or strictly limited (e.g., Leaky Bucket or Sliding Window Log). This depends on the specific use case and API tolerance.

  • Quota Management and Tiers:

* Define different rate limits for various service tiers (e.g., free, premium, enterprise). This enables business models and caters to diverse user needs.

  • Monitoring and Alerting:

* Implement robust monitoring to track rate limit breaches, identify potential abuse patterns, and understand API usage trends.

* Set up alerts for critical thresholds to proactively address issues.

  • Scalability and Performance:

* The rate limiting mechanism itself must be highly scalable and performant to avoid becoming a bottleneck for the API. Using in-memory caches (like Redis) is common for this reason.

  • Security Considerations:

* Ensure the rate limiter cannot be easily bypassed (e.g., by spoofing headers or using multiple IPs).

* Consider different limits for authenticated vs. unauthenticated requests.
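The client-feedback headers from the considerations above can be assembled with a small helper. Note that the X-RateLimit-* names are the widely used convention mentioned earlier, not an IETF standard, and this helper is a sketch rather than a fixed interface.

```python
def rate_limit_headers(limit, remaining, reset_epoch, retry_after=None):
    """Build informational rate limit headers for a response.
    `retry_after` should only be set on 429 responses."""
    headers = {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(max(0, remaining)),  # never negative
        "X-RateLimit-Reset": str(reset_epoch),  # epoch seconds of window reset
    }
    if retry_after is not None:
        headers["Retry-After"] = str(retry_after)
    return headers
```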

5. Implementation Aspects

  • Where to Implement:

* API Gateway/Proxy Layer (e.g., Nginx, Envoy, AWS API Gateway, Azure API Management): Often the preferred location as it provides a centralized point of control before requests reach the application logic, offloading this concern from individual services.

* Application Layer: Can be implemented within the application code itself. Offers finer-grained control but can become complex to manage across multiple services and languages.

* Dedicated Service: A microservice specifically designed for rate limiting.

  • Technologies and Tools:

* Redis: Widely used as a highly performant, in-memory data store for counters and timestamps, enabling centralized rate limiting across distributed systems.

* Nginx: Can be configured with ngx_http_limit_req_module for fixed window rate limiting.

* Cloud Provider Services: AWS API Gateway, Azure API Management, Google Cloud Endpoints all offer built-in rate limiting capabilities.

* Programming Language Libraries: Various libraries exist for different languages (e.g., ratelimit in Python, golang.org/x/time/rate in Go) for in-application rate limiting.

6. Best Practices for API Consumers

To ensure a smooth experience when consuming rate-limited APIs, clients should adhere to the following best practices:

  • Respect Retry-After Headers: Always honor the Retry-After header when receiving a 429 response. Do not retry immediately.
  • Implement Backoff Strategies: For transient errors (including 429s), use exponential backoff with jitter to prevent overwhelming the API with retries.
  • Monitor Rate Limit Headers: Parse and act upon X-RateLimit-* headers to understand current limits and remaining quota, adjusting request rates proactively.
  • Cache Responses: Cache API responses where appropriate to reduce the number of requests to the API.
  • Batch Requests: If the API supports it, batch multiple operations into a single request to reduce the overall request count.
  • Optimize Request Frequency: Design applications to make requests only when necessary, avoiding polling if webhooks or event-driven mechanisms are available.
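The retry guidance above can be sketched with the "full jitter" variant of exponential backoff; the base and cap values below are illustrative defaults, not recommendations from this document.

```python
import random

def backoff_delay(attempt, base=0.5, cap=30.0):
    """'Full jitter' backoff: a uniform delay between 0 and
    min(cap, base * 2**attempt), so concurrent clients spread out
    their retries instead of retrying in lockstep."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

def next_delay(attempt, retry_after=None):
    """Honor the server's Retry-After when present; otherwise back off."""
    return float(retry_after) if retry_after is not None else backoff_delay(attempt)
```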

7. Conclusion and Recommendations

API Rate Limiting is an indispensable component of any robust API ecosystem. By carefully selecting an appropriate algorithm, considering design factors such as granularity and distribution, and implementing effective monitoring, organizations can significantly enhance the reliability, security, and scalability of their API offerings.

PantheraHive recommends:

  • Start with a Sliding Window Counter or Token Bucket algorithm for a balance of accuracy and efficiency, suitable for most general-purpose APIs.
  • Implement rate limiting at the API Gateway layer to centralize control and protect backend services efficiently.
  • Utilize a centralized data store like Redis for managing rate limit counters in distributed environments.
  • Provide clear 429 Too Many Requests responses with Retry-After and X-RateLimit-* headers to guide API consumers.
  • Continuously monitor and adjust rate limits based on usage patterns, system performance, and business requirements.
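As a concrete starting point for the Token Bucket recommendation, here is a minimal in-process sketch; a production deployment would keep this state in the centralized Redis store recommended above. The injectable clock parameter is only for testability.

```python
import time

class TokenBucket:
    """Token bucket sketch: tokens refill at `rate` per second up to
    `capacity`; each request consumes one token if one is available,
    allowing bursts up to `capacity` while capping the sustained rate."""
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate, self.capacity, self.clock = rate, capacity, clock
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill based on elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```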

This comprehensive approach will ensure your API remains performant, secure, and accessible to all legitimate users while effectively mitigating potential abuse.


"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}