
API Rate Limiter: Architecture Plan & Implementation Roadmap

This document outlines a comprehensive architecture plan for an API Rate Limiter, a critical component for ensuring the stability, security, and fair usage of API services. It details the core requirements, architectural components, technology recommendations, and provides a structured project implementation plan.


1. Introduction: API Rate Limiter Architecture Plan

An API Rate Limiter is an essential mechanism to control the rate at which clients can access an API. It prevents abuse, protects backend services from overload, ensures fair resource allocation among users, and helps manage operational costs. This plan provides a detailed blueprint for designing and implementing a robust, scalable, and highly available rate limiting system.

1.1. Purpose and Scope

The purpose of this document is to define the architectural components, functional and non-functional requirements, and an actionable implementation strategy for an API Rate Limiter. The scope covers the design of the rate limiting logic, state management, integration points, and operational considerations.

1.2. Key Objectives


2. Core Requirements & Use Cases

2.1. Functional Requirements

2.2. Non-Functional Requirements

2.3. Example Use Cases


3. High-Level Architecture Overview

The API Rate Limiter will primarily function as a distributed service, ideally integrated into an API Gateway or as a standalone microservice that intercepts requests before they reach the backend application logic.

```mermaid
graph TD
    A[Client Application] --> B(API Gateway / Load Balancer)
    B --> C{Rate Limiter Service}
    C -- Check Limit --> D["Distributed Cache (e.g., Redis)"]
    D -- Update Counter --> C
    C -- Limit Exceeded --> E[HTTP 429 Response]
    C -- Limit OK --> F[Backend API Service]
    F --> B
    B --> A

    subgraph Management
        G[Admin Console / API] --> H(Policy Management Service)
        H --> D
    end

    subgraph Monitoring
        I[Rate Limiter Metrics / Logs] --> J["Monitoring System (Prometheus, Grafana)"]
    end
```

3.1. Main Components

  1. API Gateway/Proxy: The entry point for all API requests. It's responsible for routing, authentication, and crucially, invoking the rate limiting logic.
  2. Rate Limiter Service: The core logic component that evaluates requests against defined policies, updates counters, and makes decisions (allow/deny). This can be an embedded module within the gateway or a separate microservice.
  3. Distributed Cache: A high-performance, in-memory data store (e.g., Redis) to store and manage rate limiting counters and states across multiple instances of the Rate Limiter Service.
  4. Policy Management Service: A system (could be a simple configuration file, a database, or a dedicated service) for defining, storing, and updating rate limiting rules.
  5. Monitoring & Logging: Components for collecting metrics, logs, and providing alerts related to rate limiting events and system health.

4. Detailed Component Design & Algorithm Selection

4.1. Request Interception Layer

  • Placement: The rate limiter should be placed as early as possible in the request lifecycle, typically within an API Gateway (e.g., Nginx, Envoy, Kong, AWS API Gateway, Azure API Management) or a custom reverse proxy. This ensures that malicious or excessive traffic is dropped before consuming resources from downstream services.
  • Identification: For each incoming request, the interceptor must identify the client (IP address, API key from headers, user ID from JWT, etc.) and the target resource (endpoint, HTTP method).

4.2. Rate Limiting Logic & Algorithm

The choice of algorithm significantly impacts accuracy, resource usage, and fairness. We will evaluate a few and recommend one or a hybrid approach.

  • Fixed Window Counter:

* How it works: Divides time into fixed windows (e.g., 60 seconds). Each request increments a counter. If the counter exceeds the limit within the window, requests are denied.

* Pros: Simple to implement, low memory usage.

* Cons: Can allow a "burst" of requests at the window boundary (e.g., 100 requests at 0:59 and 100 requests at 1:00, totaling 200 in a short span).

  • Sliding Window Log:

* How it works: Stores a timestamp for each request made by a client. When a new request arrives, it counts how many timestamps fall within the current window (e.g., last 60 seconds). If the count exceeds the limit, the request is denied. Old timestamps are purged.

* Pros: Highly accurate, no "burst" at window boundaries.

* Cons: High memory usage as it stores individual timestamps.

  • Sliding Window Counter:

* How it works: A hybrid approach. Uses two fixed windows: the current window and the previous window. A weighted average of requests from both windows is used to estimate the request rate.

* Pros: Better accuracy than Fixed Window, lower memory than Sliding Window Log.

* Cons: Still an approximation, not perfectly accurate.

  • Token Bucket:

* How it works: Clients are given a "bucket" of tokens. Tokens are added to the bucket at a fixed rate. Each request consumes one token. If the bucket is empty, the request is denied. The bucket has a maximum capacity, allowing for bursts up to that capacity.

* Pros: Allows for bursts, smooths out traffic, good for controlling average rate while allowing flexibility.

* Cons: Slightly more complex to implement.

  • Leaky Bucket:

* How it works: Requests are added to a queue (the "bucket"). Requests "leak" out of the bucket at a constant rate, processing them. If the bucket overflows, new requests are dropped.

* Pros: Smooths out traffic, good for protecting downstream services from variable request rates.

* Cons: Introduces latency due to queuing, can drop requests if the queue is full.

Recommendation: For most general-purpose API rate limiting, a Sliding Window Counter or a Token Bucket algorithm provides a good balance of accuracy, fairness, and resource efficiency.

  • Sliding Window Counter is often preferred for its balance between accuracy (mitigating the fixed window boundary problem) and resource efficiency (not storing every timestamp).
  • Token Bucket is excellent when controlled bursting is a requirement, providing a smoother experience for clients while still enforcing an average rate.

We recommend starting with Sliding Window Counter due to its relative simplicity and good performance characteristics for distributed systems, using Redis ZSET or HASH data structures. If specific bursting requirements become critical, a Token Bucket can be considered as an enhancement.
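
As a sketch of the recommended approach, a minimal in-memory sliding window counter might look like the following. The class and method names are illustrative, and the plain dict stands in for Redis; in production the two per-window counters would live in the distributed cache:

```python
import time

class SlidingWindowCounter:
    """Approximate sliding-window limiter: the previous fixed window's
    count is weighted by how much of it still overlaps the window."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # (client, window_index) -> request count

    def allow(self, client, now=None):
        now = time.time() if now is None else now
        idx = int(now // self.window)
        elapsed = (now % self.window) / self.window  # fraction of current window used
        prev = self.counts.get((client, idx - 1), 0)
        curr = self.counts.get((client, idx), 0)
        # Weighted estimate of requests in the trailing window
        estimated = prev * (1 - elapsed) + curr
        if estimated >= self.limit:
            return False
        self.counts[(client, idx)] = curr + 1
        return True
```

Note that only two counters per client are kept, which is the memory advantage over the log-based variant.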

4.3. State Management (Distributed Cache)

  • Requirement: The rate limiter must be stateless on its own instances to scale horizontally, meaning all state (counters, timestamps) must be stored externally in a highly available, low-latency distributed cache.
  • Recommendation: Redis is the de-facto standard for this purpose.

* Data Structures:

* Sliding Window Counter: Can use Redis HASH for storing (timestamp:count) pairs for each client/window, or ZSET to store timestamps and then use ZRANGEBYSCORE to count within a window.

* Token Bucket: Can use Redis HASH to store (tokens_available:timestamp_of_last_refill).

* Atomicity: Redis commands like INCR, SETNX, WATCH/MULTI/EXEC (transactions), or Lua scripts are crucial for ensuring atomic operations on counters in a concurrent environment.

* Expiration: Utilize Redis's Time-To-Live (TTL) feature to automatically expire old counters/keys, preventing memory bloat.

* Clustering: For high availability and scalability, Redis Cluster should be used.
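
To illustrate the atomicity point, the INCR-plus-EXPIRE pattern can be pushed server-side as a Lua script, which Redis executes atomically. The script below is a sketch; the commented redis-py invocation is an assumption and should be adapted to whatever client is in use:

```python
# A Redis Lua script that atomically increments a counter and, on the
# first hit of a window, sets the key's TTL. redis.call with INCR and
# EXPIRE is standard Redis Lua scripting API.
ATOMIC_INCR_LUA = """
local count = redis.call('INCR', KEYS[1])
if count == 1 then
  redis.call('EXPIRE', KEYS[1], ARGV[1])
end
return count
"""

# Hypothetical usage with redis-py (not executed here):
# import redis
# r = redis.Redis()
# count = r.eval(ATOMIC_INCR_LUA, 1, "rate_limit:client:endpoint", 60)
```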

4.4. Configuration & Policy Management

  • Policy Definition: Policies should be defined centrally, specifying:

* Scope: global, per_IP, per_API_key, per_user_ID, per_endpoint.

* Limit: Number of requests.

* Window: Time duration (e.g., 60 seconds, 1 hour).

* Burst: (Optional for Token Bucket) Maximum burst capacity.

* Exclusions/Overrides: Specific IP ranges or API keys to whitelist/blacklist.

  • Storage: Policies can be stored in:

* Configuration Files: Simple YAML/JSON files for smaller setups.

* Database: A relational (PostgreSQL) or NoSQL (MongoDB) database for more complex, dynamic policies.

* Key-Value Store: A distributed key-value store (e.g., Consul, Etcd) for dynamic, distributed configuration.

  • Distribution: Policies should be loaded by the Rate Limiter Service instances and refreshed periodically or pushed via a publish-subscribe mechanism (e.g., Redis Pub/Sub, Kafka).

4.5. Response Handling

  • When a request is rate-limited, the API Gateway/Rate Limiter Service must return an HTTP 429 Too Many Requests status code.
  • It should include standard X-RateLimit-* headers and a Retry-After header to inform the client when they can retry:

* X-RateLimit-Limit: The total number of requests allowed in the current window.

* X-RateLimit-Remaining: The number of requests remaining in the current window.

* X-RateLimit-Reset: The time (in UTC epoch seconds) when the current window resets.

* Retry-After: How long the client should wait before retrying (in seconds or as an HTTP-date).


API Rate Limiter: Comprehensive Overview and Implementation Guide

1. Introduction

An API Rate Limiter is a critical component in modern web service architecture, designed to control the rate at which clients can send requests to an API. It sets a cap on the number of requests a user or client can make within a given timeframe. This document provides a detailed overview of API rate limiting, its importance, common algorithms, implementation strategies, and best practices.

2. Why API Rate Limiting is Essential

Implementing an API Rate Limiter offers numerous benefits, crucial for the stability, security, and cost-effectiveness of any API-driven service:

  • Prevent Abuse and Denial-of-Service (DoS) Attacks: By restricting the number of requests, rate limiters can mitigate brute-force attacks, credential stuffing, and other malicious activities that aim to overwhelm the server or exploit vulnerabilities.
  • Ensure Fair Usage: It prevents a single user or a small group of users from monopolizing server resources, ensuring that all legitimate users have equitable access to the API.
  • Control Infrastructure Costs: High request volumes consume significant computational resources (CPU, memory, bandwidth, database connections). Rate limiting helps manage and reduce operational costs by preventing uncontrolled resource consumption.
  • Maintain Service Quality and Stability: By preventing overload, rate limiters help maintain consistent response times and service availability for all users, even during peak loads.
  • Security: Beyond DoS prevention, rate limiting can help prevent enumeration attacks (e.g., trying to guess user IDs or email addresses) and provide a layer of defense against certain types of scraping.
  • Monetization and Tiered Services: For commercial APIs, rate limiting is fundamental for implementing different service tiers (e.g., free tier with lower limits, paid tiers with higher limits) and enforcing usage policies.

3. Key Concepts and Terminology

To understand API rate limiting, it's important to be familiar with the following terms:

  • Rate Limit: The maximum number of requests allowed within a specified time window (e.g., 100 requests per minute).
  • Quota: A broader term, often referring to a total number of requests allowed over a longer period (e.g., 10,000 requests per month), distinct from the per-minute or per-second rate limit.
  • Burst: A temporary allowance for exceeding the standard rate limit for a very short period, often used to accommodate intermittent spikes in legitimate traffic.
  • Throttling: The act of intentionally slowing down or rejecting requests when a rate limit is exceeded.
  • Grace Period: A short period after a rate limit is exceeded where some requests might still be allowed before full blocking occurs, to prevent immediate lockout for slightly over-limit users.
  • HTTP Status Code 429 Too Many Requests: The standard HTTP status code returned to a client when they have sent too many requests in a given amount of time.
  • Rate Limit Headers: Standard or custom HTTP headers used to communicate rate limit status to clients:

* X-RateLimit-Limit: The maximum number of requests allowed in the current window.

* X-RateLimit-Remaining: The number of requests remaining in the current window.

* X-RateLimit-Reset: The time (usually in UTC epoch seconds) when the current rate limit window resets.

* Retry-After: Indicates how long the client should wait before making another request (in seconds or as an HTTP-date).
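
As an illustrative sketch, these headers might be assembled server-side like so (the helper name and signature are assumptions, not a library API):

```python
def rate_limit_headers(limit, remaining, reset_epoch, retry_after=None):
    """Build the standard rate-limit response headers.
    retry_after should only be supplied on 429 responses."""
    headers = {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(max(0, remaining)),
        "X-RateLimit-Reset": str(reset_epoch),
    }
    if retry_after is not None:
        headers["Retry-After"] = str(retry_after)
    return headers
```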

4. Common Rate Limiting Algorithms

Different algorithms offer varying trade-offs in terms of simplicity, accuracy, and resource utilization.

4.1. Fixed Window Counter

  • How it works: Divides time into fixed windows (e.g., 60 seconds). Each request increments a counter for the current window. If the counter exceeds the limit within that window, further requests are blocked until the next window starts.
  • Pros: Simple to implement and understand. Low memory usage.
  • Cons: Can suffer from the "bursty problem" at window edges. For example, with a limit of 100 requests/minute, a client could make 100 requests at 0:59 and another 100 at 1:01; each window stays within its limit, yet roughly 200 requests land within about two seconds.

4.2. Sliding Window Log

  • How it works: For each client, stores a timestamp of every request in a sorted log. When a new request arrives, it counts how many timestamps in the log fall within the last N seconds (the window). If this count exceeds the limit, the request is rejected. Old timestamps are pruned.
  • Pros: Highly accurate; no edge-case issues like fixed window.
  • Cons: High memory consumption, especially for high request volumes or long windows, as it stores a timestamp for every request. CPU-intensive for counting and pruning.
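
A minimal in-memory sketch of this log-based check, using one deque of timestamps per client (the names are illustrative; a distributed setup would typically keep the log in a Redis sorted set):

```python
import time
from collections import deque

class SlidingWindowLog:
    """Exact sliding-window limiter: stores one timestamp per request."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.logs = {}  # client -> deque of request timestamps

    def allow(self, client, now=None):
        now = time.time() if now is None else now
        log = self.logs.setdefault(client, deque())
        # Prune timestamps that have aged out of the window
        while log and log[0] <= now - self.window:
            log.popleft()
        if len(log) >= self.limit:
            return False
        log.append(now)
        return True
```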

4.3. Sliding Window Counter

  • How it works: A hybrid approach. It uses two fixed windows: the current window and the previous window. When a request comes in, it calculates a weighted average of the counts from the previous window and the current window, based on how much of the current window has passed.

Count = (Count_prev × overlap_ratio) + Count_current, where overlap_ratio is the fraction of the previous window that still falls inside the sliding window (i.e., 1 − the fraction of the current window that has elapsed).

* If Count exceeds the limit, the request is rejected.

  • Pros: Offers a good balance between accuracy (mitigates the bursty problem of fixed windows) and memory efficiency (only two counters per client/limit).
  • Cons: More complex to implement than fixed window. Still an approximation, not perfectly precise.

4.4. Token Bucket

  • How it works: A bucket with a fixed capacity is filled with "tokens" at a constant rate. Each API request consumes one token. If a request arrives and the bucket is empty, the request is rejected (or queued). If tokens are available, one is removed, and the request is processed.
  • Pros: Allows for bursts (up to the bucket capacity). Smooths out traffic, as requests can only be processed as fast as tokens are added.
  • Cons: More complex to implement. Needs careful tuning of bucket size and refill rate.
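
A minimal in-memory token-bucket sketch (the class is illustrative; a distributed deployment would keep the token count and last-refill timestamp in Redis and update them atomically):

```python
import time

class TokenBucket:
    """Token bucket: refills at `rate` tokens/second up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)  # start full, allowing an initial burst
        self.last_refill = None        # set on the first request

    def allow(self, now=None):
        now = time.time() if now is None else now
        if self.last_refill is None:
            self.last_refill = now
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Tuning amounts to choosing `capacity` (maximum burst) and `rate` (sustained average), which is exactly the trade-off described above.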

4.5. Leaky Bucket

  • How it works: Similar to a token bucket but with a different analogy. Requests are added to a "bucket" (a queue) at an arbitrary rate. Requests "leak" out of the bucket (are processed) at a constant, fixed rate. If the bucket overflows, new requests are rejected.
  • Pros: Excellent for smoothing out bursty traffic into a steady stream. Guarantees a constant output rate.
  • Cons: May introduce latency if the bucket fills up. If the processing rate is too slow, requests can be dropped even if the overall average rate is within limits.
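
A minimal sketch of the "meter" variant of the leaky bucket, which rejects instead of queuing (so it adds no latency, at the cost of dropping requests; names are illustrative):

```python
class LeakyBucket:
    """Leaky bucket used as a meter: the level drains at a fixed rate;
    a request arriving when the bucket is full is rejected outright."""

    def __init__(self, leak_rate, capacity):
        self.leak_rate = leak_rate  # requests drained per second
        self.capacity = capacity    # maximum bucket level
        self.level = 0.0
        self.last_leak = 0.0

    def allow(self, now):
        # Drain for the time elapsed since the last check
        self.level = max(0.0, self.level - (now - self.last_leak) * self.leak_rate)
        self.last_leak = now
        if self.level + 1 > self.capacity:
            return False
        self.level += 1
        return True
```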

5. Implementation Strategies

Rate limiting can be implemented at various layers of your application stack:

  • API Gateway / Reverse Proxy (Recommended for most cases):

* Description: Implementing rate limiting at the edge of your network, using tools like Nginx, Envoy, Kong, or cloud-managed API Gateways (e.g., AWS API Gateway, Azure API Management, Google Cloud Endpoints).

* Pros: Decouples rate limiting logic from your application code. Centralized control. Can protect multiple backend services. Often highly performant and scalable.

* Cons: Configuration can be complex for very granular or dynamic limits.

* Actionable: Utilize your cloud provider's API Gateway rate limiting features or configure Nginx/Envoy with appropriate modules (e.g., ngx_http_limit_req_module for Nginx).

  • Load Balancer:

* Description: Some advanced load balancers (e.g., HAProxy) offer basic rate limiting capabilities.

* Pros: Can provide an additional layer of protection before requests reach application servers.

* Cons: Typically less flexible and granular than API Gateways or application-level solutions.

  • Application Layer:

* Description: Implementing rate limiting directly within your application code using libraries (e.g., express-rate-limit for Node.js, Flask-Limiter for Python, Guava RateLimiter for Java).

* Pros: Highly flexible, allows for very granular and context-aware limits (e.g., based on user roles, specific data in the request body).

* Cons: Adds complexity to application code. Requires careful design for distributed systems (needs a shared state, often using Redis or a database). Can consume application resources if not efficiently implemented.

* Actionable: For microservices or specific endpoint control, integrate a robust rate-limiting library with a distributed store like Redis.

  • Dedicated Rate Limiting Service:

* Description: A separate service specifically designed for rate limiting, often built on top of a fast key-value store like Redis.

* Pros: Centralized, scalable, and highly performant. Can serve multiple applications.

* Cons: Adds another service to manage and deploy.

* Actionable: Consider building a dedicated service if your architecture involves many APIs across different teams and requires a unified, high-performance rate-limiting solution.

6. Key Considerations for Design and Implementation

When designing your rate limiting strategy, consider the following:

  • Granularity:

* Per IP Address: Simplest, but vulnerable to NAT issues (multiple users sharing an IP) or distributed attacks.

* Per API Key/Token: More robust, requires authentication, suitable for authenticated users or applications.

* Per User/Account: Most accurate for user-specific limits, requires user authentication.

* Per Endpoint: Different limits for different API endpoints (e.g., POST /users might have a lower limit than GET /products).

  • Scope:

* Global: A single limit across all API endpoints.

* Endpoint-Specific: Different limits for specific endpoints.

* Method-Specific: Different limits for GET vs. POST requests.

  • Bursting: Decide if and how much burst capacity to allow. Token Bucket is ideal for this.
  • Overload Handling:

* Blocking: Reject requests and return 429.

* Queuing: Queue requests and process them when capacity becomes available (introduces latency).

* Degradation: Return a degraded response (e.g., fewer results, older data) rather than an error.

  • Visibility and Monitoring:

* Logging: Log when limits are hit, by whom, and which endpoint.

* Metrics: Track rate limit hits, remaining requests, and reset times.

* Alerting: Set up alerts for sustained rate limit violations, indicating potential abuse or misbehaving clients.

  • Scalability: For distributed systems, ensure your rate limiting solution can handle high concurrency and maintain a consistent view of limits across multiple instances (e.g., using Redis for shared state).
  • Fairness: Implement different tiers of service (e.g., anonymous, free, premium) with corresponding rate limits.
  • Edge Cases: How to handle unauthenticated requests, malformed requests, or requests that fail validation before hitting the rate limiter. Generally, rate limit all requests at the gateway level to protect against basic DoS, then apply more granular limits after authentication.

7. Best Practices for API Consumers

Educating your API consumers on rate limiting best practices is crucial for a healthy API ecosystem:

  • Respect Retry-After Header: Clients should always adhere to the Retry-After header when a 429 response is received.
  • Implement Exponential Backoff: When encountering 429s or other transient errors, clients should implement an exponential backoff strategy (waiting longer after each successive failure) before retrying.
  • Monitor X-RateLimit-* Headers: Clients should parse and monitor these headers to proactively adjust their request rate and avoid hitting limits.
  • Cache Responses: Where appropriate, clients should cache API responses to reduce the need for repeated requests.
  • Batch Requests: Encourage clients to batch multiple operations into a single API call if your API supports it, reducing the total number of requests.
  • Use Webhooks: For event-driven data, encourage clients to use webhooks instead of polling the API, significantly reducing request volume.
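
The first two practices can be sketched as a client-side retry helper. Here `request_fn` is assumed to return a `(status, headers, body)` tuple; both the helper and that signature are illustrative, not a real client library:

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base=1.0, cap=60.0):
    """Retry request_fn on HTTP 429, honoring Retry-After when present,
    otherwise sleeping with capped exponential backoff plus full jitter."""
    for attempt in range(max_retries):
        status, headers, body = request_fn()
        if status != 429:
            return status, body
        retry_after = headers.get("Retry-After")
        if retry_after is not None:
            delay = float(retry_after)  # the server told us when to retry
        else:
            delay = random.uniform(0, min(cap, base * 2 ** attempt))
        time.sleep(delay)
    return status, body  # still rate-limited after all retries
```

The jitter term spreads retries out so that many clients hitting the same limit do not retry in lockstep.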

8. Example Implementation (Conceptual - Fixed Window Counter with Redis)

This conceptual example illustrates a simple fixed-window rate limiter using Redis for shared state, suitable for a distributed application layer.


// Pseudocode for a Fixed Window Rate Limiter using Redis

FUNCTION checkRateLimit(clientId, endpoint, limit, windowSeconds)
    key = "rate_limit:" + clientId + ":" + endpoint
    currentTime = getCurrentTimestampInSeconds()
    windowStart = floor(currentTime / windowSeconds) * windowSeconds // Start of current window

    // INCR atomically increments the counter and returns the new value.
    // EXPIRE sets a TTL on the key so old windows are cleaned up automatically.
    currentCount = Redis.INCR(key) // Increment counter for the current window
    IF currentCount == 1 THEN
        // First request in this window: set the key's expiry
        Redis.EXPIRE(key, windowSeconds)
    END IF

    IF currentCount > limit THEN
        RETURN { allowed: false, retryAfter: (windowStart + windowSeconds - currentTime) }
    ELSE
        remaining = limit - currentCount
        resetTime = windowStart + windowSeconds
        RETURN { allowed: true, remaining: remaining, reset: resetTime }
    END IF
END FUNCTION
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}