API Rate Limiter

API Rate Limiter Implementation

This document describes the implementation of an API Rate Limiter. It includes well-commented, production-ready code, along with explanations, setup instructions, and best practices.


1. Introduction to API Rate Limiting

An API Rate Limiter is a critical component in modern web applications, designed to control the rate at which clients can make requests to an API. It serves several vital purposes:

  • Preventing Abuse: Mitigating DDoS attacks, brute-force attempts, and spam.
  • Ensuring Fair Usage: Allocating API resources equitably among all consumers.
  • Stabilizing System Performance: Protecting backend services from being overwhelmed by traffic spikes.
  • Cost Management: Controlling infrastructure costs by limiting excessive resource consumption.
  • Monetization: Implementing tiered access based on different rate limits.

2. Core Rate Limiting Concepts and Algorithms

Several algorithms are commonly used for rate limiting, each with its own characteristics:

Fixed Window Counter

  • Pros: Simple to implement.
  • Cons: Can suffer from a "bursty" problem at window edges, where a client might make a large number of requests at the end of one window and the beginning of the next, effectively doubling the rate within a short period.

Sliding Window Log

  • Pros: Very accurate; avoids the bursty problem of fixed windows.
  • Cons: Requires storing many timestamps, which can be memory-intensive for high traffic.

Sliding Window Counter

  • Pros: More accurate than fixed window and less memory-intensive than sliding window log; a good balance.
  • Cons: Slightly more complex than fixed window.

Token Bucket

  • Pros: Allows for bursts of requests up to the bucket capacity.
  • Cons: Parameters (fill rate, bucket size) can be complex to tune.

Leaky Bucket

  • Pros: Smooths out bursts of requests, ensuring a steady output rate.
  • Cons: Introduces latency due to queuing.

3. Selected Approach and Technologies

For this implementation, we will use the following:

Algorithm: Sliding Window Counter

  • Reasoning: It offers a good balance between accuracy and resource efficiency, mitigating the "bursty" problem of simple fixed windows without the high memory overhead of storing individual logs.

Language and Framework: Python with Flask

  • Reasoning: Python is widely used for backend services, and Flask provides a lightweight and flexible framework for demonstrating API endpoints and middleware.

Storage: Redis

  • Reasoning: For a production-ready rate limiter, an external, high-performance key-value store like Redis is essential. It provides:

      * Persistence: Rate limits persist across application restarts.
      * Centralization: Allows multiple API instances to share the same rate limiting state, crucial for horizontally scaled applications.
      * Atomic Operations: Redis commands like INCR and EXPIRE are atomic, preventing race conditions.
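The atomicity point can be illustrated with the classic INCR + EXPIRE pattern for a fixed-window counter. This is a minimal sketch, not the sliding window counter used later; `fixed_window_allow` and the duck-typed `client` argument are illustrative names, where `client` is anything exposing Redis-style `incr()`/`expire()` methods.

```python
def fixed_window_allow(client, key: str, limit: int, window_seconds: int) -> bool:
    """Return True if the request identified by `key` is within its limit."""
    count = client.incr(key)                # atomic increment; creates the key at 1
    if count == 1:
        client.expire(key, window_seconds)  # start the window on the first hit
    return count <= limit
```

Because INCR is atomic, two API instances incrementing the same key concurrently can never both observe the same count, which is exactly the race the centralized store prevents.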

4. Prerequisites

Before running the code, ensure you have the following installed:

1.  **Python 3.7+**: Ensure a recent Python 3 interpreter is available (`python3 --version`).
2.  **Python Packages**: Install the required libraries: `pip install flask redis`.
3.  **Redis Server**:
    *   **Installation**: Follow instructions for your OS (e.g., `sudo apt-get install redis-server` on Ubuntu, `brew install redis` on macOS).
    *   **Start Server**: Ensure the Redis server is running, typically by executing `redis-server` in your terminal. By default, it runs on `localhost:6379`.

5. Setup Instructions

1.  **Create a Project Directory**: Create a folder for the project containing the rate limiter module (`rate_limiter.py`, which defines `SlidingWindowRateLimiter`) and the Flask application file.

The main Flask application:

```python
import redis
from flask import Flask, jsonify, request

from rate_limiter import SlidingWindowRateLimiter

# --- Redis Configuration ---
# Ensure your Redis server is running on localhost:6379
try:
    redis_client = redis.Redis(host='localhost', port=6379, db=0, decode_responses=True)
    redis_client.ping()
    print("Successfully connected to Redis for rate limiting.")
except redis.exceptions.ConnectionError as e:
    print(f"ERROR: Could not connect to Redis: {e}")
    print("Please ensure the Redis server is running and accessible.")
    # In a real production app, you might want to exit or use a fallback mechanism
    exit(1)

app = Flask(__name__)

# --- Rate Limiter Configuration ---
# 5 requests per 60 seconds (1 minute)
api_rate_limiter = SlidingWindowRateLimiter(
    redis_client=redis_client,
    limit=5,
    window_seconds=60,
    prefix="api_global_limit"  # Unique prefix for this limiter instance
)

# 10 requests per 10 seconds for a specific endpoint
specific_endpoint_limiter = SlidingWindowRateLimiter(
    redis_client=redis_client,
    limit=10,
    window_seconds=10,
)
```
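The `rate_limiter` module imported above is not reproduced in this document. The following is a sketch of one possible shape for `SlidingWindowRateLimiter`, assuming a sliding window counter built on Redis-style `get`/`incr`/`expire` calls; the method name `is_allowed` and the key scheme are assumptions, not the original module's API.

```python
import time


class SlidingWindowRateLimiter:
    """Sliding window *counter*: the current window's count plus a weighted
    share of the previous window's count approximates a true sliding window."""

    def __init__(self, redis_client, limit, window_seconds, prefix="rate_limit"):
        self.redis = redis_client
        self.limit = limit
        self.window = window_seconds
        self.prefix = prefix

    def is_allowed(self, client_id, now=None):
        now = time.time() if now is None else now
        current_window = int(now // self.window)
        elapsed = (now % self.window) / self.window  # fraction of window elapsed

        curr_key = f"{self.prefix}:{client_id}:{current_window}"
        prev_key = f"{self.prefix}:{client_id}:{current_window - 1}"

        prev = int(self.redis.get(prev_key) or 0)
        curr = int(self.redis.get(curr_key) or 0)

        # Weighted estimate of requests made in the trailing window.
        estimated = curr + prev * (1 - elapsed)
        if estimated >= self.limit:
            return False

        count = self.redis.incr(curr_key)
        if count == 1:
            # Keep the key long enough to serve as "previous" in the next window.
            self.redis.expire(curr_key, self.window * 2)
        return True
```

A production version would wrap the read-check-increment sequence in a Lua script or a MULTI/EXEC pipeline so that concurrent requests cannot race past the limit.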


API Rate Limiter: Comprehensive Overview and Implementation Strategy

This document provides a detailed overview of API Rate Limiters, covering their importance, core concepts, common algorithms, implementation considerations, best practices, and actionable recommendations. This information is crucial for building robust, secure, and scalable API ecosystems.


1. Executive Summary

An API Rate Limiter is a critical component in modern API architectures designed to control the rate at which clients can send requests to an API. By enforcing limits on the number of requests within a defined timeframe, rate limiters protect the API infrastructure from abuse, ensure fair usage among consumers, prevent denial-of-service (DoS) attacks, and maintain the stability and performance of the service. Implementing an effective rate limiting strategy is essential for resource management, cost control, and delivering a consistent user experience.


2. Core Concepts and Importance

2.1. Definition

An API Rate Limiter is a mechanism that restricts the number of API requests a user or client can make within a given time window. If a client exceeds this predefined limit, subsequent requests are typically rejected until the window resets.

2.2. Why API Rate Limiting is Essential

  • Prevent Abuse and Misuse: Protects against malicious activities like DoS attacks, brute-force attempts, and excessive scraping.
  • Ensure Fair Usage: Prevents a single client from monopolizing server resources, ensuring that all legitimate users receive adequate service.
  • Maintain System Stability and Performance: Controls the load on backend servers, databases, and other resources, preventing overload and ensuring consistent API response times.
  • Cost Management: Reduces infrastructure costs by preventing uncontrolled scaling due to excessive, non-productive requests.
  • Monetization and Tiered Services: Enables the creation of different service tiers (e.g., free, premium, enterprise) with varying rate limits, supporting business models.
  • Data Integrity: Helps prevent rapid-fire writes that could lead to data corruption or race conditions.

2.3. Key Metrics for Rate Limiting

Rate limits are typically defined based on:

  • Requests per second (RPS), per minute, or per hour.
  • Per client/user: Identified by API key, user ID, or authentication token.
  • Per IP address: Useful for unauthenticated requests or as a secondary layer.
  • Per endpoint: Different limits for different API endpoints (e.g., read vs. write operations).
  • Per resource: Limits on accessing specific resources.

3. Common Rate Limiting Algorithms

Choosing the right algorithm depends on the specific requirements for fairness, resource consumption, and burst tolerance.

3.1. Fixed Window Counter

  • How it works: Divides time into fixed-size windows (e.g., 60 seconds). Each request increments a counter. If the counter exceeds the limit within the window, requests are rejected. The counter resets at the start of each new window.
  • Pros: Simple to implement, low memory footprint.
  • Cons:
      * Bursting Problem: A client can make a large number of requests at the very end of one window and then again at the very beginning of the next, effectively doubling the rate within a short period.
      * Requests near the window boundary can be unfairly rejected if the previous window was heavily utilized.

3.2. Sliding Window Log

  • How it works: For each client, stores a timestamp of every request made. When a new request arrives, it removes all timestamps older than the current time minus the window size. If the number of remaining timestamps exceeds the limit, the request is rejected.
  • Pros: Highly accurate, prevents the bursting problem of fixed windows.
  • Cons: High memory consumption, especially for high-volume APIs, as it stores every request's timestamp. Can be computationally intensive to manage large lists of timestamps.

3.3. Sliding Window Counter

  • How it works: A hybrid approach. It combines the current fixed window's count with a weighted count from the previous window. For example, if the window is 60 seconds and a request comes at 30 seconds into the current window, it considers the current window's count plus 50% of the previous window's count.
  • Pros: Addresses the bursting problem better than Fixed Window, more memory efficient than Sliding Window Log.
  • Cons: Not perfectly accurate, as it's an estimation. Edge cases can still lead to slight over- or under-counting.
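The weighted count reduces to a one-line formula; the numbers below mirror the 50% example above and are purely illustrative.

```python
def sliding_window_estimate(curr_count, prev_count, elapsed_fraction):
    """Estimated requests in the trailing window: the current window's count
    plus the not-yet-expired share of the previous window's count."""
    return curr_count + prev_count * (1 - elapsed_fraction)


# 30 s into a 60 s window (elapsed_fraction = 0.5), with 40 requests in the
# previous window and 20 so far in this one:
# 20 + 40 * (1 - 0.5) = 40 estimated requests in the last 60 seconds.
```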

3.4. Token Bucket

  • How it works: A "bucket" holds a fixed capacity of "tokens." Tokens are added to the bucket at a constant rate. Each request consumes one token. If the bucket is empty, the request is rejected (or queued).
  • Pros: Allows for some burstiness (up to the bucket capacity), simple to implement, handles variable request rates gracefully.
  • Cons: Choosing the right bucket size and refill rate requires careful tuning.
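A minimal single-process sketch of the token bucket, with an injectable clock for testing; the constant-rate refill is computed lazily on each call rather than by a background task.

```python
import time


class TokenBucket:
    """capacity = maximum burst size; refill_rate = tokens added per second."""

    def __init__(self, capacity, refill_rate, now=None):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)  # start full
        self.last_refill = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Tuning maps directly onto the two parameters: `refill_rate` sets the sustained average rate, while `capacity` sets how large a burst is tolerated before requests are rejected.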

3.5. Leaky Bucket

  • How it works: Similar to a bucket with a hole in it. Requests are added to the bucket. If the bucket overflows, new requests are rejected. Requests "leak" out of the bucket at a constant rate, meaning requests are processed at a steady pace.
  • Pros: Smooths out bursts of requests, ensuring a steady output rate. Good for preventing server overload.
  • Cons: Does not allow for burstiness beyond the bucket capacity. All requests are processed at the same rate, which might not be ideal for all use cases.
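A matching single-process sketch of the leaky bucket, here in its "meter" form that drains at a constant rate and rejects on overflow; a queueing variant would hold requests instead of rejecting them.

```python
class LeakyBucket:
    """Requests fill the bucket; the level drains at leak_rate per second.
    A request is rejected when adding it would overflow the bucket."""

    def __init__(self, capacity, leak_rate, now=0.0):
        self.capacity = capacity
        self.leak_rate = leak_rate
        self.level = 0.0
        self.last_leak = now

    def allow(self, now):
        # Drain whatever has leaked out since the last check.
        self.level = max(0.0, self.level - (now - self.last_leak) * self.leak_rate)
        self.last_leak = now
        if self.level + 1 <= self.capacity:
            self.level += 1
            return True
        return False
```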

4. Implementation Considerations

4.1. Where to Implement

  • API Gateway / Load Balancer: Ideal for centralized enforcement, providing protection before requests hit backend services. Examples: NGINX, HAProxy, AWS API Gateway, Azure API Management, Kong.
  • Application Layer: Implemented within the service code itself. Offers granular control but can be harder to manage across microservices.
  • Sidecar Proxy: In a microservices architecture, a sidecar proxy (e.g., Envoy with a rate limiting service) can enforce limits for each service.

4.2. Storage for Counters/Logs

  • Redis: Highly recommended for its in-memory, high-performance key-value store capabilities. Its atomic operations (e.g., INCR, EXPIRE) are perfect for implementing rate limiting algorithms efficiently across distributed systems.
  • Distributed Cache (e.g., Memcached): Can also be used, though Redis often offers more advanced features.
  • Database (e.g., PostgreSQL, MongoDB): Generally not recommended for high-volume rate limiting due to latency and I/O overhead. Suitable for very low-volume APIs or for storing long-term rate limit policies.
  • In-Memory: Fastest but only suitable for single-instance applications; not scalable for distributed systems.

4.3. Distributed vs. Single-Instance

  • Single-Instance: Rate limits apply only to requests handled by that specific instance. Not suitable for horizontally scaled APIs.
  • Distributed: Requires a shared state (e.g., Redis) that all API instances can access and update. This ensures consistent rate limiting across the entire API fleet. This is the standard approach for scalable APIs.

4.4. Error Handling and User Feedback

  • HTTP Status Code 429 Too Many Requests: The standard HTTP status code for rate limiting.
  • Informative Headers: Provide clients with information about their current rate limit status:
      * X-RateLimit-Limit: The maximum number of requests allowed in the current window.
      * X-RateLimit-Remaining: The number of requests remaining in the current window.
      * X-RateLimit-Reset: The time (in UTC epoch seconds or seconds until reset) when the current window resets.
      * Retry-After: Indicates how long the user should wait before making a new request (in seconds).

  • Clear Error Message: A human-readable message explaining that the rate limit has been exceeded and advising the client to wait.
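Putting the status code, headers, and error message together, a rate-limited response might look like the following (all values illustrative):

```http
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 5
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1712000060
Retry-After: 37

{"error": "Rate limit exceeded. Please retry after 37 seconds."}
```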

4.5. Bursting and Throttling

  • Bursting: Allowing a client to exceed the average rate for a short period, up to a certain maximum, before the limit kicks in. Token Bucket is excellent for this.
  • Throttling: A broader term that includes rate limiting but can also refer to deliberately slowing down request processing or queuing requests to manage load, rather than outright rejecting them. Rate limiting is a form of throttling.

5. Best Practices & Design Principles

  • Document Your Policies Clearly: Publish your rate limiting policies in your API documentation, including limits, window sizes, and how to interpret 429 responses and rate limit headers.
  • Start with Reasonable Defaults, Then Iterate: Begin with limits that balance protection and usability. Monitor API usage and performance, then adjust limits based on real-world data.
  • Differentiate Limits: Apply different limits for:
      * Authenticated vs. Unauthenticated users: Authenticated users usually get higher limits.
      * Different API Keys/Tiers: Premium customers can have significantly higher limits.
      * Different Endpoints: Read-heavy endpoints might have higher limits than write-heavy or resource-intensive ones.
      * Internal vs. External Services: Internal services often have much higher or no limits.

  • Graceful Degradation: When a rate limit is hit, consider if there are ways to provide a degraded service (e.g., cached data, partial response) instead of a hard rejection, if appropriate.
  • Client-Side Best Practices: Advise clients to implement exponential backoff and retry logic when encountering 429 responses, respecting the Retry-After header.
  • Monitoring and Alerting: Implement robust monitoring for rate limit hits, system load, and API performance. Set up alerts for excessive rate limit breaches or unusual traffic patterns.
  • Configurability: Make rate limit policies easily configurable without requiring code deployments.
  • Edge Cases: Consider how to handle long-running requests, idempotent requests, and potential for client IP spoofing (though harder to solve solely with rate limiting).
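The client-side advice above, exponential backoff with jitter that defers to a server-supplied Retry-After, can be sketched as follows; the function name and default parameters are illustrative.

```python
import random


def backoff_delay(attempt, base=1.0, cap=60.0, retry_after=None):
    """Seconds to wait before retry number `attempt` (0-based).

    A server-supplied Retry-After value takes precedence; otherwise use
    exponential backoff with full jitter, capped at `cap` seconds."""
    if retry_after is not None:
        return float(retry_after)
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

Full jitter (a random delay between zero and the exponential ceiling) spreads retries from many clients over time, avoiding the synchronized "thundering herd" that fixed backoff schedules produce.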

6. Actionable Recommendations for Implementation

  1. Define Clear Rate Limiting Policies:

* For each major API endpoint or group of endpoints, establish specific limits (e.g., 100 requests/minute per authenticated user, 10 requests/minute per IP for unauthenticated users).

* Determine if burst capacity is needed and to what extent.

* Define different tiers (e.g., Free, Basic, Premium) with corresponding limits.

  2. Choose Appropriate Algorithm(s):

* For general-purpose, high-volume APIs requiring fairness and burst tolerance, consider Token Bucket or Sliding Window Counter.

* If absolute accuracy and strict prevention of bursts are paramount, and memory is not a major concern, Sliding Window Log can be used, but with caution.

* For simplicity and low overhead where bursting is less critical, Fixed Window Counter might suffice for less critical endpoints.

  3. Select Technology Stack:

* Primary Recommendation: Implement rate limiting at the API Gateway layer (e.g., NGINX, Kong, AWS API Gateway) for centralized control and protection.

* Distributed State Management: Utilize Redis as the shared state store for counters/logs across all API instances. Ensure Redis is highly available (e.g., Redis Cluster, Sentinel).

* Language/Framework Specific Libraries: If implementing at the application layer, leverage existing, well-tested libraries for your chosen language (e.g., ratelimit for Python, go-limiter for Go, express-rate-limit for Node.js).

  4. Implement Robust Error Handling and Informative Headers:

* Ensure all rate-limited responses return HTTP 429 Too Many Requests.

* Include X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset headers in *all* API responses (even successful ones) so clients can anticipate limits.

* For 429 responses, always include the Retry-After header.

  5. Develop Monitoring and Alerting:

* Track metrics such as 429 response rates, X-RateLimit-Remaining values (especially when low), and overall API request volume.

* Integrate with your existing monitoring solutions (e.g., Prometheus, Grafana, Datadog) to visualize rate limit activity.

* Set up alerts for sudden spikes in 429 errors or when X-RateLimit-Remaining consistently approaches zero for specific clients.

  6. Document and Communicate:

* Clearly document the rate limiting policies in your API documentation.

* Provide example 429 responses and explain how clients should handle them (e.g., exponential backoff).

* Communicate any changes to rate limits well in advance to API consumers.


7. Conclusion

API Rate Limiters are an indispensable component of any production-grade API. By strategically implementing and managing rate limits, you can safeguard your infrastructure, provide a fair and reliable service to your users, and control operational costs. The recommendations provided herein offer a solid foundation for designing and deploying an effective API rate limiting strategy. Consistent monitoring and iterative refinement based on usage patterns will ensure the long-term success of your API ecosystem.
