API Rate Limiter

API Rate Limiter: Comprehensive Architecture Plan

This document outlines a comprehensive and detailed architecture plan for an API Rate Limiter system. It covers the core requirements, design principles, component breakdown, chosen algorithms, scalability considerations, and integration points necessary for a robust, performant, and highly available solution.


1. Introduction & Overview

An API Rate Limiter is a critical component in modern web service architectures, designed to control the rate at which clients can access an API. Its primary purpose is to protect backend services from abuse, overload, and malicious attacks (e.g., DDoS attempts), ensuring fair usage and consistent performance for all legitimate users.

Key Benefits:

  • Service Stability: Prevents server overload by rejecting excessive requests.
  • Resource Protection: Safeguards databases, compute, and other backend resources from excessive or abusive consumption.

This document provides a comprehensive and detailed guide to implementing an API Rate Limiter. It covers fundamental concepts, two distinct implementation strategies (in-memory Fixed Window and Redis-backed Token Bucket), and integration with a web framework, offering production-ready code with thorough explanations.


API Rate Limiter: Comprehensive Implementation Guide

1. Introduction to API Rate Limiting

API Rate Limiting is a critical component for managing the usage of your API. It restricts the number of requests a user or client can make to an API within a given timeframe.

Why is API Rate Limiting Necessary?

  • Prevent Abuse & DDoS Attacks: Limits the impact of malicious activity, such as denial-of-service attacks or brute-force attempts.
  • Ensure Fair Usage: Prevents a single user or a small group from monopolizing server resources, ensuring a consistent experience for all users.
  • Manage Infrastructure Costs: Reduces the load on your servers, databases, and other infrastructure components, leading to lower operational costs.
  • Maintain Service Quality: Guarantees that your API remains responsive and available even under high demand.
  • Monetization & Tiered Access: Enables the creation of different service tiers (e.g., free vs. premium) with varying rate limits.

Common Identifiers for Rate Limiting:

  • IP Address: Simple to implement, but can be problematic for users behind NATs or proxies, and easily bypassed by VPNs.
  • User ID: Requires user authentication but provides more accurate per-user limiting.
  • API Key / Client ID: Ideal for client applications, allowing granular control over specific applications or integrations.
  • Session ID: Useful for web applications where users might not be logged in.

2. Key Rate Limiting Strategies

Several algorithms exist for implementing rate limiting, each with its strengths and weaknesses:

  • Fixed Window Counter:

* Concept: A fixed time window (e.g., 60 seconds) is defined. Requests within this window are counted. Once the count reaches the limit, further requests are blocked until the window resets.

* Pros: Simple to implement, low memory usage.

* Cons: Can suffer from "bursty" traffic at the window edges (e.g., if many requests arrive just before a reset and then many more immediately after).

* Chosen for In-Memory Implementation: Provides a clear, easy-to-understand starting point for rate limiting concepts.

  • Sliding Window Log:

* Concept: Stores a timestamp for every request made by a client. To check if a request is allowed, it counts how many timestamps fall within the current window.

* Pros: Very accurate, no burst issues.

* Cons: High memory usage as it stores individual request timestamps, computationally more expensive.

  • Sliding Window Counter:

* Concept: A hybrid approach using two fixed windows (current and previous) and a weighted average to approximate a smoother limit.

* Pros: Mitigates the burst problem of fixed windows, less memory intensive than sliding window log.

* Cons: More complex to implement than fixed window.

  • Leaky Bucket:

* Concept: Requests are added to a "bucket" that has a fixed capacity. Requests "leak" out of the bucket at a constant rate. If the bucket is full, new requests are dropped.

* Pros: Smooths out bursty traffic, ensures a constant output rate.

* Cons: Requests may experience queuing delays, and it is harder to implement correctly under concurrent access.

  • Token Bucket:

* Concept: A "bucket" holds "tokens." Tokens are added to the bucket at a fixed rate. Each request consumes one token. If no tokens are available, the request is denied. The bucket has a maximum capacity, allowing for bursts up to that capacity.

* Pros: Allows for bursts up to a certain limit, simple to understand, effective for controlling average rate while allowing some burstiness.

* Cons: Can be slightly more complex than fixed window to implement correctly, especially in distributed systems.

* Chosen for Redis-Backed Implementation: Excellent for distributed systems, handles burstiness well, and is a robust, production-ready solution when combined with Redis's atomic operations and Lua scripting.

3. Implementation: In-Memory Fixed Window Rate Limiter (Python)

This implementation is suitable for single-instance applications or development environments where state does not need to be shared across multiple processes or servers. It uses a simple dictionary to store the request counts and reset times.

Use Case:

  • Local development and testing.
  • Small-scale applications running on a single server.
  • Microservices that are guaranteed to run as a single instance.

import time
import threading
from collections import defaultdict

class InMemoryFixedWindowRateLimiter:
    """
    An in-memory API rate limiter using the Fixed Window algorithm.
    Suitable for single-instance applications.
    """

    def __init__(self, capacity: int, window_seconds: int):
        """
        Initializes the rate limiter.

        Args:
            capacity (int): The maximum number of requests allowed within the window.
            window_seconds (int): The duration of the window in seconds.
        """
        if not isinstance(capacity, int) or capacity <= 0:
            raise ValueError("Capacity must be a positive integer.")
        if not isinstance(window_seconds, int) or window_seconds <= 0:
            raise ValueError("Window seconds must be a positive integer.")

        self.capacity = capacity
        self.window_seconds = window_seconds
        # Stores client_id -> {'count': int, 'reset_time': float}
        self.client_requests = defaultdict(lambda: {'count': 0, 'reset_time': 0.0})
        self._lock = threading.Lock() # For thread-safe access to client_requests

    def _get_current_window_start(self) -> int:
        """Calculates the start time of the current fixed window."""
        return int(time.time() // self.window_seconds) * self.window_seconds

    def allow_request(self, client_id: str) -> bool:
        """
        Checks if a request from the given client_id is allowed.

        Args:
            client_id (str): A unique identifier for the client (e.g., IP, User ID, API Key).

        Returns:
            bool: True if the request is allowed, False otherwise.
        """
        with self._lock:
            current_window_start = self._get_current_window_start()

            client_data = self.client_requests[client_id]

            # If we have moved into a new window, reset the counter.
            # reset_time stores the start of the window the count belongs to.
            if client_data['reset_time'] < current_window_start:
                client_data['count'] = 0
                client_data['reset_time'] = current_window_start

            # Allow the request only if the client is still under capacity.
            if client_data['count'] < self.capacity:
                client_data['count'] += 1
                return True
            return False


# Example usage (single process):
#   limiter = InMemoryFixedWindowRateLimiter(capacity=100, window_seconds=60)
#   if limiter.allow_request("client-123"):
#       ...  # handle the request
#   else:
#       ...  # return HTTP 429 Too Many Requests

API Rate Limiter: Comprehensive Overview and Implementation Guide

This document provides a comprehensive overview of API Rate Limiting, detailing its purpose, mechanisms, common strategies, and critical considerations for successful implementation. This is a crucial component for robust and scalable API ecosystems.


1. Introduction to API Rate Limiting

API Rate Limiting is a control mechanism that restricts the number of requests a user or client can make to an API within a given timeframe. It acts as a gatekeeper, ensuring fair usage, preventing abuse, and maintaining the stability and performance of the API service.

Key Objectives:

  • Prevent Abuse & DDoS Attacks: Thwart malicious attempts to overwhelm the API with excessive requests, such as Denial-of-Service (DoS) or Distributed Denial-of-Service (DDoS) attacks.
  • Ensure Fair Usage: Distribute API resources equitably among all consumers, preventing a single user from monopolizing resources and degrading service for others.
  • Protect Backend Systems: Safeguard databases, microservices, and other backend infrastructure from being overloaded, preventing performance degradation or crashes.
  • Manage Costs: For cloud-based services, limiting requests can help control operational costs associated with compute, bandwidth, and database usage.
  • Monetization & Tiering: Enable differentiated service levels (e.g., free vs. paid tiers with varying rate limits) for API monetization strategies.
  • Improve User Experience: By maintaining API stability, rate limiting indirectly contributes to a more reliable and responsive experience for legitimate users.

2. How API Rate Limiting Works (Core Mechanism)

At its core, an API Rate Limiter intercepts incoming requests and checks them against predefined rules.

  1. Identify the Client: The rate limiter first identifies the client making the request (e.g., via IP address, API key, user ID, or authentication token).
  2. Track Usage: For the identified client, it tracks the number of requests made within a specific time window.
  3. Apply Rules: It compares the current request count against the configured rate limits for that client.
  4. Decision:

* If the client is within the allowed limit, the request is permitted to proceed to the API backend, and the usage count is updated.

* If the client has exceeded the allowed limit, the request is blocked, and an appropriate error response (typically HTTP 429 Too Many Requests) is returned to the client.

  5. Reset: After the specified time window expires, the usage count for the client is reset, allowing them to make new requests.
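As a concrete illustration, the five steps above can be sketched as a single in-memory check function (a minimal single-process sketch; the names `check_request` and `_usage` are ours, not from any library):

```python
import time

# Step-by-step sketch of the identify -> track -> apply -> decide -> reset loop.
_usage = {}  # client_id -> {"count": int, "window_start": float}

def check_request(client_id: str, limit: int = 100, window: int = 60) -> bool:
    """Return True if the request is allowed, False if it should receive a 429."""
    now = time.time()
    record = _usage.get(client_id)          # Step 1: the caller identified the client.
    # Step 5: reset the count once the previous window has expired.
    if record is None or now - record["window_start"] >= window:
        record = {"count": 0, "window_start": now}
        _usage[client_id] = record
    # Steps 3-4: compare usage against the configured limit and decide.
    if record["count"] < limit:
        record["count"] += 1                # Step 2: track usage.
        return True
    return False
```

In a real deployment this check would run in middleware or at an API gateway, before the request reaches the backend.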

3. Common Rate Limiting Strategies/Algorithms

Different algorithms handle request tracking and limit enforcement with varying characteristics. Choosing the right one depends on the specific requirements for fairness, resource usage, and complexity.

3.1. Fixed Window Counter

  • Mechanism: Divides time into fixed-size windows (e.g., 1 minute). Each window has a counter. When a request comes, the counter for the current window is incremented. If it exceeds the limit, the request is blocked.
  • Pros: Simple to implement, low memory usage.
  • Cons:

* Bursting Problem: Allows a client to make a full burst of requests at the very end of one window and another full burst at the very beginning of the next, effectively doubling the rate within a short period around the window boundary.

* Less fair during high traffic.

3.2. Sliding Window Log

  • Mechanism: Stores a timestamp for every request made by a client. To check a new request, it counts all timestamps within the last N seconds/minutes. If the count exceeds the limit, the request is blocked. Old timestamps are purged.
  • Pros: Highly accurate and fair, as it considers the exact time of each request. Avoids the "bursting problem" of Fixed Window.
  • Cons:

* High memory usage, as it stores individual timestamps, especially for high-volume clients.

* More complex to implement.

3.3. Sliding Window Counter

  • Mechanism: A hybrid approach combining Fixed Window and Sliding Window Log. It uses two fixed windows: the current one and the previous one. When a request arrives, it calculates an estimated count by weighting the current window's count and a fraction of the previous window's count based on how much of the current window has elapsed.
  • Pros:

* Mitigates the "bursting problem" of Fixed Window.

* More memory efficient than Sliding Window Log.

* Better fairness than Fixed Window.

  • Cons: Provides an approximation rather than an exact count, so it is not perfectly accurate, but it is often good enough in practice.
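The weighted estimate described above can be written out directly (a sketch; function names are illustrative). For example, with a limit of 100 requests/minute, 84 requests in the previous window, 36 in the current one, and 25% of the current window elapsed, the estimate is 84 × 0.75 + 36 = 99, so the request is still allowed:

```python
def sliding_window_estimate(prev_count: int, curr_count: int,
                            elapsed_fraction: float) -> float:
    """Estimate the number of requests in the sliding window.

    elapsed_fraction: how much of the current fixed window has passed (0.0-1.0).
    The previous window contributes only the portion that still overlaps the
    sliding window, i.e. (1 - elapsed_fraction) of its count.
    """
    return prev_count * (1.0 - elapsed_fraction) + curr_count

def allow(prev_count: int, curr_count: int,
          elapsed_fraction: float, limit: int) -> bool:
    """True if the estimated count is still under the limit."""
    return sliding_window_estimate(prev_count, curr_count, elapsed_fraction) < limit
```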

3.4. Token Bucket

  • Mechanism: Imagine a bucket with a fixed capacity for "tokens." Tokens are added to the bucket at a constant rate. Each incoming request consumes one token.

* If the bucket contains enough tokens, a token is removed, and the request is processed.

* If the bucket is empty, the request is blocked.

  • Pros:

* Allows for bursts of requests (up to the bucket capacity) if tokens have accumulated.

* Smooths out traffic over time due to the constant token generation rate.

* Easy to reason about and configure (burst capacity, refill rate).

  • Cons: Can be slightly more complex to implement than simple counters.
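The token bucket mechanics can be sketched in a few lines of single-process Python (illustrative only; a distributed deployment would keep this state in a shared store such as Redis, per the considerations in section 4.5):

```python
import time

class TokenBucket:
    """Minimal single-process token bucket sketch (names are ours, not a library's)."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity          # maximum burst size, in tokens
        self.refill_rate = refill_rate    # tokens added per second
        self.tokens = capacity            # start with a full bucket
        self.last_refill = time.monotonic()

    def allow(self, now=None) -> bool:
        """Consume one token if available; `now` is injectable for testing."""
        now = time.monotonic() if now is None else now
        # Lazily add tokens for the time elapsed since the last check,
        # capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Note how capacity controls the allowed burst while refill_rate controls the sustained average rate, which is exactly the configuration trade-off described above.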

3.5. Leaky Bucket

  • Mechanism: Similar to a bucket with a hole at the bottom. Requests are added to the bucket. If the bucket is not full, the request is added. Requests "leak" out of the bucket at a constant rate, representing the processing rate.

* If a request arrives and the bucket is full, the request is dropped (blocked).

  • Pros:

* Smooths out bursts of traffic into a steady output rate, protecting backend systems from sudden spikes.

* Good for scenarios where consistent processing rate is critical.

  • Cons:

* Does not allow for bursts beyond the leak rate.

* Queued requests may experience delays, since the bucket drains only at the constant leak rate.
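For contrast with the token bucket, the leaky bucket can be sketched as a bounded queue that drains at a constant rate (a single-process illustration; class and method names are ours):

```python
import time
from collections import deque

class LeakyBucket:
    """Minimal leaky bucket sketch: requests queue up and leak at a fixed rate."""

    def __init__(self, capacity: int, leak_rate: float):
        self.capacity = capacity      # maximum number of queued requests
        self.leak_rate = leak_rate    # requests processed per second
        self.queue = deque()          # pending request identifiers
        self.last_leak = time.monotonic()

    def _leak(self, now: float) -> None:
        # Remove (i.e. process) as many requests as the elapsed time allows.
        leaked = int((now - self.last_leak) * self.leak_rate)
        if leaked > 0:
            for _ in range(min(leaked, len(self.queue))):
                self.queue.popleft()
            self.last_leak = now

    def submit(self, request_id, now=None) -> bool:
        """True if the request was queued, False if dropped (bucket full)."""
        now = time.monotonic() if now is None else now
        self._leak(now)
        if len(self.queue) < self.capacity:
            self.queue.append(request_id)
            return True
        return False
```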


4. Key Considerations for Implementation

Successful rate limiting requires careful planning and execution.

4.1. Scope of Limiting

Determine what entity the rate limit applies to:

  • IP Address: Simple, but problematic for shared IPs (NAT, proxies) or dynamic IPs.
  • API Key/Client ID: Most common and effective for identifying specific applications.
  • User ID: For authenticated users, allows personalized limits.
  • Endpoint/Resource: Apply different limits to different API endpoints based on their resource intensity (e.g., /login vs. /data/heavy-report).
  • Combined: Often, a combination (e.g., per API Key, then fallback to per IP if no API Key is present) is used.
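A combined identifier scheme like the one described in the last bullet can be sketched as a small resolver (the request shape here is a hypothetical dict, not any framework's API):

```python
def rate_limit_key(request: dict) -> str:
    """Pick the most specific client identifier available.

    Preference order: API key > authenticated user ID > client IP.
    The prefixes keep the key spaces from colliding in a shared store.
    """
    if request.get("api_key"):
        return "key:" + request["api_key"]
    if request.get("user_id"):
        return "user:" + str(request["user_id"])
    return "ip:" + request.get("remote_addr", "unknown")
```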

4.2. Granularity and Time Windows

  • Time Window: How long is the period over which requests are counted (e.g., 1 second, 1 minute, 1 hour, 1 day)?
  • Limits: How many requests are allowed within that window?
  • Multiple Limits: Consider tiered limits (e.g., 100 requests/minute, but also 1000 requests/hour to prevent long-term abuse).
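Tiered limits like these can be enforced by checking each window independently and allowing the request only if every limit holds; a minimal sketch (names are illustrative):

```python
def within_all_limits(counts: dict, limits: dict) -> bool:
    """Allow a request only if the client is under every configured limit.

    counts and limits map a window name to a value,
    e.g. counts={"minute": 37, "hour": 410}, limits={"minute": 100, "hour": 1000}.
    """
    return all(counts.get(window, 0) < limit for window, limit in limits.items())
```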

4.3. Response Headers (Standardization)

Communicate rate limit status to clients using standard HTTP headers:

  • X-RateLimit-Limit: The maximum number of requests allowed in the current time window.
  • X-RateLimit-Remaining: The number of requests remaining in the current time window.
  • X-RateLimit-Reset: The time (usually a Unix timestamp, or the number of seconds until reset) when the current rate limit window resets and more requests will be allowed.
  • HTTP Status Code: Use 429 Too Many Requests when a client exceeds the limit. Include a Retry-After header indicating how long the client should wait before retrying.
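Building these headers is straightforward; a sketch (the function is ours; the header names follow the widespread X-RateLimit-* convention, while an IETF draft standardizes unprefixed RateLimit-* fields):

```python
def rate_limit_headers(limit: int, remaining: int, reset_epoch: int,
                       retry_after=None) -> dict:
    """Build conventional rate-limit response headers as a plain dict."""
    headers = {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(max(0, remaining)),
        "X-RateLimit-Reset": str(reset_epoch),
    }
    if retry_after is not None:  # typically included only on 429 responses
        headers["Retry-After"] = str(retry_after)
    return headers
```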

4.4. Error Handling and User Experience

  • Clear Error Messages: When a 429 is returned, the response body should clearly explain that the rate limit has been exceeded and advise the client to check Retry-After or X-RateLimit-Reset headers.
  • Graceful Degradation: For non-critical requests, clients should be encouraged to implement exponential backoff and retry logic.
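Client-side exponential backoff with jitter can be sketched as follows (this is the "full jitter" variant, where each delay is drawn uniformly from [0, min(cap, base * 2^attempt)]; the function name is illustrative):

```python
import random

def backoff_delays(base: float = 1.0, cap: float = 60.0, attempts: int = 5,
                   rng=None):
    """Compute a list of retry delays using exponential backoff with full jitter.

    rng is injectable (any random.Random) so the schedule can be made
    deterministic in tests.
    """
    rng = rng or random.Random()
    return [rng.uniform(0.0, min(cap, base * (2 ** attempt)))
            for attempt in range(attempts)]
```

Jitter spreads retries out so that many clients hitting the same 429 do not all retry at the same instant.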

4.5. Distributed Systems Challenges

  • Centralized State: In a distributed microservices environment, rate limit counters need to be stored in a centralized, highly available, and fast data store (e.g., Redis, Memcached) to ensure consistency across multiple instances of the API gateway/rate limiter.
  • Race Conditions: Implement atomic operations (e.g., INCR in Redis) to prevent race conditions when multiple requests simultaneously try to update a counter.
  • Synchronization: Ensure all instances of the rate limiter are synchronized with the same rules and state.

4.6. Monitoring and Alerting

  • Metrics: Track rate limit hits, blocked requests, and remaining requests.
  • Dashboards: Visualize rate limit usage patterns to identify potential issues or abusive behavior.
  • Alerts: Set up alerts for sustained high rate limit hits or unusual spikes, indicating potential attacks or misbehaving clients.

4.7. Configuration and Management

  • Dynamic Configuration: Ideally, rate limits should be configurable without requiring a redeployment.
  • API for Management: Consider an administrative API or UI for managing rate limits, whitelists, and blacklists.

4.8. Whitelisting/Bypassing

  • Internal Services: Allow internal services or trusted partners to bypass rate limits.
  • Admin Access: Provide higher limits or no limits for administrative tools.
  • Specific Endpoints: Some non-resource-intensive endpoints might not require strict rate limits.

5. Best Practices for API Rate Limiting

  • Start Conservatively, Iterate: Begin with reasonable, slightly conservative limits and adjust based on actual usage patterns observed through monitoring.
  • Communicate Clearly: Document your rate limiting policies thoroughly in your API documentation, including limits, reset times, and expected error responses.
  • Provide Meaningful Headers: Always include X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset headers in all responses, not just 429s, so clients can proactively manage their usage.
  • Use Retry-After: When returning a 429 Too Many Requests, always include a Retry-After header to guide clients on when to retry.
  • Implement Exponential Backoff on Client Side: Encourage client developers to implement exponential backoff with jitter for retries to avoid overwhelming the API further.
  • Layered Approach: Combine rate limiting with other security measures like WAFs, API gateways, and robust authentication/authorization.
  • Consider Bursting: If your use case requires allowing temporary spikes, consider algorithms like Token Bucket or Sliding Window Counter that handle bursts gracefully.
  • Edge/Gateway Deployment: Deploy rate limiters at the edge of your network (e.g., API Gateway, Load Balancer) to block excessive requests before they reach your backend services.
  • Regular Review: Periodically review and adjust your rate limits based on API evolution, traffic growth, and user feedback.

6. Conclusion

API Rate Limiting is an indispensable tool for building resilient, scalable, and secure API services. By carefully selecting the appropriate algorithms, considering deployment challenges, and adhering to best practices, organizations can effectively protect their infrastructure, ensure fair resource distribution, and maintain a high quality of service for all API consumers. Implementing a well-thought-out rate limiting strategy is a critical step in fostering a healthy and sustainable API ecosystem.
