API Rate Limiter

Workflow Execution Summary

Workflow Name: API Rate Limiter

Category: Development

Description: Test run

Topic: AI Technology

Execution Time Allotment: 5 min (+100 cr)

This report details the critical aspects of implementing API rate limiting specifically within the context of AI technology. Given the unique computational demands, cost implications, and potential for abuse associated with AI models, robust rate limiting is paramount for maintaining service quality, managing resources, and ensuring fair usage.


Introduction to API Rate Limiting for AI Technology

API Rate Limiting is a control mechanism that restricts the number of requests a user or client can make to an API within a given timeframe. For AI technologies, this mechanism is not merely about preventing server overload but also about managing expensive computational resources (GPUs, TPUs), controlling operational costs, ensuring fair access to scarce resources, and protecting against various forms of abuse, including denial-of-service (DoS) attacks, data scraping, and model exploitation.

The dynamic nature and often high computational cost of AI model inferences make intelligent rate limiting an indispensable component of any production-grade AI API.


Why Rate Limit AI APIs?

Implementing rate limiting for AI APIs offers several critical benefits:

  1. Resource Protection: AI model inferences can be extremely resource-intensive, consuming significant CPU, GPU, memory, and network bandwidth. Rate limiting prevents a single client or a small group of clients from monopolizing these resources, ensuring the API remains responsive for all users.
  2. Cost Management: Cloud-based AI infrastructure (e.g., GPU instances, specialized AI accelerators) is often billed based on usage. Uncontrolled API access can lead to unexpected and exorbitant operational costs. Rate limiting directly helps control these expenditures.
  3. Fair Usage & Quality of Service (QoS): It ensures that all legitimate users receive a reasonable share of the API's capacity, preventing "noisy neighbor" issues where one user's excessive requests degrade performance for others. This maintains a consistent and predictable quality of service.
  4. Abuse Prevention:

* DDoS Attacks: Mitigates distributed denial-of-service attacks aimed at overwhelming the AI service.

* Data Scraping: Prevents automated bots from excessively querying the API to extract data or model outputs for unauthorized purposes.

* Model Exploitation: Limits the ability of malicious actors to probe or reverse-engineer AI models by making an excessive number of queries.

  5. System Stability: Prevents cascading failures by protecting downstream services (databases, message queues, other microservices) from being overwhelmed by a flood of requests originating from the AI API layer.
  6. Operational Insights: Provides valuable data on API usage patterns, helping identify popular endpoints, peak usage times, and potential areas for scaling or optimization.

Common Rate Limiting Strategies

Several algorithms exist for implementing API rate limits, each with its strengths and weaknesses:

  1. Fixed Window Counter:

* How it works: A counter is maintained for a fixed time window (e.g., 60 seconds). All requests within that window increment the counter. Once the window expires, the counter resets.

* Pros: Simple to implement and understand.

* Cons: Can lead to "burstiness" at the window boundaries (e.g., a client can make N requests at t=59s and another N requests at t=61s, effectively doubling the rate).

* AI Context: Might be acceptable for low-volume, non-critical AI endpoints.
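A minimal in-memory sketch of a fixed window counter (class and parameter names are illustrative, not from a specific library):

```python
import time

class FixedWindowLimiter:
    """Allow at most `limit` requests per client per fixed `window` seconds."""
    def __init__(self, limit, window=60):
        self.limit = limit
        self.window = window
        self.counts = {}  # client_id -> (window_start, count)

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        window_start = int(now // self.window) * self.window
        start, count = self.counts.get(client_id, (window_start, 0))
        if start != window_start:       # a new window has begun: reset the counter
            start, count = window_start, 0
        if count >= self.limit:
            return False
        self.counts[client_id] = (start, count + 1)
        return True

limiter = FixedWindowLimiter(limit=3, window=60)
results = [limiter.allow("alice", now=t) for t in (0, 1, 2, 3)]
# first 3 requests allowed, 4th denied within the same window
```

Note that the counter resets entirely at the window boundary, which is exactly what allows the boundary burstiness described above.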

  2. Sliding Window Log:

* How it works: For each client, the timestamps of all their requests are stored. When a new request arrives, all timestamps older than the current window are discarded. If the number of remaining timestamps exceeds the limit, the request is denied.

* Pros: Very accurate and prevents burstiness at window edges.

* Cons: High memory consumption, as it stores a log of timestamps for each client.

* AI Context: Ideal for high-value AI APIs where precise control and smooth traffic are critical, and memory is less of a concern than accuracy.
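The log-based approach above can be sketched with a per-client deque of timestamps (names illustrative):

```python
import time
from collections import defaultdict, deque

class SlidingWindowLog:
    """Allow at most `limit` requests in any sliding `window`-second interval."""
    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        self.logs = defaultdict(deque)  # client_id -> timestamps of accepted requests

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        log = self.logs[client_id]
        while log and log[0] <= now - self.window:
            log.popleft()               # evict timestamps that fell out of the window
        if len(log) >= self.limit:
            return False
        log.append(now)
        return True

limiter = SlidingWindowLog(limit=2, window=10.0)
results = [limiter.allow("bob", now=t) for t in (0, 1, 5, 11)]
# denied at t=5 (two requests already in window), allowed again at t=11
```

The memory cost is visible here: one stored timestamp per accepted request per client.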

  3. Sliding Window Counter:

* How it works: A hybrid approach. It uses two fixed windows (current and previous) and estimates the request count for the current sliding window based on the weighted average of the two fixed windows.

* Pros: Better at preventing burstiness than Fixed Window, less memory-intensive than Sliding Window Log.

* Cons: Not perfectly accurate, as it's an estimation.

* AI Context: A good balance for many AI APIs, offering decent accuracy without excessive resource overhead.
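The weighted-average estimate can be sketched as follows (a simplified illustration; real implementations usually keep these counters in a shared store):

```python
import time

class SlidingWindowCounter:
    """Estimate the sliding-window count from the current and previous fixed windows."""
    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        self.buckets = {}  # client_id -> {window_index: count}

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        idx = int(now // self.window)
        counts = self.buckets.setdefault(client_id, {})
        # weight the previous window by the fraction of it still inside the sliding window
        elapsed = (now % self.window) / self.window
        estimate = counts.get(idx, 0) + counts.get(idx - 1, 0) * (1 - elapsed)
        if estimate >= self.limit:
            return False
        counts[idx] = counts.get(idx, 0) + 1
        for k in [k for k in counts if k < idx - 1]:
            del counts[k]               # only two windows are ever needed
        return True

limiter = SlidingWindowCounter(limit=2, window=10.0)
results = [limiter.allow("carol", now=t) for t in (0, 1, 2, 15, 16)]
```

Only two counters per client are kept, which is the memory saving over the log approach.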

  4. Token Bucket:

* How it works: Requests consume "tokens" from a bucket. Tokens are added to the bucket at a fixed rate. If the bucket is empty, requests are denied until new tokens are added. The bucket has a maximum capacity, allowing for some burstiness up to that capacity.

* Pros: Allows for controlled bursts (up to bucket capacity) while maintaining a steady average rate.

* Cons: Can be slightly more complex to implement than simple counters.

* AI Context: Excellent for AI APIs that might experience legitimate, short bursts of activity (e.g., processing a small batch of images) but need to be limited over the long run.
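A token bucket can be implemented lazily, refilling on each request rather than on a timer (a sketch with illustrative names):

```python
import time

class TokenBucket:
    """Tokens refill at `rate` per second up to `capacity`; each request costs one token."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = None

    def allow(self, now=None):
        now = time.time() if now is None else now
        if self.last is not None:
            # refill based on time elapsed since the last request
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=3)
results = [bucket.allow(now=t) for t in (0, 0, 0, 0, 2)]
# a burst of 3 is absorbed, the 4th is denied, and 2 seconds later tokens have refilled
```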

  5. Leaky Bucket:

* How it works: Requests are added to a queue (the "bucket"). Requests are processed from the queue at a fixed rate (the "leak rate"). If the bucket overflows (queue is full), new requests are dropped.

* Pros: Smooths out bursty traffic into a steady stream, ideal for backend processing systems.

* Cons: Introduces latency for requests when the bucket is partially full.

* AI Context: More suitable for asynchronous AI processing jobs or internal queues where a consistent processing rate is more important than immediate response, and some queuing latency is acceptable.
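For admission control (as opposed to actual queuing), the leaky bucket reduces to tracking a queue depth that drains at the leak rate (a sketch; a real async pipeline would hold the queued requests themselves):

```python
import time

class LeakyBucket:
    """Pending work drains at `leak_rate` per second; overflow beyond `capacity` is dropped."""
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity
        self.leak_rate = leak_rate
        self.level = 0.0    # current queue depth
        self.last = None

    def offer(self, now=None):
        now = time.time() if now is None else now
        if self.last is not None:
            self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + 1 > self.capacity:
            return False    # bucket full: drop the request
        self.level += 1
        return True

bucket = LeakyBucket(capacity=2, leak_rate=1.0)
results = [bucket.offer(now=t) for t in (0, 0, 0, 1)]
# third offer at t=0 overflows; by t=1 one unit has leaked out
```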


Key Considerations for AI API Rate Limiting

When designing rate limiting for AI APIs, first decide the scope at which limits are applied; the choice of key determines who or what gets throttled:

* Per User/Client ID: Standard practice for fair usage.

* Per IP Address: Useful for unauthenticated requests or preventing abuse from shared networks.

* Per Endpoint/Model: Different AI models or endpoints might have vastly different resource requirements. Rate limit a complex LLM endpoint more aggressively than a simple image classification endpoint.

* Per Resource (e.g., GPU core-hour): Advanced rate limiting could track actual resource consumption rather than just requests.


Implementation Patterns and Technologies

Implementing API rate limiting for AI APIs can be done at various layers:

  1. API Gateway/Proxy Level:

* Description: The most common and recommended approach. Rate limiting is enforced before requests even reach your AI service.

* Technologies:

* AWS API Gateway: Built-in throttling and quota features.

* Azure API Management: Policy-based rate limiting.

* Google Cloud Apigee: Comprehensive API management with rate limiting.

* Nginx/Envoy Proxy: Can be configured to perform rate limiting based on IP, headers, or request parameters.

* Kong Gateway: Open-source API gateway with robust rate limiting plugins.

* Pros: Decouples rate limiting logic from your application code, scales well, protects your backend from traffic surges.

* Cons: May lack fine-grained control over application-specific logic (e.g., "cost units").

  2. Application Level:

* Description: Rate limiting logic is embedded directly within your AI application code.

* Technologies:

* Distributed Cache (e.g., Redis): Use Redis to store counters, timestamps, or token bucket states. This allows for distributed rate limiting across multiple instances of your AI service. Libraries like redis-py (Python) or lettuce (Java) can interact with Redis.

* In-Memory Libraries (e.g., Guava RateLimiter for Java, ratelimit for Python): Suitable for single-instance applications or for secondary, very specific rate limits within a microservice. Not distributed.

* Pros: Highly customizable, can implement complex AI-specific logic (e.g., cost-based limiting).

* Cons: Adds complexity to application code, requires careful state management for distributed systems.

  3. Hybrid Approach:

* Description: Combine gateway-level rate limiting for basic protection (e.g., per IP, overall request count) with application-level rate limiting for AI-specific, fine-grained control (e.g., per model, cost-based).

* Pros: Best of both worlds, robust and flexible.

* Cons: Increased complexity in configuration and monitoring.
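The Redis-backed application-level pattern can be sketched as a fixed-window counter keyed per client and window. This is a simplified illustration: FakeRedis below stands in for a real redis-py client (redis.Redis()), which exposes the same incr/expire calls; key names are hypothetical.

```python
import time

class FakeRedis:
    """In-memory stand-in for a redis-py client, for demonstration only."""
    def __init__(self):
        self.store = {}
    def incr(self, key):
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]
    def expire(self, key, ttl):
        pass  # a real server would delete the key after `ttl` seconds

def allow(r, client_id, limit, window=60, now=None):
    now = time.time() if now is None else now
    key = f"ratelimit:{client_id}:{int(now // window)}"  # one key per fixed window
    count = r.incr(key)       # atomic on a real Redis server, so safe across instances
    if count == 1:
        r.expire(key, window) # let stale window keys expire automatically
    return count <= limit

r = FakeRedis()
results = [allow(r, "alice", limit=2, window=60, now=t) for t in (0, 1, 2)]
```

Because INCR is atomic on the server, multiple instances of the AI service can share one consistent counter, which is the point of moving this state into Redis.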


Actionable Recommendations

  1. Start with Sensible Defaults: Implement basic rate limits (e.g., 100 requests per minute per API key) at the API Gateway level for all AI endpoints. Iterate and refine based on real-world usage data.
  2. Tiered Rate Limits: Design different rate limits for authenticated vs. unauthenticated users, and for different subscription tiers (e.g., free, developer, enterprise).
  3. Clear Error Handling: When a client hits a rate limit, return an HTTP 429 Too Many Requests status code. Include X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset headers in the response to inform clients when they can retry.
  4. Educate Your Users: Clearly document your API rate limits and how clients should handle 429 responses (e.g., implement exponential backoff).
  5. Monitor and Alert: Track rate limit hits as a key metric. Set up alerts for excessive 429 responses or unusual traffic patterns that might indicate an attack or misbehaving client.
  6. Consider Soft vs. Hard Limits:

* Soft Limits: Allow slight overages for a short period before enforcing a hard limit, useful for bursty but legitimate traffic.

* Hard Limits: Strict enforcement, requests are immediately rejected upon hitting the limit.

  7. Implement Cost-Aware Limiting (Advanced): For highly variable AI model costs, consider a system where each request consumes "credits" based on its estimated computational cost. Rate limit based on credit consumption over time rather than just request count.
  8. Utilize Distributed Caching for State: For scalable AI services, use Redis or similar distributed caches to store rate limiting counters and timestamps, ensuring consistency across multiple service instances.
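Cost-aware limiting (recommendation 7) is essentially a token bucket denominated in credits instead of requests. A sketch, where the per-model costs are assumed numbers for illustration:

```python
import time

class CreditLimiter:
    """Credits refill at `credits_per_sec` up to `max_credits`; each request
    consumes a cost proportional to its estimated compute."""
    def __init__(self, credits_per_sec, max_credits):
        self.rate = credits_per_sec
        self.capacity = max_credits
        self.credits = max_credits
        self.last = None

    def allow(self, cost, now=None):
        now = time.time() if now is None else now
        if self.last is not None:
            self.credits = min(self.capacity, self.credits + (now - self.last) * self.rate)
        self.last = now
        if cost > self.credits:
            return False
        self.credits -= cost
        return True

# Illustrative costs: a large-LLM call priced at 10x an embedding call
MODEL_COST = {"embed/text": 1, "generate/text/large": 10}
limiter = CreditLimiter(credits_per_sec=1.0, max_credits=10)
results = [limiter.allow(MODEL_COST["generate/text/large"], now=0),  # drains the bucket
           limiter.allow(MODEL_COST["embed/text"], now=0),           # denied: no credits
           limiter.allow(MODEL_COST["embed/text"], now=2)]           # 2 credits refilled
```

One expensive LLM call and ten cheap embedding calls are then interchangeable under the same limit, which a plain request counter cannot express.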

Structured Data: Example Rate Limiting Policies for AI APIs

Here's an example of how you might structure tiered rate limiting policies for different AI API endpoints and user types:

| Policy Name     | Target Scope       | Endpoint/Model       | Algorithm              | Limit (Req/Min) | Burst (Tokens) | Notes                                                  |
| :-------------- | :----------------- | :------------------- | :--------------------- | :-------------- | :------------- | :----------------------------------------------------- |
| Free Tier       | Unauthenticated IP | /classify/image      | Fixed Window           | 5               | N/A            | Basic demo access, low priority.                       |
| Free Tier       | Authenticated User | /generate/text/small | Token Bucket           | 1               | 3              | Limits for small text generation, allows small bursts. |
| Developer Tier  | Authenticated User | /classify/image      | Sliding Window Counter | 100             | N/A            | Standard image classification, higher volume.          |
| Developer Tier  | Authenticated User | /generate/text/large | Token Bucket           | 5               | 10             | More expensive LLM, controlled bursts.                 |
| Developer Tier  | Authenticated User | /embed/text          | Sliding Window Counter | 500             | N/A            | High-volume embedding, less computationally intensive. |
| Enterprise Tier | Authenticated User | /generate/text/large | Token Bucket           | 50              | 100            | High-throughput LLM for enterprise clients.            |
| Enterprise Tier | Authenticated User | /custom/model/{id}   | Token Bucket           | 20              | 40             | Custom, potentially very expensive models.             |
| System-Wide     | All Requests       | Global (Gateway)     | Fixed Window           | 5000            | N/A            | Overall protection against overwhelming the gateway.   |

Headers to Include in Response:

HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1678886400 (Unix timestamp for when the limit resets)
Retry-After: 60 (seconds to wait before retrying)

{
  "error": "Too Many Requests",
  "message": "You have exceeded your rate limit. Please try again after 60 seconds."
}
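Clients should honor a 429 by backing off; a sketch of exponential backoff with jitter (FakeResp and the callable interface are illustrative stand-ins for a real HTTP client):

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry `request_fn` on 429, sleeping base_delay * 2**attempt (or the
    server's Retry-After value when present), plus a little jitter."""
    for attempt in range(max_retries):
        resp = request_fn()
        if resp.status_code != 429:
            return resp
        retry_after = resp.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else base_delay * (2 ** attempt)
        time.sleep(delay + random.uniform(0, 0.1))  # jitter avoids thundering herds
    return resp  # still throttled after all retries

class FakeResp:
    """Minimal stand-in for an HTTP response object."""
    def __init__(self, status_code, headers=None):
        self.status_code = status_code
        self.headers = headers or {}

# First call is throttled (with Retry-After: 0), second succeeds
calls = iter([FakeResp(429, {"Retry-After": "0"}), FakeResp(200)])
resp = call_with_backoff(lambda: next(calls), base_delay=0.01)
```

Honoring Retry-After when the server sends it is preferable to a blind doubling schedule, since the server knows exactly when the window resets.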

Monitoring and Best Practices

  • Key Metrics to Track:

* rate_limit_hits_total: Total number of requests denied due to rate limits.

* rate_limit_by_client_id: Breakdown of hits per client.

* rate_limit_by_endpoint: Breakdown of hits per API endpoint.

* rate_limit_remaining_gauge: A gauge showing the remaining requests for active users (for proactive monitoring).

* api_latency_p99: To ensure rate limiting isn't causing unexpected latency spikes for valid requests.

  • Alerting: Set up alerts for:

* Spikes in rate_limit_hits_total.

* Specific clients consistently hitting limits (could indicate misbehavior or legitimate high usage needing an upgrade).

* Unusual patterns (e.g., sudden drop in successful requests, increase in 429s).

  • Regular Review: Periodically review your rate limiting policies. As your AI models evolve, their computational costs might change, or user behavior might shift. Adjust limits accordingly.
  • Load Testing: Include rate limiting scenarios in your load testing to understand how your system behaves under throttled conditions and to validate your 429 responses.

Conclusion

API Rate Limiting is an essential defense mechanism and resource management tool for AI APIs. By carefully selecting and implementing appropriate strategies, considering AI-specific challenges like computational cost and variable latency, and providing clear communication to users, you can build robust, cost-effective, and fair AI services that scale efficiently and resist abuse. This comprehensive approach ensures the long-term stability and success of your AI-powered applications.
