API Rate Limiter

This document provides a comprehensive, professional deliverable for the "API Rate Limiter" step of your workflow. It includes an overview, a discussion of common algorithms, and production-ready code examples in Python, demonstrating both in-memory and distributed (Redis-based) implementations.


API Rate Limiter: Comprehensive Implementation & Best Practices

1. Introduction: The Importance of API Rate Limiting

API rate limiting is a critical component for managing and protecting web services. It controls the number of requests a user or client can make to an API within a given timeframe. Implementing effective rate limiting offers several key benefits: security against abuse (DDoS and brute-force mitigation), protection of backend resources, fair usage across clients, predictable performance, and cost control.

2. Core Concepts & Common Rate Limiting Algorithms

Several algorithms are used to implement rate limiting, each with its own trade-offs regarding accuracy, memory usage, and complexity.

2.1. Fixed Window Counter

* Concept: Divides time into fixed-size windows (e.g., 1 minute). Each request increments a counter for the current window. If the counter exceeds the limit, requests are blocked.

* Pros: Simple to implement, low memory usage.

* Cons: Prone to "burstiness" at window edges. For example, a user could make N requests just before a window ends and N more requests just after it begins, effectively making 2N requests in a very short period.
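A minimal single-process sketch of the Fixed Window Counter (class and method names are illustrative; the optional `now` parameter is an assumption added for deterministic testing):

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Fixed Window Counter: one request counter per (client, window)."""

    def __init__(self, limit: int, window_seconds: int):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(int)  # (client_id, window_start) -> count

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        # Round down to the start of the current fixed window.
        window_start = int(now // self.window) * self.window
        key = (client_id, window_start)
        if self.counts[key] >= self.limit:
            return False
        self.counts[key] += 1
        return True
```

Note how the burst problem shows up here: a client allowed N requests at 0:59 and N more at 1:00 has made 2N requests in two seconds, yet neither window's counter exceeded the limit.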

2.2. Sliding Window Log

* Concept: Stores a timestamp for every request made by a client. To check a request, it counts how many timestamps fall within the current sliding window (e.g., the last 60 seconds). Old timestamps are removed.

* Pros: Highly accurate, handles bursts gracefully.

* Cons: High memory usage, especially for high request volumes, as every request's timestamp must be stored.
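The same interface for the Sliding Window Log, trading memory for accuracy (a sketch; names and the `now` parameter are illustrative):

```python
import time
from collections import defaultdict, deque

class SlidingLogLimiter:
    """Sliding Window Log: keep a timestamp per request, prune old ones."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.logs = defaultdict(deque)  # client_id -> deque of timestamps

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        log = self.logs[client_id]
        # Drop timestamps that fell out of the sliding window.
        while log and log[0] <= now - self.window:
            log.popleft()
        if len(log) >= self.limit:
            return False
        log.append(now)
        return True
```

The memory cost is visible directly: one deque entry per in-window request, per client.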

2.3. Sliding Window Counter

* Concept: A hybrid approach that addresses the burstiness of Fixed Window while reducing memory overhead compared to Sliding Window Log. It uses two fixed windows: the current window and the previous window. The count for the current window is taken directly, while a weighted portion of the previous window's count is added, based on how much of the current window has elapsed.

* Pros: Good balance of accuracy and memory efficiency. Less susceptible to edge-case burstiness than Fixed Window.

* Cons: More complex than Fixed Window, requires careful calculation of the weighted average.
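A worked example of the weighted count (values are illustrative): with a 60-second window, 15 seconds into the current window, the sliding window still overlaps 75% of the previous fixed window:

```python
window_seconds = 60
elapsed = 15           # time since the current fixed window started
previous_count = 40    # requests in the previous fixed window
current_count = 10     # requests so far in the current fixed window

# Weight the previous window by the overlap fraction.
weight = 1 - elapsed / window_seconds        # 0.75
effective_count = current_count + previous_count * weight
# effective_count = 10 + 40 * 0.75 = 40.0
```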

2.4. Token Bucket

* Concept: Imagine a bucket that holds "tokens." Tokens are added to the bucket at a fixed rate. Each request consumes one token. If the bucket is empty, the request is denied. The bucket has a maximum capacity, preventing an infinite buildup of tokens.

* Pros: Allows for some burstiness (up to the bucket capacity), simple to understand.

* Cons: Does not guarantee a maximum rate over an arbitrary interval, only average rate.
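A Token Bucket sketch (illustrative names; the refill is computed lazily from elapsed time rather than by a background timer):

```python
import time

class TokenBucket:
    """Token Bucket: tokens refill at `rate` per second up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full, allowing an initial burst
        self.last = None

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last is not None:
            # Refill based on elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```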

2.5. Leaky Bucket

* Concept: Similar to a bucket, but requests are added to the bucket (queue) and "leak out" (are processed) at a constant rate. If the bucket is full, new requests are dropped.

* Pros: Smooths out request bursts, ensures a constant output rate.

* Cons: Can introduce latency if the bucket fills up, and requests may be dropped during sudden bursts that exceed capacity even when the average rate is low.
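A Leaky Bucket sketch in its common "meter" form, where the queue is modeled as a level that drains at a constant rate (names are illustrative):

```python
import time

class LeakyBucket:
    """Leaky Bucket as a meter: the level drains at `leak_rate` units per
    second; each request adds one unit; requests are dropped when full."""

    def __init__(self, capacity: float, leak_rate: float):
        self.capacity = capacity
        self.leak_rate = leak_rate
        self.level = 0.0
        self.last = None

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last is not None:
            # Leak out whatever drained since the last request.
            self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + 1 > self.capacity:
            return False  # bucket full: drop the request
        self.level += 1
        return True
```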

For this deliverable, we will focus on implementing the Sliding Window Counter algorithm due to its practical balance of accuracy and resource efficiency, and also briefly touch on the Sliding Window Log using Redis, which is very common in distributed systems.

3. Implementation: Sliding Window Counter Rate Limiter (Python)

We will provide two implementations:

  1. In-Memory: Suitable for single-instance applications or development.
  2. Distributed (with Redis): Essential for scalable, production-grade microservices or multi-instance deployments.

3.1. In-Memory Implementation

This implementation uses a simple dictionary to store request counts. It's suitable for single-process applications where state doesn't need to be shared across multiple instances.

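A sketch of the in-memory Sliding Window Counter along these lines (the class name and the optional `now` parameter are assumptions for illustration and testing):

```python
import time
import threading
from collections import defaultdict

class InMemoryRateLimiter:
    """Sliding Window Counter rate limiter for a single process."""

    def __init__(self, requests_per_window: int, window_seconds: int):
        if requests_per_window <= 0 or window_seconds <= 0:
            raise ValueError("requests_per_window and window_seconds must be positive.")
        self.requests_per_window = requests_per_window
        self.window_seconds = window_seconds
        # client_id -> {window_start_timestamp: request_count}
        self.client_windows = defaultdict(lambda: defaultdict(int))
        self.lock = threading.Lock()  # basic thread safety

    def allow_request(self, client_id, now=None):
        current_time = time.time() if now is None else now
        # Round down to the nearest multiple of window_seconds.
        current_window_start = int(current_time // self.window_seconds) * self.window_seconds
        previous_window_start = current_window_start - self.window_seconds

        with self.lock:
            self._cleanup_old_windows(client_id, previous_window_start)
            windows = self.client_windows[client_id]
            current_window_count = windows[current_window_start]
            previous_window_count = windows.get(previous_window_start, 0)

            # Fraction of the current fixed window already elapsed.
            elapsed = (current_time - current_window_start) / self.window_seconds
            # Weight the previous window by the portion of the sliding
            # window that still overlaps it.
            effective_count = current_window_count + previous_window_count * (1 - elapsed)

            if effective_count < self.requests_per_window:
                windows[current_window_start] += 1
                return True
            return False

    def _cleanup_old_windows(self, client_id, oldest_window_to_keep):
        # Drop windows older than the previous one to bound memory growth.
        windows = self.client_windows[client_id]
        for start in [s for s in windows if s < oldest_window_to_keep]:
            del windows[start]
```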
**Explanation:**

*   **`__init__(...)`**: Initializes the rate limiter with the `requests_per_window` limit and the `window_seconds` duration. It uses `defaultdict` to automatically create nested dictionaries for clients and their window counts. A `threading.Lock` is included for basic thread safety if used in a multi-threaded application.
*   **`allow_request(client_id)`**:
    *   `current_time`: Gets the current Unix timestamp.
    *   `current_window_start`, `previous_window_start`: Calculates the start timestamp for the current and previous fixed windows. This is done by rounding down the current time to the nearest multiple of `window_seconds`.
    *   **Lock**: Ensures that read/write operations on `self.client_windows` are atomic in a multi-threaded context.
    *   `_cleanup_old_windows`: An internal helper to remove counts for windows that are no longer relevant, preventing unbounded memory growth.
    *   `current_window_count`, `previous_window_count`: Retrieves the request counts for the current and previous fixed windows.
    *   `overlap_percentage`: Calculates the proportion of the current window that has already elapsed since its start.
    *   `effective_count`: This is the core of the Sliding Window Counter. It sums the requests made in the current fixed window with a *weighted fraction* of requests from the previous fixed window. The weighting accounts for the overlap between the logical sliding window and the previous fixed window.
    *   **Decision**: If `effective_count` is below the `requests_per_window` limit, the request is allowed, and the `current_window_count` is incremented. Otherwise, it's denied.
*   **`_cleanup_old_windows(...)`**: Iterates through stored windows for a client and deletes any that are older than the `oldest_window_to_keep` timestamp.

3.2. Distributed Implementation (with Redis)

For production environments with multiple application instances, an in-memory solution is insufficient because each instance would have its own independent rate limit state. A distributed cache like Redis is ideal for sharing state across instances.

This implementation uses Redis's `ZSET` (Sorted Set) data structure, which is highly efficient for managing timestamps and counting elements within a range.


API Rate Limiter: Architecture Planning Study Plan

This document outlines a comprehensive study plan for understanding, designing, and implementing robust API Rate Limiters. This plan is tailored to equip you with the necessary knowledge and practical skills for architecting scalable and efficient rate limiting solutions, directly addressing the "plan_architecture" step of the workflow.


1. Learning Objectives

By the end of this study plan, you will be able to:

  • Understand the Fundamentals: Grasp the core concepts, necessity, and various use cases of API rate limiting in modern systems.
  • Master Rate Limiting Algorithms: Identify, explain, and differentiate between common rate limiting algorithms (Fixed Window Counter, Sliding Log, Sliding Window Counter, Token Bucket, Leaky Bucket), including their pros, cons, and appropriate use cases.
  • Design Scalable Architectures: Develop high-level and detailed architectural designs for distributed API rate limiters, considering factors like consistency, fault tolerance, and high availability.
  • Select Appropriate Technologies: Choose suitable data structures, storage solutions (e.g., Redis, in-memory caches), and deployment strategies (e.g., API Gateway, service mesh, application-level) for a given rate limiting scenario.
  • Implement Practical Solutions: Write functional code for various rate limiting algorithms and integrate them into a basic service.
  • Evaluate and Optimize: Analyze the performance, resource utilization, and effectiveness of different rate limiting implementations and propose optimization strategies.
  • Address Edge Cases: Understand and plan for challenges such as burst traffic, distributed environments, and multi-region deployments.

2. Weekly Schedule

This 4-week study plan is designed for a focused, hands-on learning experience. Each week builds upon the previous, culminating in a practical architectural design and prototype.

Week 1: Fundamentals & Basic Algorithms

  • Focus: Introduction to rate limiting, its importance, and foundational algorithms.
  • Topics:

* What is API Rate Limiting? Why do we need it? (Security, resource protection, fair usage, cost control).

* Use cases and common scenarios.

* Fixed Window Counter: Concept, mechanism, implementation details, advantages, disadvantages (burst issue at window edges).

* Sliding Log: Concept, mechanism, implementation details, advantages (accuracy), disadvantages (memory usage for logs).

  • Activities:

* Read introductory articles and watch conceptual videos.

* Implement Fixed Window Counter and Sliding Log algorithms in your chosen programming language (e.g., Python, Go, Node.js).

* Experiment with different rate limits and test their behavior.

Week 2: Advanced Algorithms & Data Structures

  • Focus: Delving into more sophisticated and production-ready rate limiting algorithms.
  • Topics:

* Sliding Window Counter: Concept, mechanism (hybrid approach), implementation details, advantages (balancing accuracy and memory), disadvantages.

* Token Bucket: Concept (tokens, refill rate, bucket capacity), mechanism, implementation details, advantages (burst handling, simplicity), disadvantages.

* Leaky Bucket: Concept (fixed output rate, queue), mechanism, implementation details, advantages (smooth traffic), disadvantages (queue overflow).

* Data structures for rate limiting: Redis INCR, sorted sets (ZADD, ZRANGE), hashes, EXPIRE commands.

  • Activities:

* Implement Sliding Window Counter, Token Bucket, and Leaky Bucket algorithms.

* Experiment with Redis as a backend for storing rate limiting data (e.g., using INCR for counters, ZADD/ZRANGE for logs).

* Compare the performance and resource usage of different implementations.
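As a starting point for the Redis experiments, a minimal fixed-window counter built on INCR and EXPIRE might look like this (a sketch; `r` is any client exposing incr/expire, such as redis-py's redis.Redis):

```python
def fixed_window_allow(r, client_id: str, limit: int, window_seconds: int) -> bool:
    """Fixed-window counter using Redis INCR + EXPIRE."""
    key = f"rl:{client_id}"
    count = r.incr(key)                # atomic increment; creates the key at 1
    if count == 1:
        r.expire(key, window_seconds)  # first hit in the window sets the TTL
    return count <= limit
```

The TTL makes the window reset itself: when the key expires, the next INCR starts a fresh count.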

Week 3: System Design & Scalability

  • Focus: Architecting a distributed, scalable, and resilient API rate limiter.
  • Topics:

* Challenges of Distributed Rate Limiting: Consistency, clock synchronization, network latency, race conditions.

* Choosing a Storage Layer: In-memory vs. distributed cache (Redis, Memcached) vs. database. Pros and cons.

* Deployment Strategies:

* API Gateway Level: (e.g., Nginx, Kong, AWS API Gateway) - benefits, limitations.

* Service Mesh Level: (e.g., Istio, Linkerd) - benefits, limitations.

* Application Level: Implementing rate limiting within the service itself - benefits, limitations.

* Scalability, High Availability, and Fault Tolerance: Redundancy, sharding, replication.

* Handling Edge Cases: Large-scale bursts, multi-region deployments, sticky sessions.

  • Activities:

* Read case studies from companies like Stripe, Uber, Netflix on their rate limiting solutions.

* Draft a high-level architecture diagram for a distributed API rate limiter for a hypothetical large-scale application.

* Justify your choice of algorithm, storage, and deployment strategy.

Week 4: Implementation & Evaluation

  • Focus: Bringing it all together: building a functional prototype, testing, and refining the design.
  • Topics:

* Refining the chosen algorithm and architecture from Week 3.

* Implementing a full-fledged rate limiter service (e.g., a simple microservice using Redis).

* Testing Strategies: Unit tests, integration tests, performance tests (load testing).

* Monitoring and Alerting: Key metrics to track (rate limit hits, throttled requests, errors), setting up alerts.

* Review and Optimization: Identifying bottlenecks, improving efficiency, considering future enhancements.

  • Activities:

* Build a working prototype of your distributed API rate limiter using your chosen language and Redis.

* Write comprehensive tests for your rate limiter.

* Simulate traffic to test its behavior under load and measure its performance.

* Prepare a brief presentation or documentation outlining your design, implementation choices, and findings.


3. Recommended Resources

Leverage a mix of theoretical and practical resources to deepen your understanding.

  • Books:

* "System Design Interview – An Insider's Guide" by Alex Xu: Chapter on Rate Limiter provides a structured approach to design.

* "Designing Data-Intensive Applications" by Martin Kleppmann: Essential for understanding distributed systems, consistency, and data storage.

  • Online Articles & Blogs:

* Grokking System Design / Educative.io: "Design a Rate Limiter" course/module.

* Medium/Dev.to: Search for "API Rate Limiting Algorithms," "Distributed Rate Limiter," "Redis Rate Limiter" from engineering blogs (e.g., Stripe, Uber, Netflix, DigitalOcean).

* Redis Documentation: Explore commands like INCR, EXPIRE, ZADD, ZRANGE, pipeline for efficient rate limiting implementations.

  • Video Tutorials:

* YouTube: Search for "System Design Rate Limiter" from channels like ByteByteGo, Tushar Roy, Gaurav Sen.

* Conference Talks: Look for talks on API Gateways, microservices, and distributed systems from major tech conferences.

  • Tools & Technologies:

* Programming Language: Python, Go, Java, Node.js (choose one you are comfortable with for implementation).

* Redis: Local installation or cloud-managed Redis for hands-on distributed state management.

* HTTP Client/Server Library: For building a simple API to test the rate limiter.

* Load Testing Tools: Apache JMeter, k6, or simple Python scripts for simulating traffic.


4. Milestones

These milestones provide clear checkpoints for your progress throughout the study plan.

  • End of Week 1:

* Successful implementation and testing of Fixed Window Counter and Sliding Log algorithms.

* Clear understanding of their basic trade-offs.

  • End of Week 2:

* Successful implementation and testing of Sliding Window Counter, Token Bucket, and Leaky Bucket algorithms, potentially using Redis as a backend.

* Demonstrated ability to select an appropriate algorithm based on specific requirements.

  • End of Week 3:

* A high-level architectural design document (or presentation) for a distributed API rate limiter, including:

* Chosen primary rate limiting algorithm.

* Selected storage solution (e.g., Redis cluster).

* Deployment strategy (e.g., API Gateway with sidecars).

* Discussion of scalability and fault tolerance considerations.

  • End of Week 4:

* A functional prototype of the distributed API rate limiter based on your Week 3 design.

* Comprehensive unit and integration tests.

* Basic performance benchmarks demonstrating its behavior under load.

* A concise summary of the design choices, implementation challenges, and potential future enhancements.


5. Assessment Strategies

Regular assessment will ensure a deep understanding and practical application of the concepts.

  • Code Reviews:

* Self-Review: Critically evaluate your own code for correctness, efficiency, readability, and adherence to best practices.

* Peer Review: Exchange code with a study partner for feedback on logic, edge cases, and potential improvements.

  • Design Document Walkthroughs:

* Present your architectural designs to a mentor or peer, explaining your choices, trade-offs, and how you address potential challenges.

* Engage in Q&A to defend your design decisions and explore alternative approaches.

  • Technical Presentations:

* Deliver a short presentation on a specific algorithm, your final project, or a challenging aspect of rate limiting.

* Focus on clarity, technical accuracy, and the ability to articulate complex concepts.

```python
import time
import uuid

import redis


class RedisRateLimiter:
    """
    A distributed API Rate Limiter using Redis and the Sliding Window Log
    algorithm. A timestamp is stored for each request and counted within
    the window. This is suitable for distributed microservice architectures.
    """

    def __init__(self,
                 redis_client: redis.Redis,
                 requests_per_window: int,
                 window_seconds: int,
                 key_prefix: str = "rate_limit:"):
        """
        Initializes the Redis-based Rate Limiter.

        Args:
            redis_client (redis.Redis): An initialized Redis client instance.
            requests_per_window (int): The maximum number of requests allowed
                within the window_seconds timeframe.
            window_seconds (int): The duration of the sliding window in seconds.
            key_prefix (str): Prefix for Redis keys to avoid collisions.
        """
        if requests_per_window <= 0 or window_seconds <= 0:
            raise ValueError("Requests per window and window seconds must be positive.")
        if not isinstance(redis_client, redis.Redis):
            raise TypeError("redis_client must be an instance of redis.Redis")
        self.redis_client = redis_client
        self.requests_per_window = requests_per_window
        self.window_seconds = window_seconds
        self.key_prefix = key_prefix

    # NOTE: the source was truncated after __init__; the method below is a
    # reconstruction of the ZSET-based check described in Section 3.2.
    def allow_request(self, client_id: str) -> bool:
        """Returns True if the request is allowed, False if rate-limited."""
        key = self.key_prefix + client_id
        now = time.time()
        member = f"{now}:{uuid.uuid4().hex}"  # unique member per request
        pipe = self.redis_client.pipeline()
        pipe.zremrangebyscore(key, 0, now - self.window_seconds)  # prune old entries
        pipe.zadd(key, {member: now})                             # log this request
        pipe.zcard(key)                                           # count the window
        pipe.expire(key, self.window_seconds)                     # expire idle keys
        _, _, count, _ = pipe.execute()
        if count > self.requests_per_window:
            # Roll back so denied attempts do not consume quota.
            self.redis_client.zrem(key, member)
            return False
        return True
```


API Rate Limiter: Comprehensive Overview and Implementation Guide

This document provides a detailed, professional overview of API Rate Limiting, covering its importance, common strategies, implementation considerations, and best practices. This deliverable is designed to equip your team with a thorough understanding to effectively design and implement robust rate limiting solutions for your APIs.


1. Introduction to API Rate Limiting

API Rate Limiting is a control mechanism that restricts the number of requests a user or client can make to an API within a defined timeframe. It acts as a gatekeeper, ensuring fair usage, preventing abuse, and maintaining the stability and performance of your API infrastructure.

2. Why API Rate Limiting is Essential

Implementing effective API rate limiting offers numerous critical benefits:

  • Security & Abuse Prevention:

* DDoS Attack Mitigation: Prevents malicious actors from overwhelming your servers with an excessive volume of requests, which could lead to service disruption.

* Brute-Force Attack Prevention: Thwarts attempts to guess credentials (passwords, API keys) by limiting the number of login or authentication attempts.

* Scraping Prevention: Makes it more difficult for bots to rapidly extract large amounts of data from your services.

  • System Stability & Performance:

* Resource Protection: Safeguards your backend resources (databases, CPU, memory, network bandwidth) from being exhausted by a single or small group of clients.

* Fair Usage: Ensures that all legitimate users have equitable access to the API by preventing a few heavy users from monopolizing resources.

* Predictable Performance: Helps maintain consistent response times and service quality even under varying load conditions.

  • Cost Management:

* Reduced Infrastructure Costs: By preventing excessive usage, you can avoid over-provisioning resources, leading to lower operational costs, especially in cloud environments where you pay for consumption.

  • Monetization & Tiering:

* Service Level Agreements (SLAs): Enables the definition and enforcement of different access tiers (e.g., free vs. premium, different subscription levels) with varying rate limits, supporting monetization strategies.

* Quality of Service (QoS): Allows prioritization of requests from higher-tier customers.

3. Common Rate Limiting Strategies (Algorithms)

Choosing the right algorithm depends on your specific requirements for fairness, burst tolerance, and resource usage.

3.1. Fixed Window Counter

  • How it works: Divides time into fixed-size windows (e.g., 60 seconds). Each window has a counter. When a request arrives, the counter for the current window is incremented. If it exceeds the limit, the request is denied.
  • Pros: Simple to implement, low overhead.
  • Cons:

* Burst Problem: Allows a client to make requests at the very end of one window and the very beginning of the next, effectively doubling the rate limit for a short period.

* Edge Case Inaccuracy: Does not provide a smooth rate limit over time.

3.2. Sliding Log

  • How it works: Stores a timestamp for every request made by a client. When a new request arrives, it removes all timestamps older than the current time minus the window duration. If the number of remaining timestamps exceeds the limit, the request is denied.
  • Pros: Highly accurate, no burst problem at window boundaries.
  • Cons:

* High Memory Usage: Requires storing a log of timestamps for each client, which can be significant for high-traffic APIs.

* Performance Overhead: Deleting and adding timestamps can be computationally intensive at scale.

3.3. Sliding Window Counter

  • How it works: A hybrid approach combining Fixed Window and Sliding Log for better accuracy with less memory. It uses two fixed windows: the current window and the previous window. When a request comes in, it calculates a weighted average of the counts from the current and previous windows based on how much of the current window has passed.
  • Pros: Better accuracy than Fixed Window, less memory-intensive than Sliding Log. Mitigates the burst problem significantly.
  • Cons: More complex to implement than Fixed Window. Still has a slight potential for inaccuracy compared to Sliding Log, but often a good compromise.

3.4. Token Bucket

  • How it works: A "bucket" holds a certain number of "tokens." Tokens are added to the bucket at a fixed rate. Each incoming request consumes one token. If the bucket is empty, the request is denied. The bucket has a maximum capacity, preventing unlimited token accumulation.
  • Pros:

* Allows Bursts: Clients can make requests faster than the refill rate as long as there are tokens in the bucket.

* Smooth Rate: Provides a smooth average rate over time.

* Easy to understand: Intuitive metaphor.

  • Cons: Requires careful tuning of bucket size and refill rate.

3.5. Leaky Bucket

  • How it works: Requests are added to a "bucket" (queue). Requests "leak" out of the bucket at a constant rate, meaning they are processed at a steady pace. If the bucket is full, new requests are dropped.
  • Pros:

* Smooth Output Rate: Ensures a very steady processing rate for the backend.

* Queuing: Can absorb temporary spikes in traffic without dropping requests immediately, up to the bucket capacity.

  • Cons:

* Latency: Requests might experience increased latency if the bucket is often full.

* No Bursts: Does not allow for bursts of requests.

4. Key Considerations for Implementation

When designing and implementing your API rate limiting solution, consider the following:

4.1. Granularity of Limiting

  • By User/API Key: Most common. Limits requests per authenticated user or API key.
  • By IP Address: Useful for unauthenticated endpoints or to catch broad abuse, but can be problematic for users behind shared NATs or proxies.
  • By Endpoint/Resource: Apply different limits to different API endpoints based on their resource intensity (e.g., /search might have a lower limit than /profile).
  • By Method: Differentiate limits for GET vs. POST/PUT/DELETE.
  • Combined: Often, a combination (e.g., IP address for unauthenticated, API key for authenticated) is best.

4.2. Burst Allowance

Decide if your rate limiter should allow for short bursts of traffic above the average rate. Token Bucket is excellent for this.

4.3. Error Handling and User Experience

  • HTTP Status Code: Return 429 Too Many Requests for rate-limited requests.
  • Response Headers: Include standard Retry-After header (indicating when the client can retry) and custom headers like:

* X-RateLimit-Limit: The total number of requests allowed in the current window.

* X-RateLimit-Remaining: The number of requests remaining in the current window.

* X-RateLimit-Reset: The time (in UTC epoch seconds or similar) when the current rate limit window resets.

  • Clear Messaging: Provide a clear, human-readable message in the response body explaining the rate limit and how to resolve it (e.g., "You have exceeded your rate limit. Please try again in X seconds.").
  • Documentation: Clearly document your rate limits in your API documentation.
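The pieces above can be combined in a small helper that assembles a 429 response (the header names follow the conventions listed; the function shape is illustrative and not tied to any framework):

```python
def build_429_response(limit: int, remaining: int, reset_epoch: int,
                       retry_after: int):
    """Builds a (status, headers, body) triple for a rate-limited request."""
    headers = {
        "Retry-After": str(retry_after),
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(remaining),
        "X-RateLimit-Reset": str(reset_epoch),
    }
    body = (f"You have exceeded your rate limit. "
            f"Please try again in {retry_after} seconds.")
    return 429, headers, body
```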

4.4. Distributed Systems

  • Centralized Store: In a distributed microservices environment, rate limit counters must be stored in a centralized, highly available, and low-latency data store (e.g., Redis, Memcached, distributed database) to ensure consistent limits across all instances of your service.
  • Atomic Operations: Use atomic increment/decrement operations provided by the data store to avoid race conditions.

4.5. Client-Side vs. Server-Side

  • Server-Side (Mandatory): Rate limiting must be enforced on the server-side as clients cannot be trusted.
  • Client-Side (Optional but Recommended): Clients should ideally implement their own rate limiting logic to avoid hitting server limits unnecessarily. This improves the client's experience and reduces unnecessary load on your API.

4.6. Edge Cases

  • Whitelisting: Allow certain internal services or partners to bypass rate limits.
  • Grace Periods: Consider a grace period for new users or during initial setup.
  • Temporary Blocks: Implement mechanisms to temporarily block clients exhibiting highly abusive patterns beyond standard rate limits.

5. Implementation Approaches

5.1. API Gateway / Reverse Proxy

  • Recommended: Implement rate limiting at the API Gateway or reverse proxy layer (e.g., Nginx, Envoy, AWS API Gateway, Azure API Management, Google Apigee).
  • Benefits: Decouples rate limiting logic from your core application services, provides a unified enforcement point, and often offers high performance.

5.2. Application Layer

  • Implement rate limiting directly within your application code.
  • Pros: Highly customizable, can access rich application context (e.g., user roles, subscription tiers).
  • Cons: Adds complexity to application code, can be less performant than gateway-level solutions, and requires careful handling in distributed environments.

5.3. Dedicated Rate Limiting Service

  • For very large-scale or complex requirements, a dedicated microservice specifically for rate limiting can be developed or utilized (e.g., Uber's uber-go/ratelimit library).

6. Monitoring and Analytics

  • Track Denied Requests: Monitor the number of requests denied due to rate limiting. High numbers might indicate misconfigured limits, application bugs, or persistent abuse.
  • Usage Patterns: Analyze API usage patterns to fine-tune rate limits. Identify peak usage times, common endpoints, and client behaviors.
  • Alerting: Set up alerts for unusual spikes in rate-limited requests or sudden drops in overall API traffic.
  • Logs: Log detailed information about rate-limited requests for auditing and debugging.

7. Best Practices

  • Start Conservatively: Begin with slightly stricter limits and gradually loosen them based on observed usage patterns and performance.
  • Clear Documentation: Provide comprehensive and clear documentation of your rate limits, including the rules, headers, and error responses.
  • Communicate Changes: Inform your API consumers in advance about any changes to rate limits.
  • Educate Clients: Encourage clients to implement exponential backoff and jitter for retries to avoid hammering your API when rate-limited.
  • Test Thoroughly: Rigorously test your rate limiting implementation under various load conditions to ensure it behaves as expected.
  • Consider Soft Limits: For non-critical services, consider "soft limits" where requests exceeding the limit are still processed but flagged for review or throttled at a lower priority, rather than immediately denied.
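Client-side exponential backoff with jitter, as recommended above, can be sketched as follows (the "full jitter" variant; parameter defaults are illustrative):

```python
import random

def backoff_delays(base: float = 0.5, cap: float = 30.0, attempts: int = 5):
    """Yields one randomized delay per retry attempt ("full jitter")."""
    for attempt in range(attempts):
        # Exponential ceiling for this attempt, capped at `cap` seconds.
        ceiling = min(cap, base * (2 ** attempt))
        # Sleep a uniformly random amount up to the ceiling, so that
        # many rate-limited clients do not retry in lockstep.
        yield random.uniform(0, ceiling)
```

The jitter spreads retries out in time, avoiding the synchronized "thundering herd" of clients all retrying the moment a window resets.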

8. Conclusion

API Rate Limiting is a fundamental component of robust API design and operation. By carefully selecting the right strategy, considering key implementation factors, and adhering to best practices, you can protect your infrastructure, ensure fair access, and deliver a reliable and performant API experience to your users. This detailed guide should serve as a strong foundation for your API rate limiting initiatives.

\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}