API Rate Limiter

Deliverable: Comprehensive Study Plan for API Rate Limiter Design and Implementation

This document outlines a detailed study plan designed to equip you with a thorough understanding of API Rate Limiters, from fundamental concepts and algorithms to advanced distributed system design and practical implementation strategies. This plan is structured to provide a professional, step-by-step learning journey, enabling you to confidently design and discuss robust API Rate Limiting solutions.


1. Introduction: Mastering API Rate Limiting

API Rate Limiting is a critical component in modern system design, essential for ensuring API stability, preventing abuse, and managing resource consumption. By controlling the number of requests a user or client can make to an API within a given timeframe, rate limiters protect against denial-of-service (DoS) attacks, brute-force attempts, and resource exhaustion, while also enforcing fair usage policies.

This study plan is tailored for engineers, architects, and technical leads who need to understand, design, and potentially implement API Rate Limiting solutions within complex distributed systems.

2. Learning Objectives

Upon successful completion of this study plan, you will be able to:

  • Understand Core Concepts: Articulate the purpose, benefits, and common use cases of API Rate Limiting.
  • Master Algorithms: Differentiate and explain the mechanics of various rate limiting algorithms (e.g., Fixed Window Counter, Sliding Window Log, Sliding Window Counter, Leaky Bucket, Token Bucket), including their pros and cons.
  • Address Distributed Challenges: Identify and propose solutions for the complexities of implementing rate limiters in a distributed environment (e.g., consistency, synchronization, data storage).
  • Explore Implementation Strategies: Evaluate and compare different architectural patterns for deploying rate limiters (e.g., API Gateway, sidecar proxy, dedicated service, application-level middleware).
  • Select Appropriate Technologies: Understand how technologies like Redis, Apache Kafka, and various databases can be utilized for rate limiting data storage and synchronization.
  • Design a Robust System: Architect a scalable and resilient API Rate Limiter system for various scenarios, considering factors like high availability, fault tolerance, and performance.
  • Consider Operational Aspects: Discuss monitoring, alerting, configuration management, and bypass mechanisms for rate limiting systems.

3. Weekly Schedule

This 4-week study plan provides a structured approach to learning about API Rate Limiters. Each week builds upon the previous, progressing from foundational concepts to advanced design and practical considerations.

Week 1: Fundamentals and Algorithms

  • Focus: Introduction to API Rate Limiting, its importance, and a deep dive into core algorithms.
  • Topics:

* What is API Rate Limiting? Why is it needed? (Security, Stability, Cost Management)

* Common use cases and business implications.

* Detailed exploration of Rate Limiting Algorithms:

* Fixed Window Counter

* Sliding Window Log

* Sliding Window Counter

* Leaky Bucket

* Token Bucket

* Pros and Cons of each algorithm, and scenarios where each is best suited.

  • Activities:

* Read foundational articles and watch introductory videos.

* Draw diagrams illustrating each algorithm's logic.

* Solve small conceptual problems, e.g., "Given X requests, how would each algorithm handle them?"
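
One way to attempt that last exercise in code: the sketch below (illustrative Python; function names are invented for the exercise) replays the same burst through a Fixed Window Counter and a Sliding Window Log and shows where their decisions diverge around a window boundary.

```python
import bisect

def fixed_window(times, limit, window):
    """Count requests in fixed buckets of `window` seconds."""
    counts, decisions = {}, []
    for t in times:
        start = t - (t % window)
        counts[start] = counts.get(start, 0) + 1
        decisions.append(counts[start] <= limit)
    return decisions

def sliding_log(times, limit, window):
    """Keep every admitted timestamp; drop those older than the window."""
    log, decisions = [], []
    for t in times:
        del log[:bisect.bisect_left(log, t - window)]  # expire old entries
        if len(log) < limit:
            log.append(t)
            decisions.append(True)
        else:
            decisions.append(False)
    return decisions

burst = [55, 56, 57, 61, 62]  # five requests straddling a 60 s window boundary
print(fixed_window(burst, limit=3, window=60))  # [True, True, True, True, True]
print(sliding_log(burst, limit=3, window=60))   # [True, True, True, False, False]
```

The fixed window admits all five requests because the boundary at t=60 resets its counter, while the sliding log correctly enforces 3 requests per any 60-second span.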

Week 2: Distributed Systems Challenges and Implementation Patterns

  • Focus: Understanding the complexities of rate limiting in distributed environments and exploring various architectural patterns.
  • Topics:

* Challenges in Distributed Rate Limiting:

* Consistency across multiple instances/nodes.

* Synchronization overhead.

* Race conditions.

* Data storage and retrieval at scale.

* Handling clock skew.

* Architectural Patterns for Rate Limiter Deployment:

* Client-side Rate Limiting (Less common for APIs, but good to know).

* Server-side (Application-level) Middleware.

* API Gateway/Reverse Proxy (e.g., Nginx, Envoy, AWS API Gateway).

* Dedicated Rate Limiting Service.

* Service Mesh integration.

  • Activities:

* Research how large companies (e.g., Netflix, Stripe, Uber) implement distributed rate limiting.

* Compare and contrast the different deployment patterns, considering scalability, performance, and operational complexity.

* Sketch basic architecture diagrams for a distributed rate limiter using different patterns.

Week 3: Deep Dive into Technologies and Practical Implementation

  • Focus: Exploring the technologies commonly used to build rate limiters and hands-on understanding through examples.
  • Topics:

* Data Stores for Rate Limiting:

* Redis: Atomic operations (INCR, EXPIRE), sorted sets, Lua scripting for complex logic.

* Databases (SQL/NoSQL): When and why they might be used (less common for real-time, high-throughput enforcement, but useful for longer-term quota management).

* Messaging Queues (e.g., Apache Kafka): For asynchronous processing, logging, and metrics.

* Cloud-native Rate Limiting services (e.g., AWS WAF, Azure API Management, Google Cloud Endpoints).

* Open-source tools and libraries (e.g., Nginx limit_req, Envoy rate_limit filter, specific language libraries).

  • Activities:

* Set up a local Redis instance and experiment with INCR, EXPIRE, and simple Lua scripts to simulate a fixed-window counter.

* Explore configuration examples for Nginx or Envoy to implement basic rate limiting.

* (Optional but Recommended): Implement a simple in-memory rate limiter using a language of your choice (e.g., Python, Java) for one of the algorithms (e.g., Token Bucket).

Week 4: Advanced Topics and System Design

  • Focus: Advanced considerations, system design exercises, and operational best practices.
  • Topics:

* Advanced Rate Limiting Concepts:

* Burst limits vs. sustained rates.

* Differentiation based on user, IP, API key, endpoint.

* Global vs. local rate limits.

* Tiered rate limits (e.g., free vs. premium users).

* Bypass mechanisms for internal services or trusted partners.

* Monitoring, Alerting, and Observability:

* Key metrics to track (rate limited requests, allowed requests, latency).

* Alerting strategies for threshold breaches.

* Configuration Management and Dynamic Updates.

* Scaling and High Availability of the Rate Limiter service itself.

* Cost implications of different designs.

  • Activities:

* System Design Exercise: Design a distributed API Rate Limiter for a hypothetical large-scale application (e.g., a social media platform, an e-commerce API) considering all learned concepts. Document your design choices, trade-offs, and scaling strategy.

* Review case studies of real-world rate limiter implementations.

* Prepare a presentation or whiteboarding session to explain your design choices.

4. Recommended Resources

This section provides a curated list of resources to aid your learning journey.

  • Conceptual & System Design:

* Book: "Designing Data-Intensive Applications" by Martin Kleppmann (Chapters on distributed systems, consistency, and scalability are highly relevant).

* Online Course: "Grokking the System Design Interview" (Educative.io or similar platforms often have a dedicated section on Rate Limiters).

* Articles:

* "How to Design a Scalable Rate Limiting Algorithm" (Stripe Engineering Blog)

* "Rate Limiting in a Distributed System" (Netflix Tech Blog or similar deep dives from major tech companies)

* "System Design Interview – Rate Limiter" (ByteByteGo or similar system design blogs/YouTube channels)

  • Algorithms & Implementation Specific:

* Redis Documentation: Focus on INCR, EXPIRE, SET, GET, EVAL (for Lua scripting).

* Nginx Documentation: limit_req module.

* Envoy Proxy Documentation: Rate Limit Filter.

* Blog Posts/Tutorials: Search for "implement token bucket Redis" or "distributed rate limiter implementation" to find practical code examples in your preferred language.

  • Videos:

* YouTube channels like "ByteByteGo", "System Design Interview", "Code with Coder" often have excellent visualizations and explanations of rate limiting algorithms and system designs.

5. Milestones

Achieving these milestones will mark significant progress in your understanding and practical skills.

  • End of Week 1: Clearly articulate and differentiate between Fixed Window, Sliding Window Log, Sliding Window Counter, Leaky Bucket, and Token Bucket algorithms.
  • End of Week 2: Produce basic architectural diagrams illustrating at least three different patterns for deploying a distributed API Rate Limiter.
  • End of Week 3: Successfully implement a basic in-memory rate limiter (e.g., Token Bucket) in a programming language of your choice, and demonstrate a simple Redis-based fixed-window counter.
  • End of Week 4: Present a detailed system design for a distributed API Rate Limiter for a complex scenario, including technology choices, scalability considerations, and operational aspects.

6. Assessment Strategies

To solidify your learning and measure progress, employ the following assessment strategies:

  • Self-Assessment Quizzes: Create or find quizzes focusing on the definitions, pros/cons of algorithms, and distributed system challenges.
  • Whiteboard Design Exercises: Regularly practice drawing and explaining rate limiter architectures for various hypothetical scenarios. Record yourself or present to a peer.
  • Code Challenges: Implement simple versions of different rate limiting algorithms. Focus on correctness, edge cases, and efficiency.
  • Design Document Creation: For the final system design exercise, write a comprehensive design document that covers requirements, chosen algorithms, architecture, technology stack, scaling, and monitoring.
  • Peer Review/Discussion: Discuss your designs and implementations with colleagues or in online communities. Explaining concepts to others is a powerful learning tool.
  • Blog Post/Documentation: Write short articles or internal documentation explaining specific algorithms or design choices. This reinforces understanding and provides a valuable reference.

By diligently following this study plan, you will gain a deep and actionable understanding of API Rate Limiters, making you a valuable asset in designing and managing robust, scalable, and secure API ecosystems.


This document outlines the design of an API Rate Limiter built with Python and Redis. The solution employs a sliding-window strategy based on per-request timestamps, ensuring fair usage and protecting your backend resources from abuse.


API Rate Limiter: Detailed Implementation

1. Introduction to API Rate Limiting

API Rate Limiting is a critical mechanism used to control the number of requests a client can make to an API within a given timeframe. Its primary goals are to:

  • Prevent Abuse: Mitigate denial-of-service (DoS) attacks, brute-force attempts, and other malicious activities.
  • Ensure Fair Usage: Distribute API access equitably among all users, preventing a single client from monopolizing resources.
  • Protect Backend Resources: Safeguard servers, databases, and other infrastructure from being overloaded by excessive requests.
  • Manage Costs: For cloud-based services, limiting requests can help control operational expenses.
  • Maintain API Stability: Ensure consistent performance and availability for all legitimate users.

2. Rate Limiting Strategy: Sliding Window (Timestamp Log)

We will implement a sliding-window strategy that stores a timestamp per request — the approach classified above as a Sliding Window Log. It is highly accurate (no fixed-window boundary bursts) at the cost of keeping one entry per request inside the window; Redis Sorted Sets keep that cost manageable.

How it Works:

  1. Time Window: Define a fixed duration (e.g., 60 seconds) and a maximum number of requests allowed within that window.
  2. Request Timestamps: For each client (identified by IP, API Key, User ID, etc.), we store the timestamps of their recent requests. Redis Sorted Sets (ZSETs) are ideal for this: the score is the timestamp, and the member is the timestamp plus a unique suffix so that two requests arriving at the same instant do not overwrite each other.
  3. Window Calculation: When a new request arrives:

* The system first removes all timestamps from the client's record that fall outside the current sliding window (i.e., older than current_time - window_size).

* It then counts the number of remaining requests within the window.

* If the count is less than the allowed maximum, the current request's timestamp is added, and the request is permitted.

* If the count meets or exceeds the maximum, the request is denied.

  4. Atomicity: The operations of trimming old timestamps, counting, and adding the new timestamp must be atomic to prevent race conditions in a concurrent environment. Redis Lua scripts are used to execute all three steps as a single atomic operation on the server.
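
The trim–count–add sequence can be sketched in plain, single-process Python. In the Redis deployment the same three steps would run inside one Lua script against a ZSET; here an in-memory sorted list stands in for the sorted set (all names are illustrative):

```python
import bisect

class SlidingWindowLimiter:
    """In-memory stand-in for the Redis ZSET: one sorted timestamp list per client."""
    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        self.history = {}  # client id -> sorted request timestamps

    def allow(self, client, now):
        ts = self.history.setdefault(client, [])
        # 1. Trim: drop timestamps older than the sliding window.
        del ts[:bisect.bisect_left(ts, now - self.window)]
        # 2. Count the survivors; 3. admit and record if under the limit.
        if len(ts) < self.limit:
            ts.append(now)
            return True
        return False

rl = SlidingWindowLimiter(limit=2, window=60)
print(rl.allow("alice", 0), rl.allow("alice", 10), rl.allow("alice", 20))  # True True False
print(rl.allow("alice", 70))  # True: the request at t=0 has aged out
```

In Redis the equivalent steps are typically ZREMRANGEBYSCORE, ZCARD, and ZADD wrapped in a single EVAL so that no other client's request can interleave between them.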

API Rate Limiter: Comprehensive Review and Documentation

This document provides a detailed overview of API Rate Limiters, their critical role in API management, core concepts, design considerations, implementation strategies, and best practices. This deliverable is designed to equip you with a thorough understanding necessary for effective deployment and management.


1. Introduction to API Rate Limiters

An API Rate Limiter is a mechanism that controls the number of requests a client can make to an API within a defined time window. It acts as a gatekeeper, preventing abuse, ensuring fair usage, and maintaining the stability and performance of your API infrastructure. Implementing a robust rate limiting strategy is fundamental for any production-grade API.


2. Core Concepts and Purpose

2.1. Definition

An API Rate Limiter restricts the number of API calls a user or application can make over a specific period (e.g., 100 requests per minute, 5000 requests per hour).

2.2. Key Purposes and Benefits

  • Preventing Abuse and DDoS Attacks: Mitigates the impact of malicious activities like denial-of-service (DoS) or brute-force attacks by blocking excessive requests from a single source.
  • Ensuring Fair Usage: Distributes API access equitably among all legitimate users, preventing a single user from monopolizing resources.
  • Maintaining System Stability and Performance: Protects backend services from being overloaded by a surge of requests, ensuring consistent response times and availability.
  • Cost Control: For cloud-based APIs, limiting requests can help manage infrastructure costs by preventing uncontrolled scaling due to excessive traffic.
  • Monetization and Tiered Services: Enables the creation of different service tiers (e.g., free, standard, premium) with varying rate limits, aligning usage with subscription levels.
  • Improved User Experience: By maintaining API stability, it ensures a smoother and more reliable experience for legitimate users.

2.3. Common Use Cases

  • Public APIs: Essential for any API exposed to external developers.
  • Microservices Architectures: Controls inter-service communication to prevent cascading failures.
  • Authentication Endpoints: Protects login and registration endpoints from brute-force attacks.
  • Data Ingestion APIs: Manages the flow of data into a system to prevent overloading.
  • Financial APIs: Critical for security and compliance, preventing rapid-fire transactions.

3. Key Components and Design Considerations

Designing an effective API Rate Limiter involves several critical decisions and components.

3.1. Rate Limiting Algorithms

The choice of algorithm significantly impacts accuracy, resource usage, and fairness.

  • Fixed Window Counter:

* How it works: Divides time into fixed-size windows (e.g., 60 seconds). Each request increments a counter. If the counter exceeds the limit within the window, subsequent requests are blocked until the next window starts.

* Pros: Simple to implement, low resource usage.

* Cons: Prone to "bursts" at the window edges (e.g., 100 requests at 59s and 100 requests at 61s, effectively 200 in 2 seconds).
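
A few lines of illustrative Python make the boundary problem concrete. With a limit of 100 per 60-second window, a burst just before and just after a boundary is fully admitted (class and parameter names are invented for the demonstration):

```python
class FixedWindow:
    """Admits up to `limit` requests per fixed `window`-second bucket."""
    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        self.counts = {}  # window start time -> request count

    def allow(self, now):
        start = now - (now % self.window)
        self.counts[start] = self.counts.get(start, 0) + 1
        return self.counts[start] <= self.limit

fw = FixedWindow(limit=100, window=60)
admitted = sum(fw.allow(59.0) for _ in range(100))   # all land in window [0, 60)
admitted += sum(fw.allow(61.0) for _ in range(100))  # all land in window [60, 120)
print(admitted)  # 200 requests admitted within about two seconds
```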

  • Sliding Window Log:

* How it works: Stores a timestamp for every request made by a client. When a new request comes in, it counts how many timestamps fall within the current time window (e.g., the last 60 seconds). Old timestamps are discarded.

* Pros: Very accurate, no "burst" issue at window edges.

* Cons: High memory consumption, especially for high request volumes, as it stores individual timestamps.

  • Sliding Window Counter:

* How it works: Combines Fixed Window Counter with a sliding window concept. It calculates the weighted average of the current window's count and the previous window's count, based on the elapsed time in the current window.

* Pros: Good balance between accuracy and resource usage, mitigates the "burst" problem better than fixed window.

* Cons: More complex to implement than fixed window.
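
The weighted-average rule can be written out directly. The request counts below are made-up numbers for illustration:

```python
def sliding_window_estimate(prev_count, curr_count, elapsed_in_window, window):
    """Estimate requests in the last `window` seconds from two fixed-window counters."""
    overlap = (window - elapsed_in_window) / window  # fraction of the previous window still in scope
    return prev_count * overlap + curr_count

# 60 s window, 15 s into the current one: 75% of the previous window still counts.
estimate = sliding_window_estimate(prev_count=84, curr_count=36, elapsed_in_window=15, window=60)
print(estimate)  # 84 * 0.75 + 36 = 99.0 -> under a limit of 100, so the request passes
```

Only two counters per client are stored, which is why this variant is so much cheaper than a full timestamp log.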

  • Token Bucket:

* How it works: A "bucket" holds a fixed number of "tokens." Tokens are added to the bucket at a constant rate. Each request consumes one token. If the bucket is empty, the request is denied.

* Pros: Allows for bursts up to the bucket capacity, smooths out traffic, simple to understand.

* Cons: Can be complex to tune parameters (bucket size, refill rate).
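
A minimal single-process sketch of the refill-and-consume logic (names and parameters are illustrative, not a production implementation):

```python
import time

class TokenBucket:
    """`capacity` bounds the burst size; `rate` (tokens/second) is the sustained rate."""
    def __init__(self, capacity, rate, clock=time.monotonic):
        self.capacity, self.rate, self.clock = capacity, rate, clock
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill at a constant rate, never exceeding capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=5, rate=1.0)  # burst of 5, then 1 request/second sustained
print([bucket.allow() for _ in range(6)])   # first five pass, the sixth is denied
```

Tuning amounts to choosing the two parameters: capacity sets how big a burst you tolerate, rate sets the long-run throughput.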

  • Leaky Bucket:

* How it works: Requests are added to a queue (the "bucket"). Requests are processed (leak out) from the queue at a constant rate. If the queue is full, new requests are dropped.

* Pros: Smooths out traffic effectively, good for preventing resource exhaustion.

* Cons: Can introduce latency when the queue is long; once the queue is full, new requests are dropped even if downstream capacity is not fully utilized.
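
A minimal sketch of the queue-and-drain behaviour (names are illustrative; a real implementation would also need a worker that actually processes the queued requests):

```python
from collections import deque

class LeakyBucket:
    """Queues up to `capacity` requests and drains them at `leak_rate` per second."""
    def __init__(self, capacity, leak_rate):
        self.capacity, self.leak_rate = capacity, leak_rate
        self.queue = deque()
        self.last_leak = 0.0

    def offer(self, now):
        # Drain whole requests that have "leaked out" since the last check.
        leaked = int((now - self.last_leak) * self.leak_rate)
        if leaked:
            for _ in range(min(leaked, len(self.queue))):
                self.queue.popleft()
            self.last_leak += leaked / self.leak_rate
        if len(self.queue) < self.capacity:
            self.queue.append(now)
            return True   # queued for processing
        return False      # bucket full: request dropped

lb = LeakyBucket(capacity=2, leak_rate=1.0)  # hold 2 requests, process 1/second
print([lb.offer(t) for t in (0.0, 0.1, 0.2, 1.5)])  # [True, True, False, True]
```

The third request is dropped because the queue is full; by t=1.5 one request has leaked out, making room again.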

3.2. Client Identification

To apply rate limits, the system needs to identify the client making the request.

  • IP Address: Simplest method, but problematic for users behind NATs or proxies, or for mobile networks where IPs change frequently.
  • API Key/Client ID: Most common and reliable for authenticated clients. Requires clients to send a unique key.
  • User ID: For authenticated users, allows for personalized limits.
  • Session ID/JWT Token: Can be used for session-based rate limiting.
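
Whichever identifier is chosen, it typically ends up in a storage key. A small illustrative sketch (the header name and key layout are assumptions, not a standard):

```python
def client_identity(headers, remote_addr):
    """Prefer an API key when present; fall back to the caller's IP
    (subject to the NAT/proxy caveats above)."""
    return headers.get("X-API-Key") or f"ip:{remote_addr}"

def rate_limit_key(identity, endpoint, window_start):
    """Compose a storage key so each (client, endpoint, window) pair gets its own counter."""
    return f"ratelimit:{identity}:{endpoint}:{int(window_start)}"

print(rate_limit_key(client_identity({"X-API-Key": "key-123"}, "203.0.113.7"),
                     "POST:/v1/orders", 1700000040))
# ratelimit:key-123:POST:/v1/orders:1700000040
```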

3.3. Storage Mechanisms

The state of the rate limiter (counters, timestamps, tokens) needs to be stored.

  • In-Memory: Fastest, but non-persistent and not suitable for distributed systems.
  • Redis: Ideal for distributed rate limiting due to its speed, atomic operations, and data structures (counters, sorted sets for timestamps).
  • Database (SQL/NoSQL): More persistent but generally slower than Redis, suitable for less stringent limits or for storing historical data.

3.4. Enforcement Points

Where in the request flow should the rate limit be applied?

  • API Gateway/Reverse Proxy (e.g., NGINX, Kong, AWS API Gateway, Azure API Management):

* Pros: Decoupled from application logic, protects all upstream services, easy to configure.

* Cons: May not have fine-grained context (e.g., specific user ID without authentication).

  • Load Balancer (e.g., HAProxy, AWS ALB):

* Pros: Can handle high traffic volumes, acts early in the request lifecycle.

* Cons: Limited in sophisticated logic, typically IP-based.

  • Application Layer:

* Pros: Most flexible, can apply highly specific limits based on user roles, resource types, or request content.

* Cons: Adds complexity to application code, consumes application resources, requires careful distributed state management.

3.5. Rate Limit Headers

Standard HTTP headers communicate rate limit status to clients:

  • X-RateLimit-Limit: The maximum number of requests allowed in the current window.
  • X-RateLimit-Remaining: The number of requests remaining in the current window.
  • X-RateLimit-Reset: The time (usually in UTC epoch seconds) when the current rate limit window resets.
  • Retry-After: (Sent with 429 response) Indicates how long the client should wait before making another request.

3.6. Error Handling

When a client exceeds the rate limit, the API should respond with:

  • HTTP Status Code 429 Too Many Requests: The standard response for rate limiting.
  • Meaningful Error Body: A JSON or XML body explaining the error and potentially suggesting to check Retry-After or X-RateLimit-Reset headers.
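
Putting the headers and error body together, a handler might build a 429 response like this (the JSON field names are illustrative; the X-RateLimit-* and Retry-After headers follow the conventions described above):

```python
import json

def too_many_requests(limit, reset_epoch, now):
    """Build the (status, headers, body) triple for a rate-limited request."""
    retry_after = max(0, int(reset_epoch - now))
    headers = {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": "0",
        "X-RateLimit-Reset": str(int(reset_epoch)),
        "Retry-After": str(retry_after),
    }
    body = json.dumps({
        "error": "rate_limited",
        "message": f"Rate limit of {limit} requests exceeded. Retry after {retry_after} s.",
    })
    return 429, headers, body

status, headers, body = too_many_requests(limit=100, reset_epoch=1700000060, now=1700000042)
print(status, headers["Retry-After"])  # 429 18
```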

4. Implementation Strategies (High-Level)

4.1. Client-Side Considerations

  • Respect 429 Responses: Clients must be designed to handle 429 responses gracefully, backing off and retrying after the specified Retry-After duration.
  • Exponential Backoff: Implement an exponential backoff strategy for retries to avoid hammering the API again immediately.
  • Monitor Headers: Clients should read and respect X-RateLimit-* headers to proactively manage their request rates.
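
A client-side sketch combining these three points (the `send` callable and its `(status, headers, body)` shape are assumptions for illustration):

```python
import random

def call_with_backoff(send, max_attempts=5, base_delay=1.0, sleep=lambda s: None):
    """Retry on 429, honouring Retry-After when present, otherwise using
    exponential backoff with jitter. `sleep` is injectable for testing."""
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return status, body
        delay = float(headers.get("Retry-After", base_delay * (2 ** attempt)))
        sleep(delay + random.uniform(0, 0.1))  # jitter avoids synchronized retries
    return status, body

# A fake server that rejects the first two calls, then succeeds.
responses = iter([(429, {"Retry-After": "1"}, ""), (429, {}, ""), (200, {}, "ok")])
print(call_with_backoff(lambda: next(responses)))  # (200, 'ok')
```

Injecting `sleep` keeps the retry policy unit-testable; in production it would be `time.sleep`.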

4.2. Server-Side Considerations

  • Centralized vs. Distributed: For microservices or multiple instances, a centralized storage like Redis is crucial for consistent rate limiting across all instances.
  • Granularity: Define limits per endpoint, per user, per IP, or a combination.
  • Tiered Limits: Implement different limits for different subscription levels (e.g., free tier vs. paid tier).
  • Whitelisting: Allow specific internal services or critical partners to bypass rate limits.
  • Bursting: Allow a temporary spike in requests beyond the steady-state limit, which is useful for applications with intermittent high demands (often implemented with Token Bucket).

4.3. Deployment Options

  • Managed API Gateway Services: Cloud providers (AWS API Gateway, Azure API Management, GCP API Gateway) offer built-in, scalable rate limiting capabilities. This is often the easiest and most robust solution for cloud-native applications.
  • Self-Hosted Gateways: Deploying NGINX, Kong, or similar gateways provides more control and can be integrated into existing infrastructure.
  • Library/Middleware Integration: For application-level rate limiting, use battle-tested libraries specific to your programming language/framework (e.g., express-rate-limit for Node.js, flask-limiter for Python).

5. Advanced Topics and Best Practices

  • Monitoring and Alerting: Implement robust monitoring to track rate limit breaches, identify potential abuse patterns, and alert administrators. This includes tracking 429 responses, client IPs, and usage trends.
  • Transparency and Documentation: Clearly document your rate limiting policies in your API documentation, including limits, reset periods, and error responses. This helps developers build compliant clients.
  • Grace Periods/Soft Limits: Consider allowing a small grace period or a "soft limit" where requests are still processed but logs are generated, before enforcing a hard block. This can help identify misbehaving clients without immediately disrupting service.
  • Dynamic Limits: Implement logic to dynamically adjust limits based on real-time system load or other operational metrics.
  • Security Implications:

* Ensure your rate limiter itself is not a performance bottleneck or a single point of failure.

* Be mindful of how client identification (e.g., IP addresses) is handled, especially with proxies or VPNs.

* Protect against rate limit bypass techniques (e.g., IP rotation, distributed bots).

  • Edge Cases: Consider how to handle long-running requests, retries, and potential for false positives.
  • Scalability: Ensure your chosen rate limiting solution can scale horizontally with your API infrastructure.

6. Actionable Recommendations and Next Steps

To effectively implement or enhance your API Rate Limiter strategy, consider the following actions:

  1. Define Business Requirements:

* Identify Critical Endpoints: Determine which API endpoints are most vulnerable to abuse or resource exhaustion.

* Establish Usage Tiers: Define different rate limits for various user groups (e.g., unauthenticated, free tier, premium tier).

* Determine Acceptable Usage Patterns: Based on your application's expected usage, establish initial rate limits (e.g., requests per minute/hour/day).

  2. Select an Algorithm:

* For High Accuracy & Bursts: Consider Token Bucket or Sliding Window Counter.

* For Strict Flow Control: Leaky Bucket might be appropriate.

* For Simplicity (with caveats): Fixed Window Counter can be a starting point but be aware of its limitations.

  3. Choose an Enforcement Point:

* Recommendation: API Gateway: For most modern deployments, leveraging a dedicated API Gateway (managed cloud service or self-hosted) is the most efficient and scalable approach.

* Supplement with Application-Level: For highly specific, context-aware limits (e.g., user-specific limits on a particular resource), supplement gateway limits with application-level logic.

  4. Implement and Test:

* Proof of Concept: Implement the chosen algorithm and enforcement point for a non-critical endpoint.

* Thorough Testing: Conduct load testing to validate the rate limiter's effectiveness under various scenarios, including burst traffic and sustained high loads. Test both valid requests and requests that should be rate-limited.

* Client-Side Integration: Ensure your client applications correctly handle 429 responses and implement backoff strategies.

  5. Monitor and Refine:

* Dashboarding: Set up dashboards to visualize rate limit usage, 429 responses, and identify potential bottlenecks or abuse.

* Alerting: Configure alerts for high rates of 429 responses or unusual traffic patterns.

* Iterative Adjustment: Be prepared to adjust your rate limits based on real-world usage data and feedback.

  6. Document and Communicate:

* Update API Documentation: Clearly publish your rate limiting policies, including specific limits per endpoint, reset periods, and expected error responses.

* Developer Communication: Inform your API consumers about any changes to rate limits or best practices for interacting with your API.


7. Conclusion

A well-designed and implemented API Rate Limiter is an indispensable component of a resilient, secure, and performant API ecosystem. By carefully considering the algorithms, enforcement points, and best practices outlined in this document, you can effectively protect your services, ensure fair usage, and provide a reliable experience for your API consumers.

'./app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch(err => console.error(err));\n"); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; }\nbody { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; }\n"); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = '"+pn+"';\n}\n"); zip.file(folder+"src/app/app.component.html","
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}