
Caching System: Comprehensive Design and Implementation Guide

This document outlines a comprehensive approach to designing and implementing a robust caching system, including key concepts, architectural considerations, common patterns, and production-ready code examples. A well-implemented caching system significantly improves application performance, reduces database load, and enhances user experience.


1. Introduction to Caching Systems

A caching system stores copies of frequently accessed data in a temporary, high-speed storage layer (the cache). When a request for data arrives, the system first checks the cache. If the data is present and valid (a "cache hit"), it's retrieved quickly from the cache, bypassing slower data sources like databases or external APIs. If the data is not in the cache or is invalid (a "cache miss"), it's fetched from the primary data source, stored in the cache for future requests, and then returned.

Benefits of Caching:

* Reduced latency: frequently requested data is served from fast storage instead of the primary data source.
* Increased throughput: more requests can be handled because reads are offloaded from the backend.
* Reduced database load: less strain on the primary store improves its performance and stability.
* Improved user experience: faster response times for end-users.
* Cost savings: less pressure to scale the primary database and its read replicas.


2. Key Caching Concepts and Terminology

* Cache hit: a request is served directly from the cache.
* Cache miss: the data is absent or invalid, so it must be fetched from the primary source.
* Hit ratio: the percentage of requests served from the cache; a primary health metric.
* TTL (Time-to-Live): the duration after which a cached entry automatically expires.
* Eviction policy: the rule (e.g., LRU, LFU, FIFO) for removing entries when the cache is full.
* Invalidation: removing or updating cached entries when the underlying data changes.


3. Choosing a Caching Strategy

The optimal caching strategy depends on your application's specific requirements, data volume, access patterns, and scalability needs.

3.1. In-Memory (Local) Cache

* Description: Data is stored directly within the application's memory space.

* Pros: Extremely fast access, simple to implement for single-instance applications.

* Cons: Not shareable across multiple application instances, data is lost if the application restarts, limited by application memory.

* Use Cases: Memoization for function results, small lookup tables, user-specific session data in a single-server setup.
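Memoization of function results, for instance, is built into Python's standard library; a minimal sketch (the `country_code` lookup below is a made-up stand-in for an expensive call):

```python
import functools

@functools.lru_cache(maxsize=1024)
def country_code(country_name: str) -> str:
    # Stand-in for an expensive lookup (DB query, API call, file scan).
    lookup = {"Germany": "DE", "France": "FR", "Japan": "JP"}
    return lookup.get(country_name, "??")

country_code("Germany")           # computed on the first call
country_code("Germany")           # served from the in-process cache
print(country_code.cache_info())  # hits=1, misses=1 after the two calls above
```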

3.2. Distributed Cache

* Description: A dedicated, external caching server or cluster (e.g., Redis, Memcached) that can be accessed by multiple application instances.

* Pros: Shareable across multiple application instances, scalable (can be clustered), persistent independent of application restarts (if configured), offers advanced data structures.

* Cons: Network latency overhead (though typically low), requires separate infrastructure to manage.

* Use Cases: Session management in load-balanced applications, shared data across microservices, large-scale data caching, leaderboards, real-time analytics.

3.3. Content Delivery Network (CDN)

* Description: A geographically distributed network of proxy servers and their data centers, providing high availability and performance by distributing service spatially relative to end-users. Primarily for static assets (images, CSS, JS) and sometimes dynamic content.

* Pros: Reduces latency for geographically dispersed users, offloads traffic from origin servers, improves SEO.

* Cons: Can be complex to configure, costs can increase with bandwidth, invalidation can be challenging.

* Use Cases: Serving static files, caching entire web pages or API responses at the edge.

3.4. Database-Level Caching

* Description: Some databases offer built-in caching mechanisms (e.g., the query cache in MySQL, the buffer pool in PostgreSQL).

* Pros: Automatic for database operations.

* Cons: Often less granular control, can sometimes cause contention or be inefficient for highly dynamic data.


4. Common Caching Patterns

These patterns dictate how your application interacts with the cache and the primary data source.

4.1. Cache-Aside (Lazy Loading)

* Description: The application is responsible for checking the cache before querying the database. If data is in the cache, it's returned. If not, the application fetches it from the database, stores it in the cache, and then returns it.

* Pros: Simple to implement, only requested data is cached, preventing the cache from being filled with unused data.

* Cons: Initial requests for data will always be a cache miss, leading to higher latency. Data can become stale if not explicitly invalidated.

* Use Cases: Most common pattern for read-heavy workloads where data freshness isn't hyper-critical or where explicit invalidation is manageable.
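A minimal sketch of the Cache-Aside read path described above, with plain dictionaries standing in for the cache and the database (all names here are illustrative):

```python
from typing import Any, Dict, Optional

cache: Dict[str, Any] = {}                 # stand-in for Redis/Memcached
database = {"user:1": {"name": "Ada"}}     # stand-in for the primary store

def get_with_cache_aside(key: str) -> Optional[Any]:
    if key in cache:                       # 1. check the cache first
        return cache[key]                  #    cache hit
    value = database.get(key)              # 2. cache miss: go to the DB
    if value is not None:
        cache[key] = value                 # 3. populate the cache for next time
    return value

print(get_with_cache_aside("user:1"))  # miss: fetched from DB, then cached
print(get_with_cache_aside("user:1"))  # hit: served from the cache
```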

4.2. Write-Through

* Description: Data is written simultaneously to both the cache and the primary data store.

* Pros: Cache always stays consistent with the primary data store. Reads from the cache will always be up-to-date.

* Cons: Higher write latency due to dual writes.

* Use Cases: Scenarios where data freshness is critical, and write latency is acceptable.
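The dual write can be sketched as follows (dictionaries again stand in for the cache and the primary store; names are illustrative):

```python
from typing import Any, Dict

cache: Dict[str, Any] = {}
database: Dict[str, Any] = {}   # stand-in for the primary data store

def write_through(key: str, value: Any) -> None:
    database[key] = value       # write to the primary store...
    cache[key] = value          # ...and to the cache in the same operation

write_through("product:42", {"price": 9.99})
assert cache["product:42"] == database["product:42"]  # cache never diverges
```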

4.3. Write-Back (Write-Behind)

* Description: Data is initially written only to the cache. The cache then asynchronously writes the data to the primary data store in the background.

* Pros: Very low write latency for the application.

* Cons: Risk of data loss if the cache server fails before data is persisted to the primary store. More complex to implement.

* Use Cases: High-throughput write-heavy systems where immediate persistence is not critical, and some data loss tolerance exists (e.g., analytics, logging).
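A minimal Write-Back sketch, assuming an in-process queue and a background thread stand in for the cache's asynchronous persistence machinery (a real system would batch, retry, and survive restarts):

```python
import queue
import threading
from typing import Any, Dict, Tuple

cache: Dict[str, Any] = {}
database: Dict[str, Any] = {}                         # stand-in primary store
pending: "queue.Queue[Tuple[str, Any]]" = queue.Queue()

def write_back(key: str, value: Any) -> None:
    cache[key] = value          # fast path: write to the cache only
    pending.put((key, value))   # queue the write for asynchronous persistence

def flush_worker() -> None:
    # Background thread drains the queue into the primary store.
    while True:
        key, value = pending.get()
        database[key] = value
        pending.task_done()

threading.Thread(target=flush_worker, daemon=True).start()
write_back("session:9", {"cart": [1, 2]})
pending.join()                  # wait for persistence (for the demo only)
print(database["session:9"])
```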

4.4. Read-Through

* Description: Similar to Cache-Aside, but the cache itself (or a caching library) is responsible for fetching data from the primary data source on a cache miss, rather than the application. The application only interacts with the cache.

* Pros: Simplifies application logic, as the caching layer handles data retrieval.

* Cons: Requires the caching layer to have knowledge of the primary data source.

* Use Cases: Often seen with caching frameworks or ORMs that integrate caching deeply.
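A Read-Through layer can be sketched by giving the cache a loader callback, so the application only ever talks to the cache (class and method names are illustrative):

```python
from typing import Any, Callable, Dict

class ReadThroughCache:
    """The cache owns the loader; callers never touch the data source."""

    def __init__(self, loader: Callable[[str], Any]):
        self._store: Dict[str, Any] = {}
        self._loader = loader

    def get(self, key: str) -> Any:
        if key not in self._store:
            self._store[key] = self._loader(key)   # cache fetches on a miss
        return self._store[key]

database = {"cfg:theme": "dark"}
cache = ReadThroughCache(loader=database.get)      # loader knows the data source
print(cache.get("cfg:theme"))  # first call loads from the database
print(cache.get("cfg:theme"))  # second call is served by the cache
```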


5. Implementation Considerations

Cache invalidation is the central implementation concern: cached entries must be refreshed or removed when the underlying data changes. Common approaches:

* Time-based: Rely on TTLs. Simple, but can lead to stale data if the underlying data changes before the TTL expires.

* Event-based: Invalidate cache entries when the underlying data changes (e.g., publish an event from the database, use webhooks, direct invalidation calls). More complex but ensures freshness.

* Manual/On-demand: Flush specific keys or the entire cache.
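The event-based approach can be sketched as an explicit delete on every write path (illustrative names; a real system might publish a change event instead of calling the cache directly):

```python
from typing import Any, Dict

cache: Dict[str, Any] = {"user:1": {"name": "Ada"}}
database: Dict[str, Any] = {"user:1": {"name": "Ada"}}

def update_user(key: str, value: Any) -> None:
    database[key] = value
    cache.pop(key, None)   # event-based: drop the stale entry on every write

update_user("user:1", {"name": "Ada Lovelace"})
assert "user:1" not in cache   # the next read repopulates from the database
```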


6. Production-Ready Code Examples (Python)

Below are examples demonstrating different caching strategies using Python.

6.1. Basic In-Memory Cache with TTL

This example shows a simple, non-thread-safe in-memory cache using a dictionary, suitable for basic memoization within a single process.

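The original listing did not survive extraction; a minimal sketch matching the description above (dictionary-backed, per-entry TTL, not thread-safe; the class name and API are illustrative) could look like:

```python
import time
from typing import Any, Dict, Optional, Tuple

class InMemoryTTLCache:
    """Simple dict-backed cache. Not thread-safe: single-process use only."""

    def __init__(self, default_ttl: float = 60.0):
        self._store: Dict[str, Tuple[Any, float]] = {}  # key -> (value, expiry)
        self._default_ttl = default_ttl

    def set(self, key: str, value: Any, ttl: Optional[float] = None) -> None:
        expires_at = time.monotonic() + (ttl if ttl is not None else self._default_ttl)
        self._store[key] = (value, expires_at)

    def get(self, key: str) -> Optional[Any]:
        entry = self._store.get(key)
        if entry is None:
            return None                      # cache miss
        value, expires_at = entry
        if time.monotonic() > expires_at:    # entry expired: evict lazily
            del self._store[key]
            return None
        return value

cache = InMemoryTTLCache(default_ttl=0.1)
cache.set("greeting", "hello")
print(cache.get("greeting"))   # "hello" while the entry is fresh
time.sleep(0.2)
print(cache.get("greeting"))   # None after the TTL has elapsed
```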
6.3. Distributed Cache with Redis (Cache-Aside Pattern)

This example demonstrates integrating with Redis, a popular open-source, in-memory data store, using the Cache-Aside pattern. It includes serialization for complex objects and robust error handling.

Prerequisites:

* Install redis-py: pip install redis
* Have a Redis server running (e.g., via Docker: docker run --name my-redis -p 6379:6379 -d redis)


Caching System: Architecture Planning - Detailed Study Plan

This document outlines a comprehensive, five-week study plan designed to provide a deep understanding of caching systems, from fundamental concepts to advanced architectural patterns and implementation strategies. This plan is tailored to enable effective planning and design of robust and scalable caching solutions.


Overall Goal

To equip you with the knowledge and practical skills necessary to effectively plan, design, implement, and manage high-performance, resilient caching systems for various application architectures.


Weekly Study Plan

Each week builds upon the previous, progressing from foundational knowledge to practical application and advanced topics.

Week 1: Fundamentals of Caching & Core Concepts

  • Focus Areas:

* Introduction to Caching: What is caching, its purpose, benefits (latency reduction, throughput increase, cost savings), and common challenges.

* Key Metrics: Cache hit ratio, cache miss ratio, eviction rate, latency, throughput, memory usage.

* Types of Caching: Browser/Client-side, CDN (Content Delivery Network), Proxy, Application-level, Database-level, Object Caching.

* Cache Invalidation Strategies: Time-to-Live (TTL), Least Recently Used (LRU), Least Frequently Used (LFU), First-In-First-Out (FIFO), Most Recently Used (MRU).

* Cache Consistency Models: Strong consistency vs. eventual consistency in distributed caching.

  • Learning Objectives:

* Articulate the "why" and "what" of caching.

* Understand and interpret key caching performance metrics.

* Identify different layers where caching can be applied.

* Differentiate between various cache invalidation policies and their trade-offs.

* Recognize the challenges of cache consistency.

Week 2: Caching Architectures & Strategies

  • Focus Areas:

* Caching Patterns:

* Cache-Aside (Lazy Loading): The application checks the cache first and populates it on a miss.

* Write-Through: Synchronous writes to cache and database.

* Write-Back (Write-Behind): Asynchronous writes to the database.

* Read-Through: Cache is responsible for loading data from the database.

* Distributed vs. Local Caching: Pros and cons, use cases.

* In-Memory vs. Persistent Caches: When to use each.

* Multi-Tier Caching: Combining different caching layers for optimal performance.

* Cache Topologies: Client-server, peer-to-peer.

  • Learning Objectives:

* Analyze and select appropriate caching patterns for specific use cases (e.g., high read vs. high write workloads).

* Understand the trade-offs between local and distributed caching.

* Design multi-tier caching strategies to maximize efficiency.

* Evaluate the implications of different cache topologies on system design.

Week 3: Popular Caching Technologies in Depth

  • Focus Areas:

* Redis:

* Overview: In-memory data structure store, uses, advantages.

* Data Structures: Strings, Hashes, Lists, Sets, Sorted Sets.

* Persistence Options: RDB, AOF.

* Advanced Features: Pub/Sub, Transactions, Lua scripting.

* Clustering and High Availability (Redis Sentinel, Redis Cluster).

* Memcached:

* Overview: Simple, high-performance distributed memory object caching system.

* Key-Value store, slab allocation.

* Comparison with Redis: Use cases, strengths, weaknesses.

* Content Delivery Networks (CDNs):

* How CDNs work (edge locations, caching static/dynamic content).

* Major providers: AWS CloudFront, Cloudflare, Akamai.

* Configuration and optimization for web assets.

* Database-Specific Caching: Query caches, object-relational mapping (ORM) caches.

  • Learning Objectives:

* Gain hands-on familiarity with Redis and Memcached commands and concepts.

* Select the most appropriate caching technology based on project requirements.

* Understand the role and configuration of CDNs for web performance.

* Identify and utilize database and ORM caching mechanisms.

Week 4: Designing & Implementing Caching Systems

  • Focus Areas:

* Capacity Planning: Estimating cache size, memory requirements, and number of instances.

* Monitoring & Alerting: Key metrics to track (hit rate, eviction rate, memory usage, network I/O, CPU), tools (Grafana, Prometheus, built-in cloud monitoring).

* Troubleshooting Common Caching Issues: Stale data, cache stampede/dog-piling, thundering herd problem, memory exhaustion.

* Security Considerations: Securing cache instances, data encryption, access control.

* Hands-on Exercise: Design a caching layer for a hypothetical e-commerce product catalog or social media feed.

  • Learning Objectives:

* Develop a methodology for capacity planning of caching infrastructure.

* Implement effective monitoring and alerting strategies for cache health.

* Diagnose and resolve common issues in caching systems.

* Integrate security best practices into caching system design.

* Produce a high-level design document for a caching solution.

Week 5: Advanced Caching Topics & Best Practices

  • Focus Areas:

* Cache Warming & Pre-fetching: Strategies to populate caches proactively.

* Eventual Consistency in Distributed Caches: Understanding its implications and management.

* Preventing Cache Stampede/Dog-Piling: Using locks, semaphores, or intelligent invalidation.

* Idempotency and Caching: Ensuring operations can be repeated without unintended side effects.

* Cache Sharding & Partitioning: Scaling distributed caches.

* Case Studies: Analyze real-world caching implementations (e.g., Netflix, Meta, Amazon) and extract best practices.

  • Learning Objectives:

* Apply advanced techniques like cache warming and pre-fetching.

* Manage eventual consistency challenges in large-scale distributed caches.

* Implement robust solutions to prevent common caching pitfalls like stampede.

* Understand and apply principles of idempotency in cached environments.

* Learn from industry leaders through case study analysis.
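The lock-based stampede prevention from the Week 5 focus areas (a per-key lock with a double check, so only one caller recomputes a missing entry) can be sketched as follows (all names are illustrative):

```python
import threading
import time
from typing import Any, Callable, Dict

cache: Dict[str, Any] = {}
locks: Dict[str, threading.Lock] = {}
locks_guard = threading.Lock()

def get_or_load(key: str, loader: Callable[[], Any]) -> Any:
    """Only one thread recomputes a missing key; the rest wait and reuse it."""
    if key in cache:
        return cache[key]
    with locks_guard:                   # one lock object per key
        lock = locks.setdefault(key, threading.Lock())
    with lock:
        if key not in cache:            # re-check: another thread may have won
            cache[key] = loader()       # expensive recomputation happens once
        return cache[key]

calls = []
def expensive() -> str:
    calls.append(1)
    time.sleep(0.05)                    # simulate a slow database query
    return "result"

threads = [threading.Thread(target=get_or_load, args=("k", expensive)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(len(calls))   # 1 -- the loader ran once despite 8 concurrent readers
```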


Recommended Resources

  • Books:

* "Designing Data-Intensive Applications" by Martin Kleppmann: Chapters on distributed systems, consistency, and caching are invaluable.

* "System Design Interview – An Insider's Guide" by Alex Xu: Provides practical patterns and examples for caching in system design.

  • Online Courses & Tutorials:

* Redis University (university.redis.com): Official, free courses on Redis fundamentals, data structures, and advanced topics.

* Pluralsight/Udemy/Coursera: Search for "System Design," "Redis Deep Dive," or "Distributed Caching" courses.

* DigitalOcean Community Tutorials: Excellent practical guides for setting up and configuring Redis and Memcached.

* AWS/Azure/GCP Documentation: Explore their managed caching services (ElastiCache, Azure Cache for Redis, Memorystore).

  • Official Documentation:

* Redis Documentation (redis.io/documentation): Comprehensive and up-to-date.

* Memcached Documentation (memcached.org): For understanding core concepts.

* CDN Provider Docs: AWS CloudFront, Cloudflare, Akamai.

  • Blogs & Articles:

* Engineering blogs from companies like Netflix, Meta, Uber, Amazon, Google (search for "caching at [company name]").

* Medium articles and technical blogs on specific caching strategies and challenges.

  • Tools:

* Redis-cli & RedisInsight: For interacting with and visualizing Redis data.

* memcached-tool: For basic Memcached interaction.

* Monitoring Tools: Prometheus, Grafana, Datadog (for hands-on practice or understanding metrics).


Milestones

  • End of Week 1: Successfully explain the core concepts of caching and distinguish between various invalidation strategies (e.g., in a brief presentation or concept map).
  • End of Week 2: Propose suitable caching architectures (e.g., Cache-Aside with Write-Through) for two distinct application scenarios, justifying the choice.
  • End of Week 3: Set up a local Redis instance, use its primary data structures, and demonstrate basic Pub/Sub functionality.
  • End of Week 4: Complete a high-level design document for a caching layer for a specified application, including capacity estimates, monitoring plan, and chosen technology.
  • End of Week 5: Present a comprehensive caching strategy, including advanced considerations and best practices, for a complex system, drawing insights from case studies.

Assessment Strategies

  • Weekly Concept Quizzes: Short, multiple-choice or short-answer quizzes to test understanding of theoretical concepts and terminology.
  • Practical Lab Assignments: Hands-on tasks involving setting up and interacting with Redis/Memcached, configuring CDN rules, or writing simple caching logic.
  • Design Document Reviews: Evaluation of proposed caching architectures for completeness, feasibility, scalability, and adherence to best practices.
  • Code Reviews (Optional): If implementing actual caching logic, review for efficiency, error handling, and correct application of patterns.
  • Case Study Analysis & Presentation: Analyze a real-world system's caching strategy, identify strengths/weaknesses, and propose improvements.
  • Final Project/Architecture Review: A comprehensive presentation and discussion of a designed caching system for a given problem statement, covering all aspects learned.

This detailed study plan provides a structured path to mastering caching systems, ensuring a solid foundation for designing and implementing high-performance architectures.

import json
import logging
import time
from typing import Any, Callable, Optional

import redis

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

class RedisCacheClient:
    """
    A client for interacting with Redis as a distributed cache.
    Implements the Cache-Aside pattern.
    """

    def __init__(self, host: str = 'localhost', port: int = 6379, db: int = 0, default_ttl: int = 300):
        self.default_ttl = default_ttl
        try:
            self.redis_client = redis.StrictRedis(host=host, port=port, db=db, decode_responses=True)
            # Ping to check connection immediately
            self.redis_client.ping()
            logging.info(f"Successfully connected to Redis at {host}:{port}/{db}")
        except redis.exceptions.ConnectionError as e:
            logging.error(f"Could not connect to Redis: {e}. Caching will be disabled.")
            self.redis_client = None  # Disable caching if connection fails

    def _serialize(self, value: Any) -> str:
        # JSON serialization for complex objects (the original listing is truncated here;
        # the remaining methods are a reconstruction of the described behavior).
        return json.dumps(value)

    def _deserialize(self, payload: Optional[str]) -> Optional[Any]:
        return json.loads(payload) if payload is not None else None

    def get_or_load(self, key: str, loader: Callable[[], Any], ttl: Optional[int] = None) -> Any:
        # Cache-Aside: check the cache first; fall back to the loader on a miss.
        if self.redis_client is not None:
            try:
                cached = self.redis_client.get(key)
                if cached is not None:
                    return self._deserialize(cached)
            except redis.exceptions.RedisError as e:
                logging.warning(f"Cache read failed for {key}: {e}")
        value = loader()  # cache miss (or cache unavailable): hit the data source
        if self.redis_client is not None:
            try:
                self.redis_client.setex(key, ttl or self.default_ttl, self._serialize(value))
            except redis.exceptions.RedisError as e:
                logging.warning(f"Cache write failed for {key}: {e}")
        return value


Caching System: Comprehensive Review and Documentation

This document provides a detailed review and documentation of the proposed Caching System. It consolidates the design principles, architectural considerations, implementation strategies, and operational best practices to ensure a robust, high-performance, and scalable caching solution. This output is designed to serve as a foundational reference for the system's development, deployment, and ongoing management.


1. Introduction to the Caching System

A caching system is a critical component in modern application architectures, designed to improve data retrieval performance, reduce load on primary data stores, and enhance overall system responsiveness. By storing frequently accessed data in a fast, temporary storage layer (the cache), we can significantly decrease latency and increase throughput for read-heavy workloads.

This document outlines a comprehensive approach to integrating and managing a caching layer, ensuring it aligns with performance, scalability, and reliability requirements.

2. Core Objectives and Benefits

The primary objectives of implementing a robust caching system include:

  • Reduced Latency: Serve frequently requested data much faster than retrieving it from the primary data source (e.g., database).
  • Increased Throughput: Handle a higher volume of requests by offloading read operations from the backend database.
  • Reduced Database Load: Minimize the strain on the primary database, improving its performance and stability.
  • Improved User Experience: Deliver faster response times for end-users, leading to higher satisfaction.
  • Cost Optimization: Potentially reduce infrastructure costs associated with database scaling, especially for read replicas.
  • Enhanced Scalability: Facilitate easier scaling of the application by distributing data access patterns.

3. Key Design Principles

A well-designed caching system adheres to several fundamental principles:

  • Consistency: Managing the balance between data freshness in the cache and the primary data source.
  • Scalability: The cache solution must be able to scale horizontally to accommodate growing data volumes and request rates.
  • Fault Tolerance: The caching system should be resilient to failures, with mechanisms for high availability and data recovery.
  • Performance: Optimized for low latency and high throughput operations.
  • Eviction Policies: Clear strategies for managing cache size and removing stale or less-used data.
  • Cache Invalidation: Effective mechanisms to ensure cached data remains up-to-date or is promptly removed when underlying data changes.

4. Architectural Components (Conceptual)

A typical caching system integrates with various parts of the application architecture:

  • Application Layer: The primary interface for interacting with the cache. Applications will attempt to read from the cache first.
  • Caching Service/Store: The dedicated service responsible for storing and retrieving cached data (e.g., Redis, Memcached). This can be an in-memory, distributed, or local cache.
  • Primary Data Source: The authoritative source of data (e.g., SQL Database, NoSQL Database, external API).
  • Cache Invalidation Mechanism: A system (e.g., message queue, webhooks, TTL) to notify or trigger cache updates/deletions when the primary data source changes.

+-------------------+
|                   |
|  User / Client    |
|                   |
+--------+----------+
         |
         | HTTP/API Requests
         v
+-------------------+
|                   |
|  Application Layer|  (1. Check Cache)
|                   |  (2. If not found, Query DB)
+--------+----------+  (3. Store in Cache)
         |
         | Cache Read/Write
         v
+-------------------+      (Cache Invalidation)
|                   | <--------------------------+
|  Caching Service  |                            |
|  (e.g., Redis)    |                            |
+--------+----------+                            |
         |                                       |
         | Data Retrieval (Cache Miss)           |
         v                                       |
+-------------------+                            |
|                   |                            |
| Primary Data Source | (e.g., Database, API)    |
|                   |----------------------------+
+-------------------+  (Data Change Notifications)

5. Implementation Details and Strategy

5.1. Data Types Suitable for Caching

  • Frequently Accessed Read-Heavy Data: User profiles, product listings, configuration settings, static content.
  • Computationally Expensive Results: Aggregated reports, complex query results, rendered HTML fragments.
  • Session Data: User session information (often handled by dedicated session stores, which are themselves a form of cache).
  • API Responses: External API calls that are idempotent and relatively stable.

5.2. Cache Keys Design

Effective cache key design is crucial for efficient retrieval and management.

  • Specificity: Keys should uniquely identify the cached data (e.g., user:123, product:category:electronics).
  • Readability: Keys should be understandable for debugging and monitoring.
  • Consistency: Use a consistent naming convention across the application.
  • Versioning (Optional): Include version numbers for data schemas if necessary (user:v2:123).
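These conventions can be centralized in one small helper (a sketch; the function name and format are assumptions, matching the user:v2:123 style above):

```python
from typing import Optional

def cache_key(*parts: object, version: Optional[int] = None) -> str:
    """Build a consistent, colon-delimited cache key, e.g. 'user:v2:123'."""
    segments = [str(parts[0])]
    if version is not None:
        segments.append(f"v{version}")               # optional schema version
    segments.extend(str(p) for p in parts[1:])
    return ":".join(segments)

print(cache_key("user", 123))                          # user:123
print(cache_key("user", 123, version=2))               # user:v2:123
print(cache_key("product", "category", "electronics")) # product:category:electronics
```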

5.3. Serialization

Data stored in the cache should be serialized and deserialized efficiently.

  • JSON: Human-readable, widely supported, good for complex objects.
  • Protocol Buffers/Thrift: Binary serialization, more compact, faster for large datasets.
  • MessagePack: Binary JSON, good balance of compactness and readability.

5.4. Caching Patterns

  • Cache-Aside (Lazy Loading): The application is responsible for checking the cache, and if data is not found (cache miss), it fetches from the primary data source and then populates the cache.

* Pros: Simple to implement, application controls data freshness.

* Cons: Initial requests might be slow (cache cold start), potential for stale data if not invalidated.

  • Write-Through: Data is written simultaneously to both the cache and the primary data source.

* Pros: Cache is always consistent with the primary data store (for writes).

* Cons: Slower write operations due to dual writes.

  • Write-Back (Write-Behind): Data is written only to the cache, and the cache asynchronously writes it to the primary data source.

* Pros: Very fast write operations.

* Cons: Risk of data loss if the cache fails before data is persisted; complex to implement.

5.5. Cache Eviction Policies

When the cache reaches its capacity or data needs to be removed, an eviction policy determines which items to remove.

  • Time-To-Live (TTL): Items expire after a set duration. Simplest and most common.
  • Least Recently Used (LRU): Removes the item that has not been accessed for the longest time.
  • Least Frequently Used (LFU): Removes the item that has been accessed the fewest times.
  • First-In, First-Out (FIFO): Removes the item that was added first.
  • Random: Removes a random item.
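An LRU policy, for example, can be sketched in a few lines with collections.OrderedDict (an illustration only; production caches implement eviction natively):

```python
from collections import OrderedDict
from typing import Any, Optional

class LRUCache:
    """Fixed-capacity cache that evicts the least recently used entry."""

    def __init__(self, capacity: int):
        self._store: "OrderedDict[str, Any]" = OrderedDict()
        self._capacity = capacity

    def get(self, key: str) -> Optional[Any]:
        if key not in self._store:
            return None
        self._store.move_to_end(key)          # mark as most recently used
        return self._store[key]

    def set(self, key: str, value: Any) -> None:
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self._capacity:
            self._store.popitem(last=False)   # evict the LRU entry

lru = LRUCache(capacity=2)
lru.set("a", 1); lru.set("b", 2)
lru.get("a")          # touch "a" so "b" becomes least recently used
lru.set("c", 3)       # capacity exceeded: "b" is evicted
print(lru.get("b"))   # None
print(lru.get("a"))   # 1
```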

5.6. Cache Invalidation Strategies

Maintaining data consistency between the cache and the primary data source is critical.

  • Time-To-Live (TTL): Automatically expires cached items after a predefined period. Simple, but can lead to temporary staleness.
  • Explicit Deletion/Update: When data in the primary source changes, the application explicitly deletes or updates the corresponding item in the cache. This is often triggered by database write operations.
  • Publish/Subscribe (Pub/Sub): The primary data source (or a service modifying it) publishes events about data changes. The caching service subscribes to these events and invalidates relevant cache entries.
  • Versioned Caching: Cache keys include a version identifier. When data changes, a new version is created, and the application requests data using the new version key. Old versions eventually expire.
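Versioned caching, the last strategy above, can be sketched as follows (illustrative names; stale versions are simply never read again and are left to expire via TTL):

```python
from typing import Any, Dict

cache: Dict[str, Any] = {}
versions: Dict[str, int] = {}        # current version per logical entity

def versioned_key(entity: str) -> str:
    return f"{entity}:v{versions.get(entity, 1)}"

def read(entity: str, database: Dict[str, Any]) -> Any:
    key = versioned_key(entity)
    if key not in cache:
        cache[key] = database[entity]    # miss: load under the current version
    return cache[key]

def write(entity: str, value: Any, database: Dict[str, Any]) -> None:
    database[entity] = value
    versions[entity] = versions.get(entity, 1) + 1   # bump: old key is now dead

db = {"user:123": {"name": "Ada"}}
print(read("user:123", db))                  # cached under user:123:v1
write("user:123", {"name": "Ada L."}, db)
print(read("user:123", db))                  # fresh read under user:123:v2
```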

6. Operational Best Practices

Successful caching requires ongoing management and monitoring.

  • Monitoring & Alerting:

* Cache Hit Ratio: Percentage of requests served from the cache (target: high).

* Cache Miss Rate: Percentage of requests not found in the cache (target: low).

* Memory Usage: Track cache memory consumption to prevent evictions due to capacity.

* CPU/Network Usage: Monitor for bottlenecks in the caching service.

* Latency: Measure read/write latency to the cache.

* Error Rates: Track errors during cache operations.

* Set up alerts for critical thresholds (e.g., low hit ratio, high memory usage, high error rate).

  • Capacity Planning: Regularly review cache usage patterns and provision resources (memory, CPU, network) to accommodate growth.
  • Security Considerations:

* Network Segmentation: Isolate cache servers within private networks.

* Authentication/Authorization: Secure access to the caching service (e.g., Redis password, TLS).

* Encryption: Encrypt data in transit (TLS) and potentially at rest if sensitive data is cached.

  • High Availability & Disaster Recovery:

* Clustering/Replication: Deploy cache services in a clustered or replicated setup (e.g., Redis Cluster, Sentinel) to ensure availability during node failures.

* Backup/Persistence: For durable caches, ensure data persistence mechanisms are configured (e.g., Redis RDB snapshots, AOF logs). Understand the recovery point objective (RPO) and recovery time objective (RTO).

  • Testing Strategy:

* Unit Tests: Verify cache interaction logic in application code.

* Integration Tests: Ensure the application correctly interacts with the caching service.

* Performance Tests: Benchmark cache hit rates, latency, and throughput under various loads. Simulate cache cold starts.

* Resilience Tests: Test cache failure scenarios (e.g., cache server going down, network partition) and ensure the application degrades gracefully.

7. Benefits and Impact Summary

Implementing this comprehensive caching system will significantly impact the application by:

  • Boosting Performance: Users will experience faster load times and more responsive interactions.
  • Enhancing Scalability: The system will be better equipped to handle increased user traffic without proportionate database load increases.
  • Improving Reliability: Reduced strain on primary data sources enhances their stability and availability.
  • Optimizing Resource Usage: Potentially lower operational costs by reducing the need for extensive database scaling.

8. Recommendations and Next Steps

To successfully deploy and manage the caching system, we recommend the following actionable steps:

  1. Technology Selection: Finalize the specific caching technology (e.g., Redis, Memcached, distributed cache library) based on detailed requirements and existing infrastructure.
  2. Pilot Implementation: Start with caching a critical, high-read, low-write data set to gain experience and validate the chosen patterns.
  3. Detailed Key Design: Develop a comprehensive cache key naming convention and data serialization standard.
  4. Instrumentation & Monitoring Setup: Implement robust monitoring and alerting for all cache metrics before full production rollout.
  5. Documentation: Create internal runbooks and guides for cache management, troubleshooting, and disaster recovery.
  6. Performance Testing: Conduct thorough load and stress testing to validate performance gains and identify bottlenecks.
  7. Iterative Rollout: Gradually introduce caching to different parts of the application, monitoring impact at each stage.
  8. Training: Ensure development and operations teams are trained on cache interaction patterns, invalidation strategies, and monitoring tools.

This detailed documentation provides a solid framework for the successful implementation and operation of a high-performance caching system. By adhering to these principles and strategies, we can significantly enhance the application's overall performance, scalability, and user experience.

'./app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch(err => console.error(err));\n"); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; }\nbody { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; }\n"); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = '"+pn+"';\n}\n"); zip.file(folder+"src/app/app.component.html","
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}