Caching System

Caching System: Comprehensive Study Plan

This document outlines a detailed and actionable study plan for understanding and implementing Caching Systems. This plan is designed to provide a structured learning path, covering fundamental concepts, popular technologies, architectural patterns, and advanced considerations.


1. Introduction and Overview

Caching is a critical technique in software engineering used to improve the performance and scalability of applications by storing frequently accessed data in a faster, more readily available location. This study plan will equip you with the knowledge and practical skills required to design, implement, and manage effective caching solutions.

Goal: To develop a deep understanding of caching principles, various caching strategies, popular caching technologies (e.g., Redis, Memcached), and their application in real-world system design.


2. Weekly Schedule

This 5-week schedule provides a structured approach to mastering caching systems. Each week builds upon the previous, progressing from foundational concepts to advanced architectural patterns and practical implementations.

Week 1: Fundamentals of Caching & In-Memory Caching

  • Focus: Understanding the "why" and "what" of caching, basic concepts, and simple in-memory caching.
  • Key Topics:

* Introduction to Caching: Definition, purpose, benefits (performance, scalability, cost reduction), drawbacks (complexity, staleness).

* Cache Hit vs. Cache Miss.

* Cache Coherence and Consistency.

* Cache Eviction Policies: LRU (Least Recently Used), LFU (Least Frequently Used), FIFO (First-In, First-Out), MRU (Most Recently Used), Random; plus TTL (Time-To-Live) expiration, which removes entries by age rather than by capacity pressure.

* Types of Caches: In-process (application-level), distributed, CDN, browser, database query cache.

* Basic In-Memory Caching: Using hash maps/dictionaries, simple eviction logic.

* Language- and library-level caching mechanisms in Java, Python, and Node.js (e.g., Guava Cache in Java, functools.lru_cache in Python's standard library).
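A week-one exercise in this spirit: a minimal in-memory LRU cache built on Python's collections.OrderedDict. The capacity, keys, and values below are illustrative.

```python
from collections import OrderedDict


class LRUCache:
    """Minimal in-memory cache with Least-Recently-Used eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None                      # cache miss
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict the least recently used entry


cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")       # touching "a" makes it most recently used
cache.put("c", 3)    # capacity exceeded: "b" is evicted, not "a"
```

The same behavior can often be had for free with functools.lru_cache when the cached value is a pure function of its arguments; the hand-rolled version above is useful when you need explicit put/invalidate control.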

Week 2: Distributed Caching Concepts & Memcached

  • Focus: Transitioning from single-node to distributed caching, understanding its challenges, and exploring Memcached.
  • Key Topics:

* Challenges of Distributed Caching: Data distribution, consistency, fault tolerance, network latency.

* Consistent Hashing: How it enables scalable and resilient distributed caches.

* Caching Patterns: Cache-aside, Read-through, Write-through, Write-back (write-behind).

* Introduction to Memcached: Architecture, data model (key-value store), common commands (set, get, add, delete, touch; expiration is specified per item).

* Integrating Memcached with an application (e.g., using a client library).

* Monitoring Memcached instances.
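Consistent hashing, as covered this week, can be sketched as a toy hash ring with virtual nodes. The node names and replica count below are illustrative; a production Memcached client would use a tuned implementation of the same idea.

```python
import bisect
import hashlib


class HashRing:
    """Toy consistent-hash ring with virtual nodes (replicas per physical node)."""

    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self._ring = {}       # position on the ring -> node name
        self._sorted = []     # sorted ring positions for binary search
        for node in nodes:
            self.add(node)

    def _hash(self, key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node: str):
        for i in range(self.replicas):
            pos = self._hash(f"{node}#{i}")
            self._ring[pos] = node
            bisect.insort(self._sorted, pos)

    def remove(self, node: str):
        for i in range(self.replicas):
            pos = self._hash(f"{node}#{i}")
            del self._ring[pos]
            self._sorted.remove(pos)

    def node_for(self, key: str) -> str:
        # Walk clockwise to the first virtual node at or after the key's position.
        pos = self._hash(key)
        idx = bisect.bisect(self._sorted, pos) % len(self._sorted)
        return self._ring[self._sorted[idx]]


ring = HashRing(["cache-a", "cache-b", "cache-c"])
owner_before = ring.node_for("user:42")
ring.remove("cache-b")               # only keys that lived on cache-b move
owner_after = ring.node_for("user:42")
```

The payoff over naive `hash(key) % n` is visible in the remove step: dropping a node remaps only that node's keys instead of reshuffling nearly every key in the cluster.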

Week 3: Redis Deep Dive & Advanced Caching Patterns

  • Focus: In-depth exploration of Redis, its versatile data structures, and advanced caching use cases.
  • Key Topics:

* Introduction to Redis: Architecture, comparison with Memcached (persistence, data structures, Pub/Sub, transactions).

* Redis Data Structures: Strings, Hashes, Lists, Sets, Sorted Sets.

* Redis as a Cache: Implementing Cache-aside, using TTL.

* Advanced Redis Features for Caching:

* Atomic operations and transactions.

* Lua scripting for complex cache logic.

* Pub/Sub for cache invalidation.

* Redis Cluster for high availability and scalability.

* Common Caching Patterns with Redis: Full-page caching, object caching, session store, rate limiting.
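The cache-aside-with-TTL pattern described above can be sketched as follows. The client is passed in so the same logic works against redis-py (a real Redis connection) or any stub exposing get/set; load_user is a hypothetical stand-in for the real database query.

```python
import json


def get_user(client, load_user, user_id, ttl=300):
    """Cache-aside read: try the cache first, fall back to the source, then populate.

    `client` is any Redis-like object with get(key) and set(key, value, ex=ttl);
    with redis-py this would be redis.Redis(decode_responses=True).
    """
    key = f"user:{user_id}"
    cached = client.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit
    user = load_user(user_id)                     # cache miss: hit the source of truth
    client.set(key, json.dumps(user), ex=ttl)     # populate with a TTL for freshness
    return user
```

Against a local Redis this would be wired up as `r = redis.Redis(host="localhost", port=6379, decode_responses=True)` and then `get_user(r, real_loader, 42)`; the `ex` argument is what gives each entry its Time-To-Live.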

Week 4: Web Caching, Database Caching & CDNs

  • Focus: Caching at different layers of a typical web application stack, including browser, reverse proxy, database, and Content Delivery Networks.
  • Key Topics:

* Browser Caching: HTTP caching headers (Cache-Control, Expires, ETag, Last-Modified).

* Proxy Caching: Varnish, Nginx as a cache.

* Database Caching: Query caching, object-relational mapper (ORM) caches (e.g., Hibernate second-level cache).

* Content Delivery Networks (CDNs): How they work, benefits, cache invalidation strategies for CDNs.

* Cache Invalidation Strategies: Active (push-based), Passive (pull-based/TTL), Cache busting.

* Dealing with Cache Stampede/Thundering Herd problem.
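One common mitigation for the cache stampede problem is to coalesce concurrent misses for the same key behind a per-key lock, so only one caller recomputes while the rest wait and reuse the result. A minimal in-process sketch (a distributed deployment would use a distributed lock or probabilistic early expiration instead):

```python
import threading


class StampedeGuard:
    """Collapse concurrent misses for one key into a single recompute."""

    def __init__(self):
        self._cache = {}
        self._locks = {}
        self._meta = threading.Lock()

    def _lock_for(self, key):
        with self._meta:
            return self._locks.setdefault(key, threading.Lock())

    def get(self, key, compute):
        value = self._cache.get(key)
        if value is not None:
            return value                    # fast path: hit without locking
        with self._lock_for(key):           # serialize misses per key
            value = self._cache.get(key)    # re-check: another thread may have filled it
            if value is None:
                value = compute()           # exactly one caller pays this cost
                self._cache[key] = value
            return value
```

The double-check after acquiring the lock is the essential step: without it, every waiting thread would recompute in turn once the lock was released.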

Week 5: System Design, Optimization & Troubleshooting

  • Focus: Applying caching knowledge to system design problems, optimizing cache performance, and handling common issues.
  • Key Topics:

* Choosing the Right Caching Strategy: Factors to consider (data volatility, read/write patterns, consistency requirements, cost).

* Designing a Caching Layer: Placement, sizing, scaling, security.

* Monitoring Caching Systems: Key metrics (hit ratio, latency, memory usage, eviction rates), tools.

* Troubleshooting Common Caching Issues: Stale data, low hit ratio, cache misses, performance bottlenecks.

* Case Studies: Analyzing caching strategies in real-world systems (e.g., Facebook, Twitter, Netflix).

* Introduction to other caching technologies/concepts: Apache Ignite, Hazelcast, distributed data grids.


3. Learning Objectives

Upon completion of this study plan, you will be able to:

  • Understand Core Concepts: Explain the fundamental principles of caching, including cache hits/misses, eviction policies, and consistency models.
  • Evaluate Caching Strategies: Differentiate between various caching patterns (e.g., Cache-aside, Write-through) and select the most appropriate strategy for specific use cases.
  • Implement In-Memory Caches: Develop simple application-level caches using standard programming language features or libraries.
  • Utilize Distributed Caches: Design and implement solutions using popular distributed caching systems like Memcached and Redis.
  • Master Redis: Leverage Redis's diverse data structures and advanced features (Pub/Sub, transactions, scripting) for complex caching requirements.
  • Design Multi-Layer Caching Architectures: Integrate caching at different levels of a system (browser, CDN, API gateway, application, database).
  • Optimize and Troubleshoot: Monitor cache performance, identify bottlenecks, and resolve common caching issues such as stale data and cache stampedes.
  • Apply to System Design: Incorporate caching effectively into broader system architectures to enhance performance, scalability, and resilience.

4. Recommended Resources

This section provides a curated list of resources to support your learning journey.

Books

  • "System Design Interview – An Insider's Guide" by Alex Xu (Volume 1 & 2): Excellent for understanding caching in the context of broader system design.
  • "Redis in Action" by Josiah L. Carlson: A comprehensive guide to Redis, covering caching extensively.
  • "Designing Data-Intensive Applications" by Martin Kleppmann: Chapter 3 (Storage and Retrieval) and Chapter 5 (Replication) touch upon distributed systems concepts relevant to caching.

Online Courses & Tutorials

  • Educative.io: "Grokking the System Design Interview": Features dedicated sections on caching strategies.
  • Pluralsight / Udemy / Coursera: Search for courses on "System Design," "Redis," or "Distributed Caching."
  • Redis University (university.redis.com): Offers free, high-quality courses directly from Redis Labs, including "RU101: Introduction to Redis" and "RU201: Redis for Developers."
  • Memcached Official Documentation: [memcached.org](https://memcached.org/)
  • Redis Official Documentation: [redis.io/documentation](https://redis.io/documentation)

Articles & Blogs

  • High Scalability Blog: Numerous articles on caching strategies employed by large-scale systems.
  • Martin Fowler's Blog: Articles on "Cache-Aside Pattern" and other architectural patterns.
  • Medium / Dev.to: Search for "caching strategies," "Redis vs Memcached," "consistent hashing explained."
  • AWS, Azure, GCP Documentation: Explore their managed caching services (e.g., ElastiCache, Azure Cache for Redis, Memorystore).

Hands-on Practice

  • Docker: Use Docker to quickly set up local instances of Redis and Memcached for experimentation.
  • Online Sandboxes: Platforms like Katacoda or Play-with-Redis for quick command-line practice.
  • GitHub: Explore open-source projects that utilize caching for real-world examples.

5. Milestones

Achieving these milestones will demonstrate progressive mastery of caching systems.

  • Milestone 1 (End of Week 1):

* Implement a simple in-memory cache with LRU eviction in your preferred programming language.

* Clearly articulate the trade-offs of different cache eviction policies.

  • Milestone 2 (End of Week 2):

* Set up a local Memcached instance and integrate it with a basic application to cache database queries or API responses.

* Explain the concept of consistent hashing and its role in distributed caching.

  • Milestone 3 (End of Week 3):

* Build a small application using Redis that demonstrates at least three different Redis data structures for caching (e.g., string for simple key-value, hash for object, sorted set for leaderboard).

* Implement a cache invalidation mechanism using Redis Pub/Sub.

  • Milestone 4 (End of Week 4):

* Analyze a provided system design scenario and propose a multi-layer caching strategy including browser, CDN, and application-level caches. Justify your choices.

* Identify and propose solutions for potential cache stampede scenarios.

  • Milestone 5 (End of Week 5 - Final Project):

* Design and implement a caching layer for a simulated web application (e.g., e-commerce product catalog, social media feed), integrating at least two different caching technologies (e.g., Redis for hot data, Memcached for general object caching).

* Include monitoring metrics for cache hit ratio and latency.


6. Assessment Strategies

Your progress and understanding will be assessed through a combination of practical application, theoretical knowledge, and critical thinking.

  • Weekly Quizzes/Exercises: Short assessments at the end of each week to test understanding of core concepts and practical implementation details.
  • Coding Challenges: Implement specific caching scenarios (e.g., LRU cache, distributed cache client) to assess coding proficiency and understanding of algorithms.
  • System Design Discussions: Participate in whiteboard sessions or written exercises to design caching solutions for given problems, articulating choices and trade-offs.
  • Code Reviews: Review and provide feedback on caching implementations, focusing on best practices, efficiency, and correctness.
  • Final Project Presentation: Present your final caching system design and implementation, demonstrating its functionality, performance characteristics, and the rationale behind your architectural decisions. This will include a demonstration of your monitoring setup and a discussion of potential improvements or scaling strategies.
  • Peer-to-Peer Learning: Engage in discussions with peers, explain concepts, and critique solutions to solidify understanding.

This detailed study plan provides a robust framework for mastering caching systems. By diligently following this schedule and utilizing the recommended resources, you will gain the essential knowledge and practical skills required for designing and implementing high-performance, scalable applications.

gemini Output

As part of the "Caching System" workflow, this deliverable provides comprehensive, detailed, and production-ready code examples for implementing caching mechanisms. Caching is a critical strategy for enhancing application performance, reducing database load, and improving scalability by storing frequently accessed data in a faster, more accessible location.

This output covers various caching approaches, from simple in-memory solutions to distributed caching with Redis, alongside best practices and actionable recommendations.


Caching System: Code Generation

1. Introduction to Caching

Caching is a technique that stores copies of data so that future requests for that data can be served faster. The primary benefits include:

  • Improved Performance: Reduces latency by serving data from a fast cache instead of slower data sources (e.g., databases, external APIs).

Caching System: Comprehensive Review and Documentation

Date: October 26, 2023

Project: Caching System Implementation

Deliverable: Comprehensive Review and Documentation


Executive Summary

This document provides a detailed review and comprehensive documentation of the implemented Caching System. The primary objective of this system is to significantly enhance the performance, responsiveness, and scalability of your applications by reducing the load on backend services and databases. By strategically storing frequently accessed data in high-speed, temporary storage, the Caching System aims to deliver a superior user experience, optimize resource utilization, and ensure the stability of your infrastructure under varying load conditions. This document outlines the system's design, key features, operational considerations, and recommendations for ongoing management and future enhancements.


1. Introduction to the Caching System

The Caching System is a critical component designed to optimize data retrieval and processing across your application ecosystem. It acts as an intermediary layer between your applications and their primary data sources (e.g., databases, external APIs), storing copies of data that are expensive to compute or retrieve.

Primary Goals:

  • Performance Enhancement: Drastically reduce data retrieval times for frequently accessed information.
  • Reduced Backend Load: Minimize the number of requests hitting primary databases and services, preventing bottlenecks and improving their longevity.
  • Improved Scalability: Enable applications to handle a higher volume of user requests without proportional increases in backend infrastructure.
  • Enhanced User Experience: Deliver faster response times and a more fluid interaction for end-users.
  • Cost Optimization: Potentially reduce infrastructure costs by minimizing the need for over-provisioning backend resources.

2. Core Caching Strategy and Architecture

Our caching strategy is designed for efficiency, reliability, and maintainability, leveraging a multi-tiered approach and robust invalidation mechanisms.

2.1 Cache Tiers and Levels

The system employs a layered caching approach to maximize hit rates and minimize latency:

  • Application-Level Cache (In-Memory):

* Purpose: Stores highly specific, frequently accessed data directly within the application's memory space.

* Characteristics: Extremely fast access, but limited by application instance memory and not shared across instances. Best for short-lived, localized data.

* Example: User session data, frequently computed results within a single request context.

  • Distributed Cache (e.g., Redis/Memcached):

* Purpose: A shared, external cache service accessible by multiple application instances.

* Characteristics: Provides a centralized, highly available, and scalable caching layer. Ideal for data shared across the entire application fleet. Offers persistence options (Redis) and various data structures.

* Example: Product catalogs, user profiles, API responses, configuration settings.

  • Content Delivery Network (CDN) Cache (Optional/External):

* Purpose: Caches static and dynamic content geographically closer to end-users.

* Characteristics: Reduces latency for global users, offloads traffic from origin servers. Managed by third-party providers.

* Example: Images, JavaScript files, CSS, videos, static HTML pages.

2.2 Cache Eviction Policies

To manage cache memory effectively and ensure data freshness, the following eviction policies are primarily utilized:

  • Time-To-Live (TTL): Each cached item is assigned an expiration time. After this period, the item is automatically removed from the cache. This is the primary mechanism for ensuring data freshness.
  • Least Recently Used (LRU): When the cache reaches its memory limit, the item that has not been accessed for the longest period is evicted first. This is crucial for managing capacity in distributed caches.
  • Least Frequently Used (LFU): (Used selectively) Evicts items that have been accessed the fewest times. Useful for items that are accessed frequently but might have long periods of inactivity.
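The TTL mechanism above can be sketched with lazy, read-time expiry; a real distributed cache such as Redis handles expiration server-side, so this is purely illustrative.

```python
import time


class TTLCache:
    """Time-To-Live cache: entries expire ttl_seconds after being written.

    Expired entries are removed lazily on read rather than by a background sweep.
    """

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._data = {}  # key -> (value, expires_at)

    def put(self, key, value):
        self._data[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._data[key]  # expired: evict lazily and report a miss
            return None
        return value
```

In practice TTL and a capacity policy such as LRU are layered together: TTL bounds staleness, LRU bounds memory.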

2.3 Cache Invalidation Strategies

Maintaining data consistency between the cache and the primary data source is critical. We implement a combination of strategies:

  • Time-Based Invalidation (TTL): The primary method. Data is considered valid for a predetermined period. After expiration, the next request will fetch fresh data from the source.
  • Write-Through/Write-Around:

* Write-Through: Data is written to both the cache and the primary data store simultaneously. Ensures data consistency but can introduce write latency.

* Write-Around: Data is written directly to the primary data store, bypassing the cache. The item is then fetched into the cache on the next read. Useful for data that is written once but read many times.

  • Event-Driven/Programmatic Invalidation:

* When data in the primary data source is modified (e.g., a database update), a specific event or API call is triggered to explicitly remove or update the corresponding item(s) in the cache. This ensures immediate consistency for critical data.

* Example: A publish/subscribe mechanism (e.g., Redis Pub/Sub, Kafka) can notify interested services to invalidate relevant cache entries upon data changes.

  • Lazy Loading (Cache Aside): Data is only loaded into the cache when it is requested and not found (a cache miss). This is the most common read strategy.
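The event-driven invalidation flow above can be sketched with an in-process publish/subscribe bus; in production the publish hop would run over Redis Pub/Sub or Kafka so that every application instance drops its local copy. The names below are illustrative.

```python
class InvalidationBus:
    """In-process stand-in for a pub/sub channel used to broadcast invalidations."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, key):
        for cb in self._subscribers:
            cb(key)


# Each application instance keeps a local cache and drops entries on notification.
local_cache = {"user:7": {"name": "Ada"}}
bus = InvalidationBus()
bus.subscribe(lambda key: local_cache.pop(key, None))


def update_user(user_id, fields):
    # 1) Write to the primary data store (elided in this sketch).
    # 2) Broadcast the invalidation so cached copies are removed immediately.
    bus.publish(f"user:{user_id}")


update_user(7, {"name": "Grace"})
```

The next read of `user:7` will miss and fetch the fresh record, which is exactly the immediate-consistency behavior this strategy targets for critical data.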

2.4 Data Consistency Model

The Caching System primarily operates under an eventual consistency model. While immediate programmatic invalidation is used for critical updates, the inherent nature of caching with TTLs means there can be a brief window where cached data is slightly stale before it expires or is explicitly invalidated. This trade-off is acceptable for the performance benefits gained, as the window of inconsistency is managed and minimized.

2.5 Key Components

The Caching System typically leverages industry-standard, high-performance technologies:

  • Redis: Utilized as the primary distributed caching layer. Redis offers excellent performance, supports various data structures (strings, hashes, lists, sets, sorted sets), and provides persistence, replication, and clustering capabilities for high availability and scalability.
  • Application Framework Caching Libraries: Integrated within application code (e.g., Spring Cache for Java, Django Cache for Python, custom in-memory solutions) to manage application-level caching and interact with the distributed cache.

3. Key Features and Benefits

The implemented Caching System delivers a range of significant advantages:

  • Significantly Improved Application Performance:

* Reduced latency for data retrieval, leading to faster page loads and API response times.

* Direct impact on user satisfaction and engagement.

  • Reduced Load on Backend Systems:

* Offloads read traffic from databases, message queues, and other microservices.

* Allows backend systems to focus on write operations and complex computations, improving their stability and throughput.

  • Enhanced Scalability and Reliability:

* Applications can handle higher concurrent user loads without requiring proportional scaling of expensive backend resources.

* Provides a buffer against temporary backend outages or performance degradation, as some data may still be served from the cache.

  • Optimized Resource Utilization:

* More efficient use of existing database and server resources.

* Potential for reduced operational costs by delaying or minimizing the need for expensive infrastructure upgrades.

  • Flexible Data Handling:

* Supports caching of various data types, including JSON objects, serialized domain models, API responses, and HTML fragments.

* Configurable TTLs and eviction policies allow fine-grained control over data freshness.


4. Implementation Details and Best Practices

The Caching System is integrated into the application architecture following established best practices.

4.1 Integration Points

Caching logic is strategically placed at various layers:

  • API Gateway/Edge Layer: Caches static content and common API responses for public-facing endpoints (often via CDN or an edge cache).
  • Application Service Layer: The primary integration point. Business logic determines what data to cache, how long, and how to invalidate it. Cache-aside pattern is commonly used here.
  • Data Access Layer (Repository/DAO): While less common for direct caching, this layer interacts with the cache when fetching data from the primary data source, ensuring data consistency.

4.2 Data Serialization/Deserialization

  • Data stored in the distributed cache is serialized into a compact and efficient format.
  • JSON: Commonly used for its human-readability and widespread support.
  • MessagePack/Protocol Buffers: Considered for scenarios requiring maximum efficiency and smaller payload sizes, especially for high-volume data.
  • Binary Serialization: For specific application objects where native language serialization offers performance benefits.
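A sketch of the JSON option, serializing a hypothetical domain object into a compact form for cache storage; MessagePack or Protocol Buffers would slot in behind the same two functions when payload size matters.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class Product:
    id: int
    name: str
    price_cents: int


def to_cache(product: Product) -> str:
    """Serialize a domain object to compact JSON (no whitespace) for the cache."""
    return json.dumps(asdict(product), separators=(",", ":"))


def from_cache(raw: str) -> Product:
    """Rebuild the domain object from its cached JSON representation."""
    return Product(**json.loads(raw))


raw = to_cache(Product(1, "kettle", 2499))
restored = from_cache(raw)
```

Keeping serialization behind a narrow to_cache/from_cache boundary is what makes a later switch of format a local change rather than an application-wide one.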

4.3 Error Handling and Fallbacks

Robust error handling is implemented to ensure the application remains functional even if the caching service becomes unavailable:

  • Cache Miss: If data is not found in the cache, the application always falls back to the primary data source.
  • Cache Service Unavailability: If the distributed cache service (e.g., Redis) is unreachable, the application is configured to bypass the cache and directly retrieve data from the primary source. This ensures application continuity, albeit with potentially reduced performance.
  • Circuit Breaker Pattern: Employed to prevent cascading failures by temporarily preventing requests to a failing cache service, allowing it time to recover.
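The cache-miss and unavailability fallbacks above can be sketched as a small wrapper; cache_get and source_get are hypothetical callables standing in for the real cache client and data-access layer.

```python
import logging

logger = logging.getLogger("cache")


def get_with_fallback(cache_get, source_get, key):
    """Serve from the cache when possible; on any cache error, log and go to
    the primary source so the request still succeeds (degraded, not down)."""
    try:
        value = cache_get(key)
        if value is not None:
            return value              # cache hit
    except Exception:                 # cache service unreachable, timeout, etc.
        logger.warning("cache unavailable; bypassing for key %s", key)
    return source_get(key)            # cache miss or cache failure: primary source
```

A circuit breaker would wrap the `cache_get` call so that after repeated failures the cache is skipped outright for a cool-down period, avoiding a timeout penalty on every request.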

4.4 Cache Warm-up Strategies

For critical data that must be available immediately or for systems with predictable peak loads, cache warm-up strategies are implemented:

  • On-Demand Warm-up: Specific data sets are proactively loaded into the cache during application startup or via an administrative trigger.
  • Scheduled Warm-up: Background jobs periodically fetch and populate the cache with frequently accessed data before anticipated peak usage times.

5. Operational Considerations

Effective operation and maintenance are crucial for the long-term success of the Caching System.

5.1 Monitoring and Alerting

Comprehensive monitoring is in place to track the health and performance of the caching infrastructure:

  • Key Metrics Monitored:

* Cache Hit Ratio: Percentage of requests served from the cache (critical indicator of effectiveness).

* Cache Miss Ratio: Percentage of requests that required fetching from the primary source.

* Latency: Time taken to retrieve data from the cache.

* Memory Usage: Total memory consumed by the cache.

* CPU/Network Utilization: Resources consumed by the caching service.

* Evictions: Number of items evicted due to memory pressure or TTL.

* Connection Count: Number of active client connections.

  • Alerting: Automated alerts are configured for critical thresholds (e.g., low hit ratio, high memory usage, service unavailability) to ensure prompt intervention.
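A sketch of how the hit ratio might be tracked in application code for export to a monitoring system; the counter names are illustrative, and a real deployment would wire this into its metrics library.

```python
class CacheStats:
    """Track hits and misses so the hit ratio can be exported as a metric."""

    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, hit: bool):
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    @property
    def hit_ratio(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0


stats = CacheStats()
for outcome in (True, True, True, False):  # simulate 3 hits and 1 miss
    stats.record(outcome)
```

An alert on this ratio dropping below a chosen threshold is typically the first signal that keys are being evicted too aggressively or that invalidation is over-firing.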

5.2 Maintenance and Management

  • Cache Flushing: Tools and procedures are available for selectively or completely flushing cache entries, useful for urgent data updates or troubleshooting.
  • Scaling: The distributed cache (e.g., Redis Cluster) is designed for horizontal scalability, allowing for easy expansion of capacity as data volume or request load increases.
  • Configuration Management: Cache settings (TTLs, eviction policies) are managed through configuration files or environment variables, allowing for dynamic adjustments without code changes.

5.3 Disaster Recovery and High Availability

  • Replication: The distributed cache is configured with replication (e.g., Redis primary-replica setup) to ensure data availability and fault tolerance. In case of a primary node failure, a replica can be promoted.
  • Clustering: For larger deployments, a clustered setup (e.g., Redis Cluster) provides sharding and automatic failover, distributing data across multiple nodes and enhancing resilience.
  • Data Persistence (Redis): While primarily an in-memory store, Redis can be configured for data persistence (RDB snapshots, AOF logs) to minimize data loss in catastrophic scenarios, though this can impact performance.

6. Security Aspects

Security is a paramount concern for any data-handling system.

  • Network Segmentation: The caching service is deployed within a secure, private network segment, isolated from public access.
  • Access Control: Strict authentication and authorization mechanisms are enforced for applications and services connecting to the cache. This includes using strong credentials, IAM roles, or client certificates.
  • Data Encryption:

* In Transit: All communication between applications and the caching service is encrypted using TLS/SSL.

* At Rest: Data stored persistently by the caching service (if configured for persistence) is encrypted at the storage level.

  • Vulnerability Management: Regular security audits and vulnerability scanning are performed on the caching infrastructure and software components to identify and mitigate potential risks.
  • Sensitive Data Handling: Caching of highly sensitive data (e.g., PII, financial details) is carefully evaluated. If cached, it must be encrypted end-to-end and adhere to all relevant compliance standards.

7. Scalability and Performance Tuning

The Caching System is designed to scale and is subject to continuous performance optimization.

  • Horizontal Scaling: The distributed cache can be scaled horizontally by adding more nodes to the cluster, increasing both memory capacity and throughput.
  • Sharding: Data can be sharded across multiple cache instances based on a hashing strategy, distributing the load and allowing for greater total capacity.
  • Benchmarking and Load Testing: Regular load testing is conducted to identify performance bottlenecks and validate the system's ability to handle anticipated peak loads.
  • Capacity Planning: Based on growth projections and performance metrics, proactive capacity planning ensures that the caching infrastructure can meet future demands.
  • Code Optimization: Application code interacting with the cache is continuously reviewed for efficiency, ensuring optimal key generation, serialization, and retrieval patterns.

8. Recommendations and Next Steps

To maximize the value and ensure the ongoing success of the Caching System, we recommend the following:

8.1 Deployment Strategy

  • Phased Rollout: Implement caching for less critical or high-impact areas first, gradually expanding to other parts of the application.
  • A/B Testing: For critical features, consider A/B testing cached vs. non-cached versions to quantitatively measure performance improvements.

8.2 Training and Documentation

  • Developer Training: Provide ongoing training for development teams on best practices for interacting with the caching system, including key design patterns, invalidation strategies, and monitoring.
  • Internal Documentation: Maintain up-to-date internal documentation covering cache topologies, key naming conventions, common TTLs, and operational playbooks.

8.3 Performance Baselines and SLOs

  • Establish Baselines: Define clear performance baselines (e.g., average response times, cache hit ratios) before and after enabling caching, and use them to set service-level objectives (SLOs) for ongoing operation.
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}