Caching System

This document outlines the comprehensive details of the newly implemented Caching System, providing an overview of its architecture, features, implementation, and operational guidelines. This system is designed to significantly enhance the performance, scalability, and user experience of your applications.


Caching System Documentation

Document Version: 1.0

Date: October 26, 2023

Prepared For: Valued Customer

Prepared By: PantheraHive Team


1. Executive Summary

This document details the Caching System designed and implemented to optimize your application's performance and scalability. By strategically storing frequently accessed data, the caching system drastically reduces the load on primary databases and backend services, leading to faster response times, improved user experience, and enhanced system resilience. This deliverable covers the architecture, key features, implementation specifics, operational best practices, and future considerations for the caching solution.

2. Introduction

The primary goal of integrating a robust Caching System is to address common performance bottlenecks associated with data retrieval and computation. This system aims to reduce the load on primary databases and backend services, shorten response times, and improve scalability and resilience.

This document serves as a comprehensive guide for understanding, operating, and extending the Caching System.

3. Caching System Architecture Overview

The implemented Caching System integrates a distributed cache layer to offload data retrieval from the primary data stores.

3.1. High-Level Architecture

The caching system is designed as a separate, highly available service that interacts with both the application layer and the primary database.

+-------------------+       +--------------------+       +---------------------+
|   User/Client     |       | Application Servers|       | Primary Data Store  |
| (Web/Mobile App)  |------>| (Backend Services) |------>| (e.g., PostgreSQL)  |
+-------------------+       |                    |       |                     |
                            |  1. Request Data   |       |                     |
                            |  2. Check Cache    |<----->| 4. Fetch from DB    |
                            |  3. Cache Hit/Miss |       |    (on cache miss)  |
                            |  5. Store in Cache |       |                     |
                            +--------------------+       +---------------------+
                                     | ^
                                     | | (Read/Write)
                                     v |
                               +----------------------+
                               |    Caching Layer     |
                               | (e.g., Redis Cluster)|
                               +----------------------+
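
The numbered flow in the diagram can be sketched in a few lines of Python. This is a minimal illustration only: the dict-based cache and `fetch_from_db` are placeholders standing in for the Redis cluster and the primary data store, not part of the delivered system.

```python
# Minimal sketch of the numbered flow above. The dict-based cache and
# fetch_from_db are placeholders for Redis and the primary database.
cache = {}

def fetch_from_db(key):
    # 4. Fetch from the primary data store (simulated here).
    return f"db_value_for_{key}"

def get_data(key):
    # 2. Check the cache first.
    if key in cache:
        # 3. Cache hit: serve directly from the caching layer.
        return cache[key]
    # 3. Cache miss: fall through to the database.
    value = fetch_from_db(key)
    # 5. Store the result in the cache for subsequent requests.
    cache[key] = value
    return value
```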

Caching System Architecture Study Plan

This document outlines a comprehensive and detailed study plan designed to equip you with a deep understanding of Caching System Architecture. The goal is to provide a structured approach to mastering the principles, design patterns, and practical implementation of caching solutions, enabling you to make informed architectural decisions for high-performance, scalable systems.


1. Executive Summary

The "Caching System Architecture Study Plan" is a structured program spanning approximately 4-6 weeks, focusing on fundamental caching concepts, architectural patterns, popular technologies, and advanced design considerations. It is designed to move from theoretical understanding to practical application, culminating in the ability to design and evaluate robust caching strategies for various system requirements. This plan emphasizes hands-on learning and critical thinking, ensuring a holistic grasp of caching systems as a critical component of modern software architecture.


2. Learning Objectives

Upon successful completion of this study plan, you will be able to:

  • Understand Core Caching Concepts: Articulate the purpose, benefits, and challenges of caching, including cache hits/misses, eviction policies, and invalidation strategies.
  • Evaluate Caching Strategies: Analyze and compare different caching patterns (e.g., Cache-Aside, Write-Through, Write-Back, Read-Through) and determine their suitability for specific use cases.
  • Design Caching Layers: Architect effective caching solutions for various application types (web, API, database), considering factors like data consistency, scalability, and fault tolerance.
  • Gain Proficiency with Caching Technologies: Work hands-on with leading caching technologies such as Redis and Memcached, understanding their strengths, weaknesses, and appropriate applications.
  • Address Advanced Caching Challenges: Identify and mitigate common issues in distributed caching, including cache stampede, hot-key problems, consistency models, and monitoring requirements.
  • Integrate Caching into System Design: Incorporate caching effectively into broader system architectures, demonstrating an understanding of its interaction with databases, CDNs, and application logic.

3. Weekly Study Schedule

This schedule is designed for dedicated study, assuming approximately 10-15 hours per week. It is flexible and can be adapted based on individual pace and prior knowledge.

Week 1: Fundamentals of Caching

  • Learning Objectives: Grasp the core concepts of caching, its importance, and basic operational mechanisms.
  • Topics:

* What is Caching? Why is it essential for performance and scalability?

* Types of Caching: Client-side, Server-side (Application, Database), CDN caching.

* Key Caching Metrics: Cache hit ratio, latency reduction.

* Cache Eviction Policies: LRU (Least Recently Used), LFU (Least Frequently Used), FIFO (First In, First Out), ARC (Adaptive Replacement Cache).

* Cache Invalidation Strategies: Time-To-Live (TTL), explicit invalidation, consistency models.

  • Activities:

* Read foundational articles and documentation.

* Conceptual exercises: Given a scenario, propose an appropriate eviction policy.

* Discuss the trade-offs between different invalidation strategies.
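
For the Week 1 exercises, Python's standard library offers a quick way to observe LRU eviction and hit/miss accounting before building anything custom: `functools.lru_cache` exposes counters via `cache_info()`.

```python
import functools

# functools.lru_cache provides a ready-made LRU cache, handy for
# experimenting with eviction behaviour and hit/miss metrics.
@functools.lru_cache(maxsize=2)
def square(n: int) -> int:
    return n * n

square(1)        # miss
square(2)        # miss
square(1)        # hit: 1 is still cached
square(3)        # miss: evicts 2, the least recently used entry
print(square.cache_info())   # hits=1, misses=3, maxsize=2, currsize=2
```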

Week 2: Caching Architectures & Patterns

  • Learning Objectives: Understand common caching architectural patterns and their implications for system design.
  • Topics:

* Single-Node vs. Distributed Caching: Advantages and challenges.

* In-Memory vs. Persistent Caching: When to use each.

* Common Caching Patterns:

* Cache-Aside: Application manages cache reads and writes.

* Write-Through: Data written simultaneously to cache and database.

* Write-Back: Data written to cache first, then asynchronously to database.

* Read-Through: Cache fetches missing data from the database.

* CDN Integration: How Content Delivery Networks enhance caching for static assets.

* Database Caching: Query caching, result set caching, object caching (ORM level).

  • Activities:

* Analyze real-world case studies of systems employing different caching patterns.

* Design exercises: Sketch architectural diagrams for a web application using Cache-Aside and Write-Through patterns.

* Compare the complexity and consistency guarantees of each pattern.
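
As a warm-up for the design exercises, the Write-Back pattern can be reduced to a dirty-key buffer that is flushed to the store later. This is a toy sketch under simplifying assumptions: the in-memory `db` dict and the explicit `flush()` call stand in for a real store and a background flush worker.

```python
# Toy Write-Back sketch: writes land in the cache immediately and are
# persisted to the backing store only when flush() runs.
cache: dict = {}
db: dict = {}        # stands in for the primary data store
dirty: set = set()   # keys written to cache but not yet persisted

def write(key, value):
    cache[key] = value   # low-latency write: cache only
    dirty.add(key)

def flush():
    # In production this would run on a timer or a background worker;
    # until it runs, a cache failure loses the dirty writes.
    for key in list(dirty):
        db[key] = cache[key]
        dirty.discard(key)
```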

Week 3: Popular Caching Technologies - Practical Deep Dive

  • Learning Objectives: Gain hands-on experience with industry-standard caching technologies.
  • Topics:

* Redis:

* Data structures (Strings, Hashes, Lists, Sets, Sorted Sets).

* Persistence (RDB, AOF).

* Pub/Sub, Transactions, Lua scripting.

* Clustering and High Availability (Redis Sentinel, Redis Cluster).

* Memcached:

* Key-value store simplicity.

* Distributed hash table architecture.

* Comparison with Redis: Use cases, feature sets.

* Cloud Caching Services: Overview of AWS ElastiCache (Redis/Memcached), Azure Cache for Redis, GCP Memorystore.

  • Activities:

* Hands-on Lab: Set up local instances of Redis and Memcached (e.g., using Docker).

* Perform basic operations (SET, GET, DEL, INCR, etc.) and experiment with different Redis data types.

* Implement a simple application (e.g., a Python/Node.js script) that interacts with both Redis and Memcached.

* Explore configuration options for persistence and memory limits.
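
The basic lab operations (SET with a TTL, GET, INCR) can be rehearsed without a running server. The sketch below mimics their semantics in plain Python; with the real redis-py client the equivalent calls are `r.set(key, value, ex=ttl)`, `r.get(key)`, and `r.incr(key)`. The `FakeKV` class itself is only a stand-in for the exercises.

```python
import time

class FakeKV:
    """Dependency-free stand-in for the SET/GET/INCR lab exercises."""

    def __init__(self):
        self._data = {}   # key -> (string value, expiry timestamp or None)

    def set(self, key, value, ex=None):
        # ex mirrors Redis's SET ... EX: time-to-live in seconds.
        expiry = time.monotonic() + ex if ex is not None else None
        self._data[key] = (str(value), expiry)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if expiry is not None and time.monotonic() > expiry:
            del self._data[key]   # lazily expire on access
            return None
        return value

    def incr(self, key):
        # Missing keys count from zero, as in Redis INCR.
        current = int(self.get(key) or 0)
        self.set(key, current + 1)
        return current + 1
```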

Week 4: Advanced Design Considerations & Operational Aspects

  • Learning Objectives: Address complex challenges in caching and understand operational best practices.
  • Topics:

* Cache Consistency: Strong vs. eventual consistency, techniques for maintaining consistency in distributed systems.

* Distributed Cache Challenges: CAP theorem implications, split-brain problem, network partitions.

* Performance Bottlenecks: Cache stampede (thundering herd), dog-piling, hot-key issues, cache preloading.

* Monitoring and Alerting: Key metrics to track (hit ratio, latency, memory usage, evictions), setting up alerts.

* Security: Securing cache access, data encryption (in-transit, at-rest).

* Scaling Caching Systems: Horizontal vs. vertical scaling, sharding, replication.

* Cost Optimization: Balancing performance gains with infrastructure costs.

  • Activities:

* Design problem-solving: Propose solutions for a cache stampede scenario.

* Discuss how to monitor a caching system effectively.

* Analyze potential security vulnerabilities in a cached application.

* Review strategies for handling data consistency in a highly distributed environment.


4. Recommended Resources

Books & Publications:

  • "Designing Data-Intensive Applications" by Martin Kleppmann: The chapters "Partitioning," "Transactions," and "The Trouble with Distributed Systems" provide excellent foundational knowledge relevant to distributed caching.
  • "System Design Interview – An Insider's Guide" by Alex Xu: Contains dedicated sections and examples on caching strategies and system design.
  • "Redis in Action" by Josiah L. Carlson: A practical guide to using Redis effectively.

Online Courses & Tutorials:

  • Redis University: Offers free, comprehensive courses directly from the creators of Redis, covering fundamentals to advanced topics.
  • Memcached Official Documentation: Essential for understanding Memcached's architecture and usage.
  • Cloud Provider Documentation:

* AWS ElastiCache Documentation

* Azure Cache for Redis Documentation

* Google Cloud Memorystore Documentation

  • System Design Primer (GitHub Repository): A highly regarded resource for system design concepts, including extensive sections on caching.
  • High Scalability Blog: Features numerous articles on real-world caching implementations and challenges from major tech companies.

Tools & Platforms:

  • Docker: For easily setting up and experimenting with Redis and Memcached locally.
  • Redis CLI: Command-line interface for interacting with Redis.
  • ab (ApacheBench), JMeter, Locust: Performance testing tools to simulate load and measure cache effectiveness.
  • Grafana/Prometheus: For monitoring and visualizing caching metrics (integrates well with Redis Exporter).

5. Milestones

Achieving these milestones will demonstrate progressive mastery of caching system architecture:

  • Milestone 1 (End of Week 1): Foundational Understanding Achieved

* Successfully articulate the core benefits and challenges of caching, including different eviction policies and invalidation strategies.

* Pass a self-assessment quiz on fundamental caching concepts.

  • Milestone 2 (End of Week 2): Architectural Pattern Proficiency

* Ability to describe and differentiate between Cache-Aside, Write-Through, Write-Back, and Read-Through patterns.

* Present a high-level architectural design for a hypothetical application, clearly justifying the chosen caching pattern.

  • Milestone 3 (End of Week 3): Hands-on Technology Competence

* Successfully set up and interact with local instances of Redis and Memcached.

* Implement a basic caching layer in a simple application using one of the learned technologies.

* Demonstrate understanding of Redis data structures and basic commands.

  • Milestone 4 (End of Week 4): Advanced Design & Operational Insight

* Identify and propose solutions for common distributed caching problems (e.g., cache stampede, consistency issues).

* Outline a monitoring strategy for a production caching system.

* Participate in a design review, offering informed critiques and suggestions regarding caching implementations.

  • Final Milestone (End of Plan): Comprehensive Caching System Design

* Deliver a detailed architectural plan for integrating a caching system into a complex application, covering technology selection, strategy, consistency model, and operational considerations.


6. Assessment Strategies

To ensure thorough understanding and practical application, the following assessment strategies will be employed:

  • Weekly Self-Assessments/Quizzes: Short conceptual quizzes at the end of each week to reinforce learning and identify areas needing further review.
  • Design Exercises: Regular design challenges requiring you to propose caching solutions for specific scenarios, including justification of choices and architectural diagrams.
  • Hands-on Labs & Code Reviews: Evaluation of practical implementations, such as setting up caching technologies, writing client code, and configuring advanced features. Code will be reviewed for best practices, efficiency, and correctness.
  • Case Study Analysis: Presenting an analysis of a real-world caching system (e.g., how Netflix uses caching) and discussing its strengths, weaknesses, and potential improvements.
  • Final Project/Presentation: A capstone project where you design a comprehensive caching strategy for a given application requirement. This will involve documenting architectural choices, technology selection, consistency models, and operational plans. A presentation of this design to a peer group or mentor will be required.
  • Interactive Discussions & Q&A Sessions: Active participation in discussions on complex topics, challenging assumptions, and articulating solutions to nuanced problems.

This detailed study plan provides a robust framework for mastering Caching System Architecture. By diligently following the schedule, engaging with the recommended resources, and actively participating in the assessments, you will develop the expertise necessary to design, implement, and manage high-performing and resilient caching solutions.


Caching System: Code Generation and Implementation Guide

This document provides a comprehensive overview and production-ready code examples for implementing a robust caching system. Caching is a critical technique for improving application performance, scalability, and reducing the load on primary data stores.


1. Introduction to Caching Systems

A caching system stores copies of frequently accessed data in a faster, more readily available location (the cache) than the original data source. When an application requests data, it first checks the cache. If the data is found (a "cache hit"), it's retrieved quickly. If not (a "cache miss"), the data is fetched from the primary source, stored in the cache for future use, and then returned to the application.

Key Benefits:

  • Improved Performance: Faster data retrieval due to reduced latency and fewer I/O operations on the primary data source.
  • Reduced Database/API Load: Offloads read requests from backend databases, APIs, or computationally expensive processes, preventing bottlenecks and improving system stability.
  • Enhanced Scalability: Allows applications to handle a higher volume of requests without proportionally increasing the load on the primary data store.
  • Cost Efficiency: Can reduce operational costs by minimizing resource usage on expensive database servers or external API calls.

2. Core Caching Concepts

Understanding these concepts is crucial for designing an effective caching strategy:

  • Cache Hit: Occurs when requested data is found in the cache.
  • Cache Miss: Occurs when requested data is not found in the cache and must be fetched from the original data source.
  • Time-To-Live (TTL): A duration after which a cached item is considered stale and automatically removed or invalidated.
  • Cache Eviction Policies: Algorithms used to decide which items to remove from the cache when it reaches its capacity limit. Common policies include:

* LRU (Least Recently Used): Discards the least recently used items first.

* LFU (Least Frequently Used): Discards the least frequently used items first.

* FIFO (First-In, First-Out): Discards the first item added to the cache.

* MRU (Most Recently Used): Discards the most recently used items first (less common).

  • Cache Coherency/Invalidation: Ensuring that data in the cache remains consistent with the original data source. This is a complex challenge, often managed through TTLs, explicit invalidation, or write-through/write-back strategies.

3. Caching Strategies

Different approaches to integrating caching into your application flow:

  • Cache-Aside (Lazy Loading):

* How it works: The application is responsible for checking the cache first. If a cache miss occurs, the application fetches data from the database, stores it in the cache, and then returns it.

* Pros: Simple to implement, tolerant to cache failures, good for read-heavy workloads.

* Cons: Data might be stale until the TTL expires or explicit invalidation occurs.

  • Write-Through:

* How it works: Data is written simultaneously to both the cache and the primary data store.

* Pros: Data in cache is always consistent with the database, simpler read logic.

* Cons: Higher write latency, as both operations must complete.

  • Write-Back:

* How it works: Data is written only to the cache initially. The cache then asynchronously writes the data to the primary data store.

* Pros: Very low write latency, can coalesce multiple writes.

* Cons: Data loss risk if the cache fails before data is persisted, more complex to implement.

  • Read-Through:

* How it works: Similar to Cache-Aside, but the cache itself (or a caching library/service) is responsible for fetching data from the primary data store on a cache miss. The application only interacts with the cache.

* Pros: Simplifies application logic, cache manages data fetching.

* Cons: Requires the cache to have knowledge of the data source.
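
The consistency trade-off between Cache-Aside and Write-Through is easiest to see side by side. This is a toy sketch with dicts standing in for the cache and the primary store:

```python
cache: dict = {}
db: dict = {}   # stands in for the primary data store

def write_cache_aside(key, value):
    # Cache-Aside write: update the store and drop the cached copy;
    # the next read repopulates the cache (lazy loading).
    db[key] = value
    cache.pop(key, None)

def write_through(key, value):
    # Write-Through: both locations are updated before returning,
    # so the cache never serves a value the store doesn't have.
    db[key] = value
    cache[key] = value
```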


4. Implementation Example 1: In-Memory LRU Cache (Python)

This example demonstrates a basic, thread-safe in-memory cache using the Least Recently Used (LRU) eviction policy. It's suitable for single-instance applications or when caching small, frequently accessed datasets locally.


import collections
import threading
import time
from typing import Any, Optional

class LRUCache:
    """
    A thread-safe, in-memory LRU (Least Recently Used) cache with Time-To-Live (TTL) support.

    This cache maintains a fixed maximum capacity. When the cache is full and a new
    item needs to be added, the least recently used item is evicted.
    Each item can also have an optional Time-To-Live (TTL), after which it's considered
    stale and will be re-fetched or removed upon access.
    """

    def __init__(self, capacity: int, default_ttl_seconds: Optional[float] = None):
        """
        Initializes the LRUCache.

        Args:
            capacity (int): The maximum number of items the cache can hold. Must be > 0.
            default_ttl_seconds (Optional[float]): Default Time-To-Live for items in seconds.
                                                   If None, items don't expire by default.
        """
        if capacity <= 0:
            raise ValueError("Cache capacity must be greater than 0.")

        self.capacity = capacity
        self.default_ttl_seconds = default_ttl_seconds
        # OrderedDict maintains insertion order, which we'll use to track recency.
        # When an item is accessed, we move it to the end (most recently used).
        # When an item is added and capacity is exceeded, we remove from the beginning (least recently used).
        # Value stored will be a tuple: (data, expiry_timestamp)
        self._cache = collections.OrderedDict()
        self._lock = threading.Lock() # For thread-safety

    def _get_expiry_timestamp(self, ttl_seconds: Optional[float]) -> Optional[float]:
        """Calculates the expiry timestamp based on current time and TTL."""
        if ttl_seconds is None:
            return None
        return time.monotonic() + ttl_seconds

    def get(self, key: Any) -> Optional[Any]:
        """
        Retrieves an item from the cache.

        If the item is found and is not expired, it's marked as most recently used
        and returned. Otherwise, None is returned.

        Args:
            key (Any): The key of the item to retrieve.

        Returns:
            Optional[Any]: The cached value if found and not expired, otherwise None.
        """
        with self._lock:
            if key not in self._cache:
                return None

            data, expiry_timestamp = self._cache[key]

            # Check for expiry if TTL is set
            if expiry_timestamp is not None and time.monotonic() > expiry_timestamp:
                del self._cache[key] # Remove expired item
                return None

            # Move the accessed item to the end to mark it as most recently used
            self._cache.move_to_end(key)
            return data

    def put(self, key: Any, value: Any, ttl_seconds: Optional[float] = None):
        """
        Adds or updates an item in the cache.

        If the cache is at capacity, the least recently used item is evicted.
        The new or updated item is marked as most recently used.

        Args:
            key (Any): The key of the item to store.
            value (Any): The value to store.
            ttl_seconds (Optional[float]): Specific TTL for this item in seconds.
                                           If None, uses the default_ttl_seconds of the cache.
        """
        with self._lock:
            current_ttl = ttl_seconds if ttl_seconds is not None else self.default_ttl_seconds
            expiry_timestamp = self._get_expiry_timestamp(current_ttl)

            if key in self._cache:
                # Update existing item and mark as most recently used
                self._cache[key] = (value, expiry_timestamp)
                self._cache.move_to_end(key)
            else:
                # Add new item
                if len(self._cache) >= self.capacity:
                    # Evict the least recently used item (first item)
                    self._cache.popitem(last=False)
                self._cache[key] = (value, expiry_timestamp)

    def delete(self, key: Any) -> bool:
        """
        Removes an item from the cache.

        Args:
            key (Any): The key of the item to remove.

        Returns:
            bool: True if the item was found and removed, False otherwise.
        """
        with self._lock:
            if key in self._cache:
                del self._cache[key]
                return True
            return False

    def clear(self):
        """Clears all items from the cache."""
        with self._lock:
            self._cache.clear()

    def size(self) -> int:
        """Returns the current number of items in the cache."""
        with self._lock:
            # Optionally, one could clean expired items here, but usually,
            # expiry check is done on 'get' to avoid constant background cleanup.
            # For accurate size, one might iterate and remove expired, but for
            # a simple size count, this is sufficient.
            return len(self._cache)

    def __repr__(self) -> str:
        """String representation of the cache."""
        with self._lock:
            # len() is used directly here: calling self.size() would try to
            # re-acquire the non-reentrant lock and deadlock.
            return f"LRUCache(capacity={self.capacity}, size={len(self._cache)}, items={list(self._cache.keys())})"


# --- Usage Example ---
if __name__ == "__main__":
    print("--- Testing LRU Cache without TTL ---")
    cache = LRUCache(capacity=3)

    cache.put("key1", "value1")
    cache.put("key2", "value2")
    cache.put("key3", "value3")
    print(f"Cache after initial puts: {cache}") # Expected: key1, key2, key3

    print(f"Get key2: {cache.get('key2')}") # Access key2, it should become MRU
    print(f"Cache after getting key2: {cache}") # Expected: key1, key3, key2

    cache.put("key4", "value4") # Cache is full, key1 (LRU) should be evicted
    print(f"Cache after putting key4: {cache}") # Expected: key3, key2, key4

    print(f"Get key1: {cache.get('key1')}") # key1 should be None (evicted)
    print(f"Cache after trying to get key1: {cache}")

    print("\n--- Testing LRU Cache with Default TTL ---")
    ttl_cache = LRUCache(capacity=2, default_ttl_seconds=1) # Items expire in 1 second

    ttl_cache.put("data_a", "content_a")
    ttl_cache.put("data_b", "content_b")
    print(f"TTL Cache initial: {ttl_cache}")

    print(f"Get data_a immediately: {ttl_cache.get('data_a')}") # Should be 'content_a'
    print(f"TTL Cache after getting data_a: {ttl_cache}") # data_a should be MRU

    print("Waiting for 1.1 seconds for items to expire...")
    time.sleep(1.1)

    print(f"Get data_b after TTL: {ttl_cache.get('data_b')}") # Should be None (expired)
    print(f"TTL Cache after getting expired data_b: {ttl_cache}") # data_b should be removed

    print(f"Get data_a after TTL: {ttl_cache.get('data_a')}") # Should be None (expired)
    print(f"TTL Cache after getting expired data_a: {ttl_cache}") # data_a should be removed

    print("\n--- Testing LRU Cache with Specific TTL ---")
    specific_ttl_cache = LRUCache(capacity=2)
    specific_ttl_cache.put("item_short", "short_lived_data", ttl_seconds=0.5)
    specific_ttl_cache.put("item_long", "long_lived_data", ttl_seconds=5)
    print(f"Specific TTL Cache initial: {specific_ttl_cache}")

    print(f"Get item_long immediately: {specific_ttl_cache.get('item_long')}")
    print("Waiting for 0.6 seconds...")
    time.sleep(0.6)

    print(f"Get item_short after its TTL: {specific_ttl_cache.get('item_short')}") # Should be None
    print(f"Get item_long after item_short's TTL: {specific_ttl_cache.get('item_long')}") # Should still be 'long_lived_data'
    print(f"Specific TTL Cache after short item expired: {specific_ttl_cache}")

    print("\n--- Testing Deletion ---")
    delete_cache = LRUCache(capacity=3)
    delete_cache.put("k1", "v1")
    delete_cache.put("k2", "v2")
    print(f"Cache before delete: {delete_cache}")
    print(f"Deleting k1: {delete_cache.delete('k1')}") # True
    print(f"Cache after deleting k1: {delete_cache}")
    print(f"Deleting k3 (non-existent): {delete_cache.delete('k3')}") # False

5. Operational Best Practices

  • Handling Stale Data:

* Understand the TTLs set for different data types. Communicate acceptable staleness levels with product owners.

* Implement explicit cache invalidation immediately after data modifications in the primary data store to ensure consistency.

  • Error Handling (Cache Fallback):

* Design your application to gracefully handle scenarios where the cache is unavailable or returns an error. The application should fall back to querying the primary data store.

* Implement circuit breakers or timeouts to prevent the cache from becoming a bottleneck during outages.
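
The fallback behaviour described above can be sketched as a guard around the cache call. The exception type and client objects here are illustrative placeholders, not part of the delivered system.

```python
class CacheUnavailableError(Exception):
    """Raised by a (hypothetical) cache client when it cannot be reached."""

def get_with_fallback(key, cache_client, fetch_from_db):
    # Treat the cache as an optimization, never a hard dependency:
    # any cache failure degrades to a direct read from the primary store.
    try:
        value = cache_client.get(key)
        if value is not None:
            return value
    except CacheUnavailableError:
        pass   # in production: log the failure and emit a fallback metric
    return fetch_from_db(key)
```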


\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}