
Caching System: Architecture Planning Study Plan

This document outlines a comprehensive study plan designed to equip your team with the foundational and advanced knowledge required to effectively plan, design, and implement robust caching systems. This plan is crucial for ensuring optimal performance, scalability, and cost-efficiency for your applications.


1. Introduction and Study Goal

The primary goal of this study plan is to develop a deep understanding of caching principles, technologies, and best practices. By the end of this program, participants will be able to:

  • Analyze application performance bottlenecks and identify opportunities for caching.
  • Design appropriate caching strategies and architectures tailored to specific use cases.
  • Evaluate and select suitable caching technologies (in-memory, distributed, CDN).
  • Implement effective cache invalidation and consistency mechanisms.
  • Monitor and optimize caching system performance.
  • Communicate trade-offs and justify architectural decisions related to caching.

This knowledge will directly inform the subsequent steps of the "Caching System" workflow, ensuring a well-engineered and efficient solution.


2. Weekly Schedule

This study plan is structured over five weeks, with each week focusing on a distinct set of concepts and building upon the previous one. Each week is estimated to require 10-15 hours of dedicated study, including reading, video lectures, and practical exercises.

Week 1: Caching Fundamentals and Core Concepts

  • Learning Objectives:

* Understand what caching is, its purpose, and benefits (performance, scalability, cost reduction).

* Identify common types of data suitable for caching.

* Differentiate between various cache levels (CPU, OS, application, distributed, CDN).

* Grasp core caching principles: hit/miss ratio, latency, throughput, capacity.

* Learn about common cache eviction policies (LRU, LFU, FIFO, MRU, ARC).

* Understand the trade-offs associated with caching (staleness, complexity, resource consumption).

  • Key Topics:

* Introduction to Caching

* Why Cache? Benefits and Drawbacks

* Cache Hierarchy and Scope

* Cache Eviction Policies

* Cache Coherence and Consistency (basic introduction)

Week 2: Caching Patterns and Invalidation Strategies

  • Learning Objectives:

* Identify and apply common caching patterns (Cache-Aside, Read-Through, Write-Through, Write-Back, Write-Around).

* Understand the implications of each pattern on data freshness, write performance, and complexity.

* Design effective cache invalidation strategies (Time-To-Live, explicit invalidation, publish/subscribe, versioning).

* Address common caching challenges like the "Thundering Herd" problem and cache warming.

  • Key Topics:

* Cache-Aside Pattern

* Read-Through, Write-Through, Write-Back, Write-Around Patterns

* Cache Invalidation Techniques (TTL, Event-Driven, Versioning)

* Handling Cache Stampede/Thundering Herd

* Cache Warming Strategies

Week 3: In-Memory and Distributed Caching Technologies

  • Learning Objectives:

* Differentiate between in-memory (local) and distributed caching solutions.

* Understand the architecture, features, and use cases for popular in-memory caches (e.g., Guava Cache, Caffeine, Ehcache).

* Explore distributed caching systems like Redis and Memcached: their data structures, persistence options, clustering, and high-availability features.

* Gain familiarity with other distributed caching options (e.g., Apache Ignite, Hazelcast).

* Understand data consistency models in distributed environments.

  • Key Topics:

* In-Process Caching (Guava Cache, Caffeine)

* Introduction to Distributed Caching

* Redis: Data Structures, Persistence, Pub/Sub, Clustering

* Memcached: Key-Value Store, Simplicity

* Other Distributed Caches (Overview)

* Distributed Cache Consistency Models

Week 4: Advanced Caching Topics and Content Delivery Networks (CDNs)

  • Learning Objectives:

* Understand the role and benefits of Content Delivery Networks (CDNs) in global content delivery.

* Learn how to integrate CDNs with application caching strategies.

* Explore advanced topics like cache security, monitoring, and observability.

* Understand scaling strategies for caching systems and common pitfalls.

* Consider caching within a microservices architecture.

  • Key Topics:

* Content Delivery Networks (CDNs): Principles, Edge Caching, Invalidation

* Cache Security Considerations

* Monitoring and Alerting for Caching Systems (Metrics: hit ratio, latency, eviction rate)

* Scaling Distributed Caches (Sharding, Replication)

* Caching in Microservices Architectures

* Common Caching Pitfalls and How to Avoid Them

Week 5: System Design Workshop & Best Practices

  • Learning Objectives:

* Consolidate all learned concepts through practical design exercises.

* Develop the ability to propose and justify a complete caching architecture for various real-world scenarios.

* Understand best practices for cache configuration, deployment, and maintenance.

* Review case studies of successful and challenging caching implementations.

  • Key Topics:

* Review of all concepts

* Case Studies of Caching Architectures (e.g., Netflix, Amazon, Facebook)

* Interactive Design Exercises:

* Designing a caching layer for a high-traffic e-commerce product catalog.

* Implementing user session caching for a web application.

* Caching API responses for a mobile backend.

* Performance Tuning and Optimization

* Cost Optimization with Caching


3. Recommended Resources

A curated list of resources to support your learning journey:

  • Books:

* "Designing Data-Intensive Applications" by Martin Kleppmann: Chapters on data models, consistency, and distributed systems are highly relevant.

* "System Design Interview – An insider's guide" by Alex Xu: Contains excellent chapters and examples on caching system design.

  • Online Courses (e.g., Coursera, Udemy, Pluralsight):

* Search for "System Design Interview," "Distributed Systems," "Redis Fundamentals," "Caching Strategies."

* Specific recommendations can be provided based on your team's preferred learning platform.

  • Official Documentation:

* [Redis Official Documentation](https://redis.io/docs/)

* [Memcached Official Documentation](https://memcached.org/documentation)

* [Guava Cache Documentation](https://github.com/google/guava/wiki/CachesExplained)

* [Caffeine Cache Documentation](https://github.com/ben-manes/caffeine/wiki)

  • Blogs and Articles:

* Engineering blogs from companies like Netflix, Amazon, Google, Facebook (search for "caching" or "system design").

* Medium articles by experienced software architects on caching patterns.

* Cloud provider documentation (AWS ElastiCache, Azure Cache for Redis, GCP Memorystore) for managed caching services.

  • YouTube Channels:

* "System Design Interview" channels (e.g., ByteByteGo, Gaurav Sen) often have dedicated videos on caching.

* Conference talks from industry events (e.g., QCon, AWS re:Invent) on caching best practices.


4. Milestones

Achieving these milestones will mark significant progress in the study plan:

  • End of Week 1: Solid understanding of caching principles, benefits, and common eviction policies. Ability to explain "why cache?"
  • End of Week 2: Proficiency in identifying and applying various caching patterns and designing basic cache invalidation schemes.
  • End of Week 3: Familiarity with the architecture and core features of major in-memory and distributed caching technologies (Redis, Memcached).
  • End of Week 4: Grasp of advanced caching topics, including CDN integration, monitoring, security, and scaling challenges.
  • End of Study (Week 5): Capability to critically evaluate caching requirements, propose a suitable architecture, and articulate design choices and trade-offs for a given application scenario.

5. Assessment Strategies

To ensure effective knowledge transfer and retention, a multi-faceted assessment approach will be used:

  • Weekly Self-Quizzes: Short, informal quizzes at the end of each week to test understanding of key concepts.
  • Concept Explanations: Participants will be encouraged to explain core caching concepts (e.g., "Explain LRU eviction policy," "Describe the Cache-Aside pattern") to peers or a mentor.
  • Mini-Design Exercises:

* Scenario-based problem-solving: Given a specific application scenario, propose a caching pattern and an invalidation strategy.

* Technology comparison: Justify the choice between Redis and Memcached for a particular use case.

* Architecture sketching: Draw a high-level diagram of a distributed caching system for a given requirement.

  • Final Design Presentation (End of Week 5): Each participant or small group will present a detailed caching architecture design for a complex problem statement, including:

* Problem analysis and caching opportunities.

* Chosen caching patterns and technologies.

* Invalidation strategy.

* Scaling and high-availability considerations.

* Monitoring strategy.

* Anticipated trade-offs and justifications.

  • Peer Review: Constructive feedback will be provided on design exercises and presentations to foster a collaborative learning environment.

This structured study plan provides a robust framework for mastering caching system architecture, directly preparing your team for the subsequent design and implementation phases of your project.


Caching System: Comprehensive Design and Implementation Guide

This document provides a detailed, professional output for the design and implementation of a Caching System. Caching is a fundamental technique in modern software architecture for improving application performance, reducing database load, and enhancing user experience.


1. Introduction: The Caching System

A caching system stores frequently accessed data in a high-speed data storage layer, typically RAM, to serve subsequent requests faster than retrieving the data from its primary, slower source (e.g., a database, external API, or disk). The goal is to reduce latency and increase throughput by minimizing the need to re-compute or re-fetch data that has not changed.

Why is a Caching System Needed?

  • Performance Improvement: Significantly reduces data retrieval times, leading to faster response times for users.
  • Reduced Load on Backend Services: Offloads requests from databases, APIs, and other expensive compute resources, preventing bottlenecks and improving their stability.
  • Cost Reduction: Less load on primary data stores can mean smaller or fewer database instances, reducing infrastructure costs.
  • Improved User Experience: Faster loading times and more responsive applications directly translate to a better experience for end-users.
  • Scalability: Enables applications to handle a higher volume of requests without linearly scaling the primary data sources.

2. Core Concepts and Design Principles

Effective caching requires careful consideration of several key principles:

2.1 Cache Hit and Cache Miss

  • Cache Hit: Occurs when the requested data is found in the cache. The data is served directly from the cache, providing fast access.
  • Cache Miss: Occurs when the requested data is not found in the cache. The system must then fetch the data from the primary data source, optionally store a copy in the cache, and return it to the caller.

2.2 Cache Eviction Policies

When a cache reaches its capacity, old or less useful items must be removed to make space for new ones. Eviction policies determine which items to remove:

  • Least Recently Used (LRU): Discards the item that has not been accessed for the longest time. Very common and effective.
  • Least Frequently Used (LFU): Discards the item with the fewest accesses. Requires tracking access counts.
  • First-In, First-Out (FIFO): Discards the item that was added to the cache first, regardless of usage. Simple but often less efficient.
  • Random Replacement (RR): Randomly discards an item. Simple to implement but generally inefficient.
  • Time-To-Live (TTL): Items expire after a predefined duration, regardless of usage. Essential for freshness.
  • Maximum Size/Count: Evict items when the cache exceeds a certain memory size or item count.

2.3 Cache Invalidation Strategies

Ensuring the cache holds fresh and accurate data is critical. Stale data can lead to incorrect application behavior.

  • Time-Based Invalidation (TTL): Each cached item has an associated expiration time. After this time, the item is considered stale and will be re-fetched on the next request. Simple and effective for data that can tolerate some staleness.
  • Write-Through: Data is written simultaneously to both the cache and the primary data store. Ensures data consistency but adds latency to write operations.
  • Write-Back (Write-Behind): Data is written only to the cache initially, and then asynchronously written to the primary data store later. Offers low-latency writes but risks data loss if the cache fails before data is persisted.
  • Write-Around: Data is written directly to the primary data store, bypassing the cache. Only read data is cached. Suitable for data that is written once and rarely read, or for bulk writes.
  • Event-Driven Invalidation: When the primary data changes, an event is triggered (e.g., via a message queue) that explicitly invalidates or updates the corresponding entry in the cache. Provides strong consistency but is more complex to implement.
  • Manual Invalidation: Explicitly removing items from the cache when changes are known. Useful for critical, high-consistency data.

2.4 Cache Consistency

Maintaining consistency between the cache and the primary data source is a significant challenge, especially in distributed systems. Strategies like write-through, event-driven invalidation, and careful TTL management help mitigate this.

2.5 Cache Sizing and Capacity

Determining the optimal size of your cache involves balancing memory usage, performance gains, and the cost of cache misses. Too small, and your hit ratio will be low; too large, and you waste resources. Monitoring cache hit ratios and memory usage is key to tuning.

2.6 Deployment Models: Local vs. Distributed Caching

  • Local (In-Memory) Cache: Resides within the application's process memory. Fastest access but limited by available RAM and not shared across multiple application instances. Ideal for frequently accessed, application-specific data.
  • Distributed Cache: A separate service or cluster of services that multiple application instances can connect to. Provides shared access, scalability, and persistence (optional). Examples include Redis and Memcached. Introduces network latency but is essential for scalable, fault-tolerant applications.

3. Common Caching Technologies

The choice of caching technology depends on the application's requirements regarding performance, scalability, consistency, and complexity.

  • In-Memory Libraries (Local Cache):

* Python: functools.lru_cache (decorator), custom dictionary-based implementations.

* Java: Guava Cache, Caffeine.

* Node.js: node-cache, lru-cache.

* Go: ristretto, go-cache.

* Use Cases: Per-instance caching, small datasets, frequently accessed static configuration.
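
For the Python case above, `functools.lru_cache` from the standard library covers simple per-process memoization with no extra dependencies. A minimal sketch (`fetch_config` is a hypothetical expensive lookup, not a real API):

```python
from functools import lru_cache

@lru_cache(maxsize=128)  # keep up to 128 distinct results; LRU eviction beyond that
def fetch_config(section: str) -> dict:
    # Stand-in for an expensive operation (file read, DB query, remote call).
    print(f"Loading section '{section}' from source...")
    return {"section": section, "loaded": True}

fetch_config("db")  # miss: the function body executes
fetch_config("db")  # hit: served from the cache, no print
print(fetch_config.cache_info())  # CacheInfo(hits=1, misses=1, maxsize=128, currsize=1)
```

Note that `lru_cache` keys on the function arguments, which must be hashable, and that the cache is per-process: each application instance maintains its own copy.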

  • Distributed Caching Systems:

* Redis: An open-source, in-memory data structure store, used as a database, cache, and message broker. Supports various data structures (strings, hashes, lists, sets, sorted sets), persistence, replication, and clustering. Excellent for high-performance, scalable caching.

* Memcached: A high-performance, distributed memory object caching system. Simpler than Redis, primarily for key-value storage. Good for large-scale, generic object caching.

* Use Cases: Shared cache across multiple application instances, session storage, real-time analytics, leaderboards.

  • Content Delivery Networks (CDNs):

* Cloudflare, Amazon CloudFront, Akamai: Cache static and dynamic content at edge locations geographically closer to users, reducing latency for web assets.

* Use Cases: Website assets (images, CSS, JS), video streaming, API responses for global audiences.

  • Database Caching:

* Many databases offer internal caching mechanisms (e.g., query cache, buffer pool). While useful, they typically complement, rather than replace, application-level caching.


4. Production-Ready Code Examples

The following example implements a simple in-memory LRU cache with TTL support in Python; the same interface could be placed in front of a distributed store such as Redis.

4.1 Simple In-Memory LRU Cache (Python)

This example demonstrates a basic Least Recently Used (LRU) cache implementation using Python's collections.OrderedDict. OrderedDict maintains insertion order, which can be leveraged to track item recency.


import collections
import time
from typing import Any, Callable, Dict, Optional

class LRUCache:
    """
    A simple in-memory LRU (Least Recently Used) cache implementation.

    This cache uses an OrderedDict to store key-value pairs, allowing
    efficient tracking of item access order. When the cache reaches
    its maximum capacity, the least recently used item is evicted.
    It also supports Time-To-Live (TTL) for cache entries.
    """

    def __init__(self, capacity: int, default_ttl_seconds: Optional[int] = None):
        """
        Initializes the LRUCache.

        Args:
            capacity: The maximum number of items the cache can hold.
            default_ttl_seconds: Optional default Time-To-Live for cache entries in seconds.
                                 If None, entries do not expire by default.
        """
        if capacity <= 0:
            raise ValueError("Cache capacity must be a positive integer.")
        self.capacity = capacity
        self.cache: collections.OrderedDict[Any, Any] = collections.OrderedDict()
        self.ttl_map: Dict[Any, float] = {}  # Stores expiration timestamps (time.monotonic())
        self.default_ttl_seconds = default_ttl_seconds
        print(f"LRUCache initialized with capacity: {self.capacity}, default TTL: {self.default_ttl_seconds}s")

    def _is_expired(self, key: Any) -> bool:
        """Checks if a cache entry has expired."""
        if key not in self.ttl_map:
            return False  # No TTL set, so not expired
        
        # Check if the current time is past the expiration timestamp
        return time.monotonic() > self.ttl_map[key]

    def get(self, key: Any) -> Optional[Any]:
        """
        Retrieves an item from the cache.

        If the item is found and not expired, it's moved to the end of the
        OrderedDict (most recently used) and returned.
        If the item is not found or expired, None is returned.

        Args:
            key: The key of the item to retrieve.

        Returns:
            The value associated with the key, or None if not found or expired.
        """
        if key not in self.cache:
            return None
        
        if self._is_expired(key):
            self.delete(key) # Remove expired item
            print(f"Cache miss: Key '{key}' expired and removed.")
            return None

        # Move the accessed item to the end to mark it as most recently used
        value = self.cache.pop(key)
        self.cache[key] = value
        print(f"Cache hit: Key '{key}' retrieved.")
        return value

    def put(self, key: Any, value: Any, ttl_seconds: Optional[int] = None) -> None:
        """
        Adds or updates an item in the cache.

        If the key already exists, its value is updated and it's moved to the
        most recently used position.
        If the cache is full, the least recently used item is evicted before
        adding the new item.

        Args:
            key: The key of the item to add/update.
            value: The value to store.
            ttl_seconds: Optional Time-To-Live for this specific entry. Overrides default.
        """
        if key in self.cache:
            # If key exists, update value and move to end (most recently used)
            self.cache.pop(key)
        elif len(self.cache) >= self.capacity:
            # If cache is full, evict the least recently used item
            lru_key = next(iter(self.cache)) # Get the first key (LRU)
            self.cache.popitem(last=False) # Remove LRU item
            self.ttl_map.pop(lru_key, None) # Remove its TTL entry
            print(f"Cache eviction: Key '{lru_key}' removed (LRU).")

        self.cache[key] = value
        
        # Determine TTL for this entry
        current_ttl = ttl_seconds if ttl_seconds is not None else self.default_ttl_seconds
        if current_ttl is not None:
            self.ttl_map[key] = time.monotonic() + current_ttl
        else:
            self.ttl_map.pop(key, None) # Ensure no old TTL remains if None is passed

        print(f"Cache put: Key '{key}' added/updated with value '{value}' (TTL: {current_ttl}s).")

    def delete(self, key: Any) -> None:
        """
        Deletes an item from the cache.
        """
        if key in self.cache:
            self.cache.pop(key)
            self.ttl_map.pop(key, None)
            print(f"Cache delete: Key '{key}' removed.")
        else:
            print(f"Cache delete: Key '{key}' not found.")

    def size(self) -> int:
        """Returns the current number of items in the cache."""
        return len(self.cache)

    def clear(self) -> None:
        """Clears all items from the cache."""
        self.cache.clear()
        self.ttl_map.clear()
        print("Cache cleared.")

# --- Usage Example ---
if __name__ == "__main__":
    # Initialize cache with capacity 3 and a default TTL of 5 seconds
    my_cache = LRUCache(capacity=3, default_ttl_seconds=5) 

    # Put items
    my_cache.put("user:1", {"name": "Alice", "email": "alice@example.com"})
    my_cache.put("product:101", {"name": "Laptop", "price": 1200})
    my_cache.put("order:ABC", {"status": "pending", "amount": 250})

    print(f"\nCurrent cache size: {my_cache.size()}")
    print(f"Cache content: {list(my_cache.cache.keys())}") # Shows order

    # Access an item (makes it MRU)
    print(f"\nGetting user:1: {my_cache.get('user:1')}")
    print(f"Cache content after getting user:1: {list(my_cache.cache.keys())}")

    # Add a new item - this should evict the LRU item ("product:101")
    my_cache.put("category:tech", "Electronics")
    print(f"\nCurrent cache size: {my_cache.size()}")
    print(f"Cache content after adding category:tech: {list(my_cache.cache.keys())}") # product:101 should be gone

    # Test TTL
    my_cache.put("temp_data", "Expires soon", ttl_seconds=2)
    print(f"\nGetting temp_data immediately: {my_cache.get('temp_data')}")
    print("Waiting for 3 seconds to test expiration...")
    time.sleep(3)
    print(f"Getting temp_data after 3 seconds: {my_cache.get('temp_data')}") # Should be None (expired)
    print(f"Cache content after temp_data expiration check: {list(my_cache.cache.keys())}")

    # Update an existing item (moves it to the most recently used position)
    my_cache.put("user:1", {"name": "Alice", "email": "alice.updated@example.com"})
    print(f"Cache content after updating user:1: {list(my_cache.cache.keys())}")

Caching System: Comprehensive Review and Documentation

This document provides a detailed overview of caching systems, their core concepts, strategies, technologies, and best practices. It is designed to equip you with a thorough understanding necessary for effective implementation and management of caching solutions within your architecture.


1. Introduction to Caching Systems

A caching system is a high-speed data storage layer that stores a subset of data, typically transient in nature, so that future requests for that data are served up faster than by accessing the data's primary storage location. The primary goal of caching is to improve data retrieval performance, reduce the load on primary data sources (like databases or APIs), and decrease latency for end-users.

Key Benefits:

  • Performance Improvement: Significantly reduces data retrieval times.
  • Reduced Load on Backend Systems: Offloads requests from databases, APIs, and other services.
  • Cost Efficiency: Can reduce operational costs by decreasing demand on expensive backend resources (e.g., database IOPS, network bandwidth).
  • Enhanced User Experience: Faster response times lead to a smoother and more responsive application.

2. Core Concepts of Caching

2.1 What is a Cache?

A cache is essentially a temporary storage area for frequently accessed data. When data is requested, the system first checks the cache. If the data is found in the cache (a "cache hit"), it's returned immediately. If not (a "cache miss"), the system retrieves the data from its original source, serves it, and then stores a copy in the cache for future use.

2.2 Where to Cache? (Caching Layers)

Caching can occur at various levels within an application's architecture:

  • Client-Side/Browser Cache: Stored directly on the user's device (e.g., browser cache for static assets like images, CSS, JS).
  • CDN (Content Delivery Network): Geographically distributed servers that cache static and sometimes dynamic content closer to the end-users.
  • Proxy Cache (Reverse Proxy/API Gateway): Caches responses from backend services before they reach the client (e.g., Nginx, Varnish).
  • Application-Level Cache (In-Memory): Caches data within the application's memory space (e.g., local object caches).
  • Distributed Cache: A shared, scalable cache layer accessible by multiple application instances, often running as a separate service (e.g., Redis, Memcached).
  • Database Cache: Built-in caching mechanisms within databases (e.g., query cache, result set cache, buffer pools).
  • OS/Hardware Cache: Low-level caching managed by the operating system or hardware (e.g., CPU cache, disk cache).

3. Key Caching Strategies and Patterns

The choice of caching strategy dictates how data is read from and written to the cache and the underlying data store.

3.1 Cache-Aside (Lazy Loading)

  • Description: The application is responsible for reading and writing directly to the data store. It checks the cache first. If data is not found (miss), it fetches from the data store, serves the data, and then writes it to the cache.
  • Pros: Simple to implement; cache only stores requested data, avoiding unnecessary writes.
  • Cons: Cache misses can incur initial latency; data can become stale if not explicitly invalidated.
  • Use Case: Most common pattern for general-purpose caching where data staleness can be tolerated for a short period.
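
The cache-aside flow can be sketched in a few lines. Here plain dicts stand in for both the cache and the primary store; `DATABASE`, `get_user`, and `update_user` are illustrative names, not a specific library API:

```python
from typing import Any, Dict, Optional

# Hypothetical primary store standing in for a database.
DATABASE: Dict[str, Any] = {"user:1": {"name": "Alice"}}
cache: Dict[str, Any] = {}

def get_user(key: str) -> Optional[Any]:
    """Cache-aside read: check the cache first, fall back to the store."""
    if key in cache:               # cache hit
        return cache[key]
    value = DATABASE.get(key)      # cache miss: go to the primary store
    if value is not None:
        cache[key] = value         # populate the cache for next time
    return value

def update_user(key: str, value: Any) -> None:
    """Cache-aside write: write to the store, then invalidate the cached copy."""
    DATABASE[key] = value
    cache.pop(key, None)           # avoid serving the stale entry

get_user("user:1")                 # miss, then cached
assert "user:1" in cache           # subsequent reads are hits
update_user("user:1", {"name": "Alice B."})
assert "user:1" not in cache       # invalidated; next read re-populates
```

Invalidating on write (rather than updating the cache in place) sidesteps race conditions between concurrent writers at the cost of one extra miss.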

3.2 Read-Through

  • Description: The cache acts as a primary data source. If data is not in the cache, the cache itself is responsible for fetching it from the underlying data store, populating itself, and then returning the data to the application.
  • Pros: Simplifies application logic; ensures cache consistency for reads.
  • Cons: Cache needs to know how to interact with the data store; initial reads can still be slow.
  • Use Case: Often seen with caching libraries or platforms that abstract the data loading logic (e.g., some ORMs, specific caching frameworks).

3.3 Write-Through

  • Description: When data is written, it is written synchronously to both the cache and the underlying data store.
  • Pros: Data in the cache is always consistent with the data store; subsequent reads can be served from the cache with confidence.
  • Cons: Write operations incur higher latency due to dual writes.
  • Use Case: Scenarios where data consistency is critical, and the application needs to ensure data is durably stored immediately.
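
The synchronous dual write is the whole pattern; a minimal sketch under the same stand-in dicts as above (`DATABASE` and `write_through` are illustrative names):

```python
from typing import Any, Dict

DATABASE: Dict[str, Any] = {}   # hypothetical primary store
cache: Dict[str, Any] = {}

def write_through(key: str, value: Any) -> None:
    """Write synchronously to both layers, so reads can trust the cache."""
    DATABASE[key] = value   # durable write to the primary store first
    cache[key] = value      # cache mirrors the store

write_through("product:101", {"price": 1200})
assert cache["product:101"] == DATABASE["product:101"]
```

In a real system the two writes should be ordered store-first (as here) so a crash between them leaves the cache missing an entry rather than holding data the store never received.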

3.4 Write-Back (Write-Behind)

  • Description: When data is written, it is written only to the cache initially. The cache then asynchronously writes the data to the underlying data store.
  • Pros: Very low latency for write operations; can absorb write bursts.
  • Cons: Risk of data loss if the cache fails before data is persisted; complex to manage consistency.
  • Use Case: High-performance write-intensive applications where some data loss risk is acceptable (e.g., real-time analytics, IoT data ingestion).

3.5 Write-Around

  • Description: Data is written directly to the underlying data store, bypassing the cache entirely. The cache is only updated on a read miss (lazy loading).
  • Pros: Prevents the cache from being flooded with data that is written but never read; good for write-heavy, read-light workloads.
  • Cons: New data will not be in the cache immediately, leading to a cache miss on the first read.
  • Use Case: When data is written once but rarely read, or when cache space is a significant concern.

4. Cache Invalidation Strategies

Maintaining data freshness and consistency is crucial. Invalidation strategies determine when and how cached data is removed or updated.

  • Time-to-Live (TTL): Data is automatically evicted from the cache after a predefined period.

* Pros: Simple to implement; ensures eventual freshness.

* Cons: Data can be stale for the duration of the TTL; difficult to choose an optimal TTL.

  • Least Recently Used (LRU): Evicts the item that has not been accessed for the longest time when the cache is full.

* Pros: Efficiently keeps frequently used data in cache.

* Cons: Can evict useful data if it's accessed sporadically but still valuable.

  • Least Frequently Used (LFU): Evicts the item that has been accessed the fewest times when the cache is full.

* Pros: Prioritizes truly popular items.

* Cons: An item that was popular in the past can retain a high access count and remain cached long after it stops being accessed, crowding out newer items.

  • Manual/Explicit Invalidation: Application code explicitly removes specific items from the cache when the underlying data changes.

* Pros: Ensures immediate consistency.

* Cons: Can be complex to manage, especially in distributed systems; prone to errors if not all invalidation points are covered.

  • Publish/Subscribe (Pub/Sub) Invalidation: When data changes in the primary store, an event is published, and all interested cache instances subscribe to this event to invalidate their relevant entries.

* Pros: Highly effective for distributed caches; ensures near real-time consistency.

* Cons: Adds complexity with message brokers and event handling.
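
As a sketch of the idea, the following uses an in-process list of handlers as a stand-in for a real broker; a production system would use Redis Pub/Sub, a message queue, or similar, and the names here (`subscribe`, `publish_invalidation`) are illustrative:

```python
from typing import Any, Callable, Dict, List

# In-process stand-in for a message broker topic.
_subscribers: List[Callable[[str], None]] = []

def subscribe(handler: Callable[[str], None]) -> None:
    """Register a cache instance's invalidation handler."""
    _subscribers.append(handler)

def publish_invalidation(key: str) -> None:
    """Notify every subscriber that a key changed in the primary store."""
    for handler in _subscribers:
        handler(key)

# Two application instances, each with its own local cache, subscribe.
cache_a: Dict[str, Any] = {"user:1": {"name": "Alice"}}
cache_b: Dict[str, Any] = {"user:1": {"name": "Alice"}}
subscribe(lambda key: cache_a.pop(key, None))
subscribe(lambda key: cache_b.pop(key, None))

# One event invalidates the entry in every cache.
publish_invalidation("user:1")
assert "user:1" not in cache_a and "user:1" not in cache_b
```

With a real broker the handlers run asynchronously on each instance, so consistency is near-real-time rather than immediate.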


5. Common Caching Technologies and Tools

5.1 In-Memory Caches (Local)

  • Description: Caches data within the memory of a single application instance.
  • Examples:

* Guava Cache (Java): Powerful, feature-rich local cache library.

* Ehcache (Java): Widely used, can be local or distributed.

* ConcurrentHashMap (Java): Basic in-memory map often used for simple caching.

* LRU Cache implementations in various languages: Custom implementations.

  • Use Case: Caching frequently accessed, non-critical data within a single application instance; microservices where data is local to the service.

5.2 Distributed Caches

  • Description: Caches data across multiple servers, accessible by all application instances. Provides scalability, high availability, and shared data.
  • Examples:

* Redis: In-memory data structure store, used as a database, cache, and message broker. Supports various data structures (strings, hashes, lists, sets, sorted sets).

* Memcached: Simple, high-performance distributed memory object caching system. Stores opaque key-value pairs (values are byte blobs).

* Apache Ignite: Distributed in-memory data grid, can act as a cache, database, and processing platform.

* Hazelcast: In-memory data grid, offers distributed caching, computing, and streaming.

  • Use Case: High-traffic web applications, microservices architectures, data sharing across multiple application instances, real-time data processing.

5.3 CDN (Content Delivery Networks)

  • Description: A network of geographically dispersed servers that cache content (static assets, dynamic responses) closer to end-users.
  • Examples:

* Cloudflare: Comprehensive CDN, security, and edge computing platform.

* AWS CloudFront: Amazon's global content delivery network service.

* Akamai: Enterprise-grade CDN and cloud security solutions.

  • Use Case: Delivering static assets (images, CSS, JS), video streaming, accelerating dynamic content, improving global reach and performance.

5.4 Database Caching

  • Description: Mechanisms within database systems or ORMs to cache query results or data blocks.
  • Examples:

* PostgreSQL: Prepared-statement plan caching, shared buffer cache (shared_buffers).

* MySQL: Query cache (deprecated in 5.7, removed in 8.0), InnoDB buffer pool.

* Hibernate (ORM): First-level (session) and second-level (shared) caches.

  • Use Case: Reducing redundant database queries, optimizing frequently accessed data within the database layer.

6. Design Considerations and Best Practices

Implementing an effective caching system requires careful planning.

  • Identify Cacheable Data: Not all data should be cached. Prioritize data that is:

* Frequently accessed.

* Relatively static or tolerates some staleness.

* Expensive to generate or retrieve.

  • Cache Granularity: Decide whether to cache entire objects, specific attributes, or query results. Finer granularity offers more control but increases complexity.
  • Consistency vs. Freshness: Understand the trade-offs. Strict consistency often means less caching or more complex invalidation. Tolerate eventual consistency where possible.
  • Eviction Policies: Choose appropriate eviction policies (TTL, LRU, LFU) based on data access patterns and memory constraints.
  • Monitoring and Metrics: Implement robust monitoring for cache hit ratio, miss ratio, eviction rates, memory usage, and latency. These metrics are crucial for optimization.
  • Error Handling and Fallbacks: Design the application to gracefully handle cache failures (e.g., cache server down, network issues). Implement fallbacks to directly query the data store.
  • Scalability: Choose a distributed caching solution if your application needs to scale horizontally. Ensure the cache itself can scale independently.
  • Security: Protect your cache instances, especially if they contain sensitive data. Use authentication, authorization, and network isolation.
  • Serialization: Understand the overhead and implications of serializing/deserializing data to and from the cache. Prefer compact binary formats (e.g., Protocol Buffers, Avro) where serialization cost or payload size matters; JSON is convenient and human-readable but more verbose.
  • Cache Warming: For critical data, consider pre-populating the cache before peak load (e.g., on application startup or during off-peak hours) to avoid "thundering herd" issues on cache misses.
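Several of the practices above, eviction by TTL, tolerating staleness, and lazy cleanup, come together in a small TTL cache. The sketch below stores an expiry timestamp with each entry and drops stale entries lazily on read; the clock is injectable so the behavior is testable without sleeping, and all names are illustrative.

```python
import time

# TTL eviction sketch: each entry carries an expiry timestamp and is
# dropped lazily when read after expiry.
class TTLCache:
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._entries = {}  # key -> (value, expires_at)

    def put(self, key, value):
        self._entries[key] = (value, self.clock() + self.ttl)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self._entries[key]  # lazy eviction of the stale entry
            return None
        return value

now = [0.0]  # fake clock for the demo
cache = TTLCache(ttl_seconds=30, clock=lambda: now[0])
cache.put("price:BTC", 67000)
now[0] = 10
print(cache.get("price:BTC"))  # 67000 -- still fresh
now[0] = 31
print(cache.get("price:BTC"))  # None -- expired and evicted
```

Lazy eviction keeps the implementation simple but means stale entries occupy memory until next read; production caches usually combine it with periodic background sweeps or a size bound.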

7. Actionable Recommendations for Implementation

To successfully integrate a caching system, consider the following steps:

  1. Performance Profiling: Start by profiling your application to identify actual bottlenecks (e.g., slow database queries, frequently called APIs). This will pinpoint where caching will have the most impact.
  2. Define Caching Scope: Clearly identify which data entities or API endpoints are suitable for caching. Begin with the most impactful areas.
  3. Select the Right Technology: Based on your requirements (scale, data types, consistency needs, existing infrastructure), choose an appropriate caching technology (e.g., Redis for distributed, Guava for local).
  4. Implement a Strategy: Choose a caching pattern (e.g., Cache-Aside) and an invalidation strategy (e.g., TTL with explicit invalidation for critical data).
  5. Develop with Fallbacks: Ensure your application logic can seamlessly fall back to the primary data source if the cache is unavailable or experiences a miss.
  6. Instrument and Monitor: Integrate comprehensive monitoring for your cache. Track cache hit/miss ratios, latency, memory usage, and CPU load. Set up alerts for critical thresholds.
  7. Test Thoroughly: Conduct performance and load testing with and without the cache to validate its effectiveness and identify potential issues. Test cache invalidation logic rigorously.
  8. Iterate and Optimize: Caching is an iterative process. Continuously monitor performance, adjust TTLs, refine invalidation strategies, and explore new caching opportunities.
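Step 6 above (instrument and monitor) can be as simple as a thin wrapper that counts hits and misses so the hit ratio can be exported to whatever metrics system you use (Prometheus, CloudWatch, etc.; the export itself is out of scope here). The class and key names are illustrative.

```python
# Cache wrapper that tracks hit/miss counts and derives the hit ratio.
class InstrumentedCache:
    def __init__(self):
        self._data = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self._data:
            self.hits += 1
            return self._data[key]
        self.misses += 1
        return None

    def put(self, key, value):
        self._data[key] = value

    @property
    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

cache = InstrumentedCache()
cache.put("k", "v")
cache.get("k")      # hit
cache.get("other")  # miss
print(cache.hit_ratio)  # 0.5
```

A sustained drop in this ratio is often the first signal that TTLs, capacity, or the set of cached keys needs revisiting.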

8. Conclusion

A well-designed and implemented caching system is a powerful tool for enhancing application performance, improving user experience, and optimizing resource utilization. By understanding the core concepts, strategies, and technologies, and by following best practices, you can effectively leverage caching to build more robust, scalable, and responsive applications. Regular monitoring and iterative refinement are key to maintaining an optimal caching solution.
