Caching System
Run ID: 69b6ee7695d85cd597625a97
Date: 2026-03-29
PantheraHive BOS

Workflow Execution Summary: Caching System

Workflow Name: Caching System

Category: Development

User Inputs: (none)

Execution Status: Successful

This report details the implementation and considerations for a caching system utilizing Redis with a write-through strategy. The goal is to provide a robust, high-performance, and consistent caching layer for your application, enhancing responsiveness and reducing database load.


1. Caching System Overview

A caching system is a high-speed data storage layer that stores a subset of data, typically transient in nature, so that future requests for that data are served up faster than accessing the data's primary storage location.

1.1. Purpose of Caching

1.2. Selected Technologies: Redis & Write-Through Strategy


2. Redis Implementation Details

Redis is an excellent choice for a caching system due to its speed, flexibility, and operational maturity.

2.1. Why Redis for Caching?

2.2. Key Redis Features for Caching

2.3. Deployment Considerations


3. Write-Through Strategy Deep Dive

The write-through strategy ensures that the cache is always consistent with the primary data store.

3.1. How It Works

  1. Write Operation: When an application needs to write data, it first writes to the cache.
  2. Simultaneous Database Write: The cache then synchronously writes the same data to the primary database.
  3. Acknowledgement: Only after both the cache and the database confirm that the write is complete does the application receive an acknowledgement.

Diagram:

    Application
        |  (1) write
        v
    Redis Cache
        |  (2) synchronous write
        v
    Primary Database
        |  (3) acknowledgement (propagated back to the application)
4.2. Application Flow (Write Operation)

1.  Application receives a request to update data (e.g., `PUT /users/{id}`).
2.  Application's data access layer initiates a write operation.
3.  The data access layer first writes the data to Redis.
4.  Simultaneously, the data access layer writes the same data to the primary database.
5.  If both writes succeed, the operation is considered successful, and the application responds.
6.  If either write fails, the entire operation should typically be rolled back, or compensatory actions taken (e.g., retry mechanisms, eventual consistency patterns for recovery).

4.3. Application Flow (Read Operation)

1.  Application receives a request to read data (e.g., `GET /users/{id}`).
2.  Application's data access layer attempts to read the data from Redis using the appropriate key.
3.  Cache Hit: If the data is found in Redis, it is returned immediately.
4.  Cache Miss: If the data is not found in Redis (or has expired), the data access layer retrieves it from the primary database.
5.  The retrieved data is then stored in Redis (with an appropriate TTL, if desired) before being returned to the application. Note: under a pure write-through policy, written data should always be present in the cache, so a miss implies the entry expired, was evicted, or was written outside the write-through path.

4.4. Integration Points & Pseudocode Example

The cache interaction logic should ideally reside within your data access layer (e.g., Repository pattern, ORM hooks).
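As a sketch of that layer, the Python example below implements the write flow (4.2) and read flow (4.3) in a hypothetical `UserRepository`. The in-memory `InMemoryCache` and the plain dict standing in for the database keep the sketch self-contained and runnable; in production they would be a redis-py client and your real database layer.

```python
import json

class InMemoryCache:
    """Stand-in exposing the subset of the Redis API used below."""
    def __init__(self):
        self._store = {}
    def set(self, key, value, ex=None):  # `ex` mirrors redis-py's TTL argument
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)
    def delete(self, key):
        self._store.pop(key, None)

class UserRepository:
    def __init__(self, cache, db):
        self.cache = cache
        self.db = db  # dict-like stand-in for the primary database

    @staticmethod
    def _key(user_id):
        return f"user:{user_id}:profile"  # follows the key naming convention

    def save(self, user_id, profile):
        """Write-through: write to the cache, then synchronously to the DB."""
        key = self._key(user_id)
        self.cache.set(key, json.dumps(profile), ex=3600)
        try:
            self.db[user_id] = profile
        except Exception:
            # DB write failed: invalidate the cache entry to stay consistent
            self.cache.delete(key)
            raise

    def get(self, user_id):
        """Read: a cache hit returns immediately; a miss falls back to the DB."""
        cached = self.cache.get(self._key(user_id))
        if cached is not None:
            return json.loads(cached)
        profile = self.db.get(user_id)
        if profile is not None:
            # Repopulate the cache so the next read is a hit
            self.cache.set(self._key(user_id), json.dumps(profile), ex=3600)
        return profile
```

After `repo.save(42, {...})`, a subsequent `repo.get(42)` is served from the cache without touching the database; the rollback-on-failure branch in `save` is one way to handle the error case described in step 6 of 4.2.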


5. Configuration & Setup Recommendations

Proper configuration of both Redis server and client is crucial for performance and stability.

5.1. Redis Server Configuration (redis.conf)

  • maxmemory <bytes>: Critical. Set a clear memory limit for Redis.
  • maxmemory-policy <policy>: Defines how Redis evicts keys when maxmemory is reached.

* noeviction: (Default) Returns errors on writes when the memory limit is reached. Not suitable for caching.

* allkeys-lru: Evicts the least recently used keys among all keys. Good general-purpose choice.

* allkeys-lfu: Evicts the least frequently used keys among all keys. Good for hot data.

* volatile-lru/volatile-lfu/volatile-ttl/volatile-random: Only evict keys that have an explicit TTL set. Useful if some keys are meant to be persistent.

* allkeys-random: Evicts random keys.

  • save "": Disable persistence if Redis is purely a cache and data loss on restart is acceptable (data can be repopulated from DB). If not, configure RDB/AOF.
  • appendonly no: (If persistence disabled) Ensure AOF is also off.
  • tcp-backlog 511: Adjust for high connection rates.
  • timeout 0: (Default) Keep connections open indefinitely.
  • daemonize yes: Run Redis as a background process.
  • bind 127.0.0.1: Restrict network interface or bind to private IP for security.
  • protected-mode yes: (Default) Prevents access from outside if no bind or requirepass is set.
  • requirepass your_strong_password: Crucial for production security.
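Taken together, the settings above might combine into a redis.conf fragment like the following for a cache-only deployment (all values are illustrative, not prescriptive — tune them to your workload):

```
# Illustrative cache-only redis.conf fragment
maxmemory 2gb
maxmemory-policy allkeys-lru
save ""                      # no RDB snapshots: pure cache, repopulated from DB
appendonly no                # AOF off as well
tcp-backlog 511
timeout 0
daemonize yes
bind 10.0.0.5                # example private interface address
protected-mode yes
requirepass your_strong_password
```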

5.2. Client-Side Configuration

  • Connection Pooling: Use a connection pool to manage connections efficiently and reduce overhead.
  • Timeouts: Configure connect and read timeouts to prevent applications from hanging indefinitely if Redis is unresponsive.
  • Serialization: Choose an efficient serialization format (e.g., JSON, MessagePack, Protobuf) for complex objects.
  • Error Handling & Retries: Implement robust error handling, including retry mechanisms with exponential backoff for transient Redis failures.
  • Key Naming Conventions: Establish clear and consistent key naming (e.g., object_type:id, user:{id}:session).
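A few of these client-side pieces can be sketched in Python. The backoff schedule and key builder below are self-contained; the redis-py pool wiring is shown commented out because it assumes a reachable server (the host name is a placeholder):

```python
def backoff_delays(attempts, base=0.1, cap=5.0):
    """Exponential backoff delays in seconds, capped at `cap`."""
    return [min(cap, base * (2 ** i)) for i in range(attempts)]

def make_key(object_type, object_id, field=None):
    """Consistent key naming: object_type:id[:field]."""
    parts = [object_type, str(object_id)]
    if field:
        parts.append(field)
    return ":".join(parts)

# Connection pooling and timeouts with redis-py (not run here):
# import redis
# pool = redis.ConnectionPool(host="cache.internal", port=6379,
#                             max_connections=50,
#                             socket_connect_timeout=0.5,
#                             socket_timeout=1.0)
# client = redis.Redis(connection_pool=pool)
```

A retry loop would sleep for each value of `backoff_delays(n)` between attempts, giving up after `n` failures.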

6. Monitoring, Maintenance & Security

Effective operation requires continuous monitoring, proactive maintenance, and strong security measures.

6.1. Key Metrics to Monitor

  • Hit Ratio: keyspace_hits / (keyspace_hits + keyspace_misses) - Indicates cache effectiveness. Target >90%.
  • Memory Usage: used_memory, used_memory_rss - Track total memory, fragmentation, and ensure it stays within limits.
  • CPU Usage: used_cpu_sys, used_cpu_user - Monitor for spikes indicating bottlenecks or inefficient operations.
  • Network I/O: total_net_input_bytes, total_net_output_bytes - High traffic can indicate network bottlenecks.
  • Latency: Measure with redis-cli --latency or the LATENCY LATEST command - average command processing time. Look for spikes.
  • Connected Clients: connected_clients - Monitor for unexpected numbers of connections.
  • Evictions: evicted_keys - If maxmemory is hit, indicates how many keys are being removed. High numbers might mean insufficient cache size or poor maxmemory-policy.
  • Persistence: rdb_last_save_time, aof_last_rewrite_time_sec - If persistence is enabled, monitor its health.
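The hit-ratio formula above can be computed directly from the INFO fields; a minimal Python sketch, where the `info` dict mirrors the mapping returned by redis-py's `client.info()`:

```python
def hit_ratio(info):
    """Cache hit ratio from the keyspace_hits / keyspace_misses INFO fields."""
    hits = info["keyspace_hits"]
    misses = info["keyspace_misses"]
    total = hits + misses
    return hits / total if total else 0.0  # avoid division by zero on a cold cache
```

A value below the ~0.9 target suggests the cache is undersized, TTLs are too short, or the cached data set doesn't match actual read patterns.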

6.2. Recommended Tools

  • Redis INFO Command: Provides a wealth of real-time server information.
  • Redis CLI MONITOR: For debugging, shows all commands processed by the server.
  • Prometheus & Grafana: Popular open-source stack for time-series monitoring and visualization.
  • Datadog, New Relic, Dynatrace: Commercial monitoring solutions with Redis integrations.
  • RedisInsight: Official GUI for Redis, useful for development and basic monitoring.

6.3. Maintenance Best Practices

  • Regular Backups: If persistence is enabled, ensure backups are taken and tested.
  • Software Updates: Keep Redis and client libraries updated to benefit from bug fixes and performance improvements.
  • Capacity Planning: Regularly review usage patterns and scale resources (memory, CPU) as needed.
  • Key Expiration/Eviction Tuning: Adjust TTLs and maxmemory-policy based on data access patterns.
  • A/B Testing Cache Changes: Introduce changes cautiously and monitor impact.

6.4. Security Best Practices

  • Network Isolation: Deploy Redis in a private network (VPC/VNet) with strict firewall rules, only allowing access from application servers.
  • Authentication: Always use requirepass with a strong password. Consider Redis ACLs for granular access control (Redis 6+).
  • Encryption (TLS/SSL): Encrypt communication between clients and Redis, especially over untrusted networks. Cloud providers often offer managed Redis with TLS.
  • Least Privilege: Ensure application users/roles have only the necessary permissions to interact with Redis.
  • Disable Dangerous Commands: Disable or rename commands like FLUSHALL, FLUSHDB, and KEYS in production, either by mapping them to an empty string with rename-command in redis.conf or by restricting them with ACLs (Redis 6+).
  • Regular Audits: Periodically review Redis configurations and access logs.

7. Scalability & High Availability

For production environments, ensure Redis can scale and remain available.

7.1. Scaling Strategies

  • Vertical Scaling: Increase resources (CPU, RAM) of a single Redis instance. Limited by hardware.
  • Read Scaling (Master-Replica): Replicas can handle read requests, offloading the master. Requires application logic to direct reads to replicas.
  • Horizontal Scaling (Redis Cluster): Distributes data across multiple Redis nodes (sharding over 16384 hash slots), allowing for very large datasets and high throughput; resharding and rebalancing are performed with redis-cli cluster commands.

7.2. High Availability Strategies

  • Redis Sentinel: Provides automatic failover for master-replica setups. Sentinels monitor Redis instances and promote a replica to master if the current master fails.
  • Redis Cluster: Built-in high availability. Each master node can have replicas, and the cluster can automatically failover to a replica if a master fails.
  • Cloud Managed Services: AWS ElastiCache, Azure Cache for Redis, Google Cloud Memorystore for Redis offer managed, highly available, and scalable Redis deployments with less operational overhead.

8. Cost Analysis

Implementing a Redis caching system incurs costs related to infrastructure and operations.

8.1. Infrastructure Costs

  • Compute Instances: VMs or containers for Redis servers. Costs vary by instance type (CPU, RAM), cloud provider, and region.
  • Memory: Redis is an in-memory database, so RAM is the primary cost driver. Larger caches require more expensive instances.
  • Network Egress: Data transfer costs can accumulate, especially for cross-region or internet-facing traffic.
  • Storage: If persistence (RDB/AOF) is enabled, disk storage costs.
  • Managed Services: Cloud-managed Redis services (ElastiCache, Memorystore) abstract away much of the operational burden but come with a premium over self-hosting.

8.2. Operational Costs

  • Monitoring Tools: Licensing for commercial monitoring solutions.
  • Staffing: Time spent by engineers on setup, configuration, monitoring, troubleshooting, and maintenance.
  • Development Time: Initial integration and ongoing optimization efforts.

Cost Optimization Tips:

  • Right-Sizing: Start with adequate resources and scale up/out as needed based on monitoring.
  • Reserved Instances/Savings Plans: For predictable workloads, commit to longer terms for discounts.
  • TTL & Eviction Policies: Effectively manage cache size to avoid over-provisioning memory.
  • Compression: If storing large objects, consider client-side compression to reduce memory footprint.
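Client-side compression can be sketched as a small wrapper that compresses only payloads large enough to benefit. The `raw:`/`zlib:` prefix scheme and the 1 KiB threshold are illustrative assumptions, not a standard format:

```python
import json
import zlib

def compress_value(obj, threshold=1024):
    """Serialize, compressing only when the payload exceeds `threshold` bytes."""
    raw = json.dumps(obj).encode("utf-8")
    if len(raw) < threshold:
        return b"raw:" + raw          # small values: compression overhead not worth it
    return b"zlib:" + zlib.compress(raw)

def decompress_value(blob):
    """Inverse of compress_value, dispatching on the prefix."""
    if blob.startswith(b"zlib:"):
        return json.loads(zlib.decompress(blob[5:]))
    return json.loads(blob[4:])
```

The stored blob would be passed to the cache client as-is; repetitive JSON typically compresses several-fold, directly reducing the memory footprint in Redis.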

9. Actionable Recommendations & Next Steps

Here's a structured approach to implementing your Redis write-through caching system:

9.1. Immediate Actions

  1. Define Caching Scope: Identify which data entities and operations will benefit most from caching. Start with the most frequently read, rarely updated data.
  2. Establish Key Naming Conventions: Standardize how cache keys will be generated (e.g., service:entity_type:id, user:123:profile).
  3. Choose Deployment Model:

* Development/Test: Single standalone Redis instance.

* Production (Small/Medium): Master-replica with Redis Sentinel.

* Production (Large/High Scale): Redis Cluster or a fully managed cloud Redis service.

  4. Initial Redis Server Setup:

* Install Redis (or provision a managed service).

* Configure maxmemory and maxmemory-policy appropriately (e.g., allkeys-lru or volatile-lru with TTLs).

* Crucially, enable requirepass and configure network isolation (firewall rules).

  5. Integrate Client Library: Add the Redis client library to your application.
  6. Implement Data Access Layer: Modify your data access layer (e.g., UserRepository as shown in pseudocode) to incorporate the write-through logic for relevant entities.

9.2. Further Considerations

  • Error Handling and Rollbacks: Implement robust error handling for write operations. If the database write fails, the corresponding cache entry should ideally be rolled back or invalidated to maintain consistency. Consider using distributed transaction patterns (e.g., Saga) for complex scenarios.
  • Cache Invalidation: While write-through ensures consistency on write, external database changes (e.g., direct DB updates, batch jobs) will bypass the cache. Plan for mechanisms to invalidate these entries (e.g., Redis Pub/Sub for event-driven invalidation, scheduled cache purges).
  • Data Serialization: Choose an efficient serialization format (e.g., JSON, Protocol Buffers, MessagePack) for storing complex objects in Redis.
  • Time-To-Live (TTL): Even with write-through, using TTLs for cache entries is a good practice to prevent stale data from lingering indefinitely if an external update mechanism is missed, and to manage memory.
  • Hot-Warm-Cold Data Strategy: Consider different caching tiers for data with varying access patterns and freshness requirements.
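Event-driven invalidation over Redis Pub/Sub might look like the sketch below. The `invalidate:<key>` message format is an assumption for illustration; the handler itself is pure Python (operating on a dict standing in for the cache), and the redis-py subscription wiring is shown commented out since it needs a live server:

```python
def handle_invalidation(message, cache):
    """Delete the named cache key when an invalidation event arrives.

    `message` mirrors the dict shape redis-py's pubsub yields; `cache` is a
    dict stand-in here (a real client would call client.delete instead).
    """
    data = message.get("data", "")
    if isinstance(data, bytes):
        data = data.decode("utf-8")
    if data.startswith("invalidate:"):
        cache.pop(data[len("invalidate:"):], None)
        return True
    return False

# Subscription wiring with redis-py (not run here):
# pubsub = client.pubsub()
# pubsub.subscribe("cache-events")
# for msg in pubsub.listen():
#     if msg["type"] == "message":
#         handle_invalidation(msg, local_cache)
```

Batch jobs or direct DB writers would then publish `invalidate:<key>` after each out-of-band change, closing the consistency gap that write-through alone leaves open.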

9.3. Performance Tuning & Testing

  1. Benchmark: Perform load testing on your application with and without the caching layer to quantify performance improvements.
  2. Monitor Key Metrics: Pay close attention to cache hit ratio, Redis memory usage, CPU, and network I/O.
  3. Optimize Redis Configuration: Adjust maxmemory-policy, timeout, and other parameters based on observed performance and eviction rates.
  4. Profile Application Code: Identify any bottlenecks in your application's interaction with Redis or the database.

9.4. Documentation

  • Document your caching strategy, key naming conventions, TTL policies, and any specific error handling logic.
  • Maintain a clear record of Redis server configurations and deployment details.

By following these steps, you can successfully implement a high-performance and consistent caching system using Redis with a write-through strategy.
