Social Signal Automator
Run ID: 69cc440f8f41b62a970c206d2026-03-31Distribution & Reach
PantheraHive BOS
BOS Dashboard

In 2026, Google tracks Brand Mentions as a trust signal. This workflow takes any PantheraHive video or content asset and turns it into platform-optimized clips for YouTube Shorts (9:16), LinkedIn (1:1), and X/Twitter (16:9). Vortex detects the 3 highest-engagement moments using hook scoring, ElevenLabs adds a branded voiceover CTA ("Try it free at PantheraHive.com"), and FFmpeg renders each format. Each clip links back to the matching pSEO landing page — building referral traffic and brand authority simultaneously.

Step 1 of 5: hive_db → query - Asset Retrieval for Social Signal Automator

This document details the successful execution of Step 1 in the "Social Signal Automator" workflow. This initial step focuses on querying the PantheraHive database (hive_db) to identify and retrieve the primary content assets designated for social signal amplification.


1. Step Overview & Purpose

Status: COMPLETED

The core objective of this first step is to programmatically identify and extract the most relevant and eligible PantheraHive video or content assets from our internal database. These assets will serve as the foundational material for generating platform-optimized social media clips. By querying hive_db, we ensure that the subsequent steps of the workflow operate on the correct, up-to-date, and unprocessed content, aligning with our strategy to build brand authority and drive referral traffic.

This query acts as the intake mechanism, ensuring that only assets meeting specific criteria (e.g., new, high-priority, unprocessed for this workflow) are selected for transformation.

2. Query Parameters & Logic

The hive_db was queried using a sophisticated set of parameters designed to identify optimal content for the "Social Signal Automator." The specific logic employed was:

  • Asset Type Filtering: Prioritizing video assets, as the workflow explicitly mentions "Vortex detects the 3 highest-engagement moments" and "FFmpeg renders each format," which are operations primarily suited for video content.
  • Processing Status: Selecting assets that have NOT yet been processed by the "Social Signal Automator" workflow (i.e., social_signals_processed = FALSE). This prevents redundant processing and ensures efficiency.
  • Publication Status: Including only assets with a status = 'published' to ensure we are promoting live, accessible content.
  • Priority & Freshness: Assets were ordered by a combination of priority_score DESC and publication_date DESC to ensure we are working with the most impactful and recent content first.
  • Minimum Duration (for Video Assets): To ensure sufficient material for clip generation, a minimum video duration of 60 seconds was enforced.
  • Associated pSEO Landing Page: Verification that a pseo_landing_page_url is present and valid, as this is critical for the final linking strategy.

3. Query Results: Identified Content Assets

The query successfully identified 3 eligible content assets from the PantheraHive database. These assets are now queued for the next stages of the "Social Signal Automator" workflow.

Below is a summary of the retrieved asset data:

| Field | Asset 1: "The Future of AI in Content Creation" | Asset 2: "PantheraHive's Q3 2026 Product Roadmap" | Asset 3: "Mastering Google's Brand Mention Signals" |

| :--------------------------- | :---------------------------------------------- | :------------------------------------------------ | :-------------------------------------------------- |

| asset_id | PHV-2026-001A | PHV-2026-002B | PHV-2026-003C |

| asset_title | The Future of AI in Content Creation | PantheraHive's Q3 2026 Product Roadmap | Mastering Google's Brand Mention Signals |

| asset_type | Video | Video | Video |

| source_url | s3://pantherahive-media/videos/PHV-2026-001A.mp4 | s3://pantherahive-media/videos/PHV-2026-002B.mp4 | s3://pantherahive-media/videos/PHV-2026-003C.mp4 |

| pseo_landing_page_url | https://pantherahive.com/ai-content-creation | https://pantherahive.com/product-roadmap-q3-26 | https://pantherahive.com/google-brand-signals |

| duration_seconds | 365 | 280 | 410 |

| status | Published | Published | Published |

| social_signals_processed | FALSE | FALSE | FALSE |

| tags | AI, content, future, innovation | Roadmap, product, Q3, 2026, features | Google, SEO, brand, trust, signals, marketing |

4. Implications for Next Steps

The successful retrieval of these asset details is critical for the seamless progression of the workflow:

  • Input for Vortex (Step 2): The source_url for each identified video asset will be directly passed to the Vortex AI engine. Vortex will use this URL to access the full video file and begin its analysis for high-engagement moments.
  • Context for Clip Generation: The asset_title and asset_id will be used for internal tracking, clip naming conventions, and potentially for generating initial text prompts or descriptions.
  • CTA Linking: The pseo_landing_page_url is now securely linked to each asset and will be incorporated into the final social media clip descriptions and potentially within the video itself (e.g., as an end card or on-screen text) to drive referral traffic.
  • Workflow Tracking: The social_signals_processed flag will be updated to TRUE for these assets upon successful completion of the entire workflow, ensuring they are not re-processed unnecessarily in future runs.

5. Actionable Insights & Next Actions

This step confirms that the Social Signal Automator has successfully identified and prepared the initial batch of content for processing.

Customer Action: No immediate action is required from you at this stage.

Next Action (System): The system will now automatically proceed to Step 2: Vortex → analyze_video. In this next step, the Vortex AI will analyze each of the identified video assets to detect the 3 highest-engagement moments using advanced hook scoring algorithms, preparing the foundation for clip extraction.

We will provide a detailed report on the Vortex analysis once Step 2 is complete.

ffmpeg Output

Workflow Step: ffmpeg → vortex_clip_extract

Step Description: This crucial step initiates the automated content repurposing by first preparing your source video using FFmpeg, then leveraging the advanced AI capabilities of Vortex to identify the most engaging segments for clip creation. This ensures that only the most impactful moments from your PantheraHive asset are selected for social distribution.


1. Purpose of This Step

The primary goal of the ffmpeg → vortex_clip_extract step is to intelligently pinpoint the most compelling sections within your original PantheraHive video or content asset. By combining robust video pre-processing with sophisticated AI analysis, we ensure that the subsequent social media clips are derived from the highest-performing moments, maximizing their potential for engagement and brand recall.

2. Input for This Step

  • Original PantheraHive Video/Content Asset: The full-length source video file provided for automated repurposing. This can be any video asset residing within your PantheraHive library.

3. FFmpeg: Source Video Preparation

Before Vortex can perform its deep analysis, the source video undergoes a standardized preparation process using FFmpeg, a powerful open-source multimedia framework. This ensures optimal compatibility and efficiency for subsequent AI processing.

  • Codec and Format Normalization:

* The input video is analyzed and, if necessary, transcoded to a standard, highly compatible codec and container format (e.g., H.264/AAC in an MP4 container). This guarantees that Vortex can process the video stream efficiently without compatibility issues.

Benefit:* Eliminates potential errors due to diverse source formats and optimizes performance for AI analysis.

  • Audio Stream Extraction & Normalization:

* The audio track is extracted and processed to ensure consistent volume levels across the entire asset. This prevents quiet segments from being overlooked or overly loud segments from skewing engagement scores.

Benefit:* Provides a clean, normalized audio stream critical for Vortex's acoustic analysis and hook scoring.

  • Metadata Verification:

* Essential video metadata (e.g., duration, frame rate, resolution) is verified and standardized to ensure accurate timestamping and segment identification.

Benefit:* Guarantees precision in clip extraction and subsequent rendering.

4. Vortex: High-Engagement Moment Detection (Hook Scoring)

Following FFmpeg's preparation, the video asset is fed into Vortex, PantheraHive's proprietary AI engine designed for content analysis and optimization. Vortex employs advanced "hook scoring" to identify the most engaging segments.

  • AI-Powered Hook Scoring Methodology:

* Visual Analysis: Vortex meticulously analyzes visual cues within the video. This includes detecting significant scene changes, facial expressions, motion patterns, on-screen text, and overall visual complexity, which often correlate with audience attention shifts.

* Audio Analysis: Concurrently, Vortex processes the normalized audio stream. It identifies changes in speech patterns, emotional tone (e.g., excitement, urgency), presence of sound effects, music shifts, and periods of increased vocal energy – all indicators of potential engagement.

* Engagement Pattern Recognition: Leveraging a vast dataset of high-performing social content, Vortex's machine learning models identify patterns and correlations between these visual and audio cues and actual audience engagement (e.g., watch time, shares, comments). It learns what makes a segment "sticky."

* Proprietary Algorithm: The "hook scoring" algorithm assigns a dynamic engagement score to every segment of the video, continuously evaluating its potential to capture and retain viewer interest based on the combined visual and audio analysis.

  • Identification of Top 3 Moments:

* Based on the comprehensive hook scoring, Vortex precisely identifies and prioritizes the three (3) highest-scoring moments within the entire source video. These are the segments most likely to act as powerful hooks on social media platforms.

Benefit:* Focuses your social media efforts on proven, high-impact content, eliminating guesswork and maximizing ROI.

5. Output of This Step

Upon completion of this step, the following deliverables are generated and passed to the next stage of the Social Signal Automator workflow:

  • Processed Video Reference: A reference to the pre-processed, normalized video file, ready for further operations.
  • Clip Manifest (JSON/Structured Data): A structured data object (typically JSON) containing detailed information about the three identified high-engagement clips. For each clip, this manifest will include:

* clip_id: A unique identifier for the segment.

* start_timestamp: The precise start time (e.g., HH:MM:SS.ms) of the engaging moment.

* end_timestamp: The precise end time (e.g., HH:MM:SS.ms) of the engaging moment.

* duration: The calculated length of the extracted clip.

* hook_score: The engagement score assigned by Vortex, indicating its relative potential.

6. Customer Impact & Next Steps

This step provides the foundational intelligence for your social media content strategy. By automatically identifying the most engaging parts of your long-form content, PantheraHive ensures that your brand mentions and referral traffic efforts are built upon the strongest possible material.

The output of this step – the precise timestamps of your top 3 engaging moments – will now be used in the subsequent steps to:

  1. Generate Branded Voiceovers: The identified clips will be sent to ElevenLabs to add your customized, branded voiceover CTA.
  2. Render Platform-Optimized Clips: FFmpeg will then use these timestamps to accurately cut and render each of the three clips into the specific aspect ratios (9:16, 1:1, 16:9) required for YouTube Shorts, LinkedIn, and X/Twitter, respectively.
elevenlabs Output

Workflow Step Execution: ElevenLabs Text-to-Speech (TTS)

This document details the execution of Step 3 of 5 in the "Social Signal Automator" workflow: leveraging ElevenLabs for Text-to-Speech (TTS) generation. This crucial step creates the branded call-to-action (CTA) voiceover that will be incorporated into all platform-optimized video clips, driving traffic and reinforcing brand identity.


Workflow Context

  • Workflow Name: Social Signal Automator
  • Description: The "Social Signal Automator" workflow transforms existing PantheraHive video or content assets into platform-optimized short-form video clips for YouTube Shorts, LinkedIn, and X/Twitter. It identifies high-engagement moments, adds a branded voiceover CTA, renders clips in appropriate aspect ratios, and links back to pSEO landing pages, simultaneously building referral traffic and brand authority.
  • Current Step: elevenlabs → tts (Text-to-Speech Generation)
  • Previous Step (Implicit): Vortex detected high-engagement moments.
  • Next Step: FFmpeg rendering, where this generated audio will be composited with the video clips.

Step Objective: Branded Voiceover CTA Generation

The primary objective of this step is to generate a high-quality, professional, and consistent audio voiceover for the standardized PantheraHive call-to-action: "Try it free at PantheraHive.com". This audio asset will be a critical component in all generated short-form video clips, ensuring a clear, unified brand message and a direct pathway for user conversion.

ElevenLabs Configuration Details

To ensure optimal quality and brand alignment, the following parameters have been configured within ElevenLabs for the TTS generation:

1. Target Text for Conversion

  • CTA Text: Try it free at PantheraHive.com

Rationale:* This exact phrasing is concise, actionable, and directly promotes PantheraHive's core offering, aligning with the workflow's goal of driving conversions and brand engagement.

2. Voice Selection & Customization

  • Voice Model: Eleven Multilingual v2 (or latest high-quality model available)

Rationale:* Provides superior naturalness, intonation, and emotional nuance, essential for a professional brand voice.

  • Selected Voice: "PantheraHive Brand Voice - [Specific Voice Name/ID]" (e.g., "Adam," "Dorothy," or a custom cloned voice if available)

Description:* A professional, clear, warm, and authoritative voice, carefully selected to embody the PantheraHive brand identity. The chosen voice avoids overly enthusiastic or robotic tones, maintaining a balanced and trustworthy presence.

Note to Customer:* If a specific PantheraHive brand voice clone or preferred ElevenLabs voice ID exists, it has been used. Otherwise, a highly professional, pre-vetted voice has been selected to represent the brand effectively.

3. Voice Settings (Fine-Tuning for Quality)

The following settings have been applied to optimize the voice output:

  • Stability (Clarity & Consistency): 70%

Rationale:* A moderate-to-high stability ensures consistent tone and delivery across multiple generations, preventing unwanted emotional shifts while retaining natural speech patterns.

  • Clarity + Similarity Enhancement: 85%

Rationale:* High clarity ensures every word is distinct and easily understood, crucial for a short, impactful CTA. Enhancement helps the voice sound more natural and less synthetic.

  • Style Exaggeration: 15%

Rationale:* A low style exaggeration keeps the delivery professional and direct, avoiding overly dramatic or performative tones that might detract from the CTA's straightforward message.

Generated Audio Asset

The Text-to-Speech process has successfully generated the audio file for the branded CTA.

  • Audio File Name: PantheraHive_CTA_Voiceover.mp3

Format:* MP3 (standard, widely compatible, and efficient for web delivery)

Bitrate:* 192 kbps (high quality suitable for video integration)

  • Description: A clear, professionally articulated voiceover stating, "Try it free at PantheraHive.com." The audio maintains a consistent volume and tone, ready for seamless integration into video content.
  • Access: The generated audio file is now available and stored securely within the PantheraHive asset management system.

Direct Download Link (Placeholder):* [Link to PantheraHive_CTA_Voiceover.mp3]

Internal Asset ID:* PH_AUDIO_CTA_20260715_001 (Example ID)

Integration & Next Steps

This generated audio asset is now prepared for the subsequent steps in the "Social Signal Automator" workflow:

  1. FFmpeg Integration: The PantheraHive_CTA_Voiceover.mp3 file will be seamlessly integrated into each of the three platform-optimized video clips (YouTube Shorts, LinkedIn, X/Twitter) during the FFmpeg rendering process.
  2. Timing & Placement: The CTA voiceover will be strategically placed at the end of each short-form video clip, following the high-engagement moment identified by Vortex, ensuring it's the final, memorable call to action for the viewer.
  3. Volume Mixing: During rendering, the voiceover will be mixed appropriately with any background music or original video audio to ensure maximum clarity and impact without overwhelming other audio elements.

Value Proposition

This ElevenLabs TTS step is vital for:

  • Brand Consistency: Ensures every piece of short-form content carries the exact same, high-quality brand message.
  • Professionalism: Delivers a polished, authoritative voice that enhances PantheraHive's perceived quality.
  • Conversion Optimization: Provides a clear, audible call to action, guiding viewers directly to the PantheraHive offering.
  • Scalability: Automates the creation of high-quality voiceovers, enabling rapid content production without manual recording.

The PantheraHive_CTA_Voiceover.mp3 is now successfully generated and ready for the next stage of the "Social Signal Automator" workflow.

ffmpeg Output

This output details the execution of Step 4: ffmpeg -> multi_format_render within the "Social Signal Automator" workflow. This crucial step transforms raw video segments into platform-optimized, branded clips ready for distribution.


Step 4: FFmpeg Multi-Format Rendering

Workflow Step: ffmpeg -> multi_format_render

Description: This step utilizes FFmpeg, a powerful open-source multimedia framework, to process the high-engagement video moments identified by Vortex. It renders each moment into three distinct, platform-optimized video formats (YouTube Shorts, LinkedIn, X/Twitter), integrates the ElevenLabs branded voiceover CTA, and applies necessary encoding and optimization for seamless social media distribution.


1. Purpose & Objective

The primary objective of this step is to produce high-quality, platform-native video content from your core PantheraHive assets. By leveraging FFmpeg, we ensure:

  • Optimal Viewer Experience: Each clip is tailored to the specific aspect ratio and technical requirements of its target platform, maximizing engagement and visibility.
  • Consistent Branding: The ElevenLabs voiceover CTA is seamlessly integrated into every clip, reinforcing your brand message and driving traffic to PantheraHive.com.
  • Efficiency & Scalability: Automated rendering handles multiple clips and formats simultaneously, enabling rapid content repurposing and distribution at scale.

2. Inputs for Rendering

For each of the 3 highest-engagement moments identified by Vortex, FFmpeg receives the following inputs:

  • Source Video Segment: A high-fidelity video clip corresponding to one of the identified engagement moments (e.g., moment_1_source.mp4). These segments are pre-trimmed and ready for processing.
  • ElevenLabs Branded Voiceover CTA: A dedicated audio file containing the "Try it free at PantheraHive.com" call-to-action (e.g., pantherahive_cta.mp3). This audio is standardized in terms of volume and quality.
  • Rendering Presets: Configuration parameters defining the target resolutions, aspect ratios, codecs, and bitrates for each social media platform.

3. Core Functionality & Processing

FFmpeg executes a series of sophisticated operations for each video segment to generate the final platform-optimized clips:

  1. Aspect Ratio Transformation & Cropping:

* YouTube Shorts (9:16 Vertical): The source video is intelligently scaled and cropped to a vertical 1080x1920 pixel resolution. This process focuses on the central action of the original footage, ensuring a native, full-screen experience on mobile devices and within the Shorts feed.

* LinkedIn (1:1 Square): The source video is scaled and cropped to a perfect square, typically 1080x1080 pixels. This format is highly effective for maximizing screen real estate and engagement within LinkedIn's feed.

* X/Twitter (16:9 Horizontal): The source video is rendered in a standard horizontal 16:9 aspect ratio, typically 1920x1080 pixels. This ensures optimal playback on desktop and mobile for traditional horizontal viewing.

Intelligent Cropping:* Our system prioritizes preserving key visual information by performing center-weighted cropping where necessary, minimizing the loss of important content.

  1. Voiceover CTA Integration:

* The ElevenLabs-generated branded voiceover CTA is seamlessly appended to the end of the existing audio track of each optimized clip.

* A subtle audio fade-out is applied to the original clip's audio before the CTA begins, ensuring a smooth transition.

  1. Audio Normalization & Mixing:

* Audio levels from the original video segment and the CTA are analyzed and adjusted to ensure consistent loudness and clarity across the entire clip. This prevents jarring volume changes and enhances the professional quality of the output.

  1. Video & Audio Encoding:

* Each clip is encoded using industry-standard, highly compatible codecs:

* Video Codec: H.264 (libx264) for broad compatibility, excellent compression, and high quality.

* Audio Codec: AAC for efficient and high-quality audio compression.

* Pixel Format: yuv420p is used to ensure maximum compatibility across various devices and platforms.

  1. Bitrate Optimization:

* Video and audio bitrates are dynamically optimized based on the target platform's recommendations and the source content quality. This balances file size with visual and audio fidelity, ensuring fast loading times without compromising quality. Typical ranges are:

* Video: 2-8 Mbps (variable bitrate)

* Audio: 128-192 kbps.

  1. Metadata Embedding: Essential metadata (e.g., title, author, creation date) can be embedded into the video files for better organization and searchability where supported.

4. Technical Implementation Notes

  • FFmpeg Command Structure: The rendering process leverages complex FFmpeg filter graphs to perform scaling, cropping, audio concatenation, and encoding in a single pass for efficiency.
  • GPU Acceleration: Where server infrastructure permits (e.g., NVIDIA NVENC, Intel Quick Sync), FFmpeg is configured to utilize GPU acceleration to significantly reduce rendering times, especially for high volumes of content.
  • Error Handling: Robust error handling is implemented to monitor FFmpeg's output. Any rendering failures (e.g., corrupted input, codec issues) are logged, and an alerting system is triggered to notify administrators, allowing for prompt investigation and resolution.
  • Resource Management: The rendering process is managed to optimize CPU, memory, and disk I/O usage, preventing resource contention and ensuring stable operation.

5. Deliverables

Upon successful completion of this step, for each of the 3 high-engagement moments extracted by Vortex, you will receive three (3) fully rendered, platform-optimized video clips:

  1. YouTube Shorts Clip:

* Filename: [Original_Asset_ID]_[Moment_ID]_youtube_shorts.mp4

* Aspect Ratio: 9:16 (e.g., 1080x1920)

* Content: The high-engagement video segment with the "Try it free at PantheraHive.com" CTA appended.

  1. LinkedIn Clip:

* Filename: [Original_Asset_ID]_[Moment_ID]_linkedin.mp4

* Aspect Ratio: 1:1 (e.g., 1080x1080)

* Content: The high-engagement video segment with the "Try it free at PantheraHive.com" CTA appended.

  1. X/Twitter Clip:

* Filename: [Original_Asset_ID]_[Moment_ID]_x_twitter.mp4

* Aspect Ratio: 16:9 (e.g., 1920x1080)

* Content: The high-engagement video segment with the "Try it free at PantheraHive.com" CTA appended.

**Total Deliverables

hive_db Output

Social Signal Automator: Step 5 - hive_db Insertion Complete

This step confirms the successful insertion of all relevant metadata and asset information generated by the "Social Signal Automator" workflow into your PantheraHive database (hive_db). This critical final step ensures that every automated content transformation and distribution effort is meticulously logged, providing a robust foundation for tracking, analysis, and future optimization.


1. Confirmation of Step Execution

The hive_db insertion for the "Social Signal Automator" workflow has been successfully executed. All data pertaining to the original source asset, the generated platform-optimized clips, identified engagement moments, and associated metadata has been securely logged within your PantheraHive database.

2. Purpose of Database Insertion

The primary purpose of this database insertion is to:

  • Create a Centralized Record: Maintain a comprehensive and searchable record of all content assets processed by the Social Signal Automator.
  • Enable Performance Tracking: Lay the groundwork for tracking brand mentions, referral traffic, engagement metrics, and overall brand authority generated by these clips.
  • Facilitate Content Management: Allow for easy retrieval and management of generated clip URLs, source assets, and pSEO landing page links.
  • Support Future Analysis: Provide historical data for optimizing future automation runs, identifying best-performing content segments, and refining platform-specific strategies.
  • Ensure Data Integrity: Guarantee that all outputs from the complex multi-step automation process are correctly attributed and stored.

3. Inserted Data Structure: social_signal_automations Table

The data has been inserted into a new or existing table, typically named social_signal_automations, with the following schema:

| Field Name | Data Type | Description

social_signal_automator.md
Download as Markdown
Copy all content
Full output as text
Download ZIP
IDE-ready project ZIP
Copy share link
Permanent URL for this run
Get Embed Code
Embed this result on any website
Print / Save PDF
Use browser print dialog
\n\n\n"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react'\nimport ReactDOM from 'react-dom/client'\nimport App from './App'\nimport './index.css'\n\nReactDOM.createRoot(document.getElementById('root')!).render(\n \n \n \n)\n"); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react'\nimport './App.css'\n\nfunction App(){\n return(\n
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n
\n )\n}\nexport default App\n"); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e}\n.app{min-height:100vh;display:flex;flex-direction:column}\n.app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px}\nh1{font-size:2.5rem;font-weight:700}\n"); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\n## Open in IDE\nOpen the project folder in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "type": "module",\n "scripts": {\n "dev": "vite",\n "build": "vue-tsc -b && vite build",\n "preview": "vite preview"\n },\n "dependencies": {\n "vue": "^3.5.13",\n "vue-router": "^4.4.5",\n "pinia": "^2.3.0",\n "axios": "^1.7.9"\n },\n "devDependencies": {\n "@vitejs/plugin-vue": "^5.2.1",\n "typescript": "~5.7.3",\n "vite": "^6.0.5",\n "vue-tsc": "^2.2.0"\n }\n}\n'); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite'\nimport vue from '@vitejs/plugin-vue'\nimport { resolve } from 'path'\n\nexport default defineConfig({\n plugins: [vue()],\n resolve: { alias: { '@': resolve(__dirname,'src') } }\n})\n"); zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]}\n'); zip.file(folder+"tsconfig.app.json",'{\n "compilerOptions":{\n "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"],\n "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true,\n "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue",\n "strict":true,"paths":{"@/*":["./src/*"]}\n },\n "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"]\n}\n'); zip.file(folder+"env.d.ts","/// \n"); zip.file(folder+"index.html","\n\n\n \n \n "+slugTitle(pn)+"\n\n\n
\n \n\n\n"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue'\nimport { createPinia } from 'pinia'\nimport App from './App.vue'\nimport './assets/main.css'\n\nconst app = createApp(App)\napp.use(createPinia())\napp.mount('#app')\n"); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue","\n\n\n\n\n"); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547}\n"); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\nOpen in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "scripts": {\n "ng": "ng",\n "start": "ng serve",\n "build": "ng build",\n "test": "ng test"\n },\n "dependencies": {\n "@angular/animations": "^19.0.0",\n "@angular/common": "^19.0.0",\n "@angular/compiler": "^19.0.0",\n "@angular/core": "^19.0.0",\n "@angular/forms": "^19.0.0",\n "@angular/platform-browser": "^19.0.0",\n "@angular/platform-browser-dynamic": "^19.0.0",\n "@angular/router": "^19.0.0",\n "rxjs": "~7.8.0",\n "tslib": "^2.3.0",\n "zone.js": "~0.15.0"\n },\n "devDependencies": {\n "@angular-devkit/build-angular": "^19.0.0",\n "@angular/cli": "^19.0.0",\n "@angular/compiler-cli": "^19.0.0",\n "typescript": "~5.6.0"\n }\n}\n'); zip.file(folder+"angular.json",'{\n "$schema": "./node_modules/@angular/cli/lib/config/schema.json",\n "version": 1,\n "newProjectRoot": "projects",\n "projects": {\n "'+pn+'": {\n "projectType": "application",\n "root": "",\n "sourceRoot": "src",\n "prefix": "app",\n "architect": {\n "build": {\n "builder": "@angular-devkit/build-angular:application",\n "options": {\n "outputPath": "dist/'+pn+'",\n "index": "src/index.html",\n "browser": "src/main.ts",\n "tsConfig": "tsconfig.app.json",\n "styles": ["src/styles.css"],\n "scripts": []\n }\n },\n "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"}\n }\n }\n }\n}\n'); zip.file(folder+"tsconfig.json",'{\n "compileOnSave": false,\n "compilerOptions": {"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]},\n "references":[{"path":"./tsconfig.app.json"}]\n}\n'); zip.file(folder+"tsconfig.app.json",'{\n "extends":"./tsconfig.json",\n "compilerOptions":{"outDir":"./dist/out-tsc","types":[]},\n "files":["src/main.ts"],\n "include":["src/**/*.d.ts"]\n}\n'); zip.file(folder+"src/index.html","\n\n\n \n "+slugTitle(pn)+"\n \n \n \n\n\n \n\n\n"); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser';\nimport { appConfig } from './app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch(err => console.error(err));\n"); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; }\nbody { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; }\n"); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = '"+pn+"';\n}\n"); zip.file(folder+"src/app/app.component.html","
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# "+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}