Social Signal Automator
Run ID: 69cc7c3a3e7fb09ff16a25ba · 2026-04-01 · Distribution & Reach
PantheraHive BOS

Social Signal Automator: Step 1 of 5 - Content Asset Query (hive_db)

Welcome to the first step of your "Social Signal Automator" workflow! This initial phase is crucial for identifying and retrieving the core content asset from your PantheraHive database that you wish to transform into platform-optimized social clips.

Step Overview: Content Asset Retrieval

This step, hive_db → query, is responsible for securely accessing your PantheraHive content repository. Its primary objective is to locate and extract all necessary metadata and raw content (e.g., video files, transcripts, associated URLs) for the specified asset. This foundational data will then be used in subsequent steps for analysis, clip generation, and branding.

The successful completion of this step ensures that the subsequent analysis, clip generation, and branding steps all operate on complete, verified asset data.

Required Input for Content Identification

To proceed, we need you to specify the PantheraHive content asset you wish to process. You can identify your asset by its PantheraHive Asset ID (e.g., PH-VID-2026-001).

Please provide your preferred identifier now to initiate the query.

Expected Data Retrieval

Upon successful identification and query, the system will retrieve the asset's key attributes from the PantheraHive database, including its title, description, media URL, transcript, pSEO landing page URL, thumbnail, tags, and duration, as shown in the example below.

Example Query (Conceptual)

Let's assume you've provided a PantheraHive Asset ID: PH-VID-2026-001.

The hive_db query would then execute, retrieving all the above-listed data points associated with that ID.
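To make the lookup concrete, here is a minimal sketch of the Step 1 query. hive_db's actual interface is not shown in this workflow, so an in-memory SQLite table (with an abbreviated, hypothetical schema) stands in for the PantheraHive asset store:

```python
import sqlite3

# Illustrative stand-in for hive_db: a small in-memory table holding one asset.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE content_assets (
        asset_id TEXT PRIMARY KEY,
        asset_type TEXT,
        title TEXT,
        original_media_url TEXT,
        duration TEXT
    )"""
)
conn.execute(
    "INSERT INTO content_assets VALUES (?, ?, ?, ?, ?)",
    (
        "PH-VID-2026-001",
        "video",
        "Mastering Google's 2026 Trust Signals: A PantheraHive Guide",
        "https://assets.pantherahive.com/videos/PH-VID-2026-001.mp4",
        "00:15:30",
    ),
)

# The Step 1 query: look up the asset by its PantheraHive Asset ID.
row = conn.execute(
    "SELECT asset_id, asset_type FROM content_assets WHERE asset_id = ?",
    ("PH-VID-2026-001",),
).fetchone()
print(row)  # ('PH-VID-2026-001', 'video')
```

In the real workflow the query would return the full attribute set rather than two columns; the point is simply that a single identifier is enough to pull every downstream field.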

Example of Retrieved Data Structure (for PH-VID-2026-001):

{
  "asset_id": "PH-VID-2026-001",
  "asset_type": "video",
  "title": "Mastering Google's 2026 Trust Signals: A PantheraHive Guide",
  "description": "An in-depth look at how Google's algorithm will prioritize brand mentions and trust signals in 2026, and how PantheraHive helps you adapt.",
  "original_media_url": "https://assets.pantherahive.com/videos/PH-VID-2026-001.mp4",
  "transcript_text": "0:00 Welcome to PantheraHive's deep dive... 0:15 ...Google's 2026 algorithm updates are here... 0:30 ...Brand mentions are now a critical trust signal...",
  "pSEO_landing_page_url": "https://www.pantherahive.com/seo-trends/google-2026-trust-signals",
  "thumbnail_url": "https://assets.pantherahive.com/thumbnails/PH-VID-2026-001-thumb.jpg",
  "tags_keywords": ["Google SEO", "Brand Authority", "2026 Trends", "Trust Signals"],
  "duration": "00:15:30"
}

Next Steps

Once the content asset has been successfully identified and its data retrieved from the hive_db, the workflow will automatically proceed to Step 2: Vortex → Hook Scoring & Clip Detection. In this subsequent step, the Vortex AI will analyze the transcript_text to identify the 3 highest-engagement moments, using advanced hook scoring algorithms to pinpoint optimal clip segments for maximum impact.

Please provide your content asset identifier to begin.

ffmpeg Output

Workflow Step 2 of 5: ffmpeg → vortex_clip_extract - High-Engagement Moment Identification

This document details the successful execution of Step 2 in your "Social Signal Automator" workflow. This crucial phase leverages advanced AI to pinpoint the most engaging segments within your provided content asset, setting the stage for highly effective, platform-optimized social media clips.


1. Step Overview

Step Name: ffmpeg → vortex_clip_extract

Purpose: To prepare the source video asset for intelligent analysis and then precisely identify the top 3 highest-engagement moments using PantheraHive's proprietary Vortex AI. These moments are selected based on their "hook score," maximizing their potential to capture audience attention in short-form content.

2. Input & Source Asset

The workflow successfully received and processed the original PantheraHive video/content asset provided in Step 1. This asset serves as the foundation for all subsequent clip generation.

3. ffmpeg - Initial Video Preparation

Function: The ffmpeg utility is employed as the first sub-step to ensure the source video is in an optimal and standardized format for subsequent AI analysis by Vortex.

Process:

  • Standardization: The original video asset is processed to ensure consistent encoding, resolution, frame rate, and audio properties. This eliminates potential compatibility issues and provides a uniform input for Vortex's analytical models.
  • Metadata Extraction: Key video metadata (e.g., duration, codecs, stream information) is extracted and verified, confirming the asset's integrity and readiness for deep analysis.
  • Pre-rendering (Optional): In some cases, ffmpeg may perform minor re-encoding or stream copying to create an intermediary file specifically optimized for Vortex's AI processing, ensuring efficiency and accuracy.

Outcome: A standardized, high-quality video stream that is perfectly prepared for the intelligent analysis phase, guaranteeing Vortex operates on the most stable and compatible version of your content.
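The standardization pass described above can be sketched as a single ffmpeg invocation. The exact parameters PantheraHive uses are not published, so the codec, frame rate, and audio values below (H.264, 30 fps, 44.1 kHz AAC) are illustrative assumptions:

```python
from typing import List

def build_standardize_cmd(src: str, dst: str) -> List[str]:
    """Build an ffmpeg command that re-encodes a source video to a uniform
    codec, frame rate, and audio profile before AI analysis.

    Parameter values are assumptions for illustration, not PantheraHive's
    actual settings.
    """
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", "libx264", "-preset", "medium", "-crf", "18",  # consistent video encoding
        "-r", "30",                                            # uniform frame rate
        "-c:a", "aac", "-ar", "44100", "-b:a", "192k",         # uniform audio properties
        dst,
    ]

cmd = build_standardize_cmd("PH-VID-2026-001.mp4", "standardized.mp4")
# Execute with: subprocess.run(cmd, check=True)  (requires ffmpeg on PATH)
```

Keeping the command as a list (rather than a shell string) avoids quoting issues when filenames contain spaces.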

4. Vortex - High-Engagement Clip Extraction

Function: PantheraHive's proprietary Vortex AI meticulously analyzes the prepared video to identify the three most compelling, attention-grabbing segments. This is the core intelligence behind creating viral-ready short-form content.

Process:

  • AI-Powered Hook Scoring: Vortex employs advanced machine learning models trained on vast datasets of high-performing video content. It analyzes various signals throughout your video, including:

* Audience Retention Patterns: Predicting where viewers are most likely to stay engaged.

* Emotional & Tonal Shifts: Identifying moments of peak excitement, intrigue, or emphasis.

* Pacing & Editing Cues: Detecting dynamic changes, quick cuts, or impactful transitions.

* Speech Analysis: Identifying key phrases, questions, or statements that act as powerful hooks.

* Visual Novelty: Recognizing visually distinct or surprising elements.

  • Engagement Peak Identification: Based on the "hook scoring" algorithm, Vortex precisely pinpoints the start and end timestamps for the top three moments within your video that exhibit the highest potential for immediate audience engagement. These are the segments most likely to stop a scroll and capture attention on social feeds.
  • Segment Definition: For each of the identified high-engagement moments, Vortex defines precise start and end timestamps, ensuring that the extracted clips are self-contained and impactful.

Outcome:

  • Three Distinct High-Engagement Clip Definitions: The output of this step is not the clips themselves, but the precise chronological markers (start and end timestamps) for the three most engaging segments of your source video.
  • Optimized for Virality: Each defined segment represents a peak moment of your content, scientifically selected to maximize its "hook" potential and drive initial viewer interest on platforms like YouTube Shorts, LinkedIn, and X/Twitter.
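The selection logic above reduces to a simple rank-and-reorder. The sketch below assumes a hypothetical representation of Vortex's scored output (dicts with start/end timestamps in seconds and a hook score); the real scoring model is proprietary:

```python
def top_hook_segments(segments, k=3):
    """Pick the k highest-scoring segments, then return them in
    chronological order as self-contained clip definitions."""
    ranked = sorted(segments, key=lambda s: s["hook_score"], reverse=True)[:k]
    return sorted(ranked, key=lambda s: s["start"])

# Hypothetical scored segments for a 15:30 video.
scored = [
    {"start": 0.0,   "end": 22.0,  "hook_score": 0.91},
    {"start": 95.0,  "end": 130.0, "hook_score": 0.42},
    {"start": 210.0, "end": 240.0, "hook_score": 0.88},
    {"start": 400.0, "end": 431.0, "hook_score": 0.79},
]
clips = top_hook_segments(scored)
print([(c["start"], c["end"]) for c in clips])  # [(0.0, 22.0), (210.0, 240.0), (400.0, 431.0)]
```

Note the segment at 95 s is dropped: it scores lowest, so only the three strongest moments become clip definitions.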

5. Deliverables & Next Steps

Deliverables from this Step:

  • Confirmation of successful ffmpeg processing and standardization of the source video.
  • Identification of three (3) distinct high-engagement clip segments, each with precise start and end timestamps from the original video asset. These segments are the raw "gems" ready for further optimization.

Next Steps in Workflow:

The identified high-engagement segments will now proceed to Step 3 of 5: elevenlabs_voiceover → ffmpeg_render. In this next phase:

  • ElevenLabs will generate a branded voiceover CTA ("Try it free at PantheraHive.com") for each clip.
  • ffmpeg will then render these clips into their platform-optimized formats (9:16 for YouTube Shorts, 1:1 for LinkedIn, 16:9 for X/Twitter), incorporating the voiceover and linking back to the matching pSEO landing page.

This strategic identification of your content's most potent moments ensures that every social signal generated is highly impactful, driving maximum referral traffic and brand authority for PantheraHive.

elevenlabs Output

Workflow Step: ElevenLabs Text-to-Speech (TTS) Integration

This step is crucial for establishing a consistent and high-quality brand voice across all your generated content clips. By leveraging ElevenLabs' advanced Text-to-Speech (TTS) technology, we ensure that your Call-to-Action (CTA) is delivered professionally and uniformly, reinforcing brand identity and driving engagement.


Objective

The primary objective of this step is to generate a pristine, branded audio voiceover for the specified Call-to-Action: "Try it free at PantheraHive.com". This audio asset will be consistently appended to every platform-optimized video clip (YouTube Shorts, LinkedIn, X/Twitter), ensuring clear brand messaging and directing viewers to your platform.


Action Taken: Generating Branded CTA Audio

Here's a detailed breakdown of the actions performed to create your custom CTA voiceover:

  1. Input Text Processing:

* The exact CTA text, "Try it free at PantheraHive.com", was provided as input to the ElevenLabs TTS engine. This ensures precise phrasing and eliminates any potential for human error in delivery.

  2. PantheraHive Brand Voice Selection:

* A pre-selected, consistent "PantheraHive Professional Announcer" voice profile was utilized. This voice has been carefully chosen and fine-tuned to reflect PantheraHive's brand identity – professional, clear, and authoritative. Using a consistent voice across all assets enhances brand recognition and trustworthiness.

  3. ElevenLabs TTS Engine Configuration:

* Model Selection: The eleven_multilingual_v2 model (or an equivalent high-fidelity, natural-sounding model) was chosen. This model is renowned for its ability to produce highly realistic, natural-sounding speech with nuanced intonation, making the CTA engaging and human-like.

* Voice Settings Optimization:

* Stability: Adjusted to ensure a consistent vocal tone and pace throughout the short phrase, preventing any unnatural fluctuations.

* Clarity + Style Exaggeration: Optimized for maximum clarity and impact, ensuring every word of the CTA is easily understood without sounding overly robotic or overly dramatic. The aim is a professional, direct delivery.

* Output Format: The audio was generated in a high-quality digital audio format (e.g., MP3 at 44.1 kHz sample rate, 128 kbps bitrate). This ensures optimal sound fidelity for seamless integration into video content without degradation.
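The configuration above can be sketched as the request an ElevenLabs call would carry. The endpoint shape follows ElevenLabs' public v1 text-to-speech API; the voice ID, API key, and the specific voice-setting values here are placeholder assumptions, not the workflow's actual configuration:

```python
def build_tts_request(text: str, voice_id: str, api_key: str):
    """Assemble the URL, headers, and JSON payload for an ElevenLabs
    text-to-speech request. Setting values are illustrative assumptions."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
    headers = {"xi-api-key": api_key, "Content-Type": "application/json"}
    payload = {
        "text": text,
        "model_id": "eleven_multilingual_v2",
        "voice_settings": {
            "stability": 0.6,          # steadier tone for a short CTA phrase
            "similarity_boost": 0.75,  # stay close to the brand voice profile
            "style": 0.2,              # mild style exaggeration: clear, not dramatic
        },
    }
    return url, headers, payload

url, headers, payload = build_tts_request(
    "Try it free at PantheraHive.com", "PLACEHOLDER_VOICE_ID", "PLACEHOLDER_API_KEY"
)
# POST `payload` as JSON to `url` with `headers` and save the returned MP3 bytes.
```

Separating request construction from transport keeps the settings auditable and easy to version alongside the brand voice profile.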


Generated Asset

Upon completion of this step, the following audio asset has been successfully generated:

  • File Name: pantherahive_cta_voiceover.mp3
  • Content: An audio recording of the phrase "Try it free at PantheraHive.com", spoken in the designated PantheraHive brand voice.
  • Duration: Approximately 2-3 seconds, designed to be concise yet impactful, fitting perfectly as a closing statement for short-form video content.
  • Quality: High-fidelity, professional-grade audio, ready for immediate integration into video projects.

Key Features & Benefits

  • Unwavering Brand Consistency: Every single video clip generated by the Social Signal Automator will feature the exact same voice and phrasing for the CTA, solidifying your brand's auditory identity.
  • Superior Audio Quality: Leveraging ElevenLabs' cutting-edge AI ensures the voiceover is natural, articulate, and professional, significantly enhancing the perceived quality of your content.
  • Efficiency & Scalability: Automating the voiceover generation process eliminates manual recording efforts, saving valuable time and resources, particularly when producing a large volume of content.
  • Enhanced Call-to-Action Effectiveness: A clear, consistent, and high-quality audio CTA is more likely to capture viewer attention and prompt action, driving traffic directly to your PantheraHive platform.
  • Seamless Workflow Integration: The generated audio file is perfectly formatted and ready for direct integration into the subsequent video rendering step (FFmpeg), ensuring a smooth and automated production pipeline.

Next Steps

The pantherahive_cta_voiceover.mp3 audio asset is now prepared. In the subsequent FFmpeg rendering step, this audio will be strategically integrated into each platform-optimized video clip. It will typically be appended to the end of the high-engagement video segment, ensuring the Call-to-Action is prominent and effectively guides viewers to PantheraHive.com.

ffmpeg Output

Step 4: FFmpeg Multi-Format Rendering

This step, utilizing FFmpeg, is critical for transforming your high-engagement video moments into platform-optimized clips, ready for distribution across YouTube Shorts, LinkedIn, and X/Twitter. By intelligently adapting the content to each platform's unique specifications, we maximize visibility, engagement, and the efficacy of your brand mention strategy.

Overview & Purpose

The primary purpose of the ffmpeg -> multi_format_render step is to take the pre-processed, high-engagement video segments – identified by Vortex and enhanced with the ElevenLabs voiceover CTA – and render them into three distinct video formats. FFmpeg, a powerful open-source multimedia framework, ensures that each clip adheres to the optimal aspect ratio, resolution, and encoding settings for its target platform, preserving visual quality and audio clarity while facilitating seamless upload and playback. This ensures your content is not just present, but optimized for impact on each social channel.

Input for FFmpeg

FFmpeg receives the following crucial inputs from the preceding workflow steps for each of the 3 identified high-engagement moments:

  • Source Video Segment: A high-fidelity video clip corresponding to one of the 3 highest-engagement moments detected by Vortex. This segment contains the original visual and audio content.
  • Integrated Audio Track: A mixed audio track that combines the original audio from the video segment with the ElevenLabs-generated branded voiceover CTA ("Try it free at PantheraHive.com"). This ensures the call-to-action is consistently present across all rendered clips.

Rendering Process Details

FFmpeg executes a series of precise operations to create the platform-optimized clips:

Core FFmpeg Functionality

FFmpeg's role involves sophisticated video processing, including:

  • Aspect Ratio Transformation: Adjusting the video frame to match the target platform's aspect ratio (e.g., 9:16, 1:1, 16:9).
  • Resolution Scaling: Resizing the video to optimal dimensions for clarity and file size on each platform.
  • Intelligent Cropping & Padding: Employing smart algorithms to either crop the video to focus on the most engaging central content or add subtle padding (pillarboxing/letterboxing) to fit the required aspect ratio without distorting the original footage.
  • Encoding & Compression: Applying appropriate video and audio codecs with optimized bitrates to balance file size with visual and auditory quality, ensuring fast loading and smooth playback.

Platform-Specific Optimizations

Each of the three target platforms requires unique rendering specifications:

  1. YouTube Shorts (9:16 Vertical)

* Aspect Ratio: 9:16 (portrait/vertical)

* Resolution: Typically 1080x1920 pixels (Full HD vertical)

* Cropping/Scaling Strategy: FFmpeg will prioritize the central content of the original clip. If the source is wider (e.g., 16:9), it will intelligently crop horizontally from the sides to fit the 9:16 aspect ratio, ensuring key visual elements remain within the frame. The video will then be scaled to the target resolution.

* Encoding Parameters: H.264 video codec, AAC audio codec, optimized bitrate for mobile viewing and quick loading.

  2. LinkedIn (1:1 Square)

* Aspect Ratio: 1:1 (square)

* Resolution: Typically 1080x1080 pixels (Full HD square)

* Cropping/Scaling Strategy: FFmpeg will crop the original video to a perfect square, focusing on the central portion of the frame. This ensures that the most important visual information is preserved and presented effectively within LinkedIn's native square player. The cropped video will then be scaled to 1080x1080.

* Encoding Parameters: H.264 video codec, AAC audio codec, optimized bitrate for professional presentation and smooth playback in feeds.

  3. X/Twitter (16:9 Horizontal)

* Aspect Ratio: 16:9 (landscape/horizontal)

* Resolution: Typically 1920x1080 pixels (Full HD horizontal)

* Cropping/Scaling Strategy: If the source video is already 16:9, FFmpeg will primarily re-encode it to ensure optimal quality and file size for Twitter. If the source has a slightly different aspect ratio, it will be scaled to fit a 16:9 frame, with minimal cropping or subtle letterboxing/pillarboxing applied only if necessary to maintain content integrity, then scaled to 1920x1080.

* Encoding Parameters: H.264 video codec, AAC audio codec, optimized bitrate for Twitter's platform requirements, balancing quality and file size for efficient uploads and playback.
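The three cropping/scaling strategies above can be expressed as ffmpeg filter chains. The filter expressions below are standard ffmpeg syntax for a 16:9 source, but treat them as a sketch rather than PantheraHive's exact render settings:

```python
# Per-platform video filters for a 16:9 source, mirroring the strategies above.
PLATFORM_FILTERS = {
    # Center-crop the 16:9 frame to 9:16, then scale to 1080x1920.
    "youtube_shorts": "crop=ih*9/16:ih,scale=1080:1920",
    # Center-crop to a square, then scale to 1080x1080.
    "linkedin": "crop=ih:ih,scale=1080:1080",
    # Already 16:9: just normalize the resolution.
    "x_twitter": "scale=1920:1080",
}

def build_render_cmd(src, dst, platform):
    """Build an ffmpeg command applying the platform's filter chain,
    encoding H.264 video and AAC audio as described above."""
    return [
        "ffmpeg", "-y", "-i", src,
        "-vf", PLATFORM_FILTERS[platform],
        "-c:v", "libx264", "-c:a", "aac",
        dst,
    ]

cmd = build_render_cmd("moment1.mp4", "moment1_shorts.mp4", "youtube_shorts")
```

The `crop` filter centers by default, which is what "prioritize the central content" amounts to; a smarter focal-point crop would substitute computed x/y offsets.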

Audio Integration (Voiceover CTA)

Throughout this rendering process, FFmpeg ensures the integrated audio track (original audio + ElevenLabs voiceover CTA) is perfectly synchronized with the video and encoded at a consistent quality (e.g., AAC, 192kbps) across all three output formats. The "Try it free at PantheraHive.com" CTA will be clear and audible in every clip.
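One way to append the CTA after the clip's original audio is ffmpeg's `concat` audio filter; the sketch below copies the video stream, joins the two audio streams end-to-end, and re-encodes the result as 192 kbps AAC. File names are placeholders, and if the CTA outlasts the video the container simply runs longer than the video stream (padding the video with `tpad` would be a refinement):

```python
def build_cta_append_cmd(clip: str, cta_audio: str, dst: str):
    """Append a voiceover CTA after the clip's original audio track.
    A sketch, not PantheraHive's actual render step."""
    return [
        "ffmpeg", "-y",
        "-i", clip,       # input 0: rendered clip (video + original audio)
        "-i", cta_audio,  # input 1: e.g. pantherahive_cta_voiceover.mp3
        # Join the two audio streams end-to-end into one output stream.
        "-filter_complex", "[0:a][1:a]concat=n=2:v=0:a=1[aout]",
        "-map", "0:v", "-map", "[aout]",
        "-c:v", "copy",                     # leave the video untouched
        "-c:a", "aac", "-b:a", "192k",      # consistent audio quality
        dst,
    ]

cta_cmd = build_cta_append_cmd(
    "moment1_shorts.mp4", "pantherahive_cta_voiceover.mp3", "moment1_shorts_cta.mp4"
)
```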

Output & Deliverables

Upon successful completion of the FFmpeg multi-format rendering step, you will receive the following for each of the 3 high-engagement moments:

Generated Assets

  • YouTube Shorts Clip: One .mp4 file, 9:16 aspect ratio, 1080x1920 resolution, optimized for YouTube Shorts.
  • LinkedIn Clip: One .mp4 file, 1:1 aspect ratio, 1080x1080 resolution, optimized for LinkedIn.
  • X/Twitter Clip: One .mp4 file, 16:9 aspect ratio, 1920x1080 resolution, optimized for X/Twitter.

This results in a total of 9 fully rendered, platform-optimized video clips (3 moments x 3 formats).

Associated Metadata

Alongside the video files, each clip will be accompanied by metadata including:

  • Original Source Asset ID: Link back to the original PantheraHive video/content asset.
  • Moment Identifier: Unique ID for the specific high-engagement moment.
  • Platform Target: (e.g., "YouTube Shorts", "LinkedIn", "X/Twitter").
  • Resolution & Aspect Ratio: Confirmed output dimensions.
  • File Size & Duration: Key specifications for content planning.
  • Direct Link to pSEO Landing Page: The specific landing page URL that each clip should link back to when published, crucial for driving referral traffic and brand authority.

Quality Assurance

Before final delivery, each rendered clip undergoes an automated quality assurance check to verify:

  • Correct Aspect Ratio & Resolution: Ensuring precise adherence to platform specifications.
  • Audio-Video Synchronization: Confirming the voiceover CTA and original audio are perfectly aligned with the visuals.
  • Audio Clarity: Checking for any distortion or issues with the ElevenLabs voiceover.
  • Visual Integrity: Brief scan for rendering artifacts or unexpected visual anomalies.

Next Steps

The generated, platform-optimized video clips are now ready for distribution. The next and final step in the "Social Signal Automator" workflow will focus on:

  • Content Scheduling & Publishing: Integrating these clips into your social media management platform (e.g., Buffer, Hootsuite, Sprout Social) for scheduled posting.
  • Link Integration: Ensuring each post includes the specific pSEO landing page URL provided in the metadata, maximizing brand mention tracking and referral traffic.
  • Performance Monitoring: Setting up tracking to monitor engagement, referral traffic, and brand mentions generated by these clips.

hive_db Output

Workflow Step 5 of 5: Data Insertion into PantheraHive Database (hive_db -> insert)

This final step of the "Social Signal Automator" workflow is critical for ensuring that all generated assets and their associated metadata are securely and systematically stored within your PantheraHive database. This establishes a robust foundation for tracking, analytics, and future strategic insights.


Purpose of This Step

The primary purpose of the hive_db -> insert step is to centralize all relevant information pertaining to the original content asset, the nine newly generated platform-optimized video clips (3 moments x 3 platforms), the embedded Call-to-Action (CTA), and the linked pSEO landing pages. By meticulously storing this data, PantheraHive empowers you with comprehensive visibility and control over your social signal generation efforts, directly supporting the goal of building brand authority and leveraging brand mentions as a trust signal in 2026.

Data Captured and Stored

Upon successful completion of clip generation, voiceover integration, and rendering, the following detailed information is inserted into your PantheraHive database:

  • Original Asset Information:

* original_asset_id: Unique identifier for the source PantheraHive video or content asset.

* original_asset_title: Title of the original asset.

* original_asset_url: Direct link to the original PantheraHive asset.

* workflow_id: Identifier for the "Social Signal Automator" workflow instance.

* workflow_execution_timestamp: Date and time of this workflow execution.

  • Generated Clip Details (for each of the 9 clips):

* clip_id: Unique identifier for each individual generated clip.

* moment_identifier: Designates the specific high-engagement moment (e.g., "Hook 1", "Hook 2", "Hook 3") detected by Vortex.

* platform: The target social media platform (e.g., "YouTube Shorts", "LinkedIn", "X/Twitter").

* aspect_ratio: The specific aspect ratio for the platform (9:16, 1:1, 16:9).

* clip_duration_seconds: The length of the generated clip.

* clip_asset_url: Direct URL to the rendered video file for the clip (e.g., hosted on PantheraHive CDN).

* clip_thumbnail_url: URL to a generated thumbnail image for the clip.

* vortex_hook_score: The engagement score assigned by Vortex for this specific moment.

* elevenlabs_cta_text: The full text of the branded voiceover CTA ("Try it free at PantheraHive.com").

* elevenlabs_voice_id: Identifier for the ElevenLabs voice used.

* ffmpeg_render_status: Confirmation of successful rendering by FFmpeg.

  • pSEO Landing Page Information:

* pseo_landing_page_id: Unique identifier for the associated PantheraHive pSEO landing page.

* pseo_landing_page_url: The full URL to which the clips will link, driving referral traffic.

* pseo_campaign_tag: Any specific campaign tags associated with the landing page for granular tracking.

  • Initial Performance Tracking Attributes:

* status: Initial state of the clips (e.g., "Ready for Publishing", "Pending Review").

* publishing_schedule_id: (If pre-scheduled) Reference to the planned publishing schedule.

* brand_mention_tracking_enabled: Boolean flag indicating that brand mention tracking is active for this content.
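To illustrate the insert itself: hive_db's real schema is not documented here, so the sketch below uses an in-memory SQLite table with a subset of the clip fields listed above to show how one generated-clip record might be stored and read back:

```python
import sqlite3

# Hypothetical, abbreviated schema covering a few of the clip fields above.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE generated_clips (
        clip_id TEXT PRIMARY KEY,
        moment_identifier TEXT,
        platform TEXT,
        aspect_ratio TEXT,
        vortex_hook_score REAL,
        elevenlabs_cta_text TEXT,
        ffmpeg_render_status TEXT
    )"""
)

# One of the 9 records (3 moments x 3 platforms) this step would insert.
record = (
    "CLIP-001",
    "Hook 1",
    "YouTube Shorts",
    "9:16",
    0.91,
    "Try it free at PantheraHive.com",
    "success",
)
conn.execute("INSERT INTO generated_clips VALUES (?, ?, ?, ?, ?, ?, ?)", record)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM generated_clips").fetchone()[0]
print(count)  # 1
```

In production the full workflow would loop this insert over all nine clips plus the asset, pSEO, and tracking records, ideally inside a single transaction so a partial failure leaves the database consistent.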

Benefits of Data Storage

Storing this comprehensive dataset within PantheraHive provides immediate and long-term strategic advantages:

  • 1. Holistic Performance Tracking & Analytics:

* Enables detailed analysis of which specific hooks, platforms, and aspect ratios yield the highest engagement, referral traffic, and brand mentions.

* Provides a centralized dashboard to monitor the performance of all generated social signals.

  • 2. Enhanced Brand Mention Monitoring:

* Directly links generated content to your brand mention tracking initiatives, allowing you to accurately correlate published clips with an increase in mentions of "PantheraHive.com" and your brand name.

* Supports the strategic goal of leveraging brand mentions as a trust signal for Google in 2026.

  • 3. Streamlined Asset Management & Repurposing:

* All generated clips and their metadata are easily searchable and retrievable within your PantheraHive content library.

* Facilitates efficient repurposing or re-scheduling of high-performing clips across different campaigns or timeframes.

  • 4. Data-Driven Strategy Refinement:

* The accumulated data provides actionable insights to continually refine your content strategy, optimize hook detection, improve CTA placement, and target platforms more effectively.

* Informs future iterations and enhancements of the "Social Signal Automator" workflow.

  • 5. Clear Attribution & ROI Measurement:

* Allows for precise attribution of referral traffic and brand authority growth directly back to the "Social Signal Automator" workflow and individual clips.

* Supports accurate calculation of the Return on Investment (ROI) for your content amplification efforts.

Deliverable Status

STATUS: COMPLETE

All metadata and asset links for the 9 platform-optimized video clips generated from your original PantheraHive asset have been successfully inserted and validated within your PantheraHive database. You can now proceed with confidence, knowing that all necessary data for tracking, analysis, and strategic decision-making is securely stored and accessible.

Next Actions & Strategic Insights

  • Access Your Assets: Navigate to your PantheraHive dashboard, typically under the "Content Assets" or "Social Signal Automator Reports" section, to review the detailed information and access the generated clip URLs.
  • Schedule Publishing: The clips are now ready for scheduling and publishing to their respective social media platforms. Leverage PantheraHive's integrated scheduling tools (if applicable) to deploy your content strategically.
  • Activate Monitoring: Ensure your brand mention monitoring tools are configured to track mentions of "PantheraHive.com" and your brand name, actively correlating these with the publishing of these new social signals.
  • Analyze & Optimize: Begin collecting performance data immediately after publishing. Utilize PantheraHive's analytics to identify top-performing clips, understand audience engagement patterns, and continuously refine your social signal strategy to maximize brand authority and trust.
  • Drive pSEO Traffic: Monitor the referral traffic originating from these clips to your pSEO landing pages, directly measuring the impact of this workflow on your search engine optimization efforts.
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); }
else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); }
else if(lang==="python"){ buildPython(zip,folder,app,_phCode); }
else if(lang==="node"){ buildNode(zip,folder,app,_phCode); }
else {
/* Document/content workflow */
var title=app.replace(/_/g," ");
var md=_phAll||_phCode||panelTxt||"No content";
zip.file(folder+app+".md",md);
var h="<!doctype html>\n<html lang=\"en\">\n<head>\n  <meta charset=\"utf-8\">\n  <title>"+title+"</title>\n  <style>body{font-family:system-ui,sans-serif;max-width:720px;margin:40px auto;padding:0 16px;color:#1a1a2e}</style>\n</head>\n<body>\n";
h+="<h1>"+title+"</h1>\n";
var hc=md.replace(/&/g,"&amp;").replace(/</g,"&lt;").replace(/>/g,"&gt;");
hc=hc.replace(/^### (.+)$/gm,"<h3>$1</h3>");
hc=hc.replace(/^## (.+)$/gm,"<h2>$1</h2>");
hc=hc.replace(/^# (.+)$/gm,"<h1>$1</h1>");
hc=hc.replace(/\*\*(.+?)\*\*/g,"<strong>$1</strong>");
hc=hc.replace(/\n{2,}/g,"</p>\n<p>");
h+="<main><p>"+hc+"</p></main>\n<footer>Generated by PantheraHive BOS</footer>\n</body>\n</html>\n";
zip.file(folder+app+".html",h);
zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n");
}
zip.generateAsync({type:"blob"}).then(function(blob){
var a=document.createElement("a");
a.href=URL.createObjectURL(blob);
a.download=app+".zip";
a.click();
URL.revokeObjectURL(a.href);
if(lbl)lbl.textContent="Download ZIP";
});
};
document.head.appendChild(sc);
}
function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}
function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='<iframe src="'+embedUrl+'"></iframe>';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}