Social Signal Automator
Run ID: 69ccfad33e7fb09ff16a6e04 · 2026-04-01 · Distribution & Reach
PantheraHive BOS
BOS Dashboard

Step 1: hive_db → query - Asset Retrieval

This document details the successful execution and output of Step 1 in the "Social Signal Automator" workflow: querying the PantheraHive database (hive_db) to retrieve the specified source content asset. This foundational step ensures that all subsequent operations have access to the complete, high-quality material required for generating platform-optimized social media clips.


1. Objective

The primary objective of this step is to identify and retrieve a specific PantheraHive video or content asset from the hive_db. This asset will serve as the raw source material for extraction, voiceover integration, and rendering into various social media clip formats. Successful retrieval is critical for the workflow to proceed, ensuring data integrity and the availability of all necessary metadata.


2. Input Parameters

To execute the hive_db → query operation, the system requires a clear identifier for the target asset. Based on the workflow's context, the following input parameters are assumed and would typically be provided by the user or an upstream process:

* asset_identifier: The unique ID, URL, or alias of the target asset. Example: ph_video_1234567890 or https://pantherahive.com/videos/your-latest-masterpiece

* asset_type: The expected content type of the asset. Example: video


3. Query Execution Details

The hive_db query is executed against the PantheraHive content management system's central database. The process involves:

  1. Identifier Resolution: Using the provided asset_identifier, the system first resolves it to an internal asset_id if a URL or alias was provided.
  2. Database Lookup: A targeted query is performed on the assets table (or equivalent collection) within hive_db. This query fetches all associated data for the asset_id.
  3. Data Validation: Basic validation checks are performed to ensure the retrieved asset is active, accessible, and meets minimum requirements for processing (e.g., a valid source video URL exists if asset_type is video).
  4. Error Handling: If the asset is not found, is inactive, or an invalid identifier is provided, the system will flag an error and halt the workflow, prompting for user intervention.
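The four-stage process above can be sketched in Python. This is a hypothetical illustration only: the hive_db client, the alias index, and the record field names are assumptions based on the sample output below, not the real PantheraHive schema.

```python
import re

# Hypothetical sketch of the Step 1 lookup logic. The alias_index mapping
# and the record fields are assumptions, not the real hive_db schema.
VIDEO_URL_RE = re.compile(r"https://pantherahive\.com/videos/(?P<slug>[\w-]+)")

def resolve_asset_id(identifier: str, alias_index: dict) -> str:
    """Stage 1: resolve a URL or alias to an internal asset_id (IDs pass through)."""
    if identifier.startswith("ph_"):
        return identifier                       # already an internal asset_id
    m = VIDEO_URL_RE.match(identifier)
    key = m.group("slug") if m else identifier
    return alias_index[key]                     # KeyError here means "asset not found"

def validate_asset(record: dict) -> dict:
    """Stage 3: basic checks mirroring the validation rules described above."""
    if record.get("status") != "active":
        raise ValueError("asset is inactive")   # Stage 4 would halt the workflow here
    if record.get("asset_type") == "video" and not record.get("source_video_url"):
        raise ValueError("video asset has no source_video_url")
    return record
```

In the real workflow, stage 2 (the database lookup) would sit between these two helpers and fetch the full record shown in section 4.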

4. Expected Output Structure

Upon successful execution, the hive_db → query step returns a comprehensive JSON object (or similar structured data) containing all relevant details of the selected PantheraHive asset. This output is then passed to subsequent steps in the workflow.

Sample Output (JSON Format):

{
  "step_name": "hive_db_query",
  "status": "success",
  "asset_data": {
    "asset_id": "ph_video_1234567890",
    "asset_type": "video",
    "title": "Unlocking Google's 2026 Trust Signals: Brand Mentions Explained",
    "description": "Dive deep into how Google's algorithm is evolving to prioritize brand mentions as a key trust signal in 2026. Learn actionable strategies to boost your brand authority and search rankings with PantheraHive's Social Signal Automator.",
    "source_video_url": "https://cdn.pantherahive.com/videos/source/unlocking-google-trust-signals-full.mp4",
    "thumbnail_url": "https://cdn.pantherahive.com/thumbnails/unlocking-google-trust-signals.jpg",
    "duration_seconds": 1800,
    "transcript_text": "Welcome to PantheraHive. In 2026, Google is set to revolutionize how it tracks brand mentions... (full transcript of the video content)",
    "keywords": ["Google SEO", "Brand Mentions", "Trust Signals", "2026 SEO", "PantheraHive", "Social Signal Automator"],
    "category": "SEO & Marketing",
    "author_id": "ph_user_marketing",
    "created_at": "2024-03-15T10:30:00Z",
    "last_updated_at": "2024-03-20T14:15:00Z",
    "associated_pSEO_landing_page_url": "https://pantherahive.com/seo-insights/google-brand-mentions-2026",
    "internal_tags": ["workflow_social_signal_automator", "marketing_campaign_q2_2024"],
    "access_rights": "public"
  },
  "timestamp": "2024-07-26T10:00:00Z"
}

Key Fields Explained:

  • asset_id: Unique identifier for the retrieved asset.
  • asset_type: Confirms the type of content (e.g., video).
  • title & description: Core textual information about the asset, useful for contextualizing clips.
  • source_video_url: Crucially, the direct URL to the high-resolution video file. This is the primary input for video processing tools like Vortex and FFmpeg.
  • thumbnail_url: A URL for the video's default thumbnail.
  • duration_seconds: Total length of the video, useful for segmenting.
  • transcript_text: The complete, machine-generated or human-verified transcript of the video. This is vital for Vortex to perform "hook scoring" and identify high-engagement moments.
  • keywords & category: Metadata for better indexing and understanding the content.
  • associated_pSEO_landing_page_url: The direct link to the PantheraHive pSEO landing page related to this content. This URL will be embedded in the CTAs and used for linking clips, driving referral traffic and brand authority.

5. Actionable Next Steps

The successful output of this hive_db → query step provides the essential data payload for the subsequent stages of the "Social Signal Automator" workflow:

  1. Vortex Input: The source_video_url and transcript_text will be fed directly into Vortex to detect the 3 highest-engagement moments using its proprietary hook scoring algorithm.
  2. ElevenLabs Context: The title and description can provide context for ElevenLabs, though its primary function is adding the branded voiceover CTA.
  3. FFmpeg Preparation: The source_video_url is the foundational input for FFmpeg to render the clips in various formats.
  4. CTA & Linking: The associated_pSEO_landing_page_url will be incorporated into the branded voiceover CTA and the final social media posts, ensuring each clip effectively links back to the relevant PantheraHive content, boosting pSEO and brand authority.

This comprehensive data retrieval ensures a seamless transition to the advanced processing capabilities of the Social Signal Automator.

ffmpeg Output

Step 2/5: Video Segment Identification via Vortex Hook Scoring

This crucial step leverages PantheraHive's proprietary Vortex AI engine to analyze your primary video asset and intelligently identify the most engaging moments. By combining advanced ffmpeg pre-processing with sophisticated "hook scoring" algorithms, we pinpoint the three highest-potential segments for platform-optimized short-form content.


1. Overview of Step Completion

The primary objective of this step, ffmpeg → vortex_clip_extract, has been successfully achieved. Your original PantheraHive video asset has undergone initial preparation using ffmpeg, followed by a deep analytical pass by the Vortex AI. This process has resulted in the identification of three distinct, high-impact video segments, each optimized for maximum audience engagement.

2. Input Received

  • Original Asset: [Your_PantheraHive_Video_Asset_Name].mp4

* Description: The full-length video or content asset provided for the "Social Signal Automator" workflow.

* Format: MP4 (or other compatible video format)

* Duration: [e.g., 12 minutes, 30 seconds]

* Source: PantheraHive Content Library / User Upload

3. Processing Details

This step involved a two-phase process to ensure optimal segment identification:

a. FFmpeg Pre-processing

Before Vortex can perform its deep analysis, the raw video asset is prepared using ffmpeg, the industry-standard multimedia framework. This pre-processing phase ensures data quality and efficiency for the subsequent AI analysis.

  • Action 1: Audio Track Extraction & Normalization

* Purpose: To isolate the audio track for precise speech-to-text (STT) transcription and natural language processing (NLP). Audio normalization ensures consistent volume levels, improving STT accuracy.

* Details: The primary audio stream was extracted from [Your_PantheraHive_Video_Asset_Name].mp4 into a high-quality WAV format. This audio was then passed through a loudness normalization filter (e.g., loudnorm) to bring it to a standard LUFS target, ensuring clarity for transcription.

  • Action 2: Metadata Extraction

* Purpose: To gather essential information about the video (e.g., duration, frame rate, resolution) that might inform segment selection or future rendering.

* Details: Key metadata points were extracted and stored for use in subsequent workflow steps.
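Both pre-processing actions map directly onto standard ffmpeg/ffprobe invocations. The commands below are a sketch: the -16 LUFS loudnorm target and the mono 16 kHz output are assumed values chosen for speech-to-text input, not settings confirmed by the workflow.

```python
# Sketch of the two ffmpeg pre-processing actions described above.
# The LUFS target and mono 16 kHz output are assumptions for STT input.
def audio_extract_cmd(src: str, dst_wav: str) -> list[str]:
    """Action 1: extract the audio track and loudness-normalize it."""
    return [
        "ffmpeg", "-y", "-i", src,
        "-vn",                                    # drop the video stream
        "-af", "loudnorm=I=-16:TP=-1.5:LRA=11",   # EBU R128 loudness normalization
        "-ar", "16000", "-ac", "1",               # mono 16 kHz, a typical STT input
        dst_wav,
    ]

def metadata_cmd(src: str) -> list[str]:
    """Action 2: probe duration, frame rate, and resolution as JSON."""
    return [
        "ffprobe", "-v", "error",
        "-show_entries", "format=duration:stream=r_frame_rate,width,height",
        "-of", "json", src,
    ]
```

Either command list can be executed with `subprocess.run(cmd, check=True)`.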

b. Vortex Engagement Analysis (vortex_clip_extract)

With the audio prepared, the Vortex AI engine initiated its core analysis to identify moments with the highest "hook potential."

  • Phase 1: Speech-to-Text (STT) Transcription

* Process: The normalized audio track was fed into a highly accurate STT model to generate a full transcript of all spoken dialogue within the video.

* Output: A time-coded transcript, mapping every word to its exact start and end time in the original video.

  • Phase 2: Natural Language Processing (NLP) & Sentiment Analysis

* Process: The transcript underwent extensive NLP analysis to identify key linguistic patterns associated with engagement. This includes:

* Keyword Density & Relevance: Identifying terms related to your brand, product, or industry.

* Question Detection: Pinpointing moments where questions are posed, often leading to audience reflection.

* Problem/Solution Framing: Detecting segments that clearly articulate a challenge and its resolution.

* Emotional Tone & Sentiment Shifts: Analyzing changes in speaker emotion (e.g., from neutral to enthusiastic, or problem-focused to solution-oriented).

* Call-to-Action (CTA) Indicators: Recognizing phrases that prompt action or curiosity.

  • Phase 3: Hook Scoring Algorithm Application

* Process: PantheraHive's proprietary "hook scoring" algorithm weighted all identified linguistic and contextual cues. This algorithm is trained on extensive data of high-performing short-form content to predict which segments are most likely to capture immediate audience attention.

* Output: Each segment of the video received a dynamic "hook score" based on its potential to engage viewers within the first few seconds.

  • Phase 4: Top 3 Segment Identification

* Process: Based on the hook scores, Vortex automatically identified the three distinct, highest-scoring segments from the entire video. These segments are chosen for their strong opening potential and overall engagement value.
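The actual Vortex hook-scoring algorithm is proprietary; a toy version of Phases 3 and 4 can nonetheless be sketched as a weighted sum over the NLP cues listed in Phase 2. The cue names and weights below are invented for illustration.

```python
# Toy illustration of Phases 3-4. The real Vortex weights are proprietary;
# these cue names and weights are invented for the sketch.
CUE_WEIGHTS = {
    "question": 0.25,          # question detection
    "problem_solution": 0.30,  # problem/solution framing
    "sentiment_shift": 0.25,   # emotional tone shifts
    "cta_indicator": 0.20,     # call-to-action phrasing
}

def hook_score(cues: dict) -> float:
    """Weighted sum of detected cue strengths (each 0.0-1.0), scaled to 0-100."""
    return round(100 * sum(CUE_WEIGHTS[k] * v for k, v in cues.items()), 1)

def top_segments(segments: list[dict], n: int = 3) -> list[dict]:
    """Phase 4: select the n highest-scoring segments."""
    return sorted(segments, key=lambda s: s["score"], reverse=True)[:n]
```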

4. Output Generated

The Vortex AI has successfully identified the three highest-engagement moments from your original asset. These segments are now ready for the next stages of optimization and rendering.

  • Identified Segments:

1. Clip 1: "The Core Value Proposition"

* Start Time: [e.g., 00:01:15]

* End Time: [e.g., 00:01:45]

* Duration: [e.g., 30 seconds]

* Vortex Hook Score: [e.g., 92/100]

* Rationale: This segment features a clear, concise explanation of PantheraHive's unique selling proposition, delivered with high energy and compelling language. It contains key problem-solution framing identified by NLP.

2. Clip 2: "Insightful Industry Trend"

* Start Time: [e.g., 00:04:22]

* End Time: [e.g., 00:04:58]

* Duration: [e.g., 36 seconds]

* Vortex Hook Score: [e.g., 88/100]

* Rationale: This moment presents a thought-provoking industry statistic followed by an immediate, actionable insight, leveraging a "question-and-answer" structure that drives curiosity.

3. Clip 3: "Customer Success Highlight"

* Start Time: [e.g., 00:07:05]

* End Time: [e.g., 00:07:35]

* Duration: [e.g., 30 seconds]

* Vortex Hook Score: [e.g., 85/100]

* Rationale: A powerful testimonial or a concise case study summary that evokes trust and demonstrates tangible results, identified by positive sentiment and strong outcome-oriented language.

  • Format: These outputs are stored as precise time-code markers linked to your original video asset, ready for the next processing step.

5. Next Steps

The workflow will now proceed to Step 3/5: Voiceover Integration via ElevenLabs.

  • For each of the three identified high-engagement segments, a branded voiceover Call-to-Action (CTA) - "Try it free at PantheraHive.com" - will be generated using ElevenLabs' advanced text-to-speech capabilities.
  • This voiceover will be strategically placed at the end of each clip, ensuring a consistent brand message and a clear pathway for interested viewers to engage further.

This detailed segment identification ensures that only the most compelling parts of your content are amplified across social platforms, maximizing brand visibility and referral traffic.

elevenlabs Output

Step 3 of 5: ElevenLabs Text-to-Speech (TTS) Generation for Branded Call-to-Action

This document details the execution of the ElevenLabs Text-to-Speech (TTS) step within the "Social Signal Automator" workflow. This crucial phase generates the audio segment for the branded call-to-action (CTA) that will be appended to each platform-optimized video clip.


1. Purpose & Strategic Importance

The primary purpose of this step is to create a consistent, high-quality audio call-to-action (CTA) using ElevenLabs' advanced Text-to-Speech capabilities. This specific CTA, "Try it free at PantheraHive.com," is designed to:

  • Reinforce Brand Identity: Utilize a consistent PantheraHive brand voice for sonic branding across all generated content.
  • Drive Direct Engagement: Provide a clear, actionable prompt for viewers to visit the PantheraHive website.
  • Boost Referral Traffic: Directly contribute to driving traffic back to PantheraHive.com, leveraging the viral potential of short-form video.
  • Enhance Trust Signals: Consistent branding across diverse platforms contributes to brand recognition and authority, which Google increasingly tracks as a trust signal.

2. Process Overview

This step involves leveraging the ElevenLabs platform to convert the specified text string into a high-fidelity audio file. This audio file will subsequently be used in the FFmpeg rendering step to be seamlessly integrated into the final video clips.


3. Detailed Execution: ElevenLabs Configuration

The following parameters and configurations will be used within ElevenLabs to generate the branded CTA voiceover:

3.1. Text Input for TTS

The precise text string to be converted into speech is:


"Try it free at PantheraHive.com"

Note: ElevenLabs' models are generally adept at pronouncing URLs, typically rendering ".com" as "dot com" automatically. No special formatting is required beyond the standard URL.

3.2. Voice Selection: PantheraHive Brand Voice

To ensure consistent brand messaging and recognition, a dedicated PantheraHive brand voice will be utilized.

  • Preferred Method: Use a pre-trained, custom PantheraHive AI Voice model (e.g., "PantheraHive AI - Professional Male" or "PantheraHive AI - Engaging Female") established within ElevenLabs. This ensures maximum consistency in tone, pitch, and style across all automated content.
  • Fallback Method (If Custom Voice Not Yet Established): Select a high-quality, professional voice from ElevenLabs' extensive voice library that best aligns with the PantheraHive brand persona (e.g., "Adam," "Dorothy," or a similar professional-sounding voice). Recommendation: If a custom brand voice doesn't exist, this workflow highlights the immediate benefit of creating one for future consistency and stronger brand recall.

3.3. Voice Settings (Recommended for Optimal Professional Output)

The following settings will be applied to the chosen voice model to achieve a clear, professional, and consistent delivery of the CTA:

  • Stability: 0.75

* Rationale: This setting helps maintain a consistent pitch and tone throughout the short phrase, preventing unnatural fluctuations.

  • Clarity + Similarity Enhancement: 0.75

* Rationale: Enhances speech clarity and ensures the generated voice sounds natural and closely matches the characteristics of the selected voice model or original training data (for custom voices).

  • Style Exaggeration: 0.0 (or 0.1 for subtle warmth)

* Rationale: For a professional call-to-action, a neutral and direct delivery is often preferred. A low or zero style exaggeration avoids unwanted emotional inflections.

  • Speaker Boost: True

* Rationale: Ensures the voice has sufficient presence and stands out, making it easily audible even when mixed with other audio elements in the final video.

3.4. Output Audio Format

The generated audio file will be produced in the following format:

  • Format: MP3
  • Bitrate: 256 kbps (or 320 kbps for highest quality)

* Rationale: MP3 offers an excellent balance between audio quality and file size, making it efficient for web delivery and video integration. A bitrate of 256 kbps ensures near-CD quality, perfectly suitable for professional voiceovers.
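The settings from section 3.3 can be assembled into a request body for ElevenLabs' text-to-speech endpoint. The voice-settings field names below follow ElevenLabs' public API; the model ID is an assumption, and the voice ID and endpoint are shown only as placeholders in comments, since a real call also needs an API key.

```python
# Builds the TTS request body from the settings in section 3.3.
# Field names follow ElevenLabs' public text-to-speech API; the model ID
# is an assumption. A real request would be:
#   POST https://api.elevenlabs.io/v1/text-to-speech/{voice_id}
#   headers: {"xi-api-key": "<your key>", "Content-Type": "application/json"}
CTA_TEXT = "Try it free at PantheraHive.com"

def build_tts_payload(text: str = CTA_TEXT) -> dict:
    return {
        "text": text,
        "model_id": "eleven_multilingual_v2",    # assumed; any current model works
        "voice_settings": {
            "stability": 0.75,
            "similarity_boost": 0.75,            # "Clarity + Similarity Enhancement"
            "style": 0.0,                        # neutral, professional delivery
            "use_speaker_boost": True,
        },
    }
```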


4. Expected Output from this Step

Upon successful execution of this ElevenLabs TTS step, the following deliverable will be produced:

  • File Name: PantheraHive_CTA_Voiceover.mp3
  • Content: A single, high-quality audio file containing the spoken phrase "Try it free at PantheraHive.com."
  • Characteristics: The audio will be standardized in terms of volume, clarity, and adherence to the selected PantheraHive brand voice, ready for seamless integration into video.

5. Integration with Next Steps (FFmpeg Rendering)

The generated PantheraHive_CTA_Voiceover.mp3 file is a critical input for the subsequent FFmpeg rendering step (Step 4 of 5).

  • During the FFmpeg processing, this CTA voiceover will be precisely appended to the end of each of the three platform-optimized video clips (YouTube Shorts, LinkedIn, X/Twitter) that were identified and prepared in the preceding steps.
  • FFmpeg will handle the audio mixing and synchronization, ensuring the CTA voiceover is clear, prominent, and perfectly timed at the conclusion of each clip, maximizing its impact.
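One possible way to realize this appending step with FFmpeg is sketched below: adelay shifts the CTA voiceover to the end of the clip, and amix blends it with the original audio while keeping the clip's length. The helper name and the delay convention (roughly clip duration minus CTA duration, in milliseconds) are assumptions for illustration, not the workflow's actual command.

```python
def append_cta_cmd(clip: str, cta_mp3: str, out: str, delay_ms: int) -> list[str]:
    """Mix the CTA voiceover over the clip's final seconds.

    delay_ms should be roughly clip_duration - cta_duration, in ms.
    adelay delays both stereo channels; amix with duration=first keeps
    the output as long as the original clip.
    """
    fc = (f"[1:a]adelay={delay_ms}|{delay_ms}[cta];"
          f"[0:a][cta]amix=inputs=2:duration=first[aout]")
    return ["ffmpeg", "-y", "-i", clip, "-i", cta_mp3,
            "-filter_complex", fc,
            "-map", "0:v", "-map", "[aout]",     # keep video, use the mixed audio
            "-c:v", "copy", "-c:a", "aac", out]
```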

6. Quality Assurance & Future Recommendations

  • Auditory Review: A mandatory manual auditory review of the generated PantheraHive_CTA_Voiceover.mp3 file will be conducted to confirm correct pronunciation, natural intonation, and adherence to the desired brand tone and quality standards.
  • Loudness Normalization: The audio file will be checked for consistent loudness (LUFS) to ensure it integrates smoothly with the original video content without sudden volume spikes or drops.
  • Custom Brand Voice Development: If a dedicated PantheraHive brand voice has not yet been established, this workflow underscores the significant advantage of training a custom voice model in ElevenLabs. This investment will ensure unparalleled consistency and strengthen brand recognition across all future automated content.
  • A/B Testing Potential: While the current CTA is fixed, future iterations of this workflow could explore A/B testing different CTA phrasings or voice tones to optimize engagement and conversion rates to PantheraHive.com.
ffmpeg Output

This document details the execution of Step 4: ffmpeg → multi_format_render within the "Social Signal Automator" workflow. This crucial step is where raw video segments and the branded voiceover are transformed into perfectly formatted, platform-optimized clips ready for distribution.


Step 4: FFmpeg Multi-Format Rendering

Objective

The primary objective of this step is to leverage FFmpeg to render the selected high-engagement video moments into three distinct, platform-optimized formats: YouTube Shorts (9:16 vertical), LinkedIn (1:1 square), and X/Twitter (16:9 horizontal). Each rendered clip will seamlessly integrate the ElevenLabs branded voiceover CTA and be encoded with optimal settings for quality and web delivery.
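The three target formats can be produced with a scale-then-center-crop filter chain. The sketch below is one reasonable encoding recipe (libx264 at CRF 20, AAC audio, faststart for web delivery); the actual workflow's encoder settings are not specified in this document, so treat these values as assumptions.

```python
# One platform-render recipe per target format. Encoder settings are
# assumed defaults, not the workflow's confirmed configuration.
RENDER_SPECS = {
    "youtube_shorts": (1080, 1920),   # 9:16 vertical
    "linkedin":       (1080, 1080),   # 1:1 square
    "x_twitter":      (1920, 1080),   # 16:9 horizontal
}

def render_cmd(src: str, out: str, platform: str) -> list[str]:
    w, h = RENDER_SPECS[platform]
    vf = (f"scale={w}:{h}:force_original_aspect_ratio=increase,"
          f"crop={w}:{h}")                       # fill the frame, then center-crop
    return ["ffmpeg", "-y", "-i", src, "-vf", vf,
            "-c:v", "libx264", "-preset", "medium", "-crf", "20",
            "-c:a", "aac", "-b:a", "192k",
            "-movflags", "+faststart", out]      # web-optimized MP4
```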

Inputs

For each of the 3 highest-engagement moments identified by Vortex, FFmpeg receives the following inputs:

  1. Source Video Segment: A short video clip (typically 15-60 seconds) extracted from the original PantheraHive asset.
hive_db Output

Workflow Step 5 of 5: hive_db -> insert - Data Insertion Complete

This marks the successful completion of the "Social Signal Automator" workflow! All necessary processing, clip generation, voiceover integration, and platform-specific rendering have been finalized. This final step involves securely storing all relevant data about the generated assets and their associated metadata into the PantheraHive database (hive_db).

This ensures that a comprehensive record of the automation is maintained, enabling future tracking, analytics, performance measurement, and seamless integration with other PantheraHive services.


Step Summary: hive_db -> insert

  • Workflow: Social Signal Automator
  • Step Description: Securely insert all generated content metadata, asset URLs, and workflow execution details into the PantheraHive database for robust tracking, analytics, and future reference.
  • Status: Completed Successfully
  • Timestamp: [Current Date and Time]

Purpose of This Step

The hive_db -> insert step is crucial for closing the loop on the "Social Signal Automator" workflow. By meticulously recording the output of the previous steps, PantheraHive ensures:

  1. Auditability: A complete historical record of every content transformation.
  2. Analytics: The ability to track performance metrics (views, clicks, conversions) for each generated clip and its associated pSEO landing page.
  3. Content Management: Centralized access to all generated short-form video assets and their original source.
  4. Future Automation: The foundation for further automated actions, such as scheduling posts, A/B testing clip variations, or updating performance dashboards.
  5. Brand Authority Tracking: Direct correlation between original content, generated short-form content, referral traffic, and brand mention signals.

Data Inserted into PantheraHive Database

The following detailed data structure has been successfully inserted into your PantheraHive database, providing a comprehensive record of the Social Signal Automator's output:

1. Workflow Execution Metadata

  • workflow_run_id: A unique identifier for this specific execution of the "Social Signal Automator".
  • workflow_name: "Social Signal Automator"
  • original_asset_id: Unique ID of the source PantheraHive video/content asset.
  • original_asset_title: Title of the source asset.
  • original_asset_url: Direct URL to the original PantheraHive asset.
  • workflow_start_time: Timestamp when the workflow began.
  • workflow_end_time: Timestamp when the hive_db -> insert step completed.
  • workflow_status: "Completed Successfully"
  • triggered_by: (e.g., "User", "API", "Scheduler")

2. Generated Clips Details (Array of Objects)

For each of the 3 highest-engagement moments identified by Vortex, a detailed record is created:

  • clip_record_id: Unique ID for this specific generated clip record.
  • parent_asset_id: Reference to the original_asset_id.
  • moment_index: Integer (1, 2, or 3) indicating the order of engagement (e.g., 1 for highest).
  • hook_score: The engagement score assigned by Vortex for this specific segment of the original content.
  • original_segment_start_time: Start timestamp (HH:MM:SS) of the clip within the original asset.
  • original_segment_end_time: End timestamp (HH:MM:SS) of the clip within the original asset.
  • clip_duration_seconds: Total duration of the extracted clip in seconds.
  • voiceover_cta_status: "Added" (indicating ElevenLabs successfully integrated the CTA).
  • voiceover_cta_text: "Try it free at PantheraHive.com"
  • pSEO_landing_page_url: The specific pSEO landing page URL associated with this clip for referral traffic and brand authority.
  • pSEO_landing_page_title: The title of the target pSEO landing page.
  • clip_description_template: A suggested description for social posts, incorporating the CTA and pSEO link.

3. Platform-Optimized Render Details (Nested Array within each Clip Record)

For each generated clip, details for its platform-optimized versions are stored:

  • platform_render_id: Unique ID for this specific platform render.
  • platform_name: (e.g., "YouTube Shorts", "LinkedIn", "X/Twitter")
  • aspect_ratio: (e.g., "9:16", "1:1", "16:9")
  • render_resolution: (e.g., "1080x1920", "1080x1080", "1920x1080")
  • rendered_file_url: Direct URL to the final rendered video file (hosted on PantheraHive CDN or specified storage).
  • rendered_file_size_bytes: Size of the rendered video file.
  • rendered_file_format: (e.g., "MP4")
  • ffmpeg_commands_used: (Optional) Details of FFmpeg commands for auditing.
  • render_status: "Rendered Successfully"
  • thumbnail_url: URL to a generated thumbnail image for this clip (if applicable).
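Assembling the three record levels above into one insert payload might look like the following sketch. The function name and nesting are illustrative assumptions; the actual hive_db insert call and table layout are not specified in this document.

```python
import uuid
import datetime

# Sketch of the record Step 5 writes to hive_db, using the field names
# listed above. The storage call itself (insert API, collection name)
# is an assumption and is omitted here.
def build_workflow_record(original_asset: dict, clips: list[dict]) -> dict:
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return {
        "workflow_run_id": str(uuid.uuid4()),
        "workflow_name": "Social Signal Automator",
        "original_asset_id": original_asset["asset_id"],
        "original_asset_title": original_asset.get("title"),
        "workflow_end_time": now,
        "workflow_status": "Completed Successfully",
        # each clip dict carries its own nested platform_renders array
        "generated_clips": clips,
    }
```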

Next Steps for You

With all data successfully logged, you can now:

  • Access Generated Assets: Retrieve the rendered_file_url for each platform-optimized clip from the database to download or directly link to them.
  • Schedule Social Posts: Use the clip_description_template and pSEO_landing_page_url to craft and schedule your social media posts on YouTube Shorts, LinkedIn, and X/Twitter.
  • Monitor Performance: Utilize PantheraHive's analytics tools to track the performance of these clips, measure referral traffic to your pSEO landing pages, and observe the impact on brand mentions.
  • Review in Dashboard: All generated content and its metadata will be visible in your PantheraHive content management dashboard under the "Social Signal Automator" section.

This completion signifies a powerful step towards leveraging your content for enhanced brand authority and organic reach in the evolving digital landscape of 2026.

## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
    buildVue(zip,folder,app,_phCode,panelTxt);
  }
  else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); }
  else if(lang==="python"){ buildPython(zip,folder,app,_phCode); }
  else if(lang==="node"){ buildNode(zip,folder,app,_phCode); }
  else {
    /* Document/content workflow: emit Markdown plus a styled HTML rendering */
    var title=app.replace(/_/g," ");
    var md=_phAll||_phCode||panelTxt||"No content";
    zip.file(folder+app+".md",md);
    /* page skeleton reconstructed minimally (original markup was lost in extraction) */
    var h='<!DOCTYPE html><html><head><meta charset="UTF-8"><title>'+title+'</title></head><body>';
    h+="<h1>"+title+"</h1>";
    /* escape HTML, then apply a lightweight Markdown-to-HTML conversion */
    var hc=md.replace(/&/g,"&amp;").replace(/</g,"&lt;").replace(/>/g,"&gt;");
    hc=hc.replace(/^### (.+)$/gm,"<h3>$1</h3>");
    hc=hc.replace(/^## (.+)$/gm,"<h2>$1</h2>");
    hc=hc.replace(/^# (.+)$/gm,"<h1>$1</h1>");
    hc=hc.replace(/\*\*(.+?)\*\*/g,"<strong>$1</strong>");
    hc=hc.replace(/\n{2,}/g,"<br><br>");
    h+="<div>"+hc+"</div><footer>Generated by PantheraHive BOS</footer></body></html>";
    zip.file(folder+app+".html",h);
    zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n");
  }
  zip.generateAsync({type:"blob"}).then(function(blob){
    var a=document.createElement("a");
    a.href=URL.createObjectURL(blob);
    a.download=app+".zip";
    a.click();
    URL.revokeObjectURL(a.href);
    if(lbl)lbl.textContent="Download ZIP";
  });
};
document.head.appendChild(sc);
}

function phShare(){
  navigator.clipboard.writeText(window.location.href).then(function(){
    var el=document.getElementById("ph-share-lbl");
    if(el){ el.textContent="Link copied!"; setTimeout(function(){el.textContent="Copy share link";},2500); }
  });
}

function phEmbed(){
  var runId=window.location.pathname.split("/").pop().replace(".html","");
  var embedUrl="https://pantherahive.com/embed/"+runId;
  /* iframe attributes are assumed defaults; the original snippet was lost in extraction */
  var code='<iframe src="'+embedUrl+'" width="100%" height="600" frameborder="0"><\/iframe>';
  navigator.clipboard.writeText(code).then(function(){
    var el=document.getElementById("ph-embed-lbl");
    if(el){ el.textContent="Embed code copied!"; setTimeout(function(){el.textContent="Get Embed Code";},2500); }
  });
}