Social Signal Automator
Run ID: 69ccd0a63e7fb09ff16a5606
Date: 2026-04-01
Category: Distribution & Reach
PantheraHive BOS

Workflow: Social Signal Automator

Step 1 of 5: hive_db → query - Asset Identification & Retrieval

This step focuses on programmatically querying PantheraHive's central database (hive_db) to identify and retrieve all eligible video and content assets that will be processed by the Social Signal Automator workflow. The goal is to gather comprehensive metadata and source information for each asset, ensuring that only relevant, high-quality content with an associated pSEO landing page is selected.


1. Objective of this Step

The primary objective is to generate a comprehensive list of PantheraHive content assets that meet the criteria for social signal amplification, as defined by the query parameters and selection criteria in the next section.

This step serves as the foundational data layer for the entire "Social Signal Automator" workflow.


2. Query Parameters & Selection Criteria

To ensure the selection of appropriate assets, the hive_db query will incorporate the following parameters and criteria:

* asset_type must be video or any other content type explicitly designated as having a primary video component (e.g., webinar_recording, long_form_tutorial_video). This prioritizes content suitable for video clipping.

* publication_status must be published or active. Drafts or archived content will be excluded.

* p_seo_landing_page_url must NOT be NULL or empty. This is a critical requirement as the workflow's core value proposition is linking back to a matching pSEO landing page. Assets without this URL will be automatically excluded.

* workflow_status for SocialSignalAutomator should NOT indicate processed or completed for the current version of the workflow. This prevents reprocessing assets unnecessarily. A timestamp or version identifier can be used to allow reprocessing of older versions if desired.

* publication_date can be filtered to include assets published within a specific timeframe (e.g., last 90 days, last 30 days, or all time). For initial deployment, "all time" may be selected, with subsequent runs focusing on newer content.

* If available, engagement_score or views_count can be used to prioritize assets with higher potential impact.


3. Data Fields to Retrieve per Asset

For each eligible content asset identified, the query will retrieve detailed fields covering the asset's identity and sources: a unique asset identifier, title, asset type, publication date, the source_file_path for downloading the media, the p_seo_landing_page_url for the link-back, and any available engagement metrics (e.g., engagement_score, views_count).


4. Conceptual Query Logic (SQL-like Pseudo-code)

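A sketch of the query logic, assembled from the criteria in Section 2 above. The table and column names (content_assets, workflow_log) are illustrative assumptions, not the actual hive_db schema:

```sql
-- Illustrative only: actual hive_db table and column names may differ.
SELECT
    asset_id,
    title,
    asset_type,
    publication_date,
    source_file_path,
    p_seo_landing_page_url,
    engagement_score
FROM content_assets
WHERE asset_type IN ('video', 'webinar_recording', 'long_form_tutorial_video')
  AND publication_status IN ('published', 'active')
  AND p_seo_landing_page_url IS NOT NULL
  AND p_seo_landing_page_url <> ''
  AND asset_id NOT IN (
      SELECT asset_id FROM workflow_log
      WHERE workflow_name = 'SocialSignalAutomator'
        AND status IN ('processed', 'completed')
  )
ORDER BY engagement_score DESC;
```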

5. Expected Output

The output of this hive_db query step will be a structured data payload, typically an array of JSON objects (or similar data structure), where each object represents an eligible content asset with all the retrieved fields.

Example Output Structure (Truncated for brevity):

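An illustrative payload using the field names described in this document; all values are placeholders, not real assets or URLs:

```json
[
  {
    "asset_id": "asset_00123",
    "title": "PantheraHive Workflow Overview",
    "asset_type": "video",
    "publication_date": "2026-02-14",
    "source_file_path": "s3://example-bucket/videos/workflow_overview.mp4",
    "p_seo_landing_page_url": "https://pantherahive.com/example-landing-page",
    "engagement_score": 87
  }
]
```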

6. Deliverable & Next Steps

Deliverable: A JSON array of identified PantheraHive content assets, each containing comprehensive metadata and the source_file_path and p_seo_landing_page_url.

Next Steps (Step 2 of 5): The identified assets will be passed to the ffmpeg → vortex_clip_extract step. This next stage will involve downloading the source_file_path for each asset and using Vortex's AI capabilities to analyze the content, detect high-engagement moments, and score potential hooks for clip generation.

ffmpeg Output

Step 2 of 5: ffmpeg → vortex_clip_extract - High-Engagement Moment Identification

This document details the execution and output of Step 2 in the "Social Signal Automator" workflow. This crucial phase leverages advanced AI to intelligently identify the most impactful segments from your video content, setting the stage for highly engaging social media clips.


1. Workflow Context: Social Signal Automator

The "Social Signal Automator" is designed to amplify your brand's presence and authority online. By transforming your PantheraHive video and content assets into platform-optimized short-form clips, we aim to:

  • Boost Brand Mentions: Generate consistent brand mentions across high-traffic social platforms, a key trust signal for Google in 2026.
  • Drive Referral Traffic: Link each clip back to its matching pSEO landing page, funneling engaged users directly to your valuable content.
  • Enhance Brand Authority: Establish your brand as a leading voice by consistently delivering high-value, shareable content tailored for each platform.

This step is pivotal in ensuring that only the most compelling parts of your content are selected for repurposing, maximizing the impact of every generated clip.


2. Step Overview: ffmpeg → vortex_clip_extract

Following the initial processing by ffmpeg (which handles foundational video tasks like format normalization or basic cuts), the vortex_clip_extract module takes center stage. Its primary function is to analyze the entire video asset and pinpoint the "3 highest-engagement moments" using sophisticated hook scoring and AI analysis.

This intelligent selection process ensures that every subsequent operation (voiceover, rendering, distribution) is applied to the most impactful and attention-grabbing segments of your original content.


3. Core Functionality of vortex_clip_extract

The vortex_clip_extract module is the brain behind identifying your content's peak engagement points. Here's a breakdown of its operation:

  • Input:

* The pre-processed video asset (e.g., MP4, MOV) as prepared and standardized by the preceding ffmpeg operation.

* Associated metadata, such as transcribed audio (if available from previous PantheraHive processing steps), which aids in semantic understanding.

  • Process: Vortex AI & Hook Scoring:

* Proprietary AI Analysis: The vortex_clip_extract module employs PantheraHive's proprietary Vortex AI engine, specifically trained to understand and predict audience engagement within video content.

* Multi-Dimensional Hook Scoring: The AI analyzes various signals within the video to assign a "hook score" to potential segments, identifying moments most likely to capture and retain viewer attention. This includes:

* Content Dynamics: Analyzing changes in speaker tone, pacing, visual cues, and on-screen activity.

* Semantic Relevance: Identifying segments rich in keywords, core messages, and high-value information.

* Emotional Resonance: Detecting moments with strong sentiment or impactful delivery.

* Narrative Arc: Pinpointing segments that offer a compelling mini-narrative or a strong call to action/insight.

* Segment Identification & Ranking: Based on the comprehensive hook scores, Vortex identifies numerous potential clip candidates throughout the video and ranks them by their engagement potential.

* Top 3 Selection: The system then intelligently selects the top 3 highest-scoring, distinct moments. These moments are chosen to be self-contained, impactful, and suitable for standalone consumption as short-form social media content.

  • Output:

* A structured data object containing the precise start and end timestamps for each of the 3 identified high-engagement clips.

* A confidence score (or "hook score") for each selected clip, indicating its predicted engagement potential.

* (Optional) A brief textual descriptor or suggested theme for each clip, derived from its content.
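The ranking-and-selection logic above can be sketched in a few lines. This is a minimal illustration, assuming candidate segments arrive with precomputed hook scores; the function name, data shape, and the min_gap distinctness rule are assumptions, not Vortex's actual API:

```python
def select_top_moments(candidates, k=3, min_gap=5.0):
    """Pick the k highest-scoring, non-overlapping segments.

    candidates: list of dicts with 'start', 'end' (seconds) and 'hook_score'.
    min_gap: minimum separation in seconds, so chosen moments stay distinct.
    """
    selected = []
    # Rank all candidates by hook score, best first.
    for seg in sorted(candidates, key=lambda s: s["hook_score"], reverse=True):
        # Keep a segment only if it doesn't overlap or crowd a chosen one.
        distinct = all(
            seg["end"] + min_gap <= chosen["start"]
            or seg["start"] >= chosen["end"] + min_gap
            for chosen in selected
        )
        if distinct:
            selected.append(seg)
        if len(selected) == k:
            break
    # Return in chronological order for downstream clipping.
    return sorted(selected, key=lambda s: s["start"])
```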


4. Key Features & Benefits for Your Brand

  • AI-Powered Precision: Eliminates subjective manual review, ensuring that only the most objectively engaging segments are selected, maximizing content impact.
  • Automated Efficiency: Drastically reduces the time and resources required to identify compelling short-form content from long-form assets, enabling high-volume content repurposing.
  • Maximized Engagement: By focusing on scientifically identified "hook moments," your social clips are inherently optimized to capture audience attention and drive higher interaction rates.
  • Scalability: Process an unlimited number of video assets efficiently, ensuring a consistent and high-quality standard for clip selection across all your content initiatives.
  • Future-Proofing: Aligns with evolving social media consumption habits and Google's increasing emphasis on brand mentions and trust signals.

5. Expected Outcome for the Customer

Upon completion of the vortex_clip_extract step, the system has successfully:

  • Intelligently identified the 3 most impactful, high-engagement moments from your original PantheraHive video asset.
  • Generated the precise timestamp data for these segments.

Please note: You will not receive rendered video files at this stage. Instead, this step delivers the critical metadata (timestamps, scores) that define these optimal clip segments. This ensures that all subsequent operations—adding branded voiceovers and final rendering—are applied to the most potent parts of your content, guaranteeing maximum impact.


6. Next Steps in the Workflow

The identified high-engagement clip segments (via their precise timestamps) are now ready for the subsequent stages of the "Social Signal Automator" workflow:

  1. ElevenLabs Integration: The identified segments will be passed to ElevenLabs, where a consistent, branded voiceover CTA ("Try it free at PantheraHive.com") will be added to each clip.
  2. Final FFmpeg Rendering: The clips, now enriched with the branded CTA, will be rendered by FFmpeg into their platform-optimized formats: YouTube Shorts (9:16), LinkedIn (1:1), and X/Twitter (16:9). Each final clip will be automatically linked back to its matching pSEO landing page.

This seamless progression ensures that your high-value content is transformed into powerful social signals, ready to amplify your brand's reach and authority.

elevenlabs Output

Workflow Step: ElevenLabs Text-to-Speech (TTS) Integration

This document details the execution of Step 3 of 5 in the "Social Signal Automator" workflow: elevenlabs → tts. This crucial step leverages ElevenLabs' advanced Text-to-Speech capabilities to generate a high-quality, branded voiceover call-to-action (CTA) that will be integrated into every platform-optimized video clip.

1. Purpose of this Step

The primary objective of this elevenlabs → tts step is to create a consistent and professional audio branding element for all derived content. By utilizing a standardized voice and message, we ensure that every clip effectively drives traffic back to PantheraHive.com and reinforces brand recognition, contributing directly to the "Brand Mentions as a trust signal" goal.

2. Input Parameters for TTS Generation

To ensure a high-quality and on-brand voiceover, the following specific parameters are fed into the ElevenLabs API:

  • CTA Text: The exact phrase to be converted into speech is:

    "Try it free at PantheraHive.com"

This concise and direct call-to-action is designed for maximum impact within short video formats.

  • Voice Profile Selection: A pre-configured "PantheraHive Brand Voice" is utilized. This voice has been carefully selected and trained to embody PantheraHive's brand identity – professional, authoritative, and engaging.

* Voice ID: PH_BrandVoice_ID_XYZ (a unique identifier for the specific ElevenLabs voice model associated with PantheraHive).

* This ensures audio consistency across all content generated by the Social Signal Automator.

3. ElevenLabs TTS Generation Process

The generation process involves a secure API call to ElevenLabs, configured for optimal audio quality and brand alignment:

  • API Endpoint: https://api.elevenlabs.io/v1/text-to-speech/{PH_BrandVoice_ID_XYZ}
  • Request Method: POST
  • Headers:

* xi-api-key: [SECURELY_MANAGED_ELEVENLABS_API_KEY]

* Content-Type: application/json

* Accept: audio/mpeg (specifying MP3 format for efficient storage and integration)

  • Request Body (JSON Payload):

The following JSON payload is sent to the ElevenLabs API to control the speech generation parameters:


    {
      "text": "Try it free at PantheraHive.com",
      "model_id": "eleven_multilingual_v2",
      "voice_settings": {
        "stability": 0.55,
        "similarity_boost": 0.75,
        "style": 0.0,
        "use_speaker_boost": true
      }
    }

Parameter rationale (JSON does not permit inline comments, so the notes are listed separately):

* model_id (eleven_multilingual_v2): ElevenLabs' most advanced multilingual model, selected for superior naturalness and clarity.

* stability (0.55): Controls the variability of the voice. A moderate value balances consistency with natural inflection.

* similarity_boost (0.75): Determines how closely the output matches the original voice's characteristics. Set high for brand consistency.

* style (0.0): Reduces stylistic exaggeration, ensuring a professional, direct delivery suitable for a CTA.

* use_speaker_boost (true): Enhances the clarity and presence of the speaker's voice, crucial for short, impactful messages.
  • Execution: Upon successful API execution, ElevenLabs processes the text using the specified voice model and settings. The generated audio stream (MP3 format) is then received and stored securely within the PantheraHive asset management system.
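The API call above can be assembled as follows. This is a minimal sketch using only the endpoint, headers, and payload documented in this section; the helper name and the environment-variable handling for the API key are assumptions, not PantheraHive's implementation:

```python
import json
import os

ELEVENLABS_TTS_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"

def build_tts_request(voice_id, cta_text):
    """Assemble the URL, headers, and JSON body for the ElevenLabs TTS call.

    Returns the pieces without sending them, so they can be inspected or
    handed to any HTTP client.
    """
    url = ELEVENLABS_TTS_URL.format(voice_id=voice_id)
    headers = {
        # The key is read from the environment rather than hard-coded.
        "xi-api-key": os.environ.get("ELEVENLABS_API_KEY", ""),
        "Content-Type": "application/json",
        "Accept": "audio/mpeg",  # request MP3 output
    }
    body = {
        "text": cta_text,
        "model_id": "eleven_multilingual_v2",
        "voice_settings": {
            "stability": 0.55,
            "similarity_boost": 0.75,
            "style": 0.0,
            "use_speaker_boost": True,
        },
    }
    return url, headers, json.dumps(body)
```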

4. Generated TTS Audio Files (Deliverables)

The output of this step is a high-fidelity audio file containing the branded voiceover CTA.

  • Audio File Name: pantherahive_cta_voiceover.mp3
  • Content: The spoken phrase: "Try it free at PantheraHive.com"
  • Voice Used: PantheraHive Brand Voice (PH_BrandVoice_ID_XYZ)
  • Estimated Duration: Approximately 2-4 seconds (optimized for conciseness).
  • Format: MP3 (MPEG Audio Layer III) – widely compatible, high quality, and compact.
  • Quality: Professionally rendered, clear, and engaging audio, ensuring maximum impact when integrated into video content.
  • Storage: The generated .mp3 file is stored in a dedicated temporary asset directory, ready for immediate use in the subsequent video rendering step.

5. Usage in Workflow

This pantherahive_cta_voiceover.mp3 file is a critical component for the remainder of the "Social Signal Automator" workflow:

  • It will serve as the standardized audio branding element, ensuring every piece of content published under the PantheraHive brand carries a consistent and compelling call-to-action.
  • In Step 4 of 5 (FFmpeg → Render Clips), this voiceover audio will be precisely timed and overlaid onto the end of each platform-optimized video clip (YouTube Shorts, LinkedIn, X/Twitter). This seamless integration will guide viewers to PantheraHive.com, simultaneously building referral traffic and brand authority.

6. Next Steps in Social Signal Automator Workflow

With the branded voiceover CTA successfully generated, the workflow is ready to proceed:

  • Step 4 of 5: FFmpeg → Render Clips: The system will now utilize FFmpeg to combine the extracted high-engagement video moments (from Vortex) with this newly generated branded voiceover, rendering the final, platform-specific video clips.
  • Step 5 of 5: Publish & Track: The fully rendered clips will then be published to their respective social platforms, and their performance will be continuously monitored and analyzed.
ffmpeg Output

Step 4 of 5: FFmpeg Multi-Format Render (ffmpeg → multi_format_render)

This step is crucial for transforming your high-engagement video segments into ready-to-publish, platform-optimized social media clips. Leveraging the power of FFmpeg, we automatically crop, resize, and integrate your branded call-to-action (CTA) into three distinct formats, ensuring maximum impact and reach across YouTube Shorts, LinkedIn, and X/Twitter.


1. Purpose of This Step

The primary objective of the ffmpeg → multi_format_render step is to take the precisely identified high-engagement moments from your original PantheraHive content, along with the branded ElevenLabs voiceover CTA, and programmatically render them into three unique video formats. Each format is meticulously optimized for its target social platform's aspect ratio and technical specifications, ensuring high-quality playback and native feel. This automation eliminates manual editing, saving significant time and resources while maintaining brand consistency.

2. Input Assets for Rendering

Before FFmpeg begins its work, it receives the following pre-processed assets:

  • Original Content Asset: The full-length PantheraHive video or content asset (e.g., MP4, MOV) from which clips are derived.
  • Vortex Engagement Moments: The three (3) highest-engagement video segments, identified by Vortex's hook scoring. For each moment, FFmpeg receives precise start and end timestamps.
  • ElevenLabs Branded Voiceover CTA: The audio file containing the standardized "Try it free at PantheraHive.com" voiceover, ready to be appended to the end of each generated social clip.
  • Associated pSEO Landing Page URL: The specific PantheraHive SEO-optimized landing page URL that each clip is designed to drive traffic to. This URL will be embedded in the metadata and provided for distribution.

3. FFmpeg Configuration & Rendering Process

For each of the three identified high-engagement moments, FFmpeg executes a series of automated operations to create three distinct video files:

A. Core FFmpeg Operations

  • Segment Extraction: FFmpeg precisely extracts the defined video segment based on the start and end timestamps provided by Vortex.
  • Aspect Ratio Transformation:

* Cropping: For formats requiring a different aspect ratio than the original (e.g., transforming a 16:9 original into 9:16 vertical or 1:1 square), FFmpeg intelligently crops the video to focus on the central action, ensuring critical visual information is retained.

* Resizing: The cropped segment is then resized to the optimal resolution for each platform to maintain visual clarity and reduce file size.

  • Audio Integration: The original audio from the extracted segment is preserved. The ElevenLabs branded voiceover CTA is then seamlessly appended to the end of the clip's audio track.
  • Encoding: All clips are encoded using the H.264 codec for video and AAC for audio, ensuring broad compatibility, excellent compression, and high visual fidelity across platforms.
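As a sketch, the operations above might map to a single ffmpeg invocation per clip; this example builds the argument list for the 9:16 variant. The filters used (crop, scale, concat) are standard ffmpeg filters, but the exact pipeline PantheraHive runs is not specified here, and in practice the CTA track would also need resampling to match the clip's audio parameters before concat:

```python
def ffmpeg_args_916(src, cta_audio, start, end, out):
    """Build an ffmpeg argument list for one 9:16 clip:
    extract [start, end], centre-crop to 9:16, scale to 1080x1920,
    append the CTA audio after the clip's own audio, encode H.264/AAC.
    """
    filter_complex = (
        # Crop the segment to a 9:16 window, then scale to 1080x1920.
        "[0:v]crop=ih*9/16:ih,scale=1080:1920[v];"
        # Play the clip's audio, then the CTA voiceover.
        "[0:a][1:a]concat=n=2:v=0:a=1[a]"
    )
    return [
        "ffmpeg",
        "-ss", str(start), "-to", str(end), "-i", src,
        "-i", cta_audio,
        "-filter_complex", filter_complex,
        "-map", "[v]", "-map", "[a]",
        "-c:v", "libx264", "-b:v", "10M",
        "-c:a", "aac",
        out,
    ]
```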

B. Platform-Specific Output Specifications

FFmpeg renders each engagement moment into the following three formats:

  1. YouTube Shorts (Vertical Video)

* Aspect Ratio: 9:16 (Vertical)

* Resolution: 1080x1920 pixels (Full HD Vertical)

* Target Bitrate: Optimized for YouTube's recommendations (e.g., 8-12 Mbps for 1080p), balancing quality and upload efficiency.

* File Format: MP4

* Purpose: Ideal for short, attention-grabbing content on YouTube's vertical video feed, maximizing screen real estate on mobile devices.

  2. LinkedIn (Square Video)

* Aspect Ratio: 1:1 (Square)

* Resolution: 1080x1080 pixels (Full HD Square)

* Target Bitrate: Optimized for LinkedIn's video player (e.g., 5-10 Mbps), ensuring professional presentation.

* File Format: MP4

* Purpose: Highly effective for professional networking, fitting well within the LinkedIn feed and performing strongly on both desktop and mobile.

  3. X/Twitter (Horizontal Video)

* Aspect Ratio: 16:9 (Horizontal)

* Resolution: 1920x1080 pixels (Full HD Horizontal)

* Target Bitrate: Optimized for X/Twitter's video playback (e.g., 6-10 Mbps), ensuring crisp visuals.

* File Format: MP4

* Purpose: The standard wide-screen format, perfect for engaging audiences on X/Twitter and providing a traditional viewing experience.
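The three output profiles can be captured in a small configuration table. This mapping mirrors the specifications above; taking the midpoint of each stated bitrate range is an editorial choice for illustration:

```python
# Platform output profiles from the specifications above.
# Bitrates are midpoints of the recommended ranges (assumption).
PLATFORM_SPECS = {
    "YouTubeShorts": {"aspect": "9:16", "width": 1080, "height": 1920, "bitrate_mbps": 10.0},
    "LinkedIn":      {"aspect": "1:1",  "width": 1080, "height": 1080, "bitrate_mbps": 7.5},
    "XTwitter":      {"aspect": "16:9", "width": 1920, "height": 1080, "bitrate_mbps": 8.0},
}
```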

4. Output Deliverables

Upon completion of this step, you will receive a comprehensive package of rendered video clips, meticulously organized and ready for immediate distribution:

  • Total Clips: Nine (9) distinct video clips will be generated: each of the 3 high-engagement moments is rendered in all 3 platform-optimized formats.
  • File Naming Convention: Each clip will follow a clear and consistent naming convention for easy identification:

[OriginalAssetName]_[MomentID]_[PlatformFormat].mp4

* Example:

* PantheraHive_WorkflowOverview_Moment1_YouTubeShorts.mp4

* PantheraHive_WorkflowOverview_Moment1_LinkedIn.mp4

* PantheraHive_WorkflowOverview_Moment1_XTwitter.mp4

* (And similarly for Moment 2 and Moment 3)

  • Associated Metadata: Each clip will be delivered with its corresponding pSEO landing page URL, ensuring that the link to your brand authority page is readily available for posting.
  • Quality Assurance: All rendered clips undergo an automated quality check to ensure correct aspect ratios, audio synchronization, CTA integration, and file integrity.
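The naming convention can be expressed as a small helper; a sketch assuming three moments and the three platform labels used in the examples above:

```python
def clip_filename(asset_name, moment_id, platform):
    """Apply the [OriginalAssetName]_[MomentID]_[PlatformFormat].mp4 convention."""
    return f"{asset_name}_Moment{moment_id}_{platform}.mp4"

def all_clip_filenames(asset_name, moments=3,
                       platforms=("YouTubeShorts", "LinkedIn", "XTwitter")):
    """Enumerate all nine deliverables (3 moments x 3 formats)."""
    return [
        clip_filename(asset_name, m, p)
        for m in range(1, moments + 1)
        for p in platforms
    ]
```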

5. Benefits & Next Steps

This automated multi-format rendering process provides significant advantages:

  • Maximized Reach: By tailoring content to each platform, you increase visibility and engagement across diverse audiences.
  • Brand Consistency: The integrated ElevenLabs CTA ensures every clip reinforces your brand and drives action.
  • Efficiency & Scalability: Eliminates manual editing, allowing for rapid content deployment and scalable production of social signals.
  • Referral Traffic Generation: Each clip is a direct conduit to your pSEO landing pages, building crucial referral traffic and enhancing brand authority.

The rendered clips are now fully prepared for distribution. The final step in the "Social Signal Automator" workflow will involve the automated scheduling and posting of these clips to their respective social media platforms, complete with their associated pSEO landing page links.

hive_db Output

Workflow Step Completion: hive_db Insertion for Social Signal Automator

Step 5 of 5: hive_db → insert has been successfully executed for your "Social Signal Automator" workflow.

This final step ensures that all generated assets, associated metadata, and critical linking information are securely stored within the PantheraHive database. This comprehensive record is essential for tracking, analytics, future automation, and maintaining an auditable trail of your content amplification efforts.


1. Summary of Action

The hive_db insert operation has concluded the "Social Signal Automator" workflow by:

  • Recording the details of the original PantheraHive video/content asset.
  • Storing the specific metadata for each of the three platform-optimized social media clips (YouTube Shorts, LinkedIn, X/Twitter).
  • Logging the cloud storage URLs for the rendered clips.
  • Associating each clip with its corresponding pSEO landing page URL.
  • Marking the overall workflow as complete within your PantheraHive project.

This structured data enables you to monitor the performance of your brand mentions, referral traffic, and overall brand authority building initiatives.


2. Detailed Database Insertion Records

The following data points have been meticulously inserted into the PantheraHive database for each processed asset, ensuring comprehensive tracking and accessibility:

2.1. Original Asset Reference

  • original_asset_id: Unique identifier for the source PantheraHive video or content asset.
  • original_asset_title: Title of the original content.
  • original_asset_url: Direct link to the source content on PantheraHive.
  • processing_timestamp: Timestamp indicating when the Social Signal Automator workflow began for this asset.

2.2. Generated Social Clip Details (Per Clip)

For each of the three platform-optimized clips generated from the highest-engagement moments, the following detailed records have been stored:

  • clip_id: A unique identifier for the specific social media clip.
  • parent_asset_id: Reference to the original_asset_id this clip was derived from.
  • platform_target: The social media platform for which the clip is optimized (e.g., YouTube Shorts, LinkedIn, X/Twitter).
  • aspect_ratio: The specific aspect ratio of the rendered clip (e.g., 9:16, 1:1, 16:9).
  • start_timestamp_original: The precise start time (in seconds or HH:MM:SS) within the original asset from which this clip segment was extracted.
  • end_timestamp_original: The precise end time (in seconds or HH:MM:SS) within the original asset where this clip segment concludes.
  • hook_score: The engagement score assigned by Vortex for this specific moment, indicating its potential for high engagement.
  • voiceover_cta_applied: Boolean (TRUE) confirming the ElevenLabs branded voiceover CTA was successfully added.
  • voiceover_cta_text: The exact CTA text used: "Try it free at PantheraHive.com".
  • cloud_storage_url: A secure, direct link to the rendered video file stored in PantheraHive's cloud storage, ready for download or direct publishing.
  • p_seo_landing_page_url: The specific, matching pSEO landing page URL designed to capture referral traffic and build brand authority for this content.
  • clip_status: Current status of the clip (e.g., Rendered_Ready_For_Publishing).
  • creation_timestamp: Timestamp indicating when this specific clip's metadata was inserted into the database.
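A sketch of one per-clip record insertion in SQL-like pseudo-code; the table name (social_clips) and the sample values are illustrative assumptions, not the actual hive_db schema or real data:

```sql
-- Illustrative only: actual hive_db schema and values may differ.
INSERT INTO social_clips (
    clip_id, parent_asset_id, platform_target, aspect_ratio,
    start_timestamp_original, end_timestamp_original, hook_score,
    voiceover_cta_applied, voiceover_cta_text,
    cloud_storage_url, p_seo_landing_page_url,
    clip_status, creation_timestamp
) VALUES (
    'clip_00123_m1_shorts', 'asset_00123', 'YouTube Shorts', '9:16',
    342.5, 371.0, 0.92,
    TRUE, 'Try it free at PantheraHive.com',
    '<cloud_storage_url>', '<p_seo_landing_page_url>',
    'Rendered_Ready_For_Publishing', CURRENT_TIMESTAMP
);
```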

2.3. Workflow Status & Audit Trail

  • workflow_instance_id: Unique identifier for this specific execution of the "Social Signal Automator" workflow.
  • workflow_status: Updated to Completed.
  • completion_timestamp: Timestamp marking the successful completion of the entire workflow.

3. Actionable Insights & Next Steps

With the data now securely stored, you can proceed with the next phase of your social signal strategy:

  • Access Your Clips: Retrieve the cloud_storage_url for each clip from the PantheraHive dashboard or via API. These are your final, ready-to-publish assets.
  • Publish to Platforms: Upload the respective 9:16 clip to YouTube Shorts, the 1:1 clip to LinkedIn, and the 16:9 clip to X/Twitter.
  • Implement pSEO Links: Crucially, ensure that when you publish these clips, you include the corresponding p_seo_landing_page_url in the post description or as a clickable link. This is vital for driving referral traffic and reinforcing your brand authority.
  • Monitor Performance: Utilize PantheraHive's analytics tools (or your native platform analytics) to track views, engagement, and most importantly, referral traffic back to your pSEO landing pages.
  • Leverage Brand Mentions: As these clips circulate, Google's 2026 algorithm will recognize the increased brand mentions, contributing positively to your trust signals and search engine ranking.

4. Confirmation & Support

This concludes the "Social Signal Automator" workflow for your specified asset. All generated content and its comprehensive metadata are now available in your PantheraHive account.

Should you require assistance in accessing these records, publishing the clips, or analyzing their performance, please do not hesitate to contact PantheraHive Support. We are here to ensure your success in amplifying your brand's reach and authority.

"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}