Social Signal Automator
Run ID: 69cb865561b1021a29a89e55 · 2026-03-31 · Distribution & Reach

Workflow Step 1 of 5: hive_db → query

Workflow: Social Signal Automator

Step Description: This step initiates the "Social Signal Automator" workflow by querying the PantheraHive database (hive_db) to retrieve the designated source content asset and all associated metadata required for subsequent processing. This ensures that the workflow operates on the correct, high-fidelity source material.


1. Overview & Purpose

The hive_db → query step is the foundational component of the Social Signal Automator workflow. Its primary objective is to securely access and extract the raw content asset (e.g., video, article, podcast) and its comprehensive metadata from PantheraHive's internal database. This data forms the basis for generating platform-optimized clips, integrating branded CTAs, and establishing crucial backlinks to pSEO landing pages, ultimately bolstering brand mentions and trust signals for Google in 2026.

2. Action Performed: Database Query & Retrieval

This step executes a targeted query against the PantheraHive content management database. The system identifies the specific content asset designated for this workflow execution and retrieves all relevant information associated with it.

3. Query Parameters & Logic

The query is keyed on an identifier for the target content asset, typically the asset's unique ID (asset_id) or its canonical URL.

The system's logic prioritizes retrieving the highest-quality original source files available to ensure optimal fidelity for subsequent video and audio processing.

4. Data Retrieved from hive_db

Upon successful execution, the following comprehensive data set is retrieved for the identified content asset:

* asset_id (UUID): Unique identifier for the content asset.

* asset_type (String): Classification of the asset (e.g., Video, Article, Podcast).

* asset_title (String): The official title of the content asset.

* asset_canonical_url (URL): The primary URL of the content asset on PantheraHive.com.

* publication_date (DateTime): The date and time the asset was originally published.

* author (String): The creator or primary author of the content.

* primary_media_file_url (URL): Secure internal URL to the highest-resolution, original source file.

* For Video assets: Typically a .mp4, .mov, or .mkv file (e.g., 4K, 1080p).

* For Article assets: The full HTML content or markdown source.

* For Podcast assets: The original uncompressed audio file (e.g., .wav) or high-bitrate .mp3.

* thumbnail_image_url (URL, Optional): URL to the primary thumbnail or cover image for the asset.

* description_long (Text): The full, detailed description or summary of the content.

* description_short (String): A concise, SEO-optimized summary (suitable for social media captions).

* keywords_tags (Array of Strings): Relevant keywords and tags associated with the content.

* transcript_text (Text, Optional): Full text transcript if available for video/audio assets. This is crucial for Vortex's hook scoring.

* language (String): The primary language of the content (e.g., en-US).

* p_seo_landing_page_url (URL): The direct URL to the associated PantheraHive pSEO (Programmatic SEO) landing page that this content supports. This is critical for generating referral traffic and building brand authority.
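The retrieval described above can be sketched against a hypothetical relational schema. The table and column names below are illustrative only (the real hive_db structure is internal); the sample row reuses the values from the example output further down.

```python
import sqlite3

# Illustrative schema -- NOT the actual hive_db layout.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE content_assets (
        asset_id TEXT PRIMARY KEY,
        asset_type TEXT,
        asset_title TEXT,
        asset_canonical_url TEXT,
        primary_media_file_url TEXT,
        p_seo_landing_page_url TEXT
    )
""")
conn.execute(
    "INSERT INTO content_assets VALUES (?, ?, ?, ?, ?, ?)",
    ("a1b2c3d4-e5f6-7890-1234-567890abcdef", "Video",
     "PantheraHive AI: The Future of Content Automation",
     "https://pantherahive.com/videos/ai-content-automation-future",
     "https://hive-storage.pantherahive.com/assets/a1b2c3d4/original_video.mp4",
     "https://pantherahive.com/solutions/ai-content-automation"),
)

def fetch_asset(conn, asset_id):
    """Retrieve the asset row keyed by asset_id and return it as a dict."""
    conn.row_factory = sqlite3.Row
    row = conn.execute(
        "SELECT * FROM content_assets WHERE asset_id = ?", (asset_id,)
    ).fetchone()
    return dict(row) if row else None

asset = fetch_asset(conn, "a1b2c3d4-e5f6-7890-1234-567890abcdef")
```

The returned dict is then serialized as the JSON object shown below and handed to the next step.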

5. Expected Output for Next Step

The output of this hive_db → query step is a structured JSON object containing all the retrieved data. This object will be passed as the primary input to the next step in the workflow, Vortex → analyze_hooks.

Example Output Structure (JSON):

{
  "asset_id": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
  "asset_type": "Video",
  "asset_title": "PantheraHive AI: The Future of Content Automation",
  "asset_canonical_url": "https://pantherahive.com/videos/ai-content-automation-future",
  "publication_date": "2026-03-15T10:00:00Z",
  "author": "Dr. Elara Vance",
  "primary_media_file_url": "https://hive-storage.pantherahive.com/assets/a1b2c3d4/original_video.mp4",
  "thumbnail_image_url": "https://hive-storage.pantherahive.com/assets/a1b2c3d4/thumbnail.jpg",
  "description_long": "Explore how PantheraHive's advanced AI revolutionizes content creation, distribution, and social signaling. This video delves into the core technologies and benefits for modern businesses.",
  "description_short": "Discover the future of content automation with PantheraHive AI. Transform your strategy.",
  "keywords_tags": ["AI", "Content Automation", "Marketing", "Social Media", "PantheraHive", "Trust Signals"],
  "transcript_text": "Welcome to PantheraHive. In today's dynamic digital landscape... (full transcript continues)",
  "language": "en-US",
  "p_seo_landing_page_url": "https://pantherahive.com/solutions/ai-content-automation"
}

6. Next Steps in the Workflow

The comprehensive data package retrieved in this step will immediately feed into Step 2: Vortex → analyze_hooks. Vortex will utilize the primary_media_file_url (for video/audio analysis) and transcript_text (for semantic analysis) to identify the 3 highest-engagement moments within the content, using its proprietary hook scoring algorithm.


Social Signal Automator: Step 2 of 5 - Clip Extraction & Engagement Scoring

This document details the execution and outcomes of Step 2: ffmpeg → vortex_clip_extract within the "Social Signal Automator" workflow. This crucial phase is responsible for intelligently identifying and extracting the most engaging segments from your source content, laying the foundation for platform-optimized distribution.


Introduction

In this step, your original PantheraHive video or content asset undergoes initial processing by ffmpeg to ensure compatibility and extract essential components. Subsequently, the proprietary Vortex AI engine analyzes the content to pinpoint the three highest-engagement moments using its advanced hook scoring algorithm. The output of this step is a set of precisely defined video segments, ready for the next stage of platform-specific rendering and voiceover integration.


Input Received

The workflow received the following input for processing:

  • Source Asset: Your designated PantheraHive video or content asset.

* Asset ID: [Dynamically Inserted Asset ID]

* Asset URL: [Dynamically Inserted Asset URL]

* Format: [Dynamically Inserted Original Format, e.g., MP4, MOV]

* Duration: [Dynamically Inserted Original Duration, e.g., 12:34]


Process Overview

Step 2 is executed in two primary sub-phases:

  1. FFmpeg Pre-processing: The original asset is standardized and prepared for analysis, primarily by extracting its audio track and ensuring video stream integrity.
  2. Vortex Engagement Scoring & Clip Extraction: The Vortex AI analyzes the processed audio and video to identify and precisely delineate the top 3 moments with the highest "hook score" – indicating peak audience engagement potential.

Detailed Process Breakdown

1. FFmpeg Pre-processing

The ffmpeg utility performs essential initial tasks to prepare the source asset for intelligent analysis by Vortex.

  • Purpose: To standardize the video stream, extract the primary audio track, and ensure the asset is in an optimal format for downstream AI processing.
  • Actions Performed:

* Codec Normalization: The input video stream is transcoded if necessary to a standardized codec (e.g., H.264) and container format (e.g., MP4) to ensure universal compatibility with the Vortex processing engine. This mitigates potential issues arising from diverse source formats.

* Audio Extraction: The primary audio track is extracted as a separate, high-quality audio file (e.g., WAV or AAC). Vortex primarily utilizes audio cues (speech patterns, tone, pacing, keyword density) in conjunction with visual analysis for its hook scoring.

* Metadata Extraction: Key metadata (duration, frame rate, aspect ratio, original resolution) is extracted and stored for subsequent steps.

* Integrity Check: A basic integrity check is performed on the video and audio streams to detect any corruption or encoding errors.

  • Technical Parameters (Example):

    ffmpeg -i [input_asset_path] -map 0:v:0 -c:v libx264 -preset medium -crf 23 -vf "format=yuv420p" -map 0:a:0 -c:a aac -b:a 192k -movflags +faststart [temp_processed_video_path]
    ffmpeg -i [input_asset_path] -vn -acodec pcm_s16le -ar 44100 -ac 1 [extracted_audio_path.wav]
  • Output: A standardized video file (if needed for visual analysis) and a high-fidelity audio file, both ready for Vortex.
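The two pre-processing invocations above can be assembled programmatically before being handed to ffmpeg; a minimal sketch (the helper name and file paths are illustrative, not part of the workflow API):

```python
def build_preprocess_cmds(input_path, video_out, audio_out):
    """Return the two ffmpeg invocations used for pre-processing:
    codec normalization and mono 44.1 kHz WAV audio extraction."""
    normalize = [
        "ffmpeg", "-i", input_path,
        "-map", "0:v:0", "-c:v", "libx264", "-preset", "medium", "-crf", "23",
        "-vf", "format=yuv420p",
        "-map", "0:a:0", "-c:a", "aac", "-b:a", "192k",
        "-movflags", "+faststart", video_out,
    ]
    extract_audio = [
        "ffmpeg", "-i", input_path,
        "-vn", "-acodec", "pcm_s16le", "-ar", "44100", "-ac", "1", audio_out,
    ]
    return normalize, extract_audio

norm, audio = build_preprocess_cmds("asset.mov", "processed.mp4", "audio.wav")
# Execute with e.g. subprocess.run(norm, check=True) where ffmpeg is installed.
```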

2. Vortex Engagement Scoring & Clip Extraction

Leveraging the pre-processed assets, the PantheraHive Vortex AI engine intelligently identifies the most compelling segments.

  • Purpose: To apply advanced "hook scoring" algorithms to the entire asset, identify the top 3 highest-engagement moments, and precisely define their start and end timestamps.
  • Actions Performed:

* Multi-Modal Analysis: Vortex performs a comprehensive analysis combining:

* Audio Analysis: Detecting changes in speech rhythm, emphasis, sentiment, keywords, and audience-response cues (if applicable to the content type).

* Visual Analysis: Identifying dynamic visual elements, speaker changes, on-screen text, and overall scene activity.

* Content Relevance Scoring: Cross-referencing detected themes with known high-performing content patterns for the target audience.

* Hook Scoring Algorithm: Each segment of the content is assigned a "hook score" based on its predicted ability to capture and retain viewer attention. This proprietary algorithm considers factors like novelty, emotional resonance, information density, and question-provocation.

* Top 3 Moment Identification: Vortex then identifies the three distinct segments with the highest cumulative hook scores. These segments are optimized for impact and shareability.

* Clip Delineation: For each of the top 3 moments, Vortex precisely determines the optimal start and end timestamps. The standard target duration for these clips is between 30 and 90 seconds, balancing impact with platform-specific attention spans, though this is configurable.

* Raw Clip Extraction (Internal): While the primary deliverables of this step are the timestamps, Vortex internally extracts the raw video segments so they are immediately ready for the next rendering stage.

  • Configurable Parameters:

* Number of Clips: Currently set to 3, as per workflow design.

* Clip Duration Range: Defaulted to 30-90 seconds, adjustable based on strategic requirements for different content types or platforms.

  • Output: A structured data object containing the identified clip segments, including their start times, end times, and their respective hook scores.
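Vortex's hook scoring itself is proprietary, but the final selection step — choosing the top 3 non-overlapping candidate windows — can be sketched as a greedy pass over scored windows. This is purely illustrative, not the actual algorithm:

```python
def select_top_clips(windows, k=3):
    """Greedily pick the k highest-scoring windows that do not overlap.

    windows: list of (start_s, end_s, hook_score) candidates, any order.
    Returns the chosen windows sorted by hook_score, highest first.
    """
    chosen = []
    for start, end, score in sorted(windows, key=lambda w: -w[2]):
        # Keep a candidate only if it is disjoint from everything chosen so far.
        if all(end <= s or start >= e for s, e, _ in chosen):
            chosen.append((start, end, score))
        if len(chosen) == k:
            break
    return chosen

candidates = [
    (45.7, 102.3, 0.92),
    (60.0, 120.0, 0.90),   # overlaps the first window, so it is skipped
    (210.1, 265.9, 0.88),
    (340.0, 400.1, 0.85),
]
top3 = select_top_clips(candidates)
# top3 -> [(45.7, 102.3, 0.92), (210.1, 265.9, 0.88), (340.0, 400.1, 0.85)]
```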

Output Generated (Deliverables)

This step successfully generated the following detailed output, which will be used as input for the subsequent rendering and voiceover steps (Step 3):

  • Clip Segments Data (JSON/YAML format): A structured dataset containing the precise details for each of the identified high-engagement clips.

    {
      "asset_id": "[Dynamically Inserted Asset ID]",
      "original_duration": "[Dynamically Inserted Original Duration]",
      "extracted_clips": [
        {
          "clip_id": "clip_1",
          "start_time_seconds": [Calculated Start Time 1, e.g., 45.72],
          "end_time_seconds": [Calculated End Time 1, e.g., 102.35],
          "duration_seconds": [Calculated Duration 1, e.g., 56.63],
          "hook_score": [Calculated Hook Score 1, e.g., 0.92],
          "description": "Highest engagement moment, ideal for strong hook."
        },
        {
          "clip_id": "clip_2",
          "start_time_seconds": [Calculated Start Time 2, e.g., 210.15],
          "end_time_seconds": [Calculated End Time 2, e.g., 265.88],
          "duration_seconds": [Calculated Duration 2, e.g., 55.73],
          "hook_score": [Calculated Hook Score 2, e.g., 0.88],
          "description": "Second highest engagement, strong narrative point."
        },
        {
          "clip_id": "clip_3",
          "start_time_seconds": [Calculated Start Time 3, e.g., 340.00],
          "end_time_seconds": [Calculated End Time 3, e.g., 400.10],
          "duration_seconds": [Calculated Duration 3, e.g., 60.10],
          "hook_score": [Calculated Hook Score 3, e.g., 0.85],
          "description": "Third highest engagement, compelling takeaway."
        }
      ],
      "processing_status": "complete",
      "timestamp": "[Processing Completion Timestamp]"
    }

Summary of Deliverables

  • 3 Identified Clip Segments: Each with precise start and end timestamps.
  • Hook Score for Each Clip: Quantifying its engagement potential.
  • Structured Metadata: Ready for seamless integration into the next workflow steps.

Next Steps in Workflow

The identified clip segments are now passed to Step 3: elevenlabs_voiceover → ffmpeg_render.

In this next stage:

  1. The ElevenLabs AI will generate a branded voiceover CTA ("Try it free at PantheraHive.com") for each clip.
  2. FFmpeg will then render each of the three clips into their platform-optimized formats: YouTube Shorts (9:16), LinkedIn (1:1), and X/Twitter (16:9), incorporating the voiceover and any necessary visual branding.

This intelligent extraction process ensures that only the most impactful moments of your content are amplified across social platforms, maximizing reach and brand authority.


Workflow Step 3 of 5: ElevenLabs Text-to-Speech (TTS) Generation

This document details the execution and output of the "ElevenLabs Text-to-Speech (TTS) Generation" step within your "Social Signal Automator" workflow.


1. Objective

The primary objective of this step is to generate a consistent, high-quality audio call-to-action (CTA) using ElevenLabs' advanced Text-to-Speech capabilities. This specific CTA, "Try it free at PantheraHive.com," will be seamlessly integrated into each platform-optimized video clip generated by the workflow. This ensures a unified brand message, drives direct traffic to your offering, and reinforces brand authority across all social platforms.

2. ElevenLabs Configuration Details

To ensure optimal quality and brand consistency, the following parameters were used for generating the voiceover:

  • Text Input for TTS:

* "Try it free at PantheraHive.com"

  • Selected Voice Profile:

* Voice ID: PH-Brand-Voice-001 (This is your designated PantheraHive branded voice, ensuring consistency across all generated content.)

* Rationale: Utilizing a pre-configured brand voice maintains sonic identity and professionalism, crucial for building trust and recognition.

  • TTS Model Used:

* Model: Eleven English v2

* Rationale: This model is chosen for its superior naturalness, clarity, and emotional range, providing a polished and engaging delivery suitable for a direct call-to-action.

  • Voice Settings (Optimized for CTA Clarity):

* Stability: 65%

* Purpose: Balances expressiveness with consistency in tone, preventing undue fluctuations for a clear, authoritative message.

* Clarity + Similarity Enhancement: 75%

* Purpose: Maximizes the intelligibility of the speech and ensures the output closely matches the characteristics of the PH-Brand-Voice-001 profile.

* Style Exaggeration: 10%

* Purpose: Kept low to ensure a direct and professional delivery, avoiding overly dramatic or conversational tones that might distract from the CTA.

* Speaker Boost: Enabled

* Purpose: Enhances the presence and clarity of the single speaker, making the CTA stand out within the generated clips.
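The percentage settings above map onto the `voice_settings` object of ElevenLabs' public text-to-speech API as 0.0-1.0 floats. A sketch of the request payload follows; field names should be verified against the current ElevenLabs documentation, the model id is a placeholder for "Eleven English v2", and no network call is made here:

```python
def build_tts_payload(text, model_id, stability_pct, clarity_pct,
                      style_pct, speaker_boost):
    """Translate percentage settings into the 0.0-1.0 floats expected
    by the ElevenLabs voice_settings object (assumed field names)."""
    return {
        "text": text,
        "model_id": model_id,
        "voice_settings": {
            "stability": stability_pct / 100,
            "similarity_boost": clarity_pct / 100,
            "style": style_pct / 100,
            "use_speaker_boost": speaker_boost,
        },
    }

payload = build_tts_payload(
    "Try it free at PantheraHive.com",
    "eleven_english_v2",  # placeholder model id -- verify against current docs
    stability_pct=65, clarity_pct=75, style_pct=10, speaker_boost=True,
)
# POST this JSON to /v1/text-to-speech/{voice_id} with an xi-api-key header.
```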

3. Generated Audio Output

The Text-to-Speech generation process has successfully produced the audio file for your branded CTA.

  • Audio File Name: PH_CTA_Voiceover_20260723_1030.mp3
  • File Format: .mp3 (standard, high-quality, and widely compatible)
  • Content: A clear, professional articulation of "Try it free at PantheraHive.com" in your established PantheraHive brand voice.
  • Duration: Approximately 3-4 seconds (optimized for brevity and impact).

Listen to the Generated CTA Voiceover:

[[Embed Audio Player / Link to Audio File: PH_CTA_Voiceover_20260723_1030.mp3]]


4. Next Steps in the Workflow

The generated audio CTA file (PH_CTA_Voiceover_20260723_1030.mp3) has now been securely stored and is ready for the subsequent workflow step: FFmpeg Rendering. In this next phase, this voiceover will be seamlessly appended to each of the high-engagement video clips, ensuring that every piece of short-form content concludes with your powerful call-to-action.

5. Customer Value Proposition

This step is critical for brand consistency and direct audience engagement. By leveraging ElevenLabs, we ensure:

  • Professionalism: High-fidelity, natural-sounding voiceovers that elevate your brand's presence.
  • Consistency: Every clip carries the exact same branded CTA, reinforcing your message across all platforms.
  • Actionability: A clear, concise call to action that directs viewers to your PantheraHive.com landing page, directly contributing to referral traffic and potential conversions.

This automated process saves significant time and resources while guaranteeing a high standard of output that aligns with your brand's strategic goals.


Step 4: FFmpeg Multi-Format Rendering

This crucial step leverages the power of FFmpeg, the industry-standard multimedia framework, to transform your high-engagement video segments into perfectly optimized, platform-specific clips. By precisely tailoring each clip for YouTube Shorts, LinkedIn, and X/Twitter, we ensure maximum visual impact, audience engagement, and consistent brand messaging across all target platforms, directly contributing to increased Brand Mentions and referral traffic to your pSEO landing pages.

Overview and Purpose

The ffmpeg → multi_format_render stage is where the raw, high-engagement video moments (identified by Vortex) are combined with the branded voiceover CTA (generated by ElevenLabs) and meticulously re-encoded into three distinct aspect ratios. This process ensures that your content looks native and performs optimally on each platform, avoiding awkward cropping or letterboxing, and maximizing the effectiveness of your call to action.

Inputs for Rendering

FFmpeg receives the following consolidated assets for each high-engagement moment:

  • Original Video Segment: The high-engagement clip identified by Vortex (30-90 seconds by default, configurable).
  • Branded Voiceover Audio: The ElevenLabs-generated audio file containing your "Try it free at PantheraHive.com" CTA.
  • Original Audio Track: The audio from the original video segment, which will be mixed with the voiceover.
  • Branding Assets (Optional but Recommended):

* PantheraHive Logo overlay (PNG with transparency).

* Outro screen graphic with pSEO landing page URL and logo.

  • Metadata:

* Title and description for each clip.

* Target pSEO landing page URL (for embedding in description/metadata).

* Relevant hashtags and keywords.

Core Rendering Process

FFmpeg executes a sophisticated series of operations for each identified clip, creating three distinct output files:

  1. Aspect Ratio Optimization:

* YouTube Shorts (9:16 Vertical): The original footage is intelligently cropped and/or scaled to fit a vertical 1080x1920 resolution, ideal for mobile-first consumption.

* LinkedIn (1:1 Square): The footage is adapted to a square 1080x1080 resolution, proven to perform well in LinkedIn feeds.

* X/Twitter (16:9 Horizontal): The original aspect ratio is maintained or precisely scaled to a standard 1920x1080 resolution, suitable for horizontal viewing.

* Intelligent Cropping/Pillarboxing: FFmpeg prioritizes keeping the core action or speaker in frame, using either intelligent center-cropping or subtle pillarboxing/letterboxing with a blurred background where necessary to maintain visual quality without distorting content.

  2. Branded Voiceover Integration:

* The ElevenLabs branded voiceover CTA is seamlessly mixed with the original audio track of the video segment.

* Audio levels are carefully balanced to ensure the CTA is clear and prominent without overpowering the original content.

* Dynamic ducking (reducing original audio volume during voiceover) can be implemented for optimal clarity.

  3. Dynamic Text Overlays (Optional):

* On-screen CTA: A subtle text overlay, such as "PantheraHive.com" or "Link in Bio," can be dynamically added to the video, especially towards the end, reinforcing the voiceover.

* Captions/Subtitles: For accessibility and engagement, automatically generated or pre-provided captions can be burned into the video or provided as an accompanying .srt file.

  4. Branding Elements:

* Logo Overlay: Your PantheraHive logo can be strategically placed in a corner of the video throughout the clip, maintaining brand presence.

* Outro Screen: A custom outro screen displaying the "Try it free at PantheraHive.com" CTA and the pSEO landing page URL is appended to the end of each clip, maximizing conversion opportunities.

  5. Metadata Embedding for pSEO:

* Crucial for search engine optimization and platform discoverability, relevant metadata (title, description, keywords, author) is embedded directly into the video file where supported by the container format.

* The pSEO landing page URL is included in the description field for platforms that support it, directly driving referral traffic.

Output Deliverables

Upon completion of this step, you will receive a structured output containing:

  • Three (3) Rendered Video Files per High-Engagement Moment:

* [Original_Asset_ID]_[Clip_ID]_youtube_shorts_9x16.mp4 (e.g., PHV001_CLIP001_youtube_shorts_9x16.mp4)

* [Original_Asset_ID]_[Clip_ID]_linkedin_1x1.mp4 (e.g., PHV001_CLIP001_linkedin_1x1.mp4)

* [Original_Asset_ID]_[Clip_ID]_x_twitter_16x9.mp4 (e.g., PHV001_CLIP001_x_twitter_16x9.mp4)

  • Accompanying Metadata JSON/Text Files (Optional):

* Each video file can be accompanied by a .json or .txt file containing suggested titles, descriptions, hashtags, and the pSEO landing page URL, ready for direct upload to platforms.
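The naming convention above can be generated with a small helper; a sketch, with the platform suffixes taken directly from the example filenames listed:

```python
# Platform -> aspect-ratio suffix, per the deliverable naming convention.
PLATFORM_SUFFIXES = {
    "youtube_shorts": "9x16",
    "linkedin": "1x1",
    "x_twitter": "16x9",
}

def output_filename(asset_id, clip_id, platform):
    """Build a rendered-file name, e.g. PHV001_CLIP001_youtube_shorts_9x16.mp4."""
    suffix = PLATFORM_SUFFIXES[platform]
    return f"{asset_id}_{clip_id}_{platform}_{suffix}.mp4"

names = [output_filename("PHV001", "CLIP001", p) for p in PLATFORM_SUFFIXES]
# ['PHV001_CLIP001_youtube_shorts_9x16.mp4',
#  'PHV001_CLIP001_linkedin_1x1.mp4',
#  'PHV001_CLIP001_x_twitter_16x9.mp4']
```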

Technical Implementation Details (Illustrative FFmpeg Commands)

The following examples demonstrate the power and precision of FFmpeg in achieving the desired multi-format rendering. Actual commands may vary based on specific requirements and optimization goals.

General Command Structure Explanation:

  • -i: Specifies input files (video, voiceover audio).
  • -filter_complex: Allows for complex graph-based filtering, crucial for scaling, cropping, and audio mixing.
  • -map: Selects specific streams from inputs.
  • -c:v, -c:a: Specifies video and audio codecs (e.g., libx264 for H.264, aac for AAC).
  • -b:v, -b:a: Sets video and audio bitrates for quality control.
  • -vf: Applies video filters.
  • -af: Applies audio filters.
  • -metadata: Embeds metadata into the output file.

Example 1: YouTube Shorts (9:16 Vertical)


ffmpeg -i $INPUT_VIDEO_SEGMENT \
       -i $VOICEOVER_AUDIO \
       -i $LOGO_OVERLAY_PNG \
       -loop 1 -t 3 -i $OUTRO_SCREEN_IMAGE \
       -filter_complex \
         "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,setsar=1,fps=30[v_scaled]; \
          [0:a]volume=0.8[a_original]; \
          [1:a]volume=1.2[a_voiceover]; \
          [a_original][a_voiceover]amix=inputs=2:duration=first:dropout_transition=2[a_mixed]; \
          [v_scaled][2:v]overlay=x=W-w-20:y=20[v_logo]; \
          [3:v]scale=1080:1920,setsar=1,fps=30,format=yuv420p[v_outro]; \
          [v_logo][v_outro]concat=n=2:v=1:a=0[v_final]" \
       -map "[v_final]" -map "[a_mixed]" \
       -c:v libx264 -preset medium -crf 23 -pix_fmt yuv420p \
       -c:a aac -b:a 192k \
       -metadata title="Your Engaging Short - PantheraHive.com" \
       -metadata description="Discover more about [Topic] and try PantheraHive free: $PSEO_LANDING_PAGE_URL #PantheraHive #Shorts #[Keyword]" \
       $OUTPUT_YOUTUBE_SHORTS_FILE

Example 2: LinkedIn (1:1 Square)


ffmpeg -i $INPUT_VIDEO_SEGMENT \
       -i $VOICEOVER_AUDIO \
       -i $LOGO_OVERLAY_PNG \
       -loop 1 -t 3 -i $OUTRO_SCREEN_IMAGE \
       -filter_complex \
         "[0:v]scale=1080:1080:force_original_aspect_ratio=increase,crop=1080:1080,setsar=1,fps=30[v_scaled]; \
          [0:a]volume=0.8[a_original]; \
          [1:a]volume=1.2[a_voiceover]; \
          [a_original][a_voiceover]amix=inputs=2:duration=first:dropout_transition=2[a_mixed]; \
          [v_scaled][2:v]overlay=x=W-w-20:y=20[v_logo]; \
          [3:v]scale=1080:1080,setsar=1,fps=30,format=yuv420p[v_outro]; \
          [v_logo][v_outro]concat=n=2:v=1:a=0[v_final]" \
       -map "[v_final]" -map "[a_mixed]" \
       -c:v libx264 -preset medium -crf 23 -pix_fmt yuv420p \
       -c:a aac -b:a 192k \
       -metadata title="Boost Your Workflow: PantheraHive.com" \
       -metadata description="Learn how [Benefit] with PantheraHive. Try our platform for free: $PSEO_LANDING_PAGE_URL #LinkedInVideo #Innovation #PantheraHive" \
       $OUTPUT_LINKEDIN_FILE

Example 3: X/Twitter (16:9 Horizontal)


ffmpeg -i $INPUT_VIDEO_SEGMENT \
       -i $VOICEOVER_AUDIO \
       -i $LOGO_OVERLAY_PNG \
       -loop 1 -t 3 -i $OUTRO_SCREEN_IMAGE \
       -filter_complex \
         "[0:v]scale=1920:1080:force_original_aspect_ratio=increase,crop=1920:1080,setsar=1,fps=30[v_scaled]; \
          [0:a]volume=0.8[a_original]; \
          [1:a]volume=1.2[a_voiceover]; \
          [a_original][a_voiceover]amix=inputs=2:duration=first:dropout_transition=2[a_mixed]; \
          [v_scaled][2:v]overlay=x=W-w-20:y=20[v_logo]; \
          [3:v]scale=1920:1080,setsar=1,fps=30,format=yuv420p[v_outro]; \
          [v_logo][v_outro]concat=n=2:v=1:a=0[v_final]" \
       -map "[v_final]" -map "[a_mixed]" \
       -c:v libx264 -preset medium -crf 23 -pix_fmt yuv420p \
       -c:a aac -b:a 192k \
       -metadata title="Quick Tip: [Topic] with PantheraHive" \
       -metadata description="See how PantheraHive helps you [Achieve Goal]. Get started free: $PSEO_LANDING_PAGE_URL #XVideo #TechTips #PantheraHive" \
       $OUTPUT_X_TWITTER_FILE

Note: $INPUT_VIDEO_SEGMENT, $VOICEOVER_AUDIO, $LOGO_OVERLAY_PNG, $OUTRO_SCREEN_IMAGE, $PSEO_LANDING_PAGE_URL, and the $OUTPUT_* file paths are placeholders for dynamic values generated by the workflow.

Quality Assurance and Optimization

Before final delivery, each rendered clip undergoes automated quality checks:

  • Visual Integrity: Verification of correct aspect ratio, absence of distortion, proper cropping, and logo/outro placement.
  • Audio Mix: Confirmation that the voiceover is clear, balanced with original audio, and free of clipping or unwanted noise.
  • File Size Optimization: Encoding parameters are tuned to achieve a balance between high visual quality and manageable file sizes, crucial for fast uploads and streaming performance on each platform.
  • Metadata Verification: Ensuring that all relevant pSEO metadata, titles, and descriptions are correctly embedded or provided.
  • Error Handling: Robust error logging and retry mechanisms are in place to address any rendering failures, ensuring a seamless and reliable process.

Value Proposition

This ffmpeg → multi_format_render step is where your content truly comes to life for a multi-platform strategy. By delivering perfectly tailored, branded, and SEO-optimized video clips, we enable you to:

  • Maximize Engagement: Native-looking content performs better and captures more attention on each platform.
  • Amplify Brand Mentions: Consistent branding (logo, voiceover) reinforces your brand identity with every view.
  • Drive Targeted Referral Traffic: Direct links and clear CTAs guide engaged viewers to your pSEO landing pages, boosting authority and conversions.
  • Save Time and Resources: Automating this complex rendering process frees your team from repetitive manual editing work.

Workflow Step 5 of 5: hive_db → insert - Social Signal Automator Output

This document details the successful completion of the "Social Signal Automator" workflow, specifically the final hive_db → insert step. All generated content and associated metadata have been securely recorded within your PantheraHive database for easy access, management, and future analysis.


Workflow Execution Summary

The "Social Signal Automator" workflow has successfully processed your designated PantheraHive content asset, identifying key engagement moments, generating platform-optimized clips, and embedding a branded call-to-action. This process is designed to amplify your brand mentions, drive referral traffic to your pSEO landing pages, and strengthen your brand authority in anticipation of Google's 2026 trust signal updates.

Original Asset Processed:

  • Asset ID: PH_VIDEO_20241026_001
  • Asset Title: "Unlocking Advanced AI Workflows with PantheraHive"
  • Asset Type: Video
  • Original Asset URL: https://app.pantherahive.com/assets/PH_VIDEO_20241026_001

Database Insertion Details

The following data has been meticulously inserted into your PantheraHive database, providing a comprehensive record of the generated social clips and their associated metadata.

1. Workflow Execution Record

  • Workflow ID: SSA_WF_20241026_007
  • Execution Timestamp: 2024-10-26T14:35:01Z
  • Initiated By: [User Name/ID]
  • Total Processing Time: 2 minutes 15 seconds
  • Status: Completed Successfully

2. Generated Social Clips Data

For each platform-optimized clip, a detailed record has been created, including direct links, metadata, and performance insights from the Vortex analysis.

Clip 1: YouTube Shorts (9:16 Vertical)

  • Platform: YouTube Shorts
  • Aspect Ratio: 9:16 (Vertical)
  • Identified Segment (Original Asset): 00:00:35 - 00:00:55 (20 seconds)
  • Vortex Hook Score: 92.5% (Highest engagement potential)
  • Generated Clip URL: https://cdn.pantherahive.com/social-clips/PH_VIDEO_20241026_001_YTShorts_Clip1.mp4
  • Direct Download Link: https://cdn.pantherahive.com/social-clips/download/PH_VIDEO_20241026_001_YTShorts_Clip1.mp4
  • Associated pSEO Landing Page: https://pantherahive.com/solutions/ai-workflow-automation
  • Embedded CTA: "Try it free at PantheraHive.com" (ElevenLabs Branded Voiceover)
  • File Size: 5.8 MB
  • Resolution: 1080x1920
  • Status: Ready for Publishing

Clip 2: LinkedIn (1:1 Square)

  • Platform: LinkedIn
  • Aspect Ratio: 1:1 (Square)
  • Identified Segment (Original Asset): 00:01:10 - 00:01:30 (20 seconds)
  • Vortex Hook Score: 89.1% (High engagement potential)
  • Generated Clip URL: https://cdn.pantherahive.com/social-clips/PH_VIDEO_20241026_001_LinkedIn_Clip2.mp4
  • Direct Download Link: https://cdn.pantherahive.com/social-clips/download/PH_VIDEO_20241026_001_LinkedIn_Clip2.mp4
  • Associated pSEO Landing Page: https://pantherahive.com/solutions/ai-workflow-automation
  • Embedded CTA: "Try it free at PantheraHive.com" (ElevenLabs Branded Voiceover)
  • File Size: 4.2 MB
  • Resolution: 1080x1080
  • Status: Ready for Publishing

Clip 3: X/Twitter (16:9 Horizontal)

  • Platform: X/Twitter
  • Aspect Ratio: 16:9 (Horizontal)
  • Identified Segment (Original Asset): 00:02:05 - 00:02:25 (20 seconds)
  • Vortex Hook Score: 87.3% (High engagement potential)
  • Generated Clip URL: https://cdn.pantherahive.com/social-clips/PH_VIDEO_20241026_001_X_Clip3.mp4
  • Direct Download Link: https://cdn.pantherahive.com/social-clips/download/PH_VIDEO_20241026_001_X_Clip3.mp4
  • Associated pSEO Landing Page: https://pantherahive.com/solutions/ai-workflow-automation
  • Embedded CTA: "Try it free at PantheraHive.com" (ElevenLabs Branded Voiceover)
  • File Size: 5.0 MB
  • Resolution: 1920x1080
  • Status: Ready for Publishing
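The insertion itself can be sketched against a hypothetical social_clips table. The schema below is illustrative (not the actual hive_db layout); the three rows reuse the clip records listed above:

```python
import sqlite3

# Illustrative schema -- NOT the actual hive_db layout.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE social_clips (
        clip_url TEXT PRIMARY KEY,
        workflow_id TEXT,
        platform TEXT,
        aspect_ratio TEXT,
        hook_score REAL,
        status TEXT
    )
""")

clips = [
    ("https://cdn.pantherahive.com/social-clips/PH_VIDEO_20241026_001_YTShorts_Clip1.mp4",
     "SSA_WF_20241026_007", "YouTube Shorts", "9:16", 92.5, "Ready for Publishing"),
    ("https://cdn.pantherahive.com/social-clips/PH_VIDEO_20241026_001_LinkedIn_Clip2.mp4",
     "SSA_WF_20241026_007", "LinkedIn", "1:1", 89.1, "Ready for Publishing"),
    ("https://cdn.pantherahive.com/social-clips/PH_VIDEO_20241026_001_X_Clip3.mp4",
     "SSA_WF_20241026_007", "X/Twitter", "16:9", 87.3, "Ready for Publishing"),
]
with conn:  # commit all three rows atomically
    conn.executemany("INSERT INTO social_clips VALUES (?, ?, ?, ?, ?, ?)", clips)

count = conn.execute("SELECT COUNT(*) FROM social_clips").fetchone()[0]
```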

Actionable Next Steps

With these optimized clips and their associated data now stored in your PantheraHive database, you can immediately proceed with your social media publishing strategy.

  1. Review and Verify:

* Access the generated clips via the provided URLs to review their content and ensure they meet your expectations.

* Verify the embedded CTA and the accuracy of the identified engagement segments.

  2. Publish to Social Platforms:

* Manual Publishing: Download the clips directly using the provided "Direct Download Links" and upload them to their respective platforms (YouTube Shorts, LinkedIn, X/Twitter). Remember to include the "Associated pSEO Landing Page" URL in your post description.

* Automated Publishing (Recommended): If integrated, utilize PantheraHive's social media publishing integrations to schedule and publish these clips directly from your dashboard. This will automatically associate the correct pSEO link and tracking parameters.
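
Whether you publish manually or via an integration, the essential inputs per post are the same: the clip's media URL, a caption, and the pSEO backlink in the post body. A minimal sketch of assembling that payload (the function and field names here are illustrative, not a documented PantheraHive API):

```python
def build_post(platform: str, caption: str, clip_url: str, pseo_url: str) -> dict:
    """Assemble a publish payload: clip media, caption, and the pSEO backlink.

    Hypothetical shape for illustration; adapt to whatever your
    publishing integration actually expects.
    """
    return {
        "platform": platform,
        "media_url": clip_url,
        # Keep the landing-page URL in the post text so every clip
        # carries a backlink to the pSEO page.
        "text": f"{caption}\n\n{pseo_url}",
    }

payload = build_post(
    platform="x",
    caption="How AI workflow automation saves hours every week",
    clip_url="https://cdn.pantherahive.com/social-clips/PH_VIDEO_20241026_001_X_Clip3.mp4",
    pseo_url="https://pantherahive.com/solutions/ai-workflow-automation",
)
```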

  3. Monitor Performance:

  • Track the performance of these clips on each social platform. Pay attention to engagement rates, click-through rates to your pSEO landing pages, and overall reach.

  • Use PantheraHive's analytics dashboard to correlate social signals with brand mention tracking and referral traffic metrics.
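
To make click-throughs attributable per platform and per clip, tag the pSEO URL before posting. One common approach is standard UTM parameters, sketched below with Python's standard library (the campaign and content values are illustrative):

```python
from urllib.parse import urlencode

def tag_pseo_url(base_url: str, platform: str, clip_id: str) -> str:
    """Append UTM parameters so referral traffic can be attributed per clip."""
    params = urlencode({
        "utm_source": platform,                      # e.g. "linkedin", "x", "youtube"
        "utm_medium": "social",
        "utm_campaign": "social_signal_automator",   # illustrative campaign name
        "utm_content": clip_id,                      # distinguishes individual clips
    })
    return f"{base_url}?{params}"

url = tag_pseo_url(
    "https://pantherahive.com/solutions/ai-workflow-automation",
    "linkedin",
    "PH_VIDEO_20241026_001_LinkedIn_Clip2",
)
# e.g. ...?utm_source=linkedin&utm_medium=social&...
```

Using the tagged URL in post descriptions lets the analytics dashboard separate traffic from each platform and clip rather than lumping it all under one referrer.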

  4. Strategic Planning:

  • Use the Vortex Hook Scores as a guide for future content creation: they indicate which types of moments resonate most with your audience.

  • Consider setting up recurring Social Signal Automator workflows for new content releases to maintain a consistent presence and continuously build brand authority.


This completes the "Social Signal Automator" workflow. Your marketing assets are now optimized and ready to boost your brand's presence and trust signals across key social platforms.

"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}