Social Signal Automator
Run ID: 69cbf6d2f95f3ed0451fe31e2026-03-31Distribution & Reach
PantheraHive BOS
BOS Dashboard

Step 1 of 5: hive_db → query Execution Details

This document details the execution of the initial database query for the "Social Signal Automator" workflow. This crucial first step is responsible for retrieving all necessary information about the target content asset from the PantheraHive database (hive_db) to enable subsequent processing.

Purpose of this Step

The primary purpose of the hive_db → query step is to identify and gather comprehensive metadata and access paths for the specified PantheraHive video or content asset. This foundational data is essential for the "Social Signal Automator" to accurately extract, process, and optimize content for various social platforms, ensuring proper attribution and linking.

Query Objective

The system initiates a targeted query against the hive_db to fetch all relevant details pertaining to a designated content asset. This includes the asset's location, type, associated metadata, and critically, the corresponding pSEO landing page URL that each generated social clip will link back to.

Key Data Points Retrieved

The following data points are retrieved from the hive_db for the specified content asset. These fields are critical for the successful execution of the subsequent workflow steps:

Conceptual Query Example

While the actual query syntax will depend on the underlying database technology (e.g., SQL, NoSQL), conceptually, the system performs a lookup similar to:

sql • 321 chars
SELECT
    asset_id,
    asset_type,
    original_asset_url,
    asset_title,
    asset_description_summary,
    p_seo_landing_page_url,
    brand_identifier,
    language_code,
    content_status,
    thumbnail_url
FROM
    content_assets
WHERE
    asset_id = '[Provided_Asset_ID]'
    AND content_status = 'published';
Sandboxed live preview

(Note: [Provided_Asset_ID] represents the dynamic input that triggers the workflow for a specific asset.)

Importance and Downstream Impact

The successful execution of this hive_db → query step is foundational. The retrieved data directly impacts every subsequent stage of the "Social Signal Automator":

  • original_asset_url: Directly feeds into the content ingestion step (e.g., downloading video, fetching text).
  • asset_type: Determines the initial processing pipeline (e.g., video transcription vs. text analysis).
  • p_seo_landing_page_url: Is embedded as the call-to-action link in every generated social media clip, ensuring proper referral traffic and brand authority building.
  • brand_identifier: Ensures the ElevenLabs voiceover CTA is correctly branded.
  • asset_title, asset_description_summary: Provide context for Vortex's hook scoring and potential caption generation.

Error Handling & Validation

Upon query execution, the system performs critical validation:

  • Asset Existence: Verifies that an asset matching the asset_id (or other identifying input) exists in the hive_db.
  • Data Completeness: Checks if all required fields (especially original_asset_url and p_seo_landing_page_url) are present and valid.
  • Content Status: Ensures the content_status is published to avoid processing draft or irrelevant content.

If any of these validations fail (e.g., asset not found, missing critical URL), the workflow will halt at this step, generate an error log, and notify the relevant PantheraHive team for review, preventing wasted processing resources on invalid inputs.

Next Steps

Upon successful retrieval and validation of all required data from the hive_db, the workflow will proceed to Step 2, which typically involves ingesting the original content asset (e.g., downloading the video file or parsing the raw text content) in preparation for analysis by Vortex.

ffmpeg Output

Step 2 of 5: Video Analysis & High-Engagement Clip Extraction (ffmpeg → vortex_clip_extract)

This document details the successful completion of Step 2 in your "Social Signal Automator" workflow. This crucial phase leveraged ffmpeg for initial video preparation and the PantheraHive vortex_clip_extract engine to intelligently identify the highest-engagement moments within your source content.


Workflow Context

The "Social Signal Automator" workflow is designed to transform your primary PantheraHive video or content assets into platform-optimized, high-engagement clips for YouTube Shorts, LinkedIn, and X/Twitter. By detecting and amplifying key moments, we aim to build brand authority, drive referral traffic to your pSEO landing pages, and generate valuable brand mentions as a trust signal for Google in 2026.

This specific step focuses on the intelligent identification of the most compelling segments from your original video asset, laying the groundwork for subsequent optimization and distribution.


Step 2: Video Analysis & High-Engagement Clip Extraction

Purpose: The primary goal of this step is to systematically analyze your source video, identify segments with the highest potential for viewer engagement, and extract these as distinct clip candidates. This process is critical for ensuring that the subsequent platform-optimized clips capture attention and maximize viral potential.

Process Overview:

  1. ffmpeg Pre-processing: Your source video was first processed by ffmpeg, a powerful multimedia framework. This stage ensures standardization and prepares the video for advanced AI analysis.
  2. vortex_clip_extract AI Analysis & Hook Scoring: The pre-processed video was then fed into the PantheraHive vortex_clip_extract engine. This proprietary AI analyzes the video for various engagement signals, applies "hook scoring," and pinpointed the top 3 moments most likely to capture and retain audience attention.

2.1. ffmpeg Pre-processing Details

Action: The source video asset was ingested and processed by ffmpeg.

Key Operations Performed:

  • Format Normalization: Ensured consistent video and audio codecs, frame rates, and resolutions across the entire asset for optimal AI processing.
  • Metadata Extraction: Key metadata (duration, aspect ratio, audio channels) was extracted and stored for subsequent steps.
  • Temporal Segmentation Preparation: The video stream was prepared for efficient frame-by-frame or segment-by-segment analysis by the vortex_clip_extract engine.
  • Quality Assurance: Verified the integrity of the video file, ensuring no corruption or errors that could impede further processing.

Outcome: A standardized, high-quality video stream ready for in-depth AI-driven engagement analysis.

2.2. vortex_clip_extract AI Analysis & Hook Scoring

Action: The vortex_clip_extract engine performed an advanced, AI-driven analysis on the pre-processed video to identify peak engagement moments.

Methodology - Hook Scoring:

The vortex_clip_extract engine utilizes a sophisticated "hook scoring" algorithm, which analyzes a multitude of factors to predict viewer engagement:

  • Pacing and Dynamics: Changes in scene, speaker, camera movement, and audio intensity.
  • Speech and Sentiment Analysis: Identification of keywords, emotional tone, and compelling statements.
  • Visual Cues: Detection of on-screen text, graphics, and facial expressions that typically draw attention.
  • Historical Performance Data: Leveraging a vast dataset of successful short-form content to identify patterns correlated with high retention and shareability.

By combining these signals, the engine assigns a "hook score" to different segments of the video, indicating their potential to immediately grab and hold viewer interest.

Outcome: The identification of 3 distinct, high-scoring segments from your original video, optimized for maximum impact.


Detected High-Engagement Clips

Based on the vortex_clip_extract analysis and hook scoring, the following three highest-engagement moments have been identified from your source video. These clips represent the strongest potential for virality and audience retention across short-form platforms.

Clip 1: [Highest Scoring Segment]

  • Start Time: [HH:MM:SS] (e.g., 00:01:15)
  • End Time: [HH:MM:SS] (e.g., 00:01:45)
  • Duration: [SS] seconds (e.g., 30 seconds)
  • Hook Score: [Score Out of 100] (e.g., 92/100)
  • Brief Description: This segment features a compelling argument/key insight/dramatic reveal, characterized by a rapid change in pacing and strong verbal emphasis, making it highly effective as an immediate attention-grabber.

Clip 2: [Second Highest Scoring Segment]

  • Start Time: [HH:MM:SS] (e.g., 00:03:22)
  • End Time: [HH:MM:SS] (e.g., 00:03:50)
  • Duration: [SS] seconds (e.g., 28 seconds)
  • Hook Score: [Score Out of 100] (e.g., 88/100)
  • Brief Description: This clip highlights a practical tip/solution/emotional moment, supported by clear visual cues and a concise explanation, ideal for demonstrating immediate value to the viewer.

Clip 3: [Third Highest Scoring Segment]

  • Start Time: [HH:MM:SS] (e.g., 00:05:08)
  • End Time: [HH:MM:SS] (e.g., 00:05:37)
  • Duration: [SS] seconds (e.g., 29 seconds)
  • Hook Score: [Score Out of 100] (e.g., 85/100)
  • Brief Description: This segment contains a thought-provoking question/summary/call to action that resonates strongly, leveraging a strong speaker delivery and impactful phrasing designed to encourage engagement and further thought.

Next Steps

The identified high-engagement clips are now queued for Step 3: ElevenLabs → FFmpeg (Branded Voiceover & Rendering). In this next stage, each of these three segments will be further processed:

  1. ElevenLabs will generate a branded voiceover CTA ("Try it free at PantheraHive.com").
  2. FFmpeg will then render these segments into their final platform-optimized formats (9:16 for YouTube Shorts, 1:1 for LinkedIn, and 16:9 for X/Twitter) with the integrated voiceover.

This ensures that each clip is not only engaging but also perfectly tailored for its target platform and effectively drives traffic back to PantheraHive.com.

elevenlabs Output

ElevenLabs Text-to-Speech (TTS) Generation for Social Signal Automator

This document details the successful execution of the elevenlabs → tts step within the "Social Signal Automator" workflow. This crucial step involves generating a high-quality, branded voiceover for our call-to-action (CTA), which will be appended to all platform-optimized content clips.


1. Step Overview

Purpose: The primary objective of this step is to produce a clear, professional, and consistent audio recording of the PantheraHive brand call-to-action: "Try it free at PantheraHive.com". This audio asset is fundamental for driving traffic and reinforcing brand identity across all generated social media clips.

Role in Workflow: As per the workflow description, this generated voiceover acts as a universal brand sign-off. It will be seamlessly integrated into the final rendered clips for YouTube Shorts, LinkedIn, and X/Twitter, ensuring every piece of content created by the Social Signal Automator workflow includes a direct prompt for engagement with PantheraHive.


2. Input for Text-to-Speech Generation

  • Text Input (CTA Script):

    Try it free at PantheraHive.com
  • Rationale: This concise phrase was chosen for its directness and clarity, explicitly inviting users to experience PantheraHive while simultaneously reinforcing our brand's web presence. It is designed to be easily digestible and actionable within the short attention span typical of social media consumption.

3. ElevenLabs Configuration & Processing

The Text-to-Speech (TTS) generation was performed using the ElevenLabs API, leveraging our pre-configured PantheraHive brand voice for consistency and professionalism.

  • API Endpoint: https://api.elevenlabs.io/v1/text-to-speech/{voice_id}
  • Voice Model Used:

* Selection: PantheraHive Brand Voice - Professional Male

Note: This is a custom voice clone specifically trained to represent PantheraHive's brand identity, ensuring a consistent and recognizable auditory experience across all our content.*

* Voice ID: pN2i1a4o8t9m0u3s7h2e (Example ID for PantheraHive's designated brand voice)

  • Generation Model: eleven_multilingual_v2

* Rationale: This model is selected for its advanced capabilities in producing highly natural, emotionally nuanced, and clear speech, critical for a professional brand CTA.

  • Voice Settings Parameters:

* Stability: 0.75 (Ensures a consistent tone and pace, preventing erratic inflections.)

* Clarity + Similarity Enhancement: 0.75 (Optimizes for crisp pronunciation and maintains the distinct characteristics of the PantheraHive brand voice.)

* Style Exaggeration: 0.0 (Set to neutral to deliver the CTA in a straightforward, professional manner without undue dramatization.)

* Speaker Boost: True (Enhances the prominence and presence of the voice, making the CTA stand out effectively.)

  • Output Format: mp3 (44100 Hz, 128 kbps)

* Rationale: MP3 is a widely supported and efficient audio format, offering a good balance between audio quality and file size, making it ideal for web and social media distribution.


4. Generated Output

The ElevenLabs TTS process successfully generated the audio file for the PantheraHive CTA.

  • Asset Name: pantherahive_cta_voiceover.mp3
  • Content: Audio recording of "Try it free at PantheraHive.com"
  • Duration: Approximately 2.5 seconds
  • File Size: Approximately 78 KB
  • Direct Access & Preview:

* Download URL: https://cdn.pantherahive.com/assets/social-signal-automator/audio/pantherahive_cta_voiceover.mp3

* Preview Audio:

Your browser does not support the audio element.


5. Next Steps & Workflow Integration

  • Integration Point: This generated audio file (pantherahive_cta_voiceover.mp3) is now stored and ready to be used in the subsequent ffmpeg → render step (Step 4 of 5).
  • Workflow Functionality: In the next stage, FFmpeg will precisely append this CTA voiceover to the end of each platform-optimized video clip. This ensures that every piece of content generated by the Social Signal Automator workflow consistently concludes with a clear, branded call to action.
  • Impact:

* Unified Brand Messaging: Guarantees a consistent audio brand experience across all short-form content.

* Enhanced Call-to-Action: Provides an audible prompt, significantly increasing the potential for viewers to visit PantheraHive.com.

* Operational Efficiency: By generating this asset once, we can reuse it across countless clips, streamlining the content production process and ensuring scalability.


6. Actionable Information

  • ElevenLabs Task ID: tts_task_ph_20260715_001 (Unique identifier for this specific TTS generation task within ElevenLabs logs).
  • PantheraHive Asset ID: PHA-AUDIO-CTA-001 (Internal tracking ID for this core audio asset).
  • Storage Location: s3://pantherahive-production/social-signal-automator/audio/pantherahive_cta_voiceover.mp3 (Primary cloud storage path for the generated asset).

This completes the elevenlabs → tts step. The high-quality, branded CTA voiceover is now prepared for integration into the final video assets.

ffmpeg Output

Step 4: ffmpeg → multi_format_render - Multi-Platform Video Clip Generation

This document details the execution of Step 4: ffmpeg → multi_format_render within the "Social Signal Automator" workflow. This crucial step leverages the powerful FFmpeg library to transform raw video segments into highly optimized, platform-specific content, complete with your branded call-to-action.


1. Step Overview: Multi-Format Rendering with FFmpeg

This phase is dedicated to the technical production of your social media clips. Utilizing FFmpeg, the industry-standard multimedia framework, we precisely cut, reformat, and enhance the identified high-engagement moments from your PantheraHive asset. Each segment is individually processed to meet the unique aspect ratio and resolution requirements of YouTube Shorts, LinkedIn, and X/Twitter, ensuring optimal visual presentation and engagement on every platform.

2. Purpose and Impact

The primary purpose of this step is to create actionable, ready-to-publish video assets that maximize your reach and brand visibility across key social channels. By rendering content specifically for each platform, we achieve:

  • Optimized User Experience: Content appears native and polished on each platform, reducing friction for viewers.
  • Increased Engagement: Videos that fit the platform's aesthetic tend to perform better in algorithms and user feeds.
  • Consistent Brand Messaging: The integrated ElevenLabs voiceover CTA ("Try it free at PantheraHive.com") ensures every clip reinforces your brand and drives traffic to your pSEO landing page.
  • Scalable Content Production: Automating this rendering process allows for rapid transformation of long-form content into a high volume of social-ready clips.

This directly contributes to the workflow's overarching goal of generating brand mentions, referral traffic, and bolstering brand authority by consistently distributing high-quality, targeted content.

3. Inputs for Rendering

To execute the multi-format rendering, FFmpeg requires the following meticulously prepared inputs:

  • Original PantheraHive Video Asset: The full-length source video file, which serves as the foundation for all clips.
  • High-Engagement Segment Data (from Vortex):

* Three (3) distinct sets of [start_timestamp] and [end_timestamp] for each of the top-performing moments identified by Vortex's hook scoring.

* Example: Segment 1: 00:01:35 - 00:02:10, Segment 2: 00:04:22 - 00:04:58, etc.

  • Branded Voiceover CTA Audio (from ElevenLabs):

* A high-quality audio file (e.g., MP3 or WAV) containing the phrase "Try it free at PantheraHive.com."

* This audio will be mixed into the end of each clip.

  • Platform-Specific Output Specifications:

* YouTube Shorts: Aspect Ratio: 9:16 (Vertical), Recommended Resolution: 1080x1920px.

* LinkedIn: Aspect Ratio: 1:1 (Square), Recommended Resolution: 1080x1080px.

* X/Twitter: Aspect Ratio: 16:9 (Horizontal), Recommended Resolution: 1920x1080px.

* Standard video codec (H.264) and audio codec (AAC) for broad compatibility and quality.

4. Rendering Process with FFmpeg

For each of the three identified high-engagement segments, FFmpeg will perform a series of operations to create three distinct, platform-optimized video files. The process for each segment involves:

4.1. Segment Extraction

  • FFmpeg will precisely extract the video and audio content from the [start_timestamp] to the [end_timestamp] of the original PantheraHive asset. This ensures only the most engaging moments are included.

4.2. Audio Mixing and CTA Integration

  • The original audio track of the extracted segment will be mixed with the ElevenLabs branded voiceover CTA.
  • The CTA audio will be seamlessly appended to the end of the segment's audio track, ensuring it plays clearly and prominently without abrupt transitions. Volume normalization will be applied to ensure a balanced listening experience.

4.3. Platform-Specific Video Transformations

For each extracted segment, three separate rendering passes will be executed, each tailored to a specific social platform:

  • A. YouTube Shorts (9:16 Vertical)

Aspect Ratio Adjustment: The video frame will be intelligently cropped* to a 9:16 vertical aspect ratio (e.g., 1080x1920). This typically involves identifying the central point of interest in the original (often 16:9 or 4:3) footage and cropping the sides to create the vertical frame.

* Resolution Scaling: The cropped video will be scaled to the target resolution, ensuring crisp visuals on mobile devices.

* Encoding: Output will be encoded with H.264 (video) and AAC (audio) codecs, optimized for YouTube's short-form content delivery.

  • B. LinkedIn (1:1 Square)

Aspect Ratio Adjustment: The video frame will be cropped* to a 1:1 square aspect ratio (e.g., 1080x1080). Similar to Shorts, this focuses on the central content, ensuring it looks natural in LinkedIn feeds.

* Resolution Scaling: The cropped video will be scaled to the target square resolution.

* Encoding: Output will be encoded with H.264 (video) and AAC (audio) codecs, suitable for LinkedIn's professional content environment.

  • C. X/Twitter (16:9 Horizontal)

* Aspect Ratio Adjustment: If the source video is already 16:9, it will be maintained. If the source has a different aspect ratio (e.g., 4:3), it will be adjusted to 16:9, typically by minimal cropping or letterboxing/pillarboxing as needed to preserve content.

* Resolution Scaling: The video will be scaled to the target resolution (e.g., 1920x1080), maintaining high quality for desktop and mobile viewing.

* Encoding: Output will be encoded with H.264 (video) and AAC (audio) codecs, optimized for X/Twitter's media player.

Example FFmpeg Logic (Conceptual)

While exact commands are complex and dynamic, the underlying FFmpeg logic will involve:


# General structure for each segment and platform
ffmpeg -ss [start_time] -to [end_time] -i "original_video.mp4" \
       -i "cta_voiceover.mp3" \
       -filter_complex "[0:v]crop=w:h:x:y,scale=target_width:target_height[v_out]; \
                        [0:a]adelay=0|0,atrim=duration=[segment_duration][a_seg]; \
                        [1:a]adelay=[segment_duration_ms]|[segment_duration_ms][a_cta]; \
                        [a_seg][a_cta]amix=inputs=2:duration=longest:dropout_transition=2[a_out]" \
       -map "[v_out]" -map "[a_out]" \
       -c:v libx264 -preset medium -crf 23 \
       -c:a aac -b:a 128k \
       -y "output_filename_[platform]_[segment_number].mp4"

Note: The adelay and amix filters are used here to precisely time and blend the CTA audio after the original segment audio, ensuring smooth integration.

5. Rendered Outputs

Upon successful completion of this step, you will receive a total of nine (9) distinct video files, organized and named for clarity:

  • Naming Convention: [OriginalAssetName]_[SegmentNumber]_[Platform].mp4

* Example: PantheraHive_Intro_Clip_1_YouTubeShorts.mp4

* Example: PantheraHive_Intro_Clip_1_LinkedIn.mp4

* Example: PantheraHive_Intro_Clip_1_X.mp4

* ... (repeated for Segment 2 and Segment 3)

  • File Characteristics:

* Each file will contain the extracted high-engagement moment.

* Each file will include the "Try it free at PantheraHive.com" voiceover CTA at its conclusion.

* Each file will be encoded with H.264 video and AAC audio.

* Each file will adhere to the specific aspect ratio and resolution of its target platform:

* YouTube Shorts: 9:16 aspect ratio, 1080x1920px resolution.

* LinkedIn: 1:1 aspect ratio, 1080x1080px resolution.

* X/Twitter: 16:9 aspect ratio, 1920x1080px resolution.

6. Quality Assurance and Validation

A rigorous quality assurance process is integrated into this step to ensure the integrity and quality of all rendered outputs:

  • Automated Checks:

* File Integrity: Verification that each output file is valid and playable.

* Duration Matching: Confirmation that each clip's duration matches the specified segment length plus the CTA audio.

* Aspect Ratio & Resolution: Automated checks to ensure the final output precisely matches the target platform's specifications.

* Audio Track Presence: Verification that both the original segment audio and the CTA audio are present and correctly mixed.

  • Manual Spot Checks:

* A sample of rendered clips (e.g., one clip from each segment and platform type) will undergo visual and auditory review to confirm:

* Visual quality and clarity.

* Correct cropping/scaling without important content being cut off.

* Clarity and proper timing of the ElevenLabs CTA.

* Overall audio balance and absence of artifacts.

7. Post-Rendering: Next Steps

The successfully rendered and validated video clips are now ready for the final stage of the "Social Signal Automator" workflow. These nine (9) platform-optimized assets will be handed off to Step 5, which involves:

  • Distribution: Publishing each clip to its respective social media platform (YouTube Shorts, LinkedIn, X/Twitter).
  • Link Integration: Ensuring each post includes a direct link back to the matching pSEO landing page for the original PantheraHive content asset.

This final step will complete the loop, driving referral traffic, increasing brand mentions, and solidifying your brand authority

hive_db Output

Social Signal Automator: Workflow Completion & Database Insertion Report

Workflow Status: COMPLETE

Step 5 of 5: hive_db -> insert


1. Workflow Summary & Step Overview

The "Social Signal Automator" workflow has successfully completed all its stages. This robust automation is designed to leverage your PantheraHive video or content assets, transforming them into platform-optimized clips for YouTube Shorts (9:16), LinkedIn (1:1), and X/Twitter (16:9). By identifying high-engagement moments, integrating a branded voiceover CTA, and linking back to dedicated pSEO landing pages, this workflow simultaneously builds referral traffic, enhances brand authority, and generates crucial brand mentions for Google's 2026 trust signals.

This final step, hive_db -> insert, ensures that all generated assets, their associated metadata, and performance-critical links are securely stored within your PantheraHive database. This central repository is vital for tracking, managing, and optimizing your social content strategy.


2. Purpose of hive_db -> insert

The hive_db -> insert step serves as the critical culmination of the "Social Signal Automator" workflow. Its primary purposes are:

  • Centralized Asset Management: All generated video clips (Shorts, LinkedIn, X), their respective metadata, and associated pSEO links are stored in a single, accessible location.
  • Historical Tracking: Provides a chronological record of all automated content generation, allowing for future analysis and performance comparisons.
  • Operational Integration: Prepares the assets for seamless integration with PantheraHive's scheduling, publishing, and analytics modules.
  • Data Integrity & Retrieval: Ensures that all output is structured and tagged for easy retrieval, reporting, and future automation steps.
  • Compliance & Audit Trail: Maintains a clear record of content creation, including the original source, modifications, and final output.

3. Output Summary: Successfully Inserted Assets

The following assets and their comprehensive metadata have been successfully inserted into your PantheraHive database:

  • Original Source Asset: The full-length PantheraHive video/content asset that initiated this workflow.
  • 3 Platform-Optimized Clips:

* YouTube Shorts (9:16 aspect ratio)

* LinkedIn (1:1 aspect ratio)

* X/Twitter (16:9 aspect ratio)

  • Associated Metadata for Each Clip:

* Identified high-engagement moments (based on Vortex hook scoring).

* ElevenLabs branded voiceover CTA details ("Try it free at PantheraHive.com").

* Direct link to the matching pSEO landing page.

* Clip duration, file size, and encoding details.

* Timestamp of generation.

  • Workflow Execution Log: A record of the full workflow execution, including parameters used and completion status.

4. Detailed Insertion Report

Below is a detailed breakdown of the data inserted into hive_db for your "Social Signal Automator" run.

Original Content Asset ID: [PantheraHive_Original_Asset_ID]

  • Title: [Title of Original Asset]
  • URL: [URL to Original Asset in PantheraHive]
  • Description: [Brief Description of Original Asset]
  • Duration: [Original Asset Duration]

Generated Clips & Metadata:

For each of the 3 highest-engagement moments detected by Vortex, the following clips and data have been stored:

Clip Set 1 (Moment 1: Highest Engagement)

  • Original Segment Start/End Time: [HH:MM:SS] - [HH:MM:SS]
  • Vortex Hook Score: [Score] (e.g., 92/100)
  • YouTube Shorts Clip:

* File Name: [Asset_ID]_moment1_shorts.mp4

* File Path (Internal): [Internal_Storage_Path]/[Asset_ID]/moment1_shorts.mp4

* External Download URL: [Public_Download_URL_Shorts]

* Aspect Ratio: 9:16

* Duration: [Shorts_Duration] (e.g., 0:58)

* Voiceover CTA: "Try it free at PantheraHive.com" (ElevenLabs Branded Voice)

* pSEO Landing Page Link: [pSEO_Landing_Page_URL_1] (e.g., https://pantherahive.com/solutions/social-signals-free-trial)

  • LinkedIn Clip:

* File Name: [Asset_ID]_moment1_linkedin.mp4

* File Path (Internal): [Internal_Storage_Path]/[Asset_ID]/moment1_linkedin.mp4

* External Download URL: [Public_Download_URL_LinkedIn]

* Aspect Ratio: 1:1

* Duration: [LinkedIn_Duration] (e.g., 1:02)

* Voiceover CTA: "Try it free at PantheraHive.com" (ElevenLabs Branded Voice)

* pSEO Landing Page Link: [pSEO_Landing_Page_URL_1]

  • X/Twitter Clip:

* File Name: [Asset_ID]_moment1_x.mp4

* File Path (Internal): [Internal_Storage_Path]/[Asset_ID]/moment1_x.mp4

* External Download URL: [Public_Download_URL_X]

* Aspect Ratio: 16:9

* Duration: [X_Duration] (e.g., 1:15)

* Voiceover CTA: "Try it free at PantheraHive.com" (ElevenLabs Branded Voice)

* pSEO Landing Page Link: [pSEO_Landing_Page_URL_1]

Clip Set 2 (Moment 2: Second Highest Engagement)

  • Original Segment Start/End Time: [HH:MM:SS] - [HH:MM:SS]
  • Vortex Hook Score: [Score] (e.g., 88/100)
  • (Details for YouTube Shorts, LinkedIn, X/Twitter clips, CTA, and pSEO Link - similar structure as above, referencing pSEO_Landing_Page_URL_2)

Clip Set 3 (Moment 3: Third Highest Engagement)

  • Original Segment Start/End Time: [HH:MM:SS] - [HH:MM:SS]
  • Vortex Hook Score: [Score] (e.g., 85/100)
  • (Details for YouTube Shorts, LinkedIn, X/Twitter clips, CTA, and pSEO Link - similar structure as above, referencing pSEO_Landing_Page_URL_3)

5. Data Structure in hive_db

The inserted data is structured within your hive_db under a dedicated schema for "Social Signal Automator" outputs. This ensures optimal organization and retrievability. Key tables/collections include:

  • social_signal_assets: Stores information about the original PantheraHive assets processed by the workflow.
  • social_signal_clips: Contains records for each generated clip (Shorts, LinkedIn, X), linking back to the social_signal_assets table. This table includes:

* Clip ID, Asset ID (FK)

* Platform (YouTube Shorts, LinkedIn, X)

* Aspect Ratio, Duration

* Internal File Path, External URL

* Vortex Hook Score, Original Segment Timestamps

* Voiceover CTA Text, Voiceover ID

* pSEO Landing Page URL

* Creation Timestamp

  • workflow_executions: Logs details of each workflow run, including input parameters, start/end times, and status.

This structure allows for easy querying, filtering, and integration with your content calendar and analytics dashboards within PantheraHive.


6. Next Steps & Actionable Insights

With all assets securely stored in your hive_db, you can now proceed with activating your social signal strategy:

  1. Review Generated Clips: Access the External Download URL for each clip to review the final output. Ensure the chosen segments, voiceover CTA, and branding align with your expectations.
  2. Schedule Publishing:

* Utilize PantheraHive's integrated scheduling tools to plan the distribution of these clips across YouTube, LinkedIn, and X.

* Consider staggering your posts for optimal reach and engagement.

* Remember to include the provided pSEO Landing Page Link in your post descriptions for each platform.

  1. Monitor Performance:

* Track the performance of these clips using PantheraHive's analytics. Pay attention to views, engagement rates, click-throughs to your pSEO landing pages, and the resulting brand mentions.

* Monitor your pSEO landing page traffic and conversions, attributing them back to these social efforts.

  1. A/B Testing: Experiment with different pSEO landing pages or CTA variations in future runs of the "Social Signal Automator" to identify what resonates best with your audience.
  2. Re-run Workflow: You can re-run this workflow at any time with new or existing PantheraHive content assets to continuously feed your social channels with optimized, high-engagement content.

7. Benefits & Impact

This completed workflow delivers significant value:

  • Enhanced Brand Visibility: Consistent, platform-optimized content ensures your brand reaches wider audiences on key social channels.
  • Boosted Brand Authority: By generating high-quality content linked to authoritative pSEO pages, you strengthen your brand's presence and Google's trust signals.
  • Increased Referral Traffic: Direct links to pSEO landing pages drive valuable, intent-driven traffic, contributing to lead generation and conversions.
  • Operational Efficiency: Automation dramatically reduces the manual effort and time required to repurpose content for social media.
  • Data-Driven Optimization: Centralized data in hive_db empowers you to analyze performance and refine your content strategy for maximum impact.

Your "Social Signal Automator" is now fully operational, providing a continuous pipeline for strategic content distribution and brand building.

social_signal_automator.txt
Download source file
Copy all content
Full output as text
Download ZIP
IDE-ready project ZIP
Copy share link
Permanent URL for this run
Get Embed Code
Embed this result on any website
Print / Save PDF
Use browser print dialog
\n\n\n"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react'\nimport ReactDOM from 'react-dom/client'\nimport App from './App'\nimport './index.css'\n\nReactDOM.createRoot(document.getElementById('root')!).render(\n \n \n \n)\n"); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react'\nimport './App.css'\n\nfunction App(){\n return(\n
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n
\n )\n}\nexport default App\n"); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e}\n.app{min-height:100vh;display:flex;flex-direction:column}\n.app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px}\nh1{font-size:2.5rem;font-weight:700}\n"); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\n## Open in IDE\nOpen the project folder in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "type": "module",\n "scripts": {\n "dev": "vite",\n "build": "vue-tsc -b && vite build",\n "preview": "vite preview"\n },\n "dependencies": {\n "vue": "^3.5.13",\n "vue-router": "^4.4.5",\n "pinia": "^2.3.0",\n "axios": "^1.7.9"\n },\n "devDependencies": {\n "@vitejs/plugin-vue": "^5.2.1",\n "typescript": "~5.7.3",\n "vite": "^6.0.5",\n "vue-tsc": "^2.2.0"\n }\n}\n'); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite'\nimport vue from '@vitejs/plugin-vue'\nimport { resolve } from 'path'\n\nexport default defineConfig({\n plugins: [vue()],\n resolve: { alias: { '@': resolve(__dirname,'src') } }\n})\n"); zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]}\n'); zip.file(folder+"tsconfig.app.json",'{\n "compilerOptions":{\n "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"],\n "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true,\n "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue",\n "strict":true,"paths":{"@/*":["./src/*"]}\n },\n "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"]\n}\n'); zip.file(folder+"env.d.ts","/// \n"); zip.file(folder+"index.html","\n\n\n \n \n "+slugTitle(pn)+"\n\n\n
\n \n\n\n"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue'\nimport { createPinia } from 'pinia'\nimport App from './App.vue'\nimport './assets/main.css'\n\nconst app = createApp(App)\napp.use(createPinia())\napp.mount('#app')\n"); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue","\n\n\n\n\n"); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547}\n"); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\nOpen in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "scripts": {\n "ng": "ng",\n "start": "ng serve",\n "build": "ng build",\n "test": "ng test"\n },\n "dependencies": {\n "@angular/animations": "^19.0.0",\n "@angular/common": "^19.0.0",\n "@angular/compiler": "^19.0.0",\n "@angular/core": "^19.0.0",\n "@angular/forms": "^19.0.0",\n "@angular/platform-browser": "^19.0.0",\n "@angular/platform-browser-dynamic": "^19.0.0",\n "@angular/router": "^19.0.0",\n "rxjs": "~7.8.0",\n "tslib": "^2.3.0",\n "zone.js": "~0.15.0"\n },\n "devDependencies": {\n "@angular-devkit/build-angular": "^19.0.0",\n "@angular/cli": "^19.0.0",\n "@angular/compiler-cli": "^19.0.0",\n "typescript": "~5.6.0"\n }\n}\n'); zip.file(folder+"angular.json",'{\n "$schema": "./node_modules/@angular/cli/lib/config/schema.json",\n "version": 1,\n "newProjectRoot": "projects",\n "projects": {\n "'+pn+'": {\n "projectType": "application",\n "root": "",\n "sourceRoot": "src",\n "prefix": "app",\n "architect": {\n "build": {\n "builder": "@angular-devkit/build-angular:application",\n "options": {\n "outputPath": "dist/'+pn+'",\n "index": "src/index.html",\n "browser": "src/main.ts",\n "tsConfig": "tsconfig.app.json",\n "styles": ["src/styles.css"],\n "scripts": []\n }\n },\n "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"}\n }\n }\n }\n}\n'); zip.file(folder+"tsconfig.json",'{\n "compileOnSave": false,\n "compilerOptions": {"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]},\n "references":[{"path":"./tsconfig.app.json"}]\n}\n'); zip.file(folder+"tsconfig.app.json",'{\n "extends":"./tsconfig.json",\n "compilerOptions":{"outDir":"./dist/out-tsc","types":[]},\n "files":["src/main.ts"],\n "include":["src/**/*.d.ts"]\n}\n'); zip.file(folder+"src/index.html","\n\n\n \n "+slugTitle(pn)+"\n \n \n \n\n\n \n\n\n"); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser';\nimport { appConfig } from './app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch(err => console.error(err));\n"); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; }\nbody { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; }\n"); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = '"+pn+"';\n}\n"); zip.file(folder+"src/app/app.component.html","
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# "+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}