Social Signal Automator
Run ID: 69cb15a242bc43f7e3be749e | 2026-03-31 | Distribution & Reach
PantheraHive BOS

Workflow Step 1 of 5: hive_db → Query

Workflow Name: Social Signal Automator

Description: In 2026, Google tracks Brand Mentions as a trust signal. This workflow takes any PantheraHive video or content asset and turns it into platform-optimized clips for YouTube Shorts (9:16), LinkedIn (1:1), and X/Twitter (16:9). Vortex detects the 3 highest-engagement moments using hook scoring, ElevenLabs adds a branded voiceover CTA ("Try it free at PantheraHive.com"), and FFmpeg renders each format. Each clip links back to the matching pSEO landing page — building referral traffic and brand authority simultaneously.


Step 1: hive_db Content Asset Query

This initial step is crucial for identifying and retrieving the foundational content asset that will be processed by the "Social Signal Automator" workflow. The hive_db serves as the central repository for all PantheraHive content.

1.0 Purpose of This Step

The primary objective of this hive_db query is to identify and retrieve the single most eligible content asset to feed into the downstream clip-extraction, voiceover, and rendering steps.

1.1 Query Parameters & Logic

Given that no specific asset ID was provided in the initial workflow trigger, the system will query for the most recently published eligible asset, one that has either never undergone social signal automation or was last processed outside the defined recency window.

Database: hive_db

Table(s): content_assets (primary), potentially asset_metadata or processing_history for status checks.

Fields to Retrieve:

The query will fetch the essential data points for the selected asset, such as its asset ID, title, asset type, source file URL or path, publish date, duration, and current processing status.

Query Conditions (Prioritization Logic):

The query will prioritize assets based on the following criteria to ensure optimal workflow execution:

  1. Status: content_assets.status = 'published' (Only live, publicly available content).
  2. Asset Type: content_assets.asset_type IN ('video', 'article') (Matching workflow scope).
  3. Automation Eligibility:

* content_assets.social_signal_automation_status = 'pending' (Assets explicitly marked for automation).

* OR content_assets.last_processed_date IS NULL (Assets never processed before).

* OR content_assets.last_processed_date < CURRENT_DATE - INTERVAL '30 days' (Assets older than 30 days since last processing, allowing for re-automation and fresh brand mentions).

  4. Order & Limit:

* ORDER BY content_assets.publish_date DESC (Prioritize newer content).

* LIMIT 1 (Select the single most relevant asset for this workflow run).
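The prioritization logic above can be sketched as a single query. The table and column names follow the descriptions in this step but are assumptions about the hive_db schema; the snippet runs against an in-memory SQLite table purely for illustration.

```python
import sqlite3

# Hypothetical schema mirroring the content_assets fields described above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE content_assets (
        asset_id TEXT PRIMARY KEY,
        title TEXT,
        asset_type TEXT,
        status TEXT,
        publish_date TEXT,
        social_signal_automation_status TEXT,
        last_processed_date TEXT
    )
""")
conn.executemany(
    "INSERT INTO content_assets VALUES (?,?,?,?,?,?,?)",
    [
        ("a1", "Old video", "video", "published", "2026-01-01", "done", "2026-01-02"),
        ("a2", "New video", "video", "published", "2026-03-20", "pending", None),
        ("a3", "Draft",     "video", "draft",     "2026-03-25", "pending", None),
    ],
)

# Prioritization logic: published video/article assets that are pending,
# never processed, or last processed more than 30 days ago; newest first.
row = conn.execute("""
    SELECT asset_id, title
    FROM content_assets
    WHERE status = 'published'
      AND asset_type IN ('video', 'article')
      AND (social_signal_automation_status = 'pending'
           OR last_processed_date IS NULL
           OR last_processed_date < date('now', '-30 days'))
    ORDER BY publish_date DESC
    LIMIT 1
""").fetchone()
```

Only the published, pending asset survives the filters here, so the newest eligible record ("a2") is selected.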

1.2 Expected Output

Upon successful execution, this step will return a JSON object (or equivalent structured data) containing the detailed metadata for the selected PantheraHive content asset.

Example Output (JSON):

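An illustrative shape for the returned object (the field names are assumptions based on the fields and tables described elsewhere in this workflow, not the actual hive_db schema):

```json
{
  "asset_id": "PH_Video_123",
  "title": "How PantheraHive Automates pSEO at Scale",
  "asset_type": "video",
  "status": "published",
  "original_url": "https://pantherahive.com/assets/PH_Video_123.mp4",
  "publish_date": "2026-03-20T09:00:00Z",
  "duration_seconds": 612,
  "social_signal_automation_status": "pending",
  "last_processed_date": null
}
```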
1.3 Error Handling & Edge Cases

  • No Suitable Asset Found: If the query returns no results (e.g., no published assets match the criteria, or all have been recently processed), the step will return an error status so the workflow can halt rather than proceed without an asset.

This detailed output ensures transparency regarding the asset selection process and provides all necessary data for the subsequent steps of the "Social Signal Automator" workflow.

ffmpeg Output

Social Signal Automator Workflow: Step 2 of 5 - Video Segment Extraction

This document details the execution of Step 2: ffmpeg → vortex_clip_extract for the "Social Signal Automator" workflow. This crucial step focuses on identifying and extracting the most engaging moments from your original PantheraHive video assets, preparing them for platform-specific optimization.


1. Step Overview

In this phase, your primary PantheraHive video content is analyzed by the advanced Vortex AI engine to pinpoint the three highest-engagement segments. Leveraging sophisticated "hook scoring" methodologies, Vortex identifies moments most likely to capture and retain viewer attention. FFmpeg is then utilized to precisely extract these identified segments from the original source video, delivering three raw, high-potential clips ready for further enhancement.


2. Input Received

  • Source Asset: The original, full-length PantheraHive video or content asset specified for this workflow.

* Format: (e.g., MP4, MOV, MKV – as provided by the user)

* Resolution: (e.g., 1920x1080, 4K – original resolution)

* Duration: (e.g., 5 minutes, 30 minutes – original length)

* Audio: Embedded stereo audio track.


3. Process Details: ffmpeg → vortex_clip_extract

This step is executed in three distinct sub-phases, combining the robust capabilities of FFmpeg with the intelligent analysis of Vortex.

3.1. Phase 1: Initial Video Preparation (FFmpeg)

Before Vortex can perform its analysis, FFmpeg ensures the video is in an optimal and consistent format.

  • Action: The input video is first processed by FFmpeg to standardize its encoding and extract necessary components for Vortex's analysis.
  • Key Operations:

* Codec Normalization: If the source video uses an unconventional codec, FFmpeg transcodes it to a standard, highly compatible format (e.g., H.264 for video, AAC for audio) to ensure seamless processing by Vortex.

* Audio Stream Extraction: The primary audio track is extracted as a separate file (e.g., WAV or FLAC) for detailed acoustic analysis by Vortex's hook scoring algorithms, which may include speech-to-text and sentiment analysis.

* Metadata Extraction: Essential metadata such as frame rate, resolution, and duration are parsed and passed to Vortex.

  • Purpose: To provide Vortex with a standardized, high-quality, and easily analyzable input, preventing compatibility issues and ensuring accurate scoring.

3.2. Phase 2: High-Engagement Moment Detection (Vortex AI)

This is the core intelligence phase where Vortex identifies the most compelling segments.

  • Action: The Vortex AI engine analyzes the prepared video stream and its extracted audio using advanced "hook scoring" methodologies.
  • Hook Scoring Mechanism (2026 Context): Vortex utilizes a multi-modal AI model, trained on extensive datasets of viral short-form content and user engagement metrics, to predict moments of high audience retention and interest. This includes:

* Acoustic Analysis: Detecting changes in speech tempo, vocal modulation, emotional tone (e.g., excitement, intrigue), presence of rhetorical questions, and specific keywords/phrases known to increase engagement.

* Visual Dynamics: Identifying rapid scene changes, introduction of new visual elements, facial expressions conveying strong emotion, on-screen text overlays, and dynamic camera movements.

* Content Structure Analysis: Recognizing patterns indicative of problem-solution setups, suspense building, unexpected reveals, or powerful summary statements.

* Predictive Engagement Modeling: Leveraging a neural network to score each segment based on its potential to act as a strong "hook" for short-form content, aiming for maximum initial viewer retention.

  • Output: Vortex identifies and outputs the precise start and end timestamps for the three (3) highest-scoring engagement moments. Each moment is typically between 15 and 60 seconds in duration, optimized for short-form content platforms.
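As a toy illustration of the selection step (Vortex's actual scoring model is proprietary and not shown here), picking the three highest-scoring candidate segments within the duration window might look like:

```python
def top_moments(segments, k=3, min_len=15, max_len=60):
    """Return the k highest-scoring segments whose duration fits the
    15-60 s window expected for short-form clips.

    `segments` is a list of (start_s, end_s, hook_score) tuples, a
    hypothetical shape for Vortex's scored output."""
    eligible = [
        s for s in segments
        if min_len <= (s[1] - s[0]) <= max_len
    ]
    return sorted(eligible, key=lambda s: s[2], reverse=True)[:k]

moments = top_moments([
    (0, 40, 0.91),
    (50, 55, 0.99),    # only 5 s long: too short, dropped despite high score
    (60, 110, 0.74),
    (120, 150, 0.88),
])
```

Note that the highest raw score is discarded because its segment is too short to stand alone as a clip.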

3.3. Phase 3: Raw Clip Extraction (FFmpeg)

Once Vortex has identified the target segments, FFmpeg performs the physical extraction.

  • Action: FFmpeg takes the original prepared video asset and, using the timestamps provided by Vortex, extracts the specified segments without re-encoding or altering their original properties.
  • FFmpeg Command Logic: For each identified segment, FFmpeg executes a command similar to:

    ffmpeg -ss [START_TIMESTAMP] -i [INPUT_VIDEO_PATH] -t [CLIP_DURATION] -c copy [OUTPUT_CLIP_PATH]

* -ss [START_TIMESTAMP]: Seeks to the start time of the clip. Placed before -i, this is a fast input seek; combined with -c copy, the cut snaps to the nearest keyframe rather than the exact frame.

* -i [INPUT_VIDEO_PATH]: Specifies the input source video.

* -t [CLIP_DURATION]: Specifies the clip length (end timestamp minus start timestamp). A duration is used because an input seek resets timestamps to zero, so an absolute -to end time would be misinterpreted.

* -c copy: This crucial flag ensures that the video and audio streams are copied directly without re-encoding. This preserves the original quality and avoids any generational loss, making the extraction extremely fast and lossless.

  • Result: Three distinct, raw video clips, each representing a high-engagement moment, maintaining the original source's resolution, aspect ratio, and quality.
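The extraction loop can be sketched as follows. The source path and timestamp list are placeholders; each command list can be passed to `subprocess.run(cmd, check=True)` on a host with FFmpeg installed.

```python
def extract_clips(src, segments, out_prefix="clip"):
    """Build one stream-copy FFmpeg extraction command per segment.

    `segments` is a list of (start_s, end_s) pairs, e.g. the timestamps
    emitted by Vortex. Stream copy (-c copy) avoids re-encoding, so cuts
    land on the nearest keyframe rather than the exact frame."""
    cmds = []
    for i, (start, end) in enumerate(segments, 1):
        out = f"{out_prefix}_{i:02d}.mp4"
        cmds.append([
            "ffmpeg", "-y",
            "-ss", str(start),       # seek before -i: fast input seek
            "-i", src,
            "-t", str(end - start),  # duration relative to the seek point
            "-c", "copy",            # lossless stream copy, no re-encode
            out,
        ])
    return cmds

commands = extract_clips("source.mp4", [(12.0, 47.5), (95.0, 130.0), (210.0, 245.0)])
```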

4. Output Deliverables

Upon successful completion of this step, the following assets are generated and made available for the next stage of the workflow:

  • Three (3) Raw High-Engagement Video Clips:

* Clip 1: Identified as the highest-scoring engagement moment.

* Clip 2: Identified as the second highest-scoring engagement moment.

* Clip 3: Identified as the third highest-scoring engagement moment.

* Format: Original source format (e.g., MP4) with -c copy ensuring no re-encoding.

* Resolution: Original source resolution (e.g., 1920x1080).

* Aspect Ratio: Original source aspect ratio (e.g., 16:9).

* Audio: Original embedded audio track.

* Duration: Typically between 15 and 60 seconds per clip, as determined by Vortex.

  • Associated Metadata File: A JSON or YAML file containing:

* Original source asset ID.

* Full path to each extracted clip.

* Start and end timestamps (original video timeline) for each clip.

* Vortex "hook score" for each clip.

* Clip duration.


5. Next Steps in Workflow

The three raw, high-engagement video clips generated in this step will now proceed to Step 3 of the "Social Signal Automator" workflow. In the subsequent steps, these clips will undergo:

  • Branded Voiceover Integration: ElevenLabs will add a consistent, branded voiceover CTA ("Try it free at PantheraHive.com") to each clip.
  • Platform-Optimized Rendering: FFmpeg will then render each clip into the specific aspect ratios required for YouTube Shorts (9:16), LinkedIn (1:1), and X/Twitter (16:9), incorporating any necessary visual adjustments.
  • pSEO Landing Page Linking: Each final clip will be associated with its matching pSEO landing page for referral traffic and brand authority.

This concludes the detailed output for Step 2: ffmpeg → vortex_clip_extract. The workflow is now ready to proceed with enhancing these compelling raw segments.

elevenlabs Output

Step 3 of 5: ElevenLabs Text-to-Speech (TTS) for Branded Voiceover

This document details the execution of the ElevenLabs Text-to-Speech (TTS) step within your "Social Signal Automator" workflow. This crucial step ensures consistent brand messaging across all generated video clips, driving user engagement and directing traffic to your platform.


1. Step Overview

The primary objective of this step is to transform a predefined brand call-to-action (CTA) into a high-quality, professional audio segment using ElevenLabs' advanced Text-to-Speech technology. This audio will then be integrated into each platform-optimized video clip (YouTube Shorts, LinkedIn, X/Twitter) generated in the subsequent rendering phase.

By leveraging ElevenLabs, we guarantee:

  • Consistent Brand Voice: A uniform, recognizable voice for your CTA across all content.
  • High Fidelity Audio: Natural-sounding speech that enhances the professionalism of your video assets.
  • Scalability: Automated generation of voiceovers for an unlimited number of content assets without manual recording.

2. Input for ElevenLabs TTS

The specific text designated for conversion into speech, as per the workflow definition, is your branded call-to-action:

  • CTA Text: "Try it free at PantheraHive.com"

This concise and actionable phrase is designed to prompt viewers to explore your platform immediately after consuming the engaging clip content.


3. ElevenLabs Configuration Parameters

To ensure optimal audio quality and brand consistency, the following parameters are applied when generating the voiceover via ElevenLabs:

  • Voice ID: A pre-selected, consistent "PantheraHive Brand Voice" will be utilized. This voice is chosen for its professional, authoritative, and engaging tone, ensuring it aligns with your brand's identity. (e.g., a custom voice clone or a carefully selected premium stock voice from ElevenLabs' library).
  • Model: eleven_multilingual_v2 (or the latest equivalent high-fidelity model) will be used. This model offers superior naturalness, intonation, and emotional range, crucial for a compelling CTA.
  • Stability: Set to an optimal level (e.g., 0.75). This parameter controls the variability of the voice. A slightly higher stability ensures a consistent tone and pace, which is desirable for a direct CTA.
  • Clarity + Similarity Enhancement: Set to an optimal level (e.g., 0.95). This parameter enhances the clarity and fidelity of the speech, making it crisp and easy to understand, even in potentially noisy playback environments.
  • Output Format: mp3 at 128 kbps (or wav for maximum quality if preferred for final mixing). MP3 offers a good balance of quality and file size, suitable for efficient integration into video editing workflows.
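A minimal sketch of the corresponding API call, assuming the ElevenLabs v1 text-to-speech REST endpoint; the voice ID and API key shown are placeholders that would come from workflow configuration, not real values.

```python
import json
import urllib.request

API_KEY = "ELEVENLABS_API_KEY"        # placeholder: injected from secrets
VOICE_ID = "panthera-brand-voice-id"  # placeholder: the brand voice clone

def build_tts_request(text):
    """Assemble the TTS request for the branded CTA using the parameters
    described above (model, stability, similarity)."""
    payload = {
        "text": text,
        "model_id": "eleven_multilingual_v2",
        "voice_settings": {"stability": 0.75, "similarity_boost": 0.95},
    }
    return urllib.request.Request(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        data=json.dumps(payload).encode(),
        headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    )

req = build_tts_request("Try it free at PantheraHive.com")
# urllib.request.urlopen(req).read() would return the audio bytes
```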

4. Expected Output

Upon successful execution of this step, the output will be a high-quality audio file containing the spoken phrase "Try it free at PantheraHive.com."

  • File Type: .mp3 (or .wav)
  • Audio Content: A clear, professionally articulated voiceover of the phrase "Try it free at PantheraHive.com."
  • Duration: Approximately 2-3 seconds, depending on the chosen voice and natural speaking pace. This short duration is ideal for a quick, impactful closing statement in short-form video content.
  • Metadata: The audio file will be appropriately named (e.g., PantheraHive_CTA_Voiceover.mp3) for easy identification and integration.

5. Integration with Subsequent Steps

This generated audio file is a critical component for the subsequent steps in the "Social Signal Automator" workflow:

  1. Video Rendering (FFmpeg): The PantheraHive_CTA_Voiceover.mp3 will be seamlessly appended to the end of each platform-optimized video clip (YouTube Shorts, LinkedIn, X/Twitter). This ensures that every piece of content concludes with a clear, consistent call to action.
  2. Brand Authority & Traffic Generation: By embedding this voiceover, the clips not only drive referral traffic to PantheraHive.com but also reinforce brand recognition and authority, contributing to the "Brand Mentions as a trust signal" objective.

This automated voiceover generation ensures that your brand message is consistently and professionally delivered across all your distributed content, maximizing the impact of your social signal strategy.

ffmpeg Output

Step 4 of 5: ffmpeg → multi_format_render - Platform-Optimized Clip Generation

This step is crucial for transforming the identified high-engagement moments into polished, platform-specific video clips ready for distribution. Utilizing the powerful FFmpeg library, we will meticulously render each clip, incorporating the branded voiceover CTA and optimizing for the unique requirements of YouTube Shorts, LinkedIn, and X/Twitter. Each output will be designed to maximize engagement and effectively drive referral traffic back to your pSEO landing pages.


1. Overview and Objectives

The primary objective of the ffmpeg -> multi_format_render step is to:

  • Generate High-Quality Video Clips: Produce visually appealing and audibly clear video content from the selected segments.
  • Ensure Platform Optimization: Tailor resolution, aspect ratio, and encoding parameters for optimal performance and user experience on each target platform.
  • Integrate Brand Messaging: Seamlessly embed the ElevenLabs branded voiceover CTA ("Try it free at PantheraHive.com") and optionally a visual brand identifier.
  • Enhance Accessibility: Burn in subtitles/captions (if generated by Vortex) for broader audience reach and silent viewing contexts.
  • Prepare for Distribution: Deliver ready-to-upload video files with appropriate naming conventions and embedded metadata.

Inputs for this Step:

  • Source video segment (extracted from the original PantheraHive asset based on Vortex's hook scoring).
  • ElevenLabs generated branded voiceover CTA audio file.
  • (Optional) SRT or VTT subtitle file generated by Vortex for the segment.

2. Core FFmpeg Principles and Parameters

FFmpeg is the industry standard for video and audio processing, offering granular control over every aspect of media manipulation. For this workflow, we leverage its capabilities for:

  • Video Encoding: Using H.264 (libx264) for broad compatibility and excellent compression efficiency, suitable for web distribution.
  • Audio Encoding: Using AAC (Advanced Audio Coding) for high-quality, efficient audio compression.
  • Resolution and Aspect Ratio Management: Intelligent scaling and cropping to fit each platform's desired dimensions.
  • Audio Mixing: Combining the original video's audio track with the ElevenLabs voiceover CTA.
  • Subtitle Burn-in: Permanently embedding captions onto the video frame for accessibility.
  • Metadata Embedding: Adding relevant information, including the pSEO landing page URL, to the output files.

3. Detailed Rendering Specifications per Platform

Each of the three target platforms has distinct requirements and user behaviors. Our FFmpeg rendering profiles are specifically tuned for these differences:

3.1. YouTube Shorts (9:16 Vertical)

  • Purpose: Maximize visibility and engagement on YouTube's vertical short-form video feed. Ideal for quick, impactful messages.
  • Resolution: 1080x1920 pixels (Full HD vertical).
  • Aspect Ratio: 9:16 (Vertical).
  • Video Codec: libx264 (H.264).

* Bitrate: Approximately 4-6 Mbps (variable bitrate for efficiency).

* Frame Rate: Matches source (typically 25 or 30 fps).

  • Audio Codec: aac.

* Bitrate: 128 kbps stereo.

  • Subtitle Integration: Strongly Recommended. Subtitles will be burned directly onto the video, styled for readability on mobile devices (e.g., white text with black outline/background).
  • ElevenLabs CTA Integration:

* Audio CTA blended into the final seconds of the clip.

* Visual text overlay ("Try it free at PantheraHive.com") appearing concurrently with the audio CTA, positioned clearly without obstructing key visuals.

  • Scaling/Cropping Logic: The source video will be scaled to fill the 1080x1920 frame while preserving its aspect ratio. If the source is wider than 9:16, it will be center-cropped horizontally (left/right edges trimmed); if taller, it will be center-cropped vertically (top/bottom trimmed).
  • Example FFmpeg Command Snippet (Conceptual):

    ffmpeg -i input_video.mp4 -i elevenlabs_cta.mp3 -filter_complex \
    "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,setsar=1[v_scaled]; \
    [0:a]volume=0.8[a_original]; \
    [a_original][1:a]amix=inputs=2:duration=longest:weights='1 1'[a_mixed]; \
    [v_scaled]subtitles=input.srt:force_style='Fontname=Arial,PrimaryColour=&H00FFFFFF,OutlineColour=&H00000000,BorderStyle=1,Outline=2,Shadow=0,Alignment=2,MarginV=50'[v_final]" \
    -map "[v_final]" -map "[a_mixed]" -c:v libx264 -preset medium -crf 23 -c:a aac -b:a 128k -movflags +faststart \
    -metadata comment="pSEO_URL_HERE" output_shorts.mp4

3.2. LinkedIn (1:1 Square)

  • Purpose: Professional, attention-grabbing content for LinkedIn's feed, optimized for both desktop and mobile viewing. Square format is highly effective in professional contexts.
  • Resolution: 1080x1080 pixels (Full HD square).
  • Aspect Ratio: 1:1 (Square).
  • Video Codec: libx264 (H.264).

* Bitrate: Approximately 3-5 Mbps (variable bitrate).

* Frame Rate: Matches source.

  • Audio Codec: aac.

* Bitrate: 128 kbps stereo.

  • Subtitle Integration: Highly Recommended. LinkedIn users often browse with sound off. Burned-in subtitles are critical for conveying your message effectively.
  • ElevenLabs CTA Integration:

* Audio CTA blended into the final seconds of the clip.

* Visual text overlay ("Try it free at PantheraHive.com") appearing concurrently, positioned for maximum visibility within the square frame.

  • Scaling/Cropping Logic: The source video will be scaled to fit the 1080x1080 frame. If the source is wider than 1:1, it will be center-cropped horizontally. If the source is taller than 1:1, it will be center-cropped vertically.
  • Example FFmpeg Command Snippet (Conceptual):

    ffmpeg -i input_video.mp4 -i elevenlabs_cta.mp3 -filter_complex \
    "[0:v]scale=1080:1080:force_original_aspect_ratio=increase,crop=1080:1080,setsar=1[v_scaled]; \
    [0:a]volume=0.8[a_original]; \
    [a_original][1:a]amix=inputs=2:duration=longest:weights='1 1'[a_mixed]; \
    [v_scaled]subtitles=input.srt:force_style='Fontname=Arial,PrimaryColour=&H00FFFFFF,OutlineColour=&H00000000,BorderStyle=1,Outline=2,Shadow=0,Alignment=2,MarginV=50'[v_final]" \
    -map "[v_final]" -map "[a_mixed]" -c:v libx264 -preset medium -crf 23 -c:a aac -b:a 128k -movflags +faststart \
    -metadata comment="pSEO_URL_HERE" output_linkedin.mp4

3.3. X/Twitter (16:9 Horizontal)

  • Purpose: Standard landscape format for broad compatibility on X (formerly Twitter), ideal for more traditional video sharing.
  • Resolution: 1920x1080 pixels (Full HD) or 1280x720 pixels (HD) for smaller file sizes. We will default to 1920x1080 for maximum quality.
  • Aspect Ratio: 16:9 (Horizontal).
  • Video Codec: libx264 (H.264).

* Bitrate: Approximately 5-8 Mbps (variable bitrate).

* Frame Rate: Matches source.

  • Audio Codec: aac.

* Bitrate: 128 kbps stereo.

  • Subtitle Integration: Recommended. While not as critical as for Shorts or LinkedIn, subtitles improve accessibility and engagement.
  • ElevenLabs CTA Integration:

* Audio CTA blended into the final seconds of the clip.

* Visual text overlay ("Try it free at PantheraHive.com") appearing concurrently, typically as a lower-third graphic.

  • Scaling/Cropping Logic: The source video will be scaled to fit within the 1920x1080 frame while preserving its aspect ratio. If the source is wider than 16:9, it will be letterboxed (black bars top/bottom); if taller, it will be pillarboxed (black bars left/right).
  • Example FFmpeg Command Snippet (Conceptual):

    ffmpeg -i input_video.mp4 -i elevenlabs_cta.mp3 -filter_complex \
    "[0:v]scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1[v_scaled]; \
    [0:a]volume=0.8[a_original]; \
    [a_original][1:a]amix=inputs=2:duration=longest:weights='1 1'[a_mixed]; \
    [v_scaled]subtitles=input.srt:force_style='Fontname=Arial,PrimaryColour=&H00FFFFFF,OutlineColour=&H00000000,BorderStyle=1,Outline=2,Shadow=0,Alignment=2,MarginV=50'[v_final]" \
    -map "[v_final]" -map "[a_mixed]" -c:v libx264 -preset medium -crf 23 -c:a aac -b:a 128k -movflags +faststart \
    -metadata comment="pSEO_URL_HERE" output_twitter.mp4

4. Integration Details: ElevenLabs CTA & Subtitles

4.1. ElevenLabs Voiceover CTA

  • The generated audio file from ElevenLabs will be precisely timed to appear at the end of each selected high-engagement video clip segment.
  • FFmpeg's amix filter will be used to blend this CTA audio with the original video's audio track. The original audio will be slightly ducked (reduced in volume) during the CTA to ensure clarity of the "Try it free at PantheraHive.com" message.
  • A visual text overlay for the CTA will be added using FFmpeg's drawtext or overlay filter, appearing simultaneously with the audio.
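One way to place the CTA at the clip's tail is to delay it with `adelay` by the clip duration minus the CTA duration, ducking the original track only once the CTA begins. This is a sketch of such a filtergraph builder; the command snippets in section 3 use a simpler static volume reduction instead.

```python
def cta_mix_filter(clip_dur_s, cta_dur_s, duck=0.4):
    """Build an FFmpeg -filter_complex audio chain that delays the CTA
    so it ends with the clip, ducks the original audio while the CTA
    plays, and mixes both tracks. Durations are in seconds."""
    cta_start = max(clip_dur_s - cta_dur_s, 0)
    delay_ms = int(cta_start * 1000)
    return (
        # duck the original audio only from the moment the CTA starts
        f"[0:a]volume={duck}:enable='gte(t,{cta_start})'[a_duck];"
        # shift the CTA to the tail of the clip (both stereo channels)
        f"[1:a]adelay={delay_ms}|{delay_ms}[a_cta];"
        f"[a_duck][a_cta]amix=inputs=2:duration=first[a_out]"
    )

filtergraph = cta_mix_filter(clip_dur_s=30.0, cta_dur_s=2.5)
```

For smoother results, `sidechaincompress` could replace the hard volume switch, but the timeline-enabled `volume` filter keeps the graph easy to reason about.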

4.2. Subtitle/Caption Integration

  • If a .srt or .vtt subtitle file is provided by Vortex (from its transcription service), FFmpeg's subtitles filter will be employed to burn these captions directly into the video frames.
  • Styling: Subtitles will be styled for optimal readability across platforms, typically with a clear, sans-serif font (e.g., Arial, Helvetica), white text color, and a subtle black outline or background box for contrast. Position will be at the bottom center, leaving space for other potential brand elements.

5. Brand Element Integration (Optional but Recommended)

To further enhance brand authority and recognition, the system can integrate additional brand elements:

  • Logo Overlay: A subtle, semi-transparent PantheraHive logo can be placed in a corner (e.g., top-right) of each video clip using FFmpeg's overlay filter.
  • Lower Third Graphic: For the CTA, a professionally designed lower-third graphic (containing "Try it free at PantheraHive.com") can be overlaid instead of plain text, providing a more polished brand presentation.

6. Output File Naming Convention & Metadata

Ensuring easy identification and tracking, each rendered clip will adhere to a consistent naming convention and include essential metadata:

  • File Naming Convention:

[Original_Asset_ID]_[Engagement_Clip_Number]_[Platform].mp4

* Example: PH_Video_123_Clip_01_YouTubeShorts.mp4

* Example: PH_Blog_456_Clip_02_LinkedIn.mp4

  • Metadata Embedding:

* The comment field within the MP4 file will contain the full URL to the corresponding pSEO landing page. This is used downstream to attribute referral traffic and brand-mention signals from each published clip back to its source landing page.
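The naming convention above can be enforced with a small helper (a sketch; the platform labels mirror the examples given earlier):

```python
def rendition_filename(asset_id, clip_number, platform):
    """Format: [Original_Asset_ID]_Clip_[NN]_[Platform].mp4"""
    return f"{asset_id}_Clip_{clip_number:02d}_{platform}.mp4"

name = rendition_filename("PH_Video_123", 1, "YouTubeShorts")
```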

hive_db Output

Step 5 of 5: hive_db → insert - Social Signal Automator Output

This final step of the "Social Signal Automator" workflow is critical for persisting all generated content, metadata, and performance signals into the PantheraHive database (hive_db). By systematically inserting this data, we ensure that every piece of derived content is trackable, manageable, and auditable, forming the foundation for future analysis, publication, and reporting.


Purpose of this Step

The hive_db insertion step serves several key purposes:

  1. Data Persistence: Permanently stores the details of the original asset, the extracted high-engagement clips, and all platform-optimized video renditions.
  2. Content Management: Creates a centralized record of all generated social media assets, enabling easy retrieval, review, and publication.
  3. Performance Tracking: Lays the groundwork for tracking the performance of each clip and rendition on its respective platform (once published), linking back to the original content and pSEO landing pages.
  4. Auditability & Reporting: Provides a comprehensive log of the automated process, including input assets, processing details, and final outputs, crucial for internal reporting and compliance.
  5. Scalability: Enables the system to manage a large volume of content efficiently, avoiding re-processing and ensuring data integrity.

Data Model Overview

The hive_db insertion will involve populating several interconnected tables to capture the full scope of the Social Signal Automator's output. The primary entities being inserted are:

  • social_signal_assets: Details about the original PantheraHive video or content asset that initiated the workflow.
  • social_signal_clips: Information about the 3 high-engagement moments (clips) identified by Vortex.
  • social_signal_renditions: Specific platform-optimized video files rendered by FFmpeg for each clip.
  • workflow_execution_logs: Metadata about the execution of this specific workflow run.

Detailed Data Fields for Insertion

Below is a detailed breakdown of the data fields that will be inserted into hive_db across the relevant tables:

1. social_signal_assets Table

This table stores the metadata for the original PantheraHive content asset that was processed.

  • asset_id (UUID, Primary Key): Unique identifier for the original PantheraHive asset.
  • asset_type (VARCHAR): Type of the original asset (e.g., 'video', 'blog_post', 'podcast_episode').
  • original_url (TEXT): The direct URL to the original PantheraHive content asset.
  • title (TEXT): The title of the original content asset.
  • description (TEXT): A brief description of the original content asset.
  • ingestion_date (TIMESTAMP): Timestamp when the original asset was first processed by this workflow.
  • status (VARCHAR): Current status of the original asset within the workflow (e.g., 'processed', 'archived').

2. social_signal_clips Table

This table stores the details for each of the 3 high-engagement clips extracted from the original asset.

  • clip_id (UUID, Primary Key): Unique identifier for this specific extracted clip.
  • asset_id (UUID, Foreign Key): Links to the asset_id in the social_signal_assets table.
  • clip_index (INTEGER): An index identifying the clip's order (e.g., 1, 2, or 3 for the top 3 moments).
  • start_timestamp_seconds (INTEGER): The start time (in seconds) of the clip within the original asset.
  • end_timestamp_seconds (INTEGER): The end time (in seconds) of the clip within the original asset.
  • duration_seconds (INTEGER): The calculated duration of the clip in seconds.
  • vortex_hook_score (DECIMAL): The engagement score assigned by Vortex, indicating its potential to hook viewers.
  • auto_generated_title (TEXT): A system-generated title for the clip, based on its content or context.
  • auto_generated_description (TEXT): A system-generated short description for the clip.
  • cta_text (TEXT): The exact branded voiceover CTA text added (e.g., "Try it free at PantheraHive.com").
  • elevenlabs_voiceover_url (TEXT): URL to the generated ElevenLabs voiceover audio file (if stored separately).
  • p_seo_landing_page_url (TEXT): The specific pSEO landing page URL this clip is designed to drive traffic to.
  • creation_date (TIMESTAMP): Timestamp when this clip record was created.

3. social_signal_renditions Table

This table stores the details for each platform-optimized video file rendered for each clip. Since there are 3 clips and 3 platforms, this table will receive 9 new entries per workflow run.

  • rendition_id (UUID, Primary Key): Unique identifier for this specific rendered video file.
  • clip_id (UUID, Foreign Key): Links to the clip_id in the social_signal_clips table.
  • platform (VARCHAR): The target social media platform (e.g., 'youtube_shorts', 'linkedin', 'x_twitter').
  • aspect_ratio (VARCHAR): The aspect ratio of the rendered video (e.g., '9:16', '1:1', '16:9').
  • video_file_url (TEXT): The secure URL to the final rendered video file in PantheraHive's cloud storage (e.g., S3 bucket).
  • suggested_caption (TEXT): An AI-generated caption optimized for the specific platform.
  • suggested_hashtags (JSONB or TEXT Array): A list of relevant hashtags, optimized for discoverability on the target platform.
  • status (VARCHAR): The current status of the rendition (e.g., 'rendered', 'ready_for_review', 'published', 'failed').
  • rendering_date (TIMESTAMP): Timestamp when the video rendition was successfully created.
  • ffmpeg_logs_url (TEXT, Optional): URL to FFmpeg logs for debugging purposes.

4. workflow_execution_logs Table

This table logs the overall execution of the "Social Signal Automator" workflow.

  • execution_id (UUID, Primary Key): Unique identifier for this specific workflow run.
  • workflow_name (VARCHAR): Name of the workflow (e.g., 'Social Signal Automator').
  • input_asset_id (UUID, Foreign Key): Links to the asset_id in social_signal_assets that triggered this run.
  • start_timestamp (TIMESTAMP): Timestamp when the workflow execution began.
  • end_timestamp (TIMESTAMP): Timestamp when the workflow execution completed.
  • status (VARCHAR): Overall status of the workflow run ('SUCCESS', 'FAILED', 'PARTIAL_SUCCESS').
  • triggered_by (VARCHAR): Identifier of the user or system that initiated the workflow.
  • log_details_url (TEXT, Optional): URL to detailed execution logs for troubleshooting.
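The clip-to-rendition relationship can be sketched with two of the tables above (columns abbreviated; in-memory SQLite is used for illustration, so production hive_db types such as UUID and JSONB are simplified to TEXT):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE social_signal_clips (
        clip_id TEXT PRIMARY KEY,
        asset_id TEXT,
        clip_index INTEGER,
        vortex_hook_score REAL
    );
    CREATE TABLE social_signal_renditions (
        rendition_id TEXT PRIMARY KEY,
        clip_id TEXT REFERENCES social_signal_clips(clip_id),
        platform TEXT,
        aspect_ratio TEXT,
        status TEXT DEFAULT 'ready_for_review'
    );
""")

PLATFORMS = [("youtube_shorts", "9:16"), ("linkedin", "1:1"), ("x_twitter", "16:9")]

# 3 clips x 3 platforms = 9 rendition rows per workflow run
for i in range(1, 4):
    clip_id = f"clip-{i}"
    db.execute("INSERT INTO social_signal_clips VALUES (?,?,?,?)",
               (clip_id, "PH_Video_123", i, 0.9 - 0.1 * i))
    for platform, ratio in PLATFORMS:
        db.execute(
            "INSERT INTO social_signal_renditions "
            "(rendition_id, clip_id, platform, aspect_ratio) VALUES (?,?,?,?)",
            (f"{clip_id}-{platform}", clip_id, platform, ratio))

rendition_count = db.execute(
    "SELECT COUNT(*) FROM social_signal_renditions").fetchone()[0]
```

Leaving `status` to its default means every new rendition lands in the 'ready_for_review' state described under Actionable Outcomes.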

Actionable Outcomes & Next Steps

Upon successful insertion of data into hive_db, the following actionable outcomes are enabled:

  1. Content Review & Approval: The newly created social_signal_renditions are marked as 'ready_for_review'. A notification can be triggered for the content team to review the generated clips, captions, and hashtags.
  2. Automated Publishing Queue: Approved renditions can be automatically moved into a publishing queue for scheduling and distribution to YouTube Shorts, LinkedIn, and X/Twitter.
  3. Performance Dashboard Integration: The stored data forms the backbone for PantheraHive's analytics dashboard, allowing tracking of views, clicks, engagement rates, and referral traffic generated by each clip.
  4. A/B Testing & Optimization: With detailed metadata, future iterations can A/B test different CTAs, caption styles, or clip lengths to optimize performance.
  5. Brand Authority Reporting: Aggregated data from all published clips contributes directly to reports on brand mention growth and overall brand authority signals, validating the workflow's effectiveness.
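As an illustration of outcomes 1 and 2, the review and publishing queues fall out of a single query against the renditions table. A minimal sketch, again using SQLite as a stand-in for hive_db (only a subset of the columns described above is modeled, and the sample rows are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE social_signal_renditions (
        rendition_id   TEXT PRIMARY KEY,
        platform       TEXT,   -- e.g. 'youtube_shorts' | 'linkedin' | 'x_twitter'
        status         TEXT,   -- 'rendered' | 'ready_for_review' | 'published' | 'failed'
        rendering_date TEXT
    )
""")
conn.executemany(
    "INSERT INTO social_signal_renditions VALUES (?, ?, ?, ?)",
    [
        ("r1", "youtube_shorts", "ready_for_review", "2026-03-31T10:00:00Z"),
        ("r2", "linkedin",       "published",        "2026-03-30T09:00:00Z"),
        ("r3", "x_twitter",      "ready_for_review", "2026-03-31T10:05:00Z"),
    ],
)

# Review queue: every rendition awaiting approval, oldest first.
queue = conn.execute(
    "SELECT rendition_id, platform FROM social_signal_renditions "
    "WHERE status = 'ready_for_review' ORDER BY rendering_date"
).fetchall()
print(queue)  # [('r1', 'youtube_shorts'), ('r3', 'x_twitter')]
```

Once a reviewer approves a rendition, flipping its status to 'published' removes it from this queue without any other bookkeeping.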

Confirmation

This concludes Step 5 of 5 of the "Social Signal Automator" workflow. All generated content metadata, rendered video URLs, and workflow execution logs have been inserted into hive_db, ensuring data integrity and enabling subsequent content review, publication, and performance tracking.
