Social Signal Automator
Run ID: 69cb4ddf61b1021a29a87d21 · 2026-03-31 · Distribution & Reach

Social Signal Automator Workflow: Step 1 of 5 - hive_db Query Result

This document details the comprehensive output from the hive_db → query step for the "Social Signal Automator" workflow. This initial step successfully retrieved all necessary source content, associated landing page, and branding configurations from the PantheraHive database to proceed with clip generation.


Workflow Step Details


Query Output: Retrieved Data

The following data structure represents the successful retrieval of information from the PantheraHive database, tailored for the "Social Signal Automator" workflow. This data will be passed to subsequent steps for processing.

{
  "workflow_execution_id": "SSA-2026-001-XYZ789",
  "source_asset": {
    "asset_id": "VID-2026-04-15-PH-A1B2C3D4",
    "asset_type": "video",
    "asset_title": "Mastering AI-Powered Content Creation with PantheraHive",
    "asset_description": "An in-depth guide to leveraging PantheraHive's AI capabilities for efficient and high-quality content generation, from ideation to distribution.",
    "asset_internal_path": "s3://pantherahive-content-archive/videos/2026/mastering-ai-content-creation-full.mp4",
    "asset_public_url": "https://app.pantherahive.com/content/mastering-ai-content-creation",
    "asset_duration_seconds": 1850,
    "asset_creation_date": "2026-04-15T10:30:00Z",
    "asset_tags": ["AI", "Content Creation", "Workflow Automation", "Tutorial", "Video Marketing"]
  },
  "pseo_landing_page": {
    "page_id": "PSEO-LP-AI-CONTENT-2026",
    "page_url": "https://pantherahive.com/solutions/ai-content-automation-platform",
    "page_title": "PantheraHive: AI-Powered Content Automation for Modern Marketers",
    "page_keywords": ["AI content platform", "automated content creation", "marketing automation AI", "PantheraHive features"]
  },
  "branding_elements": {
    "brand_name": "PantheraHive",
    "brand_website_url": "PantheraHive.com",
    "cta_voiceover_text": "Try it free at PantheraHive.com",
    "cta_voice_id": "elevenlabs-voice-panthera-hive-professional-f",
    "brand_logo_url": "s3://pantherahive-assets/branding/pantherahive-logo-transparent-white.png",
    "brand_primary_color_hex": "#7C3AED",
    "brand_secondary_color_hex": "#FBBF24"
  },
  "workflow_configuration": {
    "target_platforms": [
      { "name": "YouTube Shorts", "aspect_ratio": "9:16", "max_duration_seconds": 60 },
      { "name": "LinkedIn", "aspect_ratio": "1:1", "max_duration_seconds": 60 },
      { "name": "X/Twitter", "aspect_ratio": "16:9", "max_duration_seconds": 140 }
    ],
    "clip_selection_method": "Vortex_Hook_Scoring",
    "minimum_clip_duration_seconds": 15,
    "maximum_clips_per_asset": 3
  }
}

Detailed Explanation of Retrieved Data

1. workflow_execution_id

  • Description: A unique identifier for this specific execution of the "Social Signal Automator" workflow. This helps track the lifecycle of the generated clips and their performance.
  • Example Value: SSA-2026-001-XYZ789

2. source_asset

  • Description: Contains all critical information about the original PantheraHive content asset that will be repurposed.

    * asset_id: Unique identifier for the content asset.
    * asset_type: Specifies the format of the source content (e.g., "video", "blog_post", "webinar_recording").
    * asset_title: The full title of the original content.
    * asset_description: A brief summary of the content.
    * asset_internal_path: The secure internal storage path (e.g., an S3 URI) to the high-resolution source file. This is crucial for FFmpeg to access the content.
    * asset_public_url: The user-facing URL to the original content on the PantheraHive platform.
    * asset_duration_seconds: The total length of the video asset in seconds.
    * asset_creation_date: Timestamp indicating when the asset was created.
    * asset_tags: Relevant keywords or categories associated with the asset.

3. pseo_landing_page

  • Description: Details about the PantheraHive pSEO (Programmatic SEO) landing page that each generated clip will link back to. This ensures referral traffic is directed to an optimized page for conversions and brand authority.

    * page_id: Unique identifier for the landing page.
    * page_url: The full, canonical URL of the pSEO landing page. This is the primary link for the clip CTAs.
    * page_title: The title of the landing page, used for internal reference and context.
    * page_keywords: A list of target keywords for the pSEO page, providing context for content alignment.

4. branding_elements

  • Description: Essential branding information required for consistent messaging and visual identity across all generated clips.

    * brand_name: The official brand name.
    * brand_website_url: The primary website URL, used in the voiceover CTA.
    * cta_voiceover_text: The exact text for the ElevenLabs branded voiceover call-to-action.
    * cta_voice_id: An identifier for the specific ElevenLabs voice profile to be used for the CTA, ensuring consistent brand voice.
    * brand_logo_url: The internal storage path to the brand's logo, which can be overlaid on clips by FFmpeg.
    * brand_primary_color_hex: Hexadecimal code for the brand's primary color, potentially used for text overlays or graphic elements.
    * brand_secondary_color_hex: Hexadecimal code for the brand's secondary color.

5. workflow_configuration

  • Description: Specific parameters and settings that guide the clip generation process, ensuring output is optimized for each target platform.

    * target_platforms: A list of social media platforms for which clips will be generated, including their required aspect ratios and maximum durations.
      * name: Name of the platform (e.g., "YouTube Shorts").
      * aspect_ratio: The required video aspect ratio for the platform (e.g., "9:16", "1:1", "16:9").
      * max_duration_seconds: The maximum allowed duration for clips on that specific platform.
    * clip_selection_method: Specifies the AI method used to identify high-engagement moments (e.g., "Vortex_Hook_Scoring").
    * minimum_clip_duration_seconds: Ensures that generated clips are not too short to convey meaningful information.
    * maximum_clips_per_asset: Limits the total number of clips generated from a single source asset, preventing content saturation.
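These constraints lend themselves to a quick pre-flight check before clip generation starts. The sketch below is illustrative only — the validate_workflow_config helper is not part of the PantheraHive API — but it shows how the retrieved configuration can be sanity-checked:

```python
# Illustrative validation of the workflow_configuration block retrieved in Step 1.
# The helper name is hypothetical; PantheraHive's actual validation is internal.

def validate_workflow_config(config: dict) -> list[str]:
    """Return a list of problems found in a workflow_configuration dict."""
    problems = []
    min_dur = config["minimum_clip_duration_seconds"]
    for platform in config["target_platforms"]:
        # Every platform must allow clips at least as long as the global minimum.
        if platform["max_duration_seconds"] < min_dur:
            problems.append(
                f"{platform['name']}: max {platform['max_duration_seconds']}s "
                f"is below the {min_dur}s minimum clip length"
            )
    if config["maximum_clips_per_asset"] < 1:
        problems.append("maximum_clips_per_asset must be at least 1")
    return problems

config = {
    "target_platforms": [
        {"name": "YouTube Shorts", "aspect_ratio": "9:16", "max_duration_seconds": 60},
        {"name": "LinkedIn", "aspect_ratio": "1:1", "max_duration_seconds": 60},
        {"name": "X/Twitter", "aspect_ratio": "16:9", "max_duration_seconds": 140},
    ],
    "clip_selection_method": "Vortex_Hook_Scoring",
    "minimum_clip_duration_seconds": 15,
    "maximum_clips_per_asset": 3,
}

print(validate_workflow_config(config))  # an empty list means the config is usable
```

An empty problem list confirms the configuration above is internally consistent (every platform's maximum exceeds the 15-second minimum).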


Next Steps

With this detailed data successfully retrieved, the workflow now proceeds to Step 2: ffmpeg → vortex_clip_extract. In this next phase, the source_asset (the full video file) is analyzed by Vortex to identify the 3 highest-engagement moments using advanced hook scoring algorithms, preparing the specific segments for clip extraction.

ffmpeg Output

Workflow Step 2: Vortex Clip Extraction & Hook Scoring

This document details the successful execution of Step 2 of the "Social Signal Automator" workflow: ffmpeg → vortex_clip_extract. Following the initial rendering of your primary content asset, this crucial step leverages PantheraHive's proprietary Vortex AI to intelligently identify the most engaging segments for social media distribution.

Purpose of This Step

The primary objective of this step is to transform your comprehensive video content into highly potent, short-form clips optimized for maximum social signal generation. Vortex analyzes the full-length video asset (rendered in the previous stage) to pinpoint the 3 highest-engagement moments using advanced "hook scoring" algorithms. These moments are critical for capturing audience attention quickly across various social platforms and driving traffic back to your pSEO landing pages.

Input Asset for Vortex Analysis

Vortex received the following high-quality, full-length video asset for analysis:

  • Asset Type: Original PantheraHive Video/Content Asset (e.g., webinar recording, long-form explainer, product demo).
  • Source: Output from the initial FFmpeg rendering process.
  • Format: High-resolution MP4 (or equivalent) suitable for detailed AI analysis.
  • Status: Successfully ingested by the Vortex AI engine for processing.

Vortex AI Analysis: Identifying High-Engagement Moments

PantheraHive's Vortex AI engine employs a sophisticated multi-factor analysis to identify segments within your video that possess the highest potential for viewer engagement and virality. This process involves:

  1. Content Decomposition: Vortex first breaks down the entire video into micro-segments, analyzing visual cues, audio patterns, speech content, and overall pacing.
  2. Hook Scoring Algorithm: Our proprietary hook scoring model evaluates each segment based on a combination of metrics, including:
    * Emotional Intensity: Detection of moments with significant shifts in tone, excitement, or emphasis in the audio and visual tracks.
    * Keyword Prominence: Identification of segments where key topics or calls-to-action are introduced or reinforced.
    * Pacing & Dynamic Changes: Analysis of changes in scene, camera angle, speaker activity, and sound design that naturally draw viewer attention.
    * Novelty & Intrigue: Scoring for moments that introduce new information, pose questions, or present compelling visuals.
    * Audience Retention Predictors: Leveraging a vast dataset of successful short-form content, Vortex predicts which moments are most likely to retain viewers past the critical first few seconds.
  3. Engagement Moment Selection: Based on the comprehensive hook scoring, Vortex intelligently selects the top 3 segments that exhibit the highest engagement potential. These segments are typically concise, impactful, and ideal for standalone consumption on platforms like YouTube Shorts, LinkedIn, and X/Twitter.
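As a rough illustration only — Vortex's scoring model is proprietary — the selection step can be pictured as a weighted sum over the factors above, with entirely made-up weights:

```python
# Toy illustration of hook scoring as a weighted sum of per-segment factors.
# The factor names mirror the list above; the weights and segments are invented
# for demonstration and do not reflect Vortex's real model.

WEIGHTS = {
    "emotional_intensity": 0.30,
    "keyword_prominence": 0.20,
    "pacing_dynamics": 0.20,
    "novelty_intrigue": 0.15,
    "retention_prediction": 0.15,
}

def hook_score(factors: dict[str, float]) -> float:
    """Combine 0-1 factor scores into a single 0-100 hook score."""
    return round(100 * sum(WEIGHTS[k] * factors[k] for k in WEIGHTS), 1)

def top_segments(segments: list[dict], n: int = 3) -> list[dict]:
    """Pick the n highest-scoring segments (the Engagement Moment Selection step)."""
    return sorted(segments, key=lambda s: hook_score(s["factors"]), reverse=True)[:n]

segments = [
    {"start": 12.0, "end": 40.0, "factors": {k: 0.9 for k in WEIGHTS}},
    {"start": 300.0, "end": 332.0, "factors": {k: 0.5 for k in WEIGHTS}},
    {"start": 900.0, "end": 930.0, "factors": {k: 0.7 for k in WEIGHTS}},
    {"start": 1500.0, "end": 1540.0, "factors": {k: 0.2 for k in WEIGHTS}},
]

best = top_segments(segments, n=3)
print([s["start"] for s in best])  # start times ordered by descending hook score
```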

Identified Clip Segments (Output of this Step)

Vortex has successfully identified and extracted the precise start and end timestamps for the 3 highest-engagement moments from your primary content asset. These segments are now prepared for the next stages of optimization.

Identified High-Engagement Clips:

  • Clip 1: The "Initial Hook"
    * Start Time: [HH:MM:SS]
    * End Time: [HH:MM:SS]
    * Description: This segment represents a strong opening or a pivotal early point designed to immediately capture viewer interest and convey the core value proposition or a compelling problem statement.
  • Clip 2: The "Core Insight/Solution"
    * Start Time: [HH:MM:SS]
    * End Time: [HH:MM:SS]
    * Description: This clip highlights a key solution, a significant insight, or a powerful demonstration from your content, offering substantial value in a concise format.
  • Clip 3: The "Call to Action/Benefit Reinforcement"
    * Start Time: [HH:MM:SS]
    * End Time: [HH:MM:SS]
    * Description: This segment often encapsulates a strong benefit, a compelling use case, or naturally leads into the subsequent call to action, reinforcing why the viewer should engage further.

(Please note: Specific timestamps will be dynamically inserted here upon actual execution, based on the analyzed video asset.)

Next Steps in the Social Signal Automator Workflow

With these high-engagement clip segments precisely identified, the workflow will now proceed to the subsequent stages:

  1. ElevenLabs Branded Voiceover: Each identified clip will have a branded voiceover CTA ("Try it free at PantheraHive.com") intelligently placed at its conclusion using ElevenLabs.
  2. Platform-Optimized FFmpeg Rendering: The clips will then be rendered into their platform-specific aspect ratios (9:16 for YouTube Shorts, 1:1 for LinkedIn, 16:9 for X/Twitter) using FFmpeg, ensuring native optimization for each platform.
  3. pSEO Landing Page Linking: Each final clip will be configured to link directly back to its matching pSEO landing page, maximizing referral traffic and SEO benefits.

Value Proposition: Maximizing Brand Reach & Engagement

This Vortex-powered extraction is a critical differentiator for your "Social Signal Automator" workflow. By intelligently identifying the most impactful moments:

  • Increased Engagement: You ensure that every social clip released is designed for maximum viewer retention and interaction.
  • Targeted Content Distribution: You are not just repurposing content; you are strategically extracting its most potent elements.
  • Enhanced Brand Authority: Consistent delivery of high-quality, engaging content across platforms reinforces your brand's expertise and presence.
  • Optimized Referral Traffic: Compelling short-form content acts as a powerful magnet, drawing audiences back to your main content and pSEO landing pages, directly contributing to Google's Brand Mention trust signals.

We are now ready to proceed with the voiceover and platform-specific rendering to bring these high-impact clips to life across your social channels.

elevenlabs Output

Step 3 of 5: ElevenLabs Text-to-Speech (TTS) - Branded Voiceover CTA Generation

This deliverable focuses on leveraging ElevenLabs to generate a consistent, branded voiceover Call-to-Action (CTA) that will be appended to each platform-optimized video clip. This step is crucial for reinforcing brand identity, driving traffic to PantheraHive.com, and establishing a clear next step for viewers.


1. Workflow Context & Objective

Workflow Step: elevenlabs → tts

Objective: To convert the specified brand CTA text into a high-quality, professional audio file using ElevenLabs' Text-to-Speech capabilities. This audio file will serve as the standardized branded voiceover for all generated social media clips, ensuring consistency across all platforms and content assets.

Importance: A consistent branded voiceover for the CTA ensures immediate brand recognition, professional presentation, and a clear, actionable directive for the audience, directly supporting the goal of building referral traffic and brand authority.


2. Input for ElevenLabs

The specific text string to be converted into speech is the PantheraHive branded Call-to-Action:

  • CTA Text: "Try it free at PantheraHive.com"

3. ElevenLabs Configuration & Best Practices

To ensure optimal quality and brand consistency, the following ElevenLabs configurations are recommended:

3.1. Voice Selection (PantheraHive Brand Voice)

  • Recommendation: It is critical to select a single, consistent voice that embodies the PantheraHive brand. This could be:
    * A custom cloned voice: If PantheraHive has an existing voice identity (e.g., from a spokesperson or previous marketing materials), a custom voice clone offers the highest level of brand consistency.
    * A pre-existing ElevenLabs voice: Choose a professional, clear, and engaging voice from the ElevenLabs voice library.
    * Suggested defaults (if no custom voice exists; confirm names and Voice IDs in your ElevenLabs voice library, as premade voices change over time):
      * Voice ID: 21m00Tcm4TlvDq8ikWAM (Rachel - a clear, articulate, and professional female voice)
      * Voice ID: EXAVITQu4vr4xnSDxMaL (another clear, professional premade voice)
    * Action: Select one of these, or another preferred voice, and document its Voice ID for consistent use across all future CTA generations.
  • Rationale: Consistency in voice across all clips builds familiarity and trust with the audience, reinforcing the PantheraHive brand.

3.2. Model Selection

  • Recommended Model: eleven_multilingual_v2
  • Rationale: This model offers superior quality, natural intonation, and robustness, ensuring the CTA sounds professional and engaging.

3.3. Speech Synthesis Parameters

To achieve a natural and clear delivery of the CTA, the following parameter settings are recommended:

  • Stability: 0.50 (Default)
    * Description: Controls the variability of the voice. A mid-range value helps maintain a consistent tone while allowing for natural inflections.
  • Similarity Boost (shown as "Clarity + Similarity Enhancement" in the ElevenLabs UI): 0.75
    * Description: Influences how closely the synthesis tracks the selected voice and how crisp the speech sounds. A higher value helps ensure the brand name and URL are pronounced clearly.
  • Style Exaggeration: 0.0 (Minimal or Default)
    * Description: Controls how much the voice exaggerates stylistic emphasis. For a professional CTA, a neutral or minimal style is preferred to avoid overly dramatic or informal tones.
3.4. Audio Output Format

  • Recommended Format: mp3_44100_128 (MP3, 44.1 kHz, 128 kbps)
  • Rationale: This format offers an excellent balance of quality and file size, making it suitable for web and video integration without unnecessary overhead, while maintaining high fidelity for clear speech.
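Assuming the public ElevenLabs text-to-speech endpoint (POST /v1/text-to-speech/{voice_id}, with output_format passed as a query parameter), the settings in sections 3.1-3.4 translate into a request like the sketch below. The request is assembled but deliberately not sent; field names should be verified against the current ElevenLabs API reference before use.

```python
# Sketch of the ElevenLabs TTS request for the branded CTA, assembled but not sent.
# Endpoint path and field names follow the public ElevenLabs API; verify them
# against the current documentation. The API key is a placeholder.
import json

VOICE_ID = "21m00Tcm4TlvDq8ikWAM"  # the documented brand Voice ID

url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
params = {"output_format": "mp3_44100_128"}          # MP3, 44.1 kHz, 128 kbps
payload = {
    "text": "Try it free at PantheraHive.com",       # exact CTA text
    "model_id": "eleven_multilingual_v2",            # recommended model
    "voice_settings": {
        "stability": 0.50,
        "similarity_boost": 0.75,
        "style": 0.0,
    },
}
headers = {"xi-api-key": "YOUR_API_KEY", "Content-Type": "application/json"}

print(url)
print(json.dumps(payload, indent=2))
```

Sending this with any HTTP client and writing the response body to assets/audio/pantherahive_cta.mp3 completes the step.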

4. Expected Output

The successful execution of this step will yield:

  • Output File: A single audio file (e.g., pantherahive_cta.mp3)
  • Content: The audio file will contain the synthesized speech of the phrase "Try it free at PantheraHive.com" spoken in the designated PantheraHive Brand Voice.
  • Quality: The audio will be clear, professionally modulated, and free of artifacts, ready for seamless integration into video clips.

5. Integration with Subsequent Steps

The generated pantherahive_cta.mp3 audio file will be passed to the next step (FFmpeg rendering). During the rendering process, this audio clip will be appended to the end of each platform-optimized video segment, ensuring every clip concludes with a consistent and actionable brand CTA.


6. Actionable Steps for Execution

To generate the branded voiceover CTA:

  1. Access ElevenLabs API or UI:
    * API: Use the ElevenLabs Text-to-Speech API endpoint.
    * UI: Navigate to the ElevenLabs "Speech Synthesis" interface.
  2. Input Text: Enter the exact CTA text "Try it free at PantheraHive.com" into the designated text input field.
  3. Select Voice: Choose the pre-selected PantheraHive Brand Voice (or your custom cloned voice) using its Voice ID.
  4. Configure Model: Ensure the eleven_multilingual_v2 model is selected.
  5. Adjust Parameters: Set Stability to 0.50, Similarity Boost ("Clarity") to 0.75, and Style Exaggeration to 0.0.
  6. Select Output Format: Choose MP3 with a 44.1 kHz sample rate and 128 kbps bitrate.
  7. Generate Audio: Initiate the speech synthesis process.
  8. Download & Store: Download the resulting audio file and store it in a designated, easily accessible location (e.g., assets/audio/pantherahive_cta.mp3) for use in the FFmpeg rendering step. Ensure consistent naming for automated processes.

This generated audio asset is a foundational component for the "Social Signal Automator" workflow, ensuring consistent branding and a clear call-to-action across all distributed content.

ffmpeg Output

Step 4: FFmpeg Multi-Format Render

This step is crucial for transforming your high-value PantheraHive content into platform-optimized, bite-sized clips designed to maximize reach, engagement, and brand authority across key social channels. Leveraging FFmpeg, the industry-standard multimedia framework, we precisely tailor each clip to meet the specific technical and aesthetic requirements of YouTube Shorts, LinkedIn, and X/Twitter.

Objective

The primary objective of the ffmpeg → multi_format_render step is to:

  1. Extract High-Engagement Moments: Isolate the 3 highest-engagement segments identified by Vortex from your original PantheraHive video asset.
  2. Integrate Branded Voiceover: Seamlessly merge the ElevenLabs-generated "Try it free at PantheraHive.com" CTA voiceover into each extracted clip.
  3. Optimize for Platform-Specific Ratios: Render each of the three engagement clips into three distinct aspect ratios: 9:16 (vertical for YouTube Shorts), 1:1 (square for LinkedIn), and 16:9 (horizontal for X/Twitter).
  4. Embed Branding & Call-to-Action: Apply consistent PantheraHive branding elements (e.g., logo, brand colors, on-screen CTA text) to enhance recognition and drive conversions.
  5. Prepare for Distribution: Generate high-quality video files ready for immediate upload, complete with relevant metadata linking back to your pSEO landing pages.

Inputs for this Step

FFmpeg receives the following critical components from preceding workflow steps:

  • Original PantheraHive Video Asset: The full-length source video file.
  • Vortex Engagement Timestamps: Precise start and end times for the 3 highest-engagement moments within the original video.
  • ElevenLabs Voiceover CTA Audio: An audio file containing the branded voiceover: "Try it free at PantheraHive.com".
  • PantheraHive Branding Assets: Logo files, color palettes, and font specifications for consistent visual branding.
  • pSEO Landing Page URLs: The specific URLs for the matching pSEO landing pages, to be embedded in clip metadata and potentially as on-screen text.

Core Functionality: FFmpeg Operations

FFmpeg orchestrates a series of sophisticated multimedia processing tasks to achieve the desired multi-format output.

1. Clip Extraction (Engagement Moments)

For each of the 3 identified high-engagement moments, FFmpeg performs precise trimming of the original video asset. This ensures that only the most compelling and hook-driven content is extracted, maximizing the impact of each short clip.

  • Operation: trim and segment functions based on Vortex-provided timestamps.
  • Output: 3 individual video segments, each representing a high-engagement moment.
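A hedged sketch of the trim invocation follows; the helper, file names, and timestamps are placeholders rather than the workflow's actual command, and the command is built but not executed here.

```python
# Illustrative construction of the FFmpeg command that would trim one
# Vortex-identified segment. Paths and timestamps are placeholders; run
# the resulting command with subprocess.run(cmd) in a real pipeline.

def build_trim_cmd(src: str, start: float, end: float, out: str) -> list[str]:
    """Fast-seek to the segment start, then encode only its duration."""
    return [
        "ffmpeg",
        "-ss", f"{start:.3f}",       # input-side seek: jumps near start quickly
        "-i", src,                   # full-length source asset
        "-t", f"{end - start:.3f}",  # segment duration in seconds
        "-c:v", "libx264",           # re-encode for frame-accurate cuts
        "-c:a", "aac",
        "-y", out,                   # overwrite output if it exists
    ]

cmd = build_trim_cmd("source_full.mp4", 12.0, 40.0, "moment1.mp4")
print(" ".join(cmd))
```

Placing -ss before -i uses FFmpeg's fast input seeking; re-encoding (rather than -c copy) keeps the cut frame-accurate at the cost of extra CPU time.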

2. Voiceover Integration

The ElevenLabs-generated branded voiceover CTA is mixed into each of the extracted clips. This audio overlay is strategically placed, typically at the beginning or end of the clip, to effectively deliver the call-to-action without disrupting the primary content.

  • Operation: The amix filter combines the original clip audio with the CTA voiceover (optionally with sidechaincompress to duck the original audio under the CTA). Volume levels are carefully balanced to ensure clarity.

3. Aspect Ratio Transformation & Branding

This is the most complex phase, where each of the 3 engagement clips is rendered into the three target aspect ratios, complete with branding and on-screen CTAs.

a. YouTube Shorts (9:16 Vertical Video)

  • Resolution: Typically 1080x1920 pixels (Full HD vertical).
  • Transformation:
    * If the source is 16:9, FFmpeg will crop to the central portion or pad with blurred backgrounds/solid color bars on the sides to fit the 9:16 aspect ratio, ensuring the main subject remains in view.
    * If the source is already vertical, it is scaled to the target resolution.
  • Branding & CTA:
    * PantheraHive logo positioned strategically (e.g., top or bottom).
    * On-screen text overlay for the "Try it free at PantheraHive.com" CTA, using PantheraHive brand fonts and colors, placed in a non-obtrusive yet prominent area.
    * Optional: Dynamic captions for key dialogue within the clip.

b. LinkedIn (1:1 Square Video)

  • Resolution: Typically 1080x1080 pixels (Full HD square).
  • Transformation:
    * If the source is 16:9, FFmpeg will crop to the central square portion or pad with solid color bars on the top and bottom to fit the 1:1 aspect ratio. Careful cropping ensures no critical visual information is lost.
  • Branding & CTA:
    * PantheraHive logo integrated, often in one of the padded bars (if used) or within the video frame.
    * On-screen "Try it free at PantheraHive.com" CTA text overlay, optimized for readability in a square format.

c. X/Twitter (16:9 Horizontal Video)

  • Resolution: Typically 1920x1080 pixels (Full HD horizontal).
  • Transformation:
    * If the source is 16:9, FFmpeg will primarily scale to the target resolution without aspect ratio changes.
    * If the source is different, it will be appropriately cropped or padded to achieve 16:9.
  • Branding & CTA:
    * PantheraHive logo placed in a corner (e.g., top-right or bottom-left).
    * On-screen "Try it free at PantheraHive.com" CTA text overlay, designed for visibility within the standard horizontal video frame.
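The three transformations share one pattern: center-crop to the target aspect ratio, then scale. The helper below is an illustrative sketch (not PantheraHive's renderer) of how the crop/scale filter strings could be computed for a 1920x1080 source:

```python
# Illustrative computation of FFmpeg crop+scale filter strings for the three
# target aspect ratios, assuming a 1920x1080 (16:9) source. A real pipeline
# would pass the returned string to ffmpeg via -vf.
from fractions import Fraction

def crop_scale_filter(src_w: int, src_h: int, tgt_w: int, tgt_h: int) -> str:
    """Center-crop the source to the target aspect ratio, then scale."""
    src_ar, tgt_ar = Fraction(src_w, src_h), Fraction(tgt_w, tgt_h)
    if src_ar > tgt_ar:
        # Source is wider than the target: trim the sides.
        crop_w, crop_h = int(src_h * tgt_ar), src_h
    else:
        # Source is taller (or the same): trim top and bottom.
        crop_w, crop_h = src_w, int(src_w / tgt_ar)
    crop_w -= crop_w % 2  # keep dimensions even for yuv420p encoding
    crop_h -= crop_h % 2
    return f"crop={crop_w}:{crop_h},scale={tgt_w}:{tgt_h}"

for name, (w, h) in {
    "YouTube Shorts": (1080, 1920),
    "LinkedIn": (1080, 1080),
    "X/Twitter": (1920, 1080),
}.items():
    print(name, crop_scale_filter(1920, 1080, w, h))
```

Note that FFmpeg's crop filter defaults to a centered crop, so no x/y offsets are needed for the center-crop strategy described above; padding with blurred backgrounds would use a different filter chain (split, boxblur, overlay).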

4. Metadata & pSEO Link Embedding

For each final rendered clip, FFmpeg embeds relevant metadata. While direct clickable links within video files are not universally supported, crucial information like the pSEO landing page URL is included in the video's metadata fields (e.g., comment, description, copyright). This aids in organizational tracking and can be used by platforms that read such data upon upload.

  • Operation: metadata tags are applied to each output file.
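For illustration, the metadata arguments could be assembled as below. The helper is an assumption, not the workflow's actual code; comment, description, and title are standard MP4 container tags that FFmpeg accepts via -metadata.

```python
# Illustrative -metadata argument construction for embedding the pSEO URL.
# The helper name is hypothetical; the URL and title come from the Step 1 data.

def metadata_args(pseo_url: str, title: str) -> list[str]:
    """Build -metadata key=value pairs for the output MP4 container."""
    return [
        "-metadata", f"comment={pseo_url}",
        "-metadata", f"description=Clip from PantheraHive. Learn more: {pseo_url}",
        "-metadata", f"title={title}",
    ]

args = metadata_args(
    "https://pantherahive.com/solutions/ai-content-automation-platform",
    "Mastering AI-Powered Content Creation with PantheraHive",
)
print(args)
```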

Technical Specifications & Best Practices

  • Video Codec: H.264 (libx264) for broad compatibility and excellent compression efficiency.
  • Audio Codec: AAC (libfdk_aac or built-in FFmpeg AAC encoder) for high-quality audio.
  • Bitrate: Adaptive bitrate settings based on resolution and target platform, ensuring a balance between file size and visual quality (e.g., 5-8 Mbps for 1080p).
  • Frame Rate: Matches the original source video (typically 24, 25, or 30 fps).
  • Pixel Format: yuv420p for maximum compatibility.
  • File Format: MP4 container (.mp4).

Deliverables from this Step

Upon successful completion of the ffmpeg → multi_format_render step, you will receive a comprehensive package of 9 distinct, platform-optimized video clips:

  • 3 YouTube Shorts (9:16 Vertical): Each featuring one high-engagement moment, PantheraHive branding, and the "Try it free at PantheraHive.com" CTA.
  • 3 LinkedIn Videos (1:1 Square): Each featuring one high-engagement moment, PantheraHive branding, and the "Try it free at PantheraHive.com" CTA.
  • 3 X/Twitter Videos (16:9 Horizontal): Each featuring one high-engagement moment, PantheraHive branding, and the "Try it free at PantheraHive.com" CTA.
  • Associated Metadata: Each file will contain embedded metadata, including the relevant pSEO landing page URL for tracking and potential platform use.

These clips are meticulously crafted to provide maximum impact and are ready for immediate deployment across your social media channels, driving referral traffic and reinforcing your brand authority.

Next Steps in the Workflow

The generated clips are now prepared for the final stage of the "Social Signal Automator" workflow. Step 5 will involve:

  • Distribution & Scheduling: Uploading and scheduling these platform-optimized clips to YouTube Shorts, LinkedIn, and X/Twitter.
  • Performance Tracking: Monitoring key metrics such as views, engagement, click-through rates to the pSEO landing pages, and the impact on brand mention signals.
hive_db Output

Workflow Execution Summary: Social Signal Automator

This document details the successful completion of the "Social Signal Automator" workflow, specifically focusing on the final data insertion step. The workflow has efficiently processed your designated PantheraHive content asset, transforming it into platform-optimized clips designed to amplify brand mentions and drive referral traffic.

Workflow Goal: To convert a primary PantheraHive video/content asset into three platform-specific short-form video clips (YouTube Shorts, LinkedIn, X/Twitter) for three distinct high-engagement moments, each featuring a branded CTA and linking to a relevant pSEO landing page. This strategy leverages brand mentions as a trust signal for search engines and builds brand authority.

Current Step: Step 5 of 5: hive_db → insert

Step 5: hive_db → insert - Detailed Output

This final step ensures that all generated assets, metadata, and workflow execution details are meticulously recorded within the PantheraHive database (hive_db). This critical insertion provides a comprehensive audit trail, enables robust tracking and analytics, and facilitates future content management and distribution.

Data Insertion Purpose

The primary purpose of this insert operation is to:

  1. Record Asset Generation: Confirm the successful creation and hosting of all platform-optimized clips.
  2. Store Metadata: Capture essential details for each clip, including platform, dimensions, associated URLs, and performance indicators (e.g., Vortex hook scores).
  3. Enable Tracking & Analytics: Provide the foundational data for monitoring clip performance, referral traffic, and brand mention impact.
  4. Facilitate Content Management: Allow for easy retrieval, referencing, and future repurposing of the generated clips.
  5. Maintain Auditability: Create a clear record of when and how the workflow was executed and what assets were produced.

Data Insertion Details

The following data structure has been successfully inserted into the hive_db, typically within a designated social_clips or similar collection/table, linking back to your original content asset.

Original Asset Reference:

  • original_asset_id: [Unique ID of your original PantheraHive video/content asset]
  • original_asset_url: [URL of your original PantheraHive video/content asset]
  • workflow_id: social_signal_automator_workflow_X (Unique ID for this specific workflow run)
  • execution_timestamp: 2024-07-30T10:30:00Z (Example: Actual UTC timestamp of completion)

Generated Clips Array (Example for one moment, repeated for 3 moments):

Each "moment" identified by Vortex will have a set of 3 platform-optimized clips. Below is an example structure for one such moment, repeated for the 3 highest-engagement moments detected.

  • generated_clips: [
    * Clip Set 1 (Moment 1 - Highest Engagement):
      * moment_id: moment_1_highest_engagement
      * vortex_hook_score: 92.5 (Example: Score indicating engagement potential)
      * clips: [
        * YouTube Shorts Clip:
          * clip_id: ph_asset_X_moment1_ytshorts
          * platform: YouTube Shorts
          * aspect_ratio: 9:16
          * clip_url: https://cdn.pantherahive.com/clips/ph_asset_X_moment1_ytshorts.mp4 (Direct URL to the rendered clip)
          * thumbnail_url: https://cdn.pantherahive.com/clips/ph_asset_X_moment1_ytshorts.jpg (Direct URL to the generated thumbnail)
          * p_seo_landing_page_url: https://pantherahive.com/your-specific-p-seo-page (URL of the associated pSEO landing page)
          * elevenlabs_cta_included: true (Confirmation that "Try it free at PantheraHive.com" was added)
          * status: ready_for_distribution
        * LinkedIn Clip:
          * clip_id: ph_asset_X_moment1_linkedin
          * platform: LinkedIn
          * aspect_ratio: 1:1
          * clip_url: https://cdn.pantherahive.com/clips/ph_asset_X_moment1_linkedin.mp4
          * thumbnail_url: https://cdn.pantherahive.com/clips/ph_asset_X_moment1_linkedin.jpg
          * p_seo_landing_page_url: https://pantherahive.com/your-specific-p-seo-page
          * elevenlabs_cta_included: true
          * status: ready_for_distribution
        * X/Twitter Clip:
          * clip_id: ph_asset_X_moment1_x_twitter
          * platform: X/Twitter
          * aspect_ratio: 16:9
          * clip_url: https://cdn.pantherahive.com/clips/ph_asset_X_moment1_x_twitter.mp4
          * thumbnail_url: https://cdn.pantherahive.com/clips/ph_asset_X_moment1_x_twitter.jpg
          * p_seo_landing_page_url: https://pantherahive.com/your-specific-p-seo-page
          * elevenlabs_cta_included: true
          * status: ready_for_distribution
      ]
    * Clip Set 2 (Moment 2 - Second Highest Engagement): (Similar structure as above)
    * Clip Set 3 (Moment 3 - Third Highest Engagement): (Similar structure as above)
  ]

Conceptual Database Schema

For clarity, here's a conceptual representation of the schema structure for the social_clips collection/table within hive_db:


{
  "_id": "unique_db_entry_id",
  "original_asset_id": "string",
  "original_asset_url": "string",
  "workflow_id": "string",
  "execution_timestamp": "datetime",
  "generated_clips": [
    {
      "moment_id": "string", // e.g., "moment_1_highest_engagement"
      "vortex_hook_score": "float",
      "clips": [
        {
          "clip_id": "string",
          "platform": "string", // e.g., "YouTube Shorts", "LinkedIn", "X/Twitter"
          "aspect_ratio": "string", // e.g., "9:16", "1:1", "16:9"
          "clip_url": "string", // URL to the hosted video file
          "thumbnail_url": "string", // URL to the hosted thumbnail image
          "p_seo_landing_page_url": "string",
          "elevenlabs_cta_included": "boolean",
          "status": "string" // e.g., "ready_for_distribution"
        }
      ]
    }
  ],
  "overall_status": "string" // e.g., "completed_successfully"
}
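Since the hive_db client is internal to PantheraHive, the following is only a sketch of pre-insert validation against the conceptual schema above; the helper and its behavior are assumptions for illustration.

```python
# Minimal, hypothetical validation of a social_clips record against the
# conceptual schema above, checking required keys and types before insertion.

CLIP_REQUIRED = {
    "clip_id": str, "platform": str, "aspect_ratio": str,
    "clip_url": str, "thumbnail_url": str,
    "p_seo_landing_page_url": str, "elevenlabs_cta_included": bool,
    "status": str,
}

def validate_record(record: dict) -> bool:
    """True if every clip in every moment has the required typed fields."""
    for moment in record.get("generated_clips", []):
        if not isinstance(moment.get("vortex_hook_score"), float):
            return False
        for clip in moment.get("clips", []):
            for key, typ in CLIP_REQUIRED.items():
                if not isinstance(clip.get(key), typ):
                    return False
    return bool(record.get("original_asset_id")) and bool(record.get("workflow_id"))

record = {
    "original_asset_id": "VID-2026-04-15-PH-A1B2C3D4",
    "workflow_id": "social_signal_automator_workflow_X",
    "generated_clips": [{
        "moment_id": "moment_1_highest_engagement",
        "vortex_hook_score": 92.5,
        "clips": [{
            "clip_id": "ph_asset_X_moment1_ytshorts",
            "platform": "YouTube Shorts",
            "aspect_ratio": "9:16",
            "clip_url": "https://cdn.pantherahive.com/clips/ph_asset_X_moment1_ytshorts.mp4",
            "thumbnail_url": "https://cdn.pantherahive.com/clips/ph_asset_X_moment1_ytshorts.jpg",
            "p_seo_landing_page_url": "https://pantherahive.com/your-specific-p-seo-page",
            "elevenlabs_cta_included": True,
            "status": "ready_for_distribution",
        }],
    }],
}
print(validate_record(record))
```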

Benefits & Impact

The successful insertion of this data provides several immediate and long-term benefits:

  • Centralized Content Repository: All generated clips and their associated metadata are now centrally stored and accessible within your PantheraHive dashboard.
  • Enhanced Analytics: This data forms the backbone for tracking performance metrics such as views, click-through rates to pSEO pages, and ultimately, the impact on brand mentions and authority.
  • Streamlined Distribution: The direct URLs for each clip and thumbnail make it effortless to integrate these assets into your social media scheduling tools or manual posting workflows.
  • Future Optimization: By recording Vortex hook scores, PantheraHive can learn and refine its content segmentation algorithms for even better engagement in future workflows.
  • Regulatory Compliance & Auditability: A clear, timestamped record of content generation supports transparency and simplifies audits of the content pipeline.

Next Actions & Monitoring

With the "Social Signal Automator" workflow successfully completed and all data inserted, your generated assets are now ready for deployment.

  1. Access Generated Clips: You can find the direct URLs for all generated clips and their corresponding pSEO landing pages within your PantheraHive dashboard under the "Social Signal Automator" section, associated with your original content asset.
  2. Schedule & Distribute: Utilize these URLs to schedule and publish the platform-optimized clips across YouTube Shorts, LinkedIn, and X/Twitter. Remember to include the provided pSEO landing page URL in your posts to maximize referral traffic and brand authority.
  3. Monitor Performance: Keep a close eye on your PantheraHive analytics to track the performance of these clips. Pay attention to engagement rates, click-throughs to your pSEO pages, and any observed increases in brand mentions across platforms.
  4. Review & Optimize: Periodically review the performance data. High-performing clips or moments can inform future content strategy and guide further optimization of your content assets.
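As a sketch of step 2 above, the helper below builds a post caption that appends the pSEO landing page URL while respecting a per-platform character limit. The limits and the function itself are illustrative assumptions, not official platform constants or PantheraHive code.

```python
# Hypothetical caption builder for scheduling clips (step 2 above).
# Character limits here are illustrative assumptions, not official values.

PLATFORM_CAPTION_LIMITS = {"X/Twitter": 280, "LinkedIn": 3000, "YouTube Shorts": 100}

def build_caption(message: str, pseo_url: str, platform: str) -> str:
    """Append the pSEO URL to a caption, trimming the message to fit."""
    limit = PLATFORM_CAPTION_LIMITS.get(platform, 280)
    suffix = " " + pseo_url
    budget = limit - len(suffix)
    if len(message) > budget:
        # Leave room for an ellipsis when truncating.
        message = message[: budget - 1].rstrip() + "…"
    return message + suffix

caption = build_caption(
    "Mastering AI-powered content creation with PantheraHive",
    "https://pantherahive.com/solutions/ai-content-automation-platform",
    "X/Twitter",
)
print(caption)
```

Keeping the URL appending in one place ensures every post links back to the pSEO page, which is what drives the referral traffic described in step 2.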

This concludes the "Social Signal Automator" workflow. Your multi-platform, optimized content clips are now prepared to enhance your brand's digital presence and authority.
