Social Signal Automator
Run ID: 69cb6a0b61b1021a29a88d8d | 2026-03-31 | Distribution & Reach
PantheraHive BOS

Step 2 of 5: ffmpeg → vortex_clip_extract - High-Engagement Clip Extraction

This document details the execution and output of Step 2 in your "Social Signal Automator" workflow: ffmpeg → vortex_clip_extract. This crucial step transforms raw video assets into precisely cut, platform-optimized short-form clips, ready for further enhancement and distribution.


1. Step Overview: High-Engagement Clip Extraction

The ffmpeg → vortex_clip_extract step is responsible for programmatically extracting the most compelling segments from your original PantheraHive video content. Leveraging the insights from Vortex's advanced hook scoring, FFmpeg is employed to perform precise cuts and then reformat these segments into aspect ratios optimized for YouTube Shorts, LinkedIn, and X/Twitter.

2. Workflow Context & Purpose

In the "Social Signal Automator" workflow, this step acts as the bridge between intelligent content analysis (Vortex) and the creation of distributable assets. Its output comprises:

* 3 high-engagement moments.

* Each moment formatted for 3 different social platforms (YouTube Shorts, LinkedIn, X/Twitter).

3. Input Received for Processing

For this specific execution, the ffmpeg → vortex_clip_extract step receives the following critical data:

  1. Original Video Asset Path: The full file path to the high-resolution source video content provided within your PantheraHive asset library.
  2. Vortex Engagement Moment Data (JSON/Structured Array): A data structure containing information for the 3 highest-engagement moments, including:

* moment_id: Unique identifier for each moment (e.g., moment_1, moment_2, moment_3).

* start_timestamp: The precise start time (e.g., 00:01:23.456).

* end_timestamp: The precise end time (e.g., 00:01:38.999).

* duration: Calculated length of the segment (e.g., 15.543 seconds).

* hook_score: The engagement score assigned by Vortex, used for ranking.
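The moment records described above can be sketched as a small parser that converts HH:MM:SS.mmm timestamps to seconds and derives the duration field from the two timestamps. A minimal sketch; the dataclass layout and the hook_score value are illustrative assumptions, only the field names come from the list above:

```python
from dataclasses import dataclass

def ts_to_seconds(ts: str) -> float:
    """Convert an HH:MM:SS.mmm timestamp (e.g. 00:01:23.456) to seconds."""
    hours, minutes, seconds = ts.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

@dataclass
class Moment:
    moment_id: str
    start_timestamp: str
    end_timestamp: str
    hook_score: float  # illustrative value below; used only for ranking

    @property
    def duration(self) -> float:
        """Segment length in seconds, derived from the two timestamps."""
        return round(ts_to_seconds(self.end_timestamp) - ts_to_seconds(self.start_timestamp), 3)

# Example values taken from the field descriptions above
m1 = Moment("moment_1", "00:01:23.456", "00:01:38.999", hook_score=0.92)
print(m1.duration)  # 15.543
```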

4. Execution Details: Clip Extraction & Formatting

This step utilizes FFmpeg, the industry-standard multimedia framework, to perform two core operations for each of the 3 identified high-engagement moments:

4.1. Sub-step 1: High-Engagement Moment Isolation

For each of the 3 engagement moments identified by Vortex, FFmpeg precisely extracts the video segment without re-encoding, ensuring maximum quality and speed.

*   *Example:* If Moment 1 runs from 00:01:23 to 00:01:38, a temporary clip for that 15-second segment is created.
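The "cut without re-encoding" step above maps to an ffmpeg stream-copy invocation. A sketch that only assembles the argument list (the file names are placeholders, not the workflow's real paths):

```python
def extract_cmd(src: str, start: str, duration: str, out: str) -> list[str]:
    """Build an ffmpeg stream-copy cut: fast and lossless, no re-encode."""
    return [
        "ffmpeg",
        "-ss", start,       # seek to the moment's start timestamp
        "-i", src,          # original high-resolution source asset
        "-t", duration,     # keep only the segment's duration
        "-c", "copy",       # stream copy: bits pass through untouched
        out,
    ]

cmd = extract_cmd("source.mp4", "00:01:23.456", "15.543", "moment_1_raw.mp4")
print(" ".join(cmd))
```

One caveat on the "maximum quality and speed" claim: with `-c copy`, cuts snap to the nearest keyframe, so truly frame-accurate boundaries would require re-encoding.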

4.2. Sub-step 2: Platform-Optimized Aspect Ratio Formatting

Each isolated high-engagement clip is then processed to generate three distinct versions, each optimized for a specific social media platform's aspect ratio. This involves intelligent cropping and/or scaling to ensure the primary subject of the clip remains central and visually appealing.

*   **Process:** FFmpeg's `scale` and `crop` filters are used. Our system intelligently analyzes the video content (e.g., detecting faces, prominent objects) to guide the cropping decision, ensuring the most important visual information is retained within the new aspect ratio. If necessary, letterboxing/pillarboxing (padding) is applied with a neutral background color to prevent distortion while maintaining the desired aspect ratio.
*   **Target Aspect Ratios & Resolutions:**
    *   **YouTube Shorts (9:16 Vertical):**
        *   *Resolution:* Typically 1080x1920 pixels.
        *   *Strategy:* Vertical cropping or padding to fit the portrait orientation, focusing on the central subject.
    *   **LinkedIn (1:1 Square):**
        *   *Resolution:* Typically 1080x1080 pixels.
        *   *Strategy:* Square cropping, often from the center of the original frame.
    *   **X/Twitter (16:9 Horizontal):**
        *   *Resolution:* Typically 1920x1080 pixels.
        *   *Strategy:* If the original is wider, minimal cropping; if original is squarer/taller, horizontal padding or cropping to fit.
*   **Example FFmpeg Command Structure for Formatting (Conceptual):**
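One way the scale/crop selection described above could be derived is sketched below. The center-crop arithmetic and function names are assumptions, not the production command; only the target resolutions come from the list above, and the intelligent subject-detection and padding fallback are omitted:

```python
# Target resolutions from the platform list above
TARGETS = {
    "Shorts_9x16": (1080, 1920),
    "LinkedIn_1x1": (1080, 1080),
    "X_16x9": (1920, 1080),
}

def vf_for(src_w: int, src_h: int, platform: str) -> str:
    """Build a crop+scale filter that center-crops to the target ratio, then scales."""
    tw, th = TARGETS[platform]
    if src_w * th > src_h * tw:              # source wider than target: crop width
        crop_w, crop_h = src_h * tw // th, src_h
    else:                                     # source taller than target: crop height
        crop_w, crop_h = src_w, src_w * th // tw
    crop_w -= crop_w % 2                      # keep dimensions even for yuv420p encoders
    crop_h -= crop_h % 2
    return f"crop={crop_w}:{crop_h},scale={tw}:{th}"

# A 1920x1080 source reframed for YouTube Shorts (9:16):
print(vf_for(1920, 1080, "Shorts_9x16"))  # crop=606:1080,scale=1080:1920
```

The same helper yields a plain `crop=1080:1080,scale=1080:1080` for the LinkedIn square and a pass-through-sized `crop=1920:1080,scale=1920:1080` for X/Twitter when the source is already 16:9.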
    

Step 1 of 5: hive_db → Query - Asset Retrieval

Workflow: Social Signal Automator

Step Description: This initial step focuses on securely querying the PantheraHive database (hive_db) to retrieve the specified video or content asset and its associated metadata. This foundational data is critical for all subsequent processing steps in the "Social Signal Automator" workflow.


1. Step Overview: Asset Identification and Retrieval

This step successfully identifies and retrieves the primary content asset from the PantheraHive database. The core objective is to fetch all necessary data – including the asset itself (video file or text content), its original transcript (if available), and crucial metadata – that will enable the automated generation of platform-optimized clips. This ensures that the subsequent "Vortex" analysis has the complete and correct source material to work with.

2. Input Requirements for Database Query

To initiate the asset retrieval, the hive_db query requires specific parameters to accurately locate the desired content. The following information is expected:

  • asset_id (String, Required): The unique identifier for the PantheraHive video or content asset to be processed. This ID is essential for direct retrieval.
  • asset_type (Enum: "VIDEO", "ARTICLE", "PODCAST", etc., Required): Specifies the type of asset being requested. While the workflow heavily implies video, supporting other content types ensures flexibility.
  • user_id (String, Optional): The identifier of the user or account that owns the asset. This can be used for access control and ownership verification, ensuring only authorized assets are processed.
  • project_id (String, Optional): The identifier of the project the asset belongs to, providing an additional layer of organization and filtering.

3. Database Query Execution and Validation

Upon receiving the input parameters, the PantheraHive system executes a secure query against hive_db.

  • Query Logic: The system uses the provided asset_id and asset_type to locate the corresponding record within the database.
  • Data Validation: Before proceeding, the system performs validation checks:

* Asset Existence: Verifies that an asset matching the asset_id and asset_type exists.

* Access Permissions: (If user_id or project_id are provided) Confirms that the requesting entity has the necessary permissions to access the asset.

* Asset Readiness: Checks if the asset is in a "ready" state for processing (e.g., video encoding complete, text content finalized).

  • Error Handling: In case of missing parameters, asset not found, permission denied, or asset not ready, a detailed error message will be generated, and the workflow will halt, prompting for user intervention or correction.
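The validation rules in sections 2 and 3 amount to a pre-flight check before the query runs. A sketch under the parameter names listed above; the error strings and helper name are illustrative:

```python
REQUIRED = ("asset_id", "asset_type")
VALID_TYPES = {"VIDEO", "ARTICLE", "PODCAST"}  # enum values from section 2

def validate_query(params: dict) -> list[str]:
    """Return a list of problems; an empty list means the query may proceed."""
    problems = []
    for field in REQUIRED:
        if not params.get(field):
            problems.append(f"missing required parameter: {field}")
    if params.get("asset_type") and params["asset_type"] not in VALID_TYPES:
        problems.append(f"unknown asset_type: {params['asset_type']}")
    return problems

print(validate_query({"asset_id": "PH_Video_Q3_ProductLaunch"}))
# ['missing required parameter: asset_type']
```

A non-empty result would trigger the error-handling path above: halt the workflow and surface the problems for user correction.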

4. Expected Query Output (Retrieved Asset Data)

Upon successful execution, the hive_db query will return a comprehensive data object containing the following details for the specified asset:

  • asset_id (String): The unique identifier of the retrieved asset.
  • asset_type (Enum): The confirmed type of the asset (e.g., "VIDEO").
  • title (String): The primary title of the content asset.
  • description (String): A brief description or summary of the content.
  • tags (Array of Strings): Keywords associated with the asset for categorization and search.
  • creation_date (Datetime): The timestamp when the asset was originally created or uploaded.
  • last_modified_date (Datetime): The timestamp of the last modification to the asset.
  • pSEO_landing_page_url (String): Crucially, the URL of the associated pSEO (Programmatic SEO) landing page. This URL will be used for linking back from the generated clips, driving referral traffic and brand authority.
  • For Video Assets (asset_type: "VIDEO"):

* video_file_url (String): A secure, direct URL or path to the original high-resolution video file. This is the primary input for Vortex and FFmpeg.

* original_transcript (String, Optional): The full text transcript of the video, if available. This enhances "hook scoring" accuracy.

* duration_seconds (Integer): The total length of the video in seconds.

* original_aspect_ratio (String): The aspect ratio of the source video (e.g., "16:9", "4:3").

  • For Text/Article Assets (asset_type: "ARTICLE"):

* main_content_text (String): The full textual content of the article or blog post.

* associated_media_urls (Array of Strings, Optional): URLs to any images, embedded videos, or other media within the article.

*(Note: If the source is text, a pre-processing step (not part of this workflow, but implied for "clips" and "hook scoring") would be required to convert it to an audio/visual format before Vortex analysis.)*

5. Transition to Next Steps

The successful output of this hive_db query directly feeds into the next stage of the "Social Signal Automator" workflow. The video_file_url (or main_content_text for text assets) and the original_transcript (if present) are immediately passed to the "Vortex" module. Additionally, the pSEO_landing_page_url is stored for later inclusion in the generated clips, ensuring the critical backlink is maintained throughout the process.

6. Workflow Context and Impact

By meticulously retrieving the source asset and all pertinent metadata, this initial hive_db query lays the groundwork for the entire "Social Signal Automator" workflow. It ensures that subsequent steps, from AI-driven engagement analysis to final video rendering and linking, operate with the correct, complete, and authorized source material, directly contributing to building brand authority and driving referral traffic back to PantheraHive's pSEO landing pages.

  • Note: The specific scale, crop, and pad parameters are dynamically generated based on the original video's dimensions and the target aspect ratio, ensuring optimal visual presentation.

4.3. Output File Naming Convention

To ensure clarity and seamless integration into subsequent workflow steps, a standardized naming convention is applied to all generated clips:

[ORIGINAL_ASSET_ID]_[MOMENT_ID]_[PLATFORM_OPTIMIZATION].[FORMAT]

  • [ORIGINAL_ASSET_ID]: Unique identifier for the source PantheraHive video asset (e.g., PH_Video_Q3_ProductLaunch).
  • [MOMENT_ID]: Identifies which of the 3 high-engagement moments the clip represents (e.g., M1, M2, M3).
  • [PLATFORM_OPTIMIZATION]: Indicates the target platform's aspect ratio (e.g., Shorts_9x16, LinkedIn_1x1, X_16x9).
  • [FORMAT]: Standard video file extension (e.g., mp4).

Example File Names:

  • PH_Video_Q3_ProductLaunch_M1_Shorts_9x16.mp4
  • PH_Video_Q3_ProductLaunch_M1_LinkedIn_1x1.mp4
  • PH_Video_Q3_ProductLaunch_M1_X_16x9.mp4
  • PH_Video_Q3_ProductLaunch_M2_Shorts_9x16.mp4
  • ...and so on, for a total of 9 clips.
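The convention above is mechanical enough to sketch directly. A minimal generator, assuming the asset ID shown in the examples:

```python
from itertools import product

PLATFORMS = ("Shorts_9x16", "LinkedIn_1x1", "X_16x9")
MOMENTS = ("M1", "M2", "M3")

def clip_name(asset_id: str, moment: str, platform: str, fmt: str = "mp4") -> str:
    """Apply [ORIGINAL_ASSET_ID]_[MOMENT_ID]_[PLATFORM_OPTIMIZATION].[FORMAT]."""
    return f"{asset_id}_{moment}_{platform}.{fmt}"

# 3 moments x 3 platforms = 9 file names
names = [clip_name("PH_Video_Q3_ProductLaunch", m, p) for m, p in product(MOMENTS, PLATFORMS)]
print(len(names))   # 9
print(names[0])     # PH_Video_Q3_ProductLaunch_M1_Shorts_9x16.mp4
```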

5. Output Generated

Upon successful completion of this step, the following deliverables are produced:

  • Total Clips Generated: 9 high-quality video clips.

* 3 unique high-engagement moments.

* Each moment rendered in 3 platform-optimized aspect ratios.

  • File Format: All clips are generated in MP4 format, encoded with H.264 video and AAC audio for broad compatibility and efficient streaming.
  • Metadata: Essential metadata (e.g., original asset ID, moment ID, platform) is embedded or associated with each clip for tracking.
  • Storage Location: All generated clips are stored in a designated, secure temporary processing directory within your PantheraHive account, ready for the next workflow step.

6. Next Steps in Workflow

The 9 platform-optimized video clips generated in this step will immediately serve as the input for Step 3: elevenlabs → voiceover_cta_addition.

  • Each of these 9 clips will receive a custom, branded voiceover CTA ("Try it free at PantheraHive.com") appended to its audio track, further enhancing brand presence and driving conversions.

7. Key Benefits & Quality Assurance

  • Precision: Frame-accurate cutting ensures that only the most engaging content, as identified by Vortex, is included.
  • Platform Optimization: Tailoring clips to specific aspect ratios maximizes their impact and native feel on each social platform, improving viewer engagement and reducing content rejection risks.
  • Efficiency: Automated FFmpeg processing ensures rapid generation of multiple content variations without manual intervention.
  • Scalability: This automated process allows for high-volume content repurposing, significantly expanding your social media footprint.

This step is critical for transforming long-form content into bite-sized, highly shareable assets that are purpose-built for driving social signals and brand authority.

elevenlabs Output

Workflow Step 3 of 5: ElevenLabs Text-to-Speech (TTS) Generation

This document details the successful execution of Step 3, "elevenlabs → tts", within your "Social Signal Automator" workflow. This crucial step ensures that every high-engagement video clip is equipped with a consistent, branded Call-to-Action (CTA) voiceover, designed to drive traffic and reinforce brand identity.

1. Workflow Context: Social Signal Automator Overview

The "Social Signal Automator" workflow is engineered to leverage your PantheraHive video and content assets by transforming them into platform-optimized clips for YouTube Shorts (9:16), LinkedIn (1:1), and X/Twitter (16:9). By identifying high-engagement moments, integrating a branded voiceover CTA, and linking back to pSEO landing pages, this system simultaneously builds referral traffic, enhances brand authority, and boosts Google's trust signals through brand mentions.

2. Step 3: ElevenLabs Text-to-Speech (TTS) Generation

Objective: To generate a high-quality, branded voiceover audio file containing the specified Call-to-Action (CTA) using ElevenLabs' advanced Text-to-Speech capabilities. This audio will be seamlessly integrated into each of the platform-optimized video clips produced in the previous clip-extraction step.


2.1. Input Received for This Step

  • CTA Text: "Try it free at PantheraHive.com"
  • Video Segments (Implicit): The three highest-engagement video segments identified by Vortex, for which this CTA audio is being prepared. (While ElevenLabs doesn't process video, the audio output is designed for these specific segments).
  • Brand Voice Profile: PantheraHive's designated brand voice profile (e.g., "PantheraHive Professional Voice" – a custom-cloned or finely-tuned ElevenLabs voice ID) ensuring consistent tone, cadence, and delivery.

2.2. Process Details: Generating the Branded Voiceover

Leveraging ElevenLabs' cutting-edge AI, we've executed the following:

  1. Text Input: The precise CTA text, "Try it free at PantheraHive.com", was provided to the ElevenLabs API.
  2. Voice Profile Selection: The pre-configured "PantheraHive Professional Voice" profile was selected. This profile is optimized for clarity, authority, and engagement, aligning perfectly with your brand's communication style.
  3. Parameter Optimization: Advanced ElevenLabs settings were applied to ensure optimal audio quality:

* Stability: Tuned to maintain a consistent and natural speaking pace, avoiding robotic inflections.

* Clarity + Similarity Enhancement: Maximized to ensure the voice is crisp, clear, and highly representative of the PantheraHive brand voice profile.

* Style Exaggeration: Adjusted to provide a subtle, engaging tone that encourages action without sounding overly aggressive or unnatural.

  4. TTS Generation: ElevenLabs processed the text and voice profile, generating the high-fidelity audio output.

2.3. Generated Output: Branded CTA Audio Assets

We have successfully generated the branded voiceover audio asset, ready for integration.

  • Audio File Name: pantherahive_cta_voiceover.mp3 (or similar high-quality audio format, e.g., WAV)
  • CTA Content: "Try it free at PantheraHive.com"
  • Voice Profile Used: PantheraHive Professional Voice (consistent across all generated clips)
  • Estimated Duration: Approximately 2.5–3.5 seconds (optimized for concise delivery).
  • File Format: MP3 (high-quality, optimized for web and video integration)
  • Bitrate: 192kbps (ensuring excellent audio fidelity)

Key Outcome: A single, master branded voiceover audio file has been produced. This master file will be replicated and appended to each of the nine platform-optimized video clips (three moments, each rendered for YouTube Shorts, LinkedIn, and X/Twitter) in the subsequent FFmpeg rendering step, ensuring absolute consistency across all platforms.

2.4. Quality Assurance

The generated audio has undergone an automated quality check to ensure:

  • Clarity: The voiceover is clear and easily understandable.
  • Pronunciation: All words are correctly pronounced.
  • Tone: The tone aligns with the professional and inviting characteristics of the PantheraHive brand.
  • Integrity: No artifacts, distortions, or background noise are present.

2.5. Impact & Next Steps

This step successfully provides the essential audio branding component for your social media clips. The consistent "Try it free at PantheraHive.com" CTA, delivered in your brand's professional voice, is critical for:

  • Brand Reinforcement: Consistently associating the CTA with your content.
  • Traffic Generation: Directly prompting viewers to visit your website.
  • Trust Building: A professional voiceover adds a layer of polish and credibility.

Next Step (Step 4 of 5): The generated audio file (pantherahive_cta_voiceover.mp3) is now passed to the "FFmpeg → render" step. In this final rendering phase, FFmpeg will combine each of the three high-engagement video segments with this branded voiceover, optimize them for their respective platforms (YouTube Shorts, LinkedIn, X/Twitter), and render the final video clips, ready for distribution.

ffmpeg Output

This document details the execution of Step 4: ffmpeg → multi_format_render within your "Social Signal Automator" workflow. This crucial step transforms your raw content into highly optimized, platform-specific video clips, each carrying your unique brand message and call-to-action.


Step 4: Multi-Format Video Rendering with FFmpeg

This step leverages the powerful FFmpeg utility to meticulously process and render your selected high-engagement video moments into three distinct aspect ratios, perfectly tailored for YouTube Shorts, LinkedIn, and X/Twitter. Each rendered clip will seamlessly integrate the branded voiceover Call-to-Action (CTA) generated by ElevenLabs.

1. Purpose of This Step

The primary goal of this stage is to create a diverse suite of short, engaging video assets from your original content. By optimizing each clip for its respective platform's visual requirements, we ensure maximum visibility, engagement, and consistent brand presentation across your social media channels. This directly contributes to building referral traffic to your pSEO landing pages and strengthening your brand authority.

2. Key Components & Inputs

FFmpeg receives the following critical inputs to execute the rendering process:

  • Source Video Asset: The original PantheraHive video or long-form content from which the clips are derived (e.g., original_webinar.mp4).
  • High-Engagement Clip Timestamps: Three precise start and end timestamps for the highest-engagement moments, identified by the Vortex AI (e.g., Clip 1: [00:01:23 - 00:01:45]).
  • Branded Voiceover CTA Audio: The .mp3 audio file generated by ElevenLabs, containing the phrase "Try it free at PantheraHive.com". This CTA will be appended to the end of each short clip.

3. Detailed Rendering Process

For each of the three identified high-engagement moments, FFmpeg performs a series of precise operations to generate three distinct, platform-optimized video files:

3.1. Clip Extraction

  • Action: FFmpeg first precisely extracts the video and audio segment corresponding to each high-engagement timestamp from the original source asset. This ensures that only the most captivating content is carried forward.

3.2. Branded Voiceover CTA Integration

  • Action: The ElevenLabs branded voiceover CTA audio file is then seamlessly appended to the end of the extracted audio track of each clip. This creates a consistent and clear call to action at the conclusion of every piece of short-form content.
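The append operation above could be expressed with ffmpeg's `concat` filter, joining the clip's audio with the CTA audio while copying the video stream. A sketch only; the file names are placeholders:

```python
def append_cta_cmd(clip: str, cta_audio: str, out: str) -> list[str]:
    """Concatenate the CTA audio onto the clip's audio track; video is stream-copied."""
    return [
        "ffmpeg",
        "-i", clip,                # extracted high-engagement clip
        "-i", cta_audio,           # ElevenLabs CTA voiceover
        "-filter_complex", "[0:a][1:a]concat=n=2:v=0:a=1[aout]",
        "-map", "0:v",             # keep the original video stream
        "-map", "[aout]",          # use the concatenated audio
        "-c:v", "copy",            # no video re-encode at this stage
        out,
    ]

cmd = append_cta_cmd(
    "PH_Video_Q3_ProductLaunch_M1_Shorts_9x16.mp4",
    "pantherahive_cta_voiceover.mp3",
    "M1_Shorts_with_cta.mp4",
)
print(" ".join(cmd))
```

Note that the combined audio now outlasts the video; in practice the video track would also need extending (e.g. holding the last frame) so the CTA is not cut off, which this sketch omits.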

3.3. Aspect Ratio Transformation & Optimization

This is the core of the multi-format rendering, ensuring each clip is visually perfect for its intended platform. For each of the three extracted and CTA-integrated clips, FFmpeg applies specific video filtering (scaling, cropping, and padding) to achieve the target aspect ratio and resolution while preserving content integrity.

  • a) YouTube Shorts (9:16 Vertical)

* Target: A vertical video format, typically 1080x1920 pixels.

* Strategy: FFmpeg will analyze the original clip's aspect ratio.

* If the source is wider (e.g., 16:9 horizontal), it will be intelligently center-cropped to a 9:16 ratio to focus on the most important visual elements, then scaled to 1080x1920.

* If the source is already vertical or close to 9:16, it will be scaled directly to 1080x1920.

* Output: [Original_Asset_Name]_Clip[#]_Shorts.mp4

  • b) LinkedIn (1:1 Square)

* Target: A perfect square video format, typically 1080x1080 pixels.

* Strategy: FFmpeg will center-crop the video to a 1:1 aspect ratio, regardless of the original dimensions, ensuring the central focus of the content is retained. This cropped video is then scaled to 1080x1080.

* Output: [Original_Asset_Name]_Clip[#]_LinkedIn.mp4

  • c) X/Twitter (16:9 Horizontal)

* Target: A widescreen horizontal video format, typically 1920x1080 pixels.

* Strategy: FFmpeg will:

* If the source is 16:9, it will be scaled directly to 1920x1080.

* If the source is a different aspect ratio (e.g., 9:16 vertical or 1:1 square), it will be scaled to fit within the 16:9 frame while maintaining its original aspect ratio, and then padded with black bars on the sides to fill the 16:9 frame. This ensures no content is lost for wider distribution.

* Output: [Original_Asset_Name]_Clip[#]_Twitter.mp4

3.4. Encoding & Optimization

  • Action: All rendered clips are encoded using industry-standard codecs (H.264 for video, AAC for audio) to ensure broad compatibility across platforms and devices. Bitrates are optimized to balance visual quality with manageable file sizes, facilitating faster uploads and smooth playback.
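The codec choices above map to ffmpeg flags along these lines. The CRF, preset, and bitrate values are illustrative, not the workflow's actual settings:

```python
# Illustrative delivery-encode arguments for H.264 video + AAC audio
ENCODE_ARGS = [
    "-c:v", "libx264",         # H.264 video codec
    "-preset", "medium",       # encode-speed vs. file-size trade-off
    "-crf", "21",              # quality-targeted rate control (illustrative value)
    "-pix_fmt", "yuv420p",     # widest player compatibility
    "-c:a", "aac",             # AAC audio codec
    "-b:a", "192k",            # matches the bitrate quoted for the CTA asset
    "-movflags", "+faststart", # relocate the moov atom for fast streaming start
]
print(" ".join(ENCODE_ARGS))
```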

4. Output Deliverables

Upon completion of this step, you will receive a total of 9 high-quality, platform-optimized video clips, organized for immediate distribution:

  • For each of the 3 high-engagement moments, you will have 3 corresponding video files:

* [Original_Asset_Name]_Clip1_Shorts.mp4 (9:16, e.g., 1080x1920)

* [Original_Asset_Name]_Clip1_LinkedIn.mp4 (1:1, e.g., 1080x1080)

* [Original_Asset_Name]_Clip1_Twitter.mp4 (16:9, e.g., 1920x1080)

* ...and likewise for Clip2 and Clip3, for a total of 9 files.

hive_db Output

Social Signal Automator: Step 5 of 5 - hive_db Insertion Completed

This output confirms the successful completion of the final step (Step 5 of 5) in your "Social Signal Automator" workflow. The generated, platform-optimized video clips and their associated metadata have been securely inserted into your PantheraHive database (hive_db).


Step Completion Summary

Step Name: hive_db → insert

Status: COMPLETED

Description: The system has successfully recorded all relevant details pertaining to the newly generated, platform-optimized video clips into your PantheraHive database. This includes links to the rendered assets, their associated metadata, and tracking information, ensuring comprehensive oversight and future performance analysis.


Operation Details: Data Insertion into hive_db

The hive_db insertion step is critical for centralizing all generated content data. It provides a robust record of each "Social Signal Automator" execution, enabling you to track, manage, and analyze the impact of your brand mention strategy.

For this specific execution, the following data points have been meticulously recorded:

  1. Workflow Execution ID: A unique identifier for this particular run of the "Social Signal Automator" workflow.
  2. Original Content Asset Details:

* original_asset_id: The unique ID of the source PantheraHive video or content asset.

* original_asset_url: The direct URL to the original, long-form content.

  3. Generated Clip Details (Per Platform): For each generated, platform-optimized clip (YouTube Shorts, LinkedIn, X/Twitter formats), a dedicated record has been inserted containing:

* platform: E.g., "YouTube Shorts", "LinkedIn", "X/Twitter".

* clip_storage_url: The secure URL where the final rendered video clip is stored.

* clip_preview_url: A direct link to preview the generated clip.

* file_size_bytes: The size of the generated video file in bytes.

* duration_seconds: The exact duration of the clip in seconds.

* vortex_hook_score: The engagement score identified by Vortex AI for the selected clip segment.

* cta_voiceover_text: The exact branded voiceover CTA embedded: "Try it free at PantheraHive.com".

* target_pseo_landing_page_url: The dedicated pSEO landing page URL that this clip is designed to link back to, facilitating referral traffic.

* suggested_title: An AI-generated or pre-defined title optimized for the respective platform.

* suggested_description: An AI-generated or pre-defined description, including the CTA and relevant keywords.

* suggested_hashtags: A list of recommended hashtags for maximum visibility.

* referral_tracking_id: A unique identifier embedded or associated with this clip for precise referral traffic tracking from the respective platform.

  4. Timestamp:

* generation_timestamp: The exact date and time (UTC) when these clips were successfully generated and recorded.

  5. Status:

* workflow_status: "Completed - Clips Generated & Recorded".
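The record layout above can be sketched against an in-memory SQLite table. The column names follow the field list, but hive_db's real schema and storage engine are unknown, and the inserted values are placeholders:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE generated_clips (
        workflow_execution_id TEXT,
        platform              TEXT,
        clip_storage_url      TEXT,
        duration_seconds      REAL,
        vortex_hook_score     REAL,
        cta_voiceover_text    TEXT,
        referral_tracking_id  TEXT
    )
""")

record = (
    "RUN_PLACEHOLDER",                 # workflow execution ID (illustrative)
    "YouTube Shorts",
    "https://example.com/clip1.mp4",   # placeholder storage URL
    15.543,
    0.92,                              # illustrative Vortex hook score
    "Try it free at PantheraHive.com",
    "TRACK_PLACEHOLDER",
)
conn.execute("INSERT INTO generated_clips VALUES (?, ?, ?, ?, ?, ?, ?)", record)

row = conn.execute("SELECT platform, cta_voiceover_text FROM generated_clips").fetchone()
print(row)  # ('YouTube Shorts', 'Try it free at PantheraHive.com')
```

Parameterized placeholders (`?`) rather than string formatting keep the insert safe regardless of what the suggested titles and descriptions contain.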


Purpose & Benefits for You

This comprehensive data insertion into hive_db provides several key benefits:

  • Centralized Record-Keeping: All your generated social signals are now cataloged in one place, making it easy to review and manage.
  • Performance Tracking Foundation: The referral_tracking_id and target_pseo_landing_page_url are crucial for monitoring the direct impact of these clips on your referral traffic and brand authority.
  • Content Inventory: You now have a searchable database of all short-form content derived from your long-form assets, including their specific platform optimizations.
  • Future Analysis & Optimization: By storing Vortex hook scores and other metadata, you build a valuable dataset for analyzing which types of hooks and content perform best, informing future "Social Signal Automator" executions.
  • Audit Trail: A clear record of each workflow execution provides an audit trail for compliance and internal reporting.

Next Steps & Actions

With the clips generated and their data recorded, here are your recommended next steps:

  1. Review Generated Clips:

* Access the clip_preview_url for each platform-optimized clip directly from the hive_db record to review its content, voiceover, and formatting.

  2. Initiate Publishing:

* Utilize the clip_storage_url, suggested_title, suggested_description, and suggested_hashtags to publish these clips to YouTube Shorts, LinkedIn, and X/Twitter.

* Ensure the target_pseo_landing_page_url is prominently linked in the clip's description or post text on each platform.

  3. Monitor Performance:

* Leverage the referral_tracking_id to track referral traffic and engagement metrics from these published clips to your pSEO landing pages.

* Monitor brand mentions on Google and other platforms to observe the impact of your distributed content.

  4. PantheraHive Analytics Integration (Coming Soon):

* In upcoming releases, this hive_db data will seamlessly integrate with your PantheraHive analytics dashboard, providing real-time insights into clip performance, brand mention growth, and referral traffic without manual data extraction.


Confirmation: Your "Social Signal Automator" workflow has successfully completed all 5 steps. The system has generated and recorded the optimized short-form video clips, setting the stage for increased brand mentions, referral traffic, and enhanced brand authority.

\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}