Social Signal Automator
Run ID: 69cb5bef61b1021a29a88549 · 2026-03-31 · Distribution & Reach
PantheraHive BOS
BOS Dashboard

Workflow Step Execution: hive_db → query

Workflow Name: Social Signal Automator

Current Step: 1 of 5 - hive_db → query

Description: This step initiates the "Social Signal Automator" workflow by querying the PantheraHive internal database (hive_db) to identify and retrieve essential metadata for eligible content assets. The goal is to select source videos or long-form content that can be transformed into platform-optimized clips, building referral traffic and brand authority.


Purpose of Database Query

The primary purpose of this initial database query is to:

  1. Identify Eligible Content Assets: Locate all PantheraHive video or long-form content assets that are suitable for processing by the Social Signal Automator. This includes checking asset type, publication status, and current processing flags.
  2. Retrieve Core Metadata: Extract critical information for each identified asset, such as its unique ID, original URL, title, associated pSEO landing page, and any existing transcripts or relevant tags. This data is crucial for subsequent steps like engagement analysis (Vortex), voiceover generation (ElevenLabs), and rendering (FFmpeg).
  3. Prevent Duplicate Processing: Check the current processing status of each asset to ensure that content already processed or currently being processed by the Social Signal Automator is not redundantly re-queued, unless explicitly marked for re-processing.
  4. Inform Prioritization: Gather data points that can inform future prioritization logic, such as creation date, last modification date, or potentially existing engagement scores.

Query Parameters & Criteria

The hive_db query will target the PantheraHive_ContentAssets table (or equivalent data store) using the following conceptual parameters and criteria:

1. Target Data Source:

The query targets the PantheraHive_ContentAssets table (or the equivalent data store) within hive_db.

2. Selection Criteria:

  • asset_type is video or long_form_article_with_video.
  • The asset is published and available at a live asset_url.
  • social_signal_automator_status is not_processed, or needs_reprocessing for assets explicitly flagged for another pass.

3. Requested Fields (Output Columns):

The query will retrieve the following specific data points for each eligible content asset: asset_id, asset_title, asset_url, asset_type, transcript_url, duration_seconds, thumbnail_url, associated_pseo_landing_page_id, associated_pseo_landing_page_url, creation_date, last_modified_date, social_signal_automator_status, tags, and description_short.

Anticipated Query Results (Data Structure)

The hive_db query will return a list (or array) of content asset objects, each containing the requested fields. Below is an example of the expected data structure for a single content asset:

[
  {
    "asset_id": "PHVA-0012345",
    "asset_title": "Understanding Google's 2026 Brand Mention Algorithm",
    "asset_url": "https://pantherahive.com/videos/google-2026-brand-mentions",
    "asset_type": "video",
    "transcript_url": "https://pantherahive.com/transcripts/PHVA-0012345.vtt",
    "duration_seconds": 1200,
    "thumbnail_url": "https://pantherahive.com/thumbnails/PHVA-0012345.jpg",
    "associated_pseo_landing_page_id": "PHLP-SEO-00789",
    "associated_pseo_landing_page_url": "https://pantherahive.com/seo/brand-authority-guide",
    "creation_date": "2025-10-26T10:00:00Z",
    "last_modified_date": "2025-11-15T14:30:00Z",
    "social_signal_automator_status": "not_processed",
    "tags": ["SEO", "Google Algorithm", "Brand Authority", "Digital Marketing"],
    "description_short": "A deep dive into how Google will prioritize brand mentions as a trust signal in 2026 and strategies for leveraging this."
  },
  {
    "asset_id": "PHLA-0067890",
    "asset_title": "The Future of AI in Content Creation: A PantheraHive Perspective",
    "asset_url": "https://pantherahive.com/articles/ai-content-creation-future",
    "asset_type": "long_form_article_with_video",
    "transcript_url": "https://pantherahive.com/transcripts/PHLA-0067890.vtt",
    "duration_seconds": 900,
    "thumbnail_url": "https://pantherahive.com/thumbnails/PHLA-0067890.jpg",
    "associated_pseo_landing_page_id": "PHLP-AI-00112",
    "associated_pseo_landing_page_url": "https://pantherahive.com/ai-solutions/content-automation",
    "creation_date": "2025-09-01T08:00:00Z",
    "last_modified_date": "2025-09-01T08:00:00Z",
    "social_signal_automator_status": "needs_reprocessing",
    "tags": ["AI", "Content Marketing", "Automation", "Future Tech"],
    "description_short": "Exploring how artificial intelligence will revolutionize content creation workflows and PantheraHive's role in this evolution."
  }
]
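For illustration, a parameterized selection query producing records of this shape might be sketched as follows. The table name, column names, and LIMIT value mirror the example above and are assumptions about the actual hive_db schema, not production values:

```python
# Hypothetical sketch: build the eligible-asset selection query for hive_db.
# Table and column names mirror the example output; adjust to the real schema.

ELIGIBLE_STATUSES = ("not_processed", "needs_reprocessing")

def build_selection_query(limit: int = 50) -> tuple[str, tuple]:
    """Return a parameterized SQL string plus its status parameters."""
    sql = """
        SELECT asset_id, asset_title, asset_url, asset_type,
               transcript_url, duration_seconds, thumbnail_url,
               associated_pseo_landing_page_id, associated_pseo_landing_page_url,
               creation_date, last_modified_date,
               social_signal_automator_status, tags, description_short
        FROM PantheraHive_ContentAssets
        WHERE asset_type IN ('video', 'long_form_article_with_video')
          AND social_signal_automator_status IN (%s, %s)
        ORDER BY creation_date DESC
        LIMIT {}
    """.format(limit)
    return sql, ELIGIBLE_STATUSES
```

Ordering by creation_date DESC reflects the workflow's default freshness-first prioritization; a tags filter or explicit asset_id list can be added for customer-specified prioritization.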

Next Steps in Workflow

Upon successful retrieval of eligible content assets and their metadata:

  1. Status Update: The social_signal_automator_status for each selected asset will be updated from not_processed or needs_reprocessing to in_progress within the hive_db. This prevents concurrent processing and tracks the asset's journey.
  2. Pass to Vortex: The list of retrieved content assets, along with their asset_url and transcript_url (if applicable), will be passed as input to Step 2: Vortex → analyze_engagement_moments. Vortex will then use this information to analyze the content and identify the 3 highest-engagement moments using its proprietary hook scoring algorithm.
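The status update can be sketched as follows, assuming assets are handled as dicts shaped like the example records above (the helper name is hypothetical):

```python
# Hypothetical sketch: mark eligible assets in_progress before handing them
# to Vortex, so concurrent runs do not re-queue the same content.

def mark_in_progress(assets: list[dict]) -> list[dict]:
    """Return only processable assets, with their status flipped to in_progress."""
    eligible = ("not_processed", "needs_reprocessing")
    selected = []
    for asset in assets:
        if asset.get("social_signal_automator_status") in eligible:
            asset["social_signal_automator_status"] = "in_progress"
            selected.append(asset)
    return selected
```

Assets already in_progress are skipped, which is what prevents duplicate processing.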

Customer Action & Recommendations

This hive_db → query step is foundational and largely automated. However, to ensure optimal performance and alignment with your marketing goals, please consider the following:

  • Content Prioritization: If you have specific content assets or categories you wish to prioritize for social signal generation, please communicate these preferences. We can adjust the query criteria (e.g., by adding tags filters or asset_id lists) to ensure these are processed first.
  • Freshness vs. Evergreen: The default query prioritizes recent content. If you have evergreen content that you believe would benefit from a renewed social push, please let us know. We can modify the publish_date filter or target specific older assets.
  • pSEO Landing Page Association: Verify that all your key video and long-form content assets are correctly associated with their respective pSEO landing pages in the PantheraHive database. This link is critical for the referral traffic and brand authority goals of this workflow.
  • Review Status: You will be able to review the list of content assets selected for processing after this step, providing an opportunity to confirm the selection before proceeding to engagement analysis.
ffmpeg Output

Step 2: Video Pre-processing and AI-Powered Clip Extraction (ffmpeg → vortex_clip_extract)

This critical step initiates the intelligent segmentation of your source video asset, leveraging advanced AI to pinpoint the most engaging moments for social amplification.


1. Purpose of This Step

The primary goal of this stage is to identify and extract the three highest-engagement moments from your chosen PantheraHive video asset. By using our proprietary Vortex AI, we move beyond manual, subjective clip selection to a data-driven approach, ensuring that the content destined for social platforms is maximally optimized to capture audience attention and drive interaction.

2. Detailed Process Breakdown

This step involves two core components working in tandem:

2.1. FFmpeg Pre-processing

  • Standardization: The raw video asset is first processed by FFmpeg, a powerful open-source multimedia framework. This ensures that the video is in a consistent, optimized format (e.g., MP4, H.264 codec, standardized frame rate and resolution) suitable for efficient AI analysis by Vortex.
  • Data Extraction: FFmpeg may also extract key data streams, such as audio tracks, video frames, and metadata, which can be fed separately or combined into Vortex for more granular analysis. This prepares the content for deep learning models.
  • Quality Assurance: This initial pass helps to normalize any variations in source video quality or encoding, guaranteeing a smooth and reliable input for the subsequent AI analysis.
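As an illustration of the standardization pass, an FFmpeg invocation along these lines could normalize an asset to H.264 MP4 at a fixed frame rate and resolution. The codec, frame rate, and height values here are assumed defaults, not the production settings:

```python
# Hypothetical sketch: build an FFmpeg command that normalizes a source video
# for Vortex analysis. Parameter values are illustrative assumptions.

def build_normalize_cmd(src: str, dst: str, fps: int = 30, height: int = 1080) -> list[str]:
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",            # H.264 video codec
        "-r", str(fps),               # standardized frame rate
        "-vf", f"scale=-2:{height}",  # fixed height, width kept even
        "-c:a", "aac",                # standardized audio codec
        "-movflags", "+faststart",    # web-friendly MP4 layout
        dst,
    ]
```

The command list can then be executed with subprocess.run, keeping the pipeline scriptable.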

2.2. Vortex AI-Powered Clip Extraction

  • Advanced Hook Scoring: Once pre-processed, the video is fed into our proprietary Vortex AI engine. Vortex employs sophisticated machine learning models, trained on vast datasets of high-performing social media content, to analyze the video across multiple dimensions:

* Visual Cues: Detection of scene changes, motion intensity, facial expressions, object tracking, and on-screen text.

* Audio Dynamics: Analysis of speech patterns, tone, pitch, emotional markers, sound effects, music shifts, and moments of silence.

* Engagement Predictors: Identification of specific patterns known to drive viewer retention, such as compelling questions, surprising statements, shifts in narrative, and clear calls to action (pre-CTA).

  • Moment Identification: Vortex assigns a "hook score" to every segment of the video, dynamically evaluating its potential to grab and hold viewer attention.
  • Top 3 Selection: Based on these scores, Vortex intelligently identifies and isolates the three distinct segments with the highest engagement potential. These segments are typically optimized for short-form content, ensuring they are concise yet impactful. The AI prioritizes clips that offer unique insights or compelling narratives from different parts of the original asset.
  • Precise Trimming: The selected moments are then precisely trimmed to their optimal start and end points, cutting out any dead air or less engaging lead-ins/outs.
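Vortex's hook-scoring models are proprietary, but the top-3 selection over already-scored segments can be sketched as follows. The (start, end, score) tuple shape and the non-overlap rule are illustrative assumptions:

```python
# Hypothetical sketch: pick the 3 highest-scoring, non-overlapping segments.
# Segments are (start_s, end_s, hook_score) tuples; scoring itself is Vortex's.

def top_segments(segments, k=3):
    chosen = []
    for seg in sorted(segments, key=lambda s: s[2], reverse=True):
        start, end, _ = seg
        # skip segments that overlap an already-chosen moment
        if all(end <= c[0] or start >= c[1] for c in chosen):
            chosen.append(seg)
        if len(chosen) == k:
            break
    return sorted(chosen)  # return in timeline order
```

The non-overlap check is what ensures the three clips come from distinct parts of the original asset.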

3. Inputs for This Step

  • Source Video Asset: A single, complete video file from your PantheraHive content library.

* Example: pantherahive-new-feature-deep-dive-2026-v2.mp4

  • Workflow Configuration: Implicit parameters defining desired clip length ranges (e.g., 15-60 seconds per clip) and the number of clips to extract (fixed at 3 for this workflow).

4. Outputs from This Step

Upon successful completion of this step, the following assets and data will be generated and passed to the next stage of the workflow:

  • Three Raw Video Clips (MP4):

* These are unformatted, high-quality video segments extracted directly from the source.

* Each clip represents one of the top 3 highest-engagement moments identified by Vortex.

* Example Output Files:

* clip_1_segment_01_product_benefit_showcase.mp4 (e.g., 28 seconds)

* clip_2_segment_02_customer_testimonial_highlight.mp4 (e.g., 32 seconds)

* clip_3_segment_03_future_vision_statement.mp4 (e.g., 25 seconds)

  • Clip Metadata (JSON/XML):

* Original Timestamps: Start and end times of each extracted clip within the original source video.

* Vortex Engagement Score: The calculated hook score for each clip, indicating its predicted engagement potential.

* Segment Description: A brief, AI-generated (if available) or inferred description of the clip's content.

* Workflow ID: Unique identifier linking these clips back to the overall "Social Signal Automator" run.
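Shaped after the fields above, a single clip's metadata record might look like this (all values are illustrative):

```python
import json

# Hypothetical example of one clip's metadata record; field names follow the
# description above, values are illustrative only.
clip_meta = {
    "workflow_id": "SSA-RUN-0001",
    "clip_file": "clip_1_segment_01_product_benefit_showcase.mp4",
    "source_start_s": 312.0,   # start within the original source video
    "source_end_s": 340.0,     # end within the original source video
    "vortex_hook_score": 0.91, # predicted engagement potential
    "segment_description": "Product benefit showcase",
}

print(json.dumps(clip_meta, indent=2))
```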

5. Benefits Delivered

  • Maximized Social Reach: By focusing on the most compelling moments, these clips are inherently more likely to capture attention, increase watch time, and encourage shares on social platforms.
  • Automated Efficiency: Eliminates the manual, time-consuming, and often subjective process of identifying highlight reels, freeing up your team's resources.
  • Data-Driven Performance: Leverages cutting-edge AI to ensure that your social content is based on predictive analytics rather than guesswork, leading to higher ROI from your video assets.
  • Consistency and Quality: Guarantees a consistent standard of high-engagement content extraction across all your PantheraHive videos.

This completes the intelligent segmentation of your video asset. The three high-engagement clips are now prepared for the next stage: the addition of a branded voiceover CTA.

elevenlabs Output

Workflow Step Execution: ElevenLabs Text-to-Speech (TTS) Generation

This document details the execution of Step 3 of 5 in the "Social Signal Automator" workflow: elevenlabs → tts. This crucial step ensures that every platform-optimized clip generated carries a consistent, high-quality, branded call-to-action (CTA) voiceover, reinforcing brand identity and driving user engagement.


1. Workflow Context & Objective

The "Social Signal Automator" workflow is designed to leverage existing PantheraHive video and content assets by transforming them into platform-optimized short-form clips. The ultimate goal is to generate Brand Mentions as a trust signal for Google in 2026, while simultaneously driving referral traffic and building brand authority.

This specific step, ElevenLabs Text-to-Speech (TTS) Generation, is responsible for creating the audio component of our branded call-to-action: "Try it free at PantheraHive.com". This consistent voiceover will be appended to each optimized clip, ensuring a clear and professional prompt for viewers across YouTube Shorts, LinkedIn, and X/Twitter.

2. Step 3: ElevenLabs Text-to-Speech (TTS) Generation

This step involves the automated conversion of our standardized brand CTA text into a high-quality, natural-sounding audio file using ElevenLabs' advanced Text-to-Speech technology.

2.1. Input & Parameters

  • Text Input for Voiceover:

* "Try it free at PantheraHive.com"

* This exact phrase is used to ensure consistency across all generated clips.

  • Voice Selection:

* A pre-selected, professional, and consistent PantheraHive brand voice has been chosen from the ElevenLabs library (or a custom cloned voice, if available) to maintain brand audio identity.

* Example (for internal tracking, not customer-facing unless specified): "PantheraHive Brand Voice - Male A" or "PantheraHive Brand Voice - Female B".

* This voice is selected for its clarity, natural intonation, and authoritative yet approachable tone, aligning with PantheraHive's brand persona.

  • Voice Settings & Optimization:

* Stability: Adjusted to ensure consistent vocal delivery without unnatural fluctuations.

* Clarity + Similarity Enhancement: Optimized to enhance the crispness and naturalness of the speech, making it sound as human-like as possible.

* Model: Utilizing the latest ElevenLabs v2 or similar high-fidelity models for superior synthesis.

2.2. Generation Process & Quality Assurance

  1. API Integration: The designated CTA text and voice parameters are programmatically sent to the ElevenLabs API.
  2. Audio Synthesis: ElevenLabs' AI model processes the text, applying the selected voice and settings to generate the audio file.
  3. Post-Processing (Automated): Basic audio normalization and minor trimming are automatically applied to ensure the output is production-ready.
  4. Internal Quality Check:

* An automated script performs a basic audio integrity check (e.g., file existence, duration, absence of silence).

* A sample of generated audio files is periodically reviewed by a human auditor to confirm adherence to brand voice standards, clarity, and overall quality.
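The API integration can be sketched roughly as below. The endpoint shape reflects the public ElevenLabs v1 REST API; the voice ID, API key, model ID, and voice-setting values are placeholders, not PantheraHive's production configuration:

```python
import json
import urllib.request

API_BASE = "https://api.elevenlabs.io/v1/text-to-speech"
CTA_TEXT = "Try it free at PantheraHive.com"

def build_tts_request(voice_id: str, api_key: str):
    """Assemble the ElevenLabs TTS request; voice_id and api_key are placeholders."""
    url = f"{API_BASE}/{voice_id}"
    headers = {"xi-api-key": api_key, "Content-Type": "application/json"}
    payload = {
        "text": CTA_TEXT,
        "model_id": "eleven_multilingual_v2",  # assumed high-fidelity v2 model
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    }
    return url, headers, payload

def synthesize(voice_id: str, api_key: str, out_path: str) -> None:
    """POST the request and write the returned audio bytes to disk."""
    url, headers, payload = build_tts_request(voice_id, api_key)
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(), headers=headers, method="POST"
    )
    with urllib.request.urlopen(req, timeout=60) as resp, open(out_path, "wb") as f:
        f.write(resp.read())  # MP3 audio bytes
```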

2.3. Output & Deliverables

The successful execution of this step yields the following critical output:

  • Primary Deliverable:

* A high-fidelity audio file (typically in .mp3 or .wav format) containing the voiceover: "Try it free at PantheraHive.com".

* This audio file is stored in a designated, accessible location for the subsequent FFmpeg rendering step.

  • Metadata & Confirmation:

* Confirmation of the voice used (e.g., "PantheraHive Brand Voice - Male A").

* Timestamp of audio generation.

* File path and naming convention for easy retrieval (e.g., PantheraHive_CTA_Voiceover_YYYYMMDD_HHMMSS.mp3).
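The naming convention above can be generated deterministically from the generation timestamp, for example:

```python
from datetime import datetime

def cta_voiceover_filename(generated_at: datetime) -> str:
    """Name a CTA audio file per the PantheraHive_CTA_Voiceover_YYYYMMDD_HHMMSS.mp3 convention."""
    return generated_at.strftime("PantheraHive_CTA_Voiceover_%Y%m%d_%H%M%S.mp3")
```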

3. Strategic Value & Impact

The ElevenLabs TTS generation step provides significant value to the Social Signal Automator workflow:

  • Unwavering Brand Consistency: Ensures every single generated clip, regardless of its original source content, concludes with the exact same professional voice and message, strengthening brand recall.
  • Enhanced Professionalism: Eliminates the need for manual voice recording, ensuring studio-quality audio with perfect intonation and clarity every time.
  • Efficiency & Scalability: Automates a critical production step, allowing for rapid generation of numerous clips without human intervention for voiceovers. This is crucial for scaling content output.
  • Clear Call-to-Action: The consistent, high-quality voiceover explicitly directs viewers to PantheraHive.com, directly contributing to referral traffic and potential conversions.
  • Google Trust Signals: By consistently linking back to PantheraHive.com and featuring a branded call-to-action, this process contributes to building brand mentions and authority, which Google tracks as trust signals.

4. Next Steps

The generated audio file containing the branded CTA is now ready for integration. In the subsequent step, this audio file will be combined with the platform-optimized video clips (derived from the highest-engagement moments) during the FFmpeg rendering process. FFmpeg will precisely append this voiceover to the end of each clip, creating the final, ready-to-publish social media assets.

ffmpeg Output

Step 4: FFmpeg Multi-Format Rendering

This step focuses on the critical transformation of your selected high-engagement video moments into platform-optimized video clips using FFmpeg. By leveraging advanced scaling and cropping techniques, we ensure each clip is perfectly formatted for YouTube Shorts (9:16), LinkedIn (1:1), and X/Twitter (16:9), maximizing visual impact and platform engagement.


Purpose of This Step

The primary objective of the ffmpeg → multi_format_render step is to:

  1. Optimize for Platform Specifications: Convert the previously identified high-engagement video segments into the exact aspect ratios and resolutions required by YouTube Shorts, LinkedIn, and X/Twitter.
  2. Maintain Visual Quality: Re-encode the video clips
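The per-platform aspect-ratio conversion described in this step can be sketched with FFmpeg crop/scale filter strings. Filter expressions and output resolutions below are illustrative assumptions; a production pipeline might pad instead of crop:

```python
# Hypothetical sketch: FFmpeg video-filter strings for the three target
# aspect ratios. Crops are centered by default; resolutions are illustrative.

PLATFORM_FILTERS = {
    # 9:16 vertical — crop a centered vertical slice, then scale to 1080x1920
    "youtube_shorts": "crop=ih*9/16:ih,scale=1080:1920",
    # 1:1 square — crop to the shorter dimension, then scale to 1080x1080
    "linkedin": "crop='min(iw,ih)':'min(iw,ih)',scale=1080:1080",
    # 16:9 landscape — source video is typically 16:9 already; just scale
    "x_twitter": "scale=1920:1080",
}

def render_cmd(src: str, platform: str, dst: str) -> list[str]:
    """Build one FFmpeg re-encode command for the given target platform."""
    return ["ffmpeg", "-i", src, "-vf", PLATFORM_FILTERS[platform],
            "-c:v", "libx264", "-c:a", "copy", dst]
```

Running render_cmd once per platform for each of the three clips yields the nine platform-optimized outputs.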
hive_db Output

Step 5 of 5: hive_db → insert - Data Persistence & Tracking

This final step in the "Social Signal Automator" workflow ensures that all generated assets, metadata, and associated tracking information are securely stored within your PantheraHive database (hive_db). This critical phase transforms the dynamic output of the previous steps into structured, accessible data, enabling comprehensive tracking, future automation, and strategic content management.


1. Data Insertion Overview

Upon successful generation of all platform-optimized clips and their associated metadata by Vortex, ElevenLabs, and FFmpeg, this step systematically inserts the following key data points into the hive_db. This creates a robust record of every clip produced, linking it back to the original content asset and the specific workflow execution.

2. Detailed Data Points Being Inserted

For each of the 9 generated social clips (3 engagement moments × 3 platforms), a comprehensive record is created. This ensures complete traceability and usability of your new content.

  • clip_id (Unique Identifier): A unique ID for each individual social clip (e.g., UUID-XYZ-moment1-shorts).
  • original_asset_id (Foreign Key): Links back to the original PantheraHive video or content asset that was processed.
  • workflow_instance_id (Foreign Key): Identifies the specific run of the "Social Signal Automator" workflow that generated these clips.
  • engagement_moment_index: Indicates which of the 3 highest-engagement moments the clip represents (e.g., 1, 2, 3).
  • platform: Specifies the target social media platform for the clip (e.g., youtube_shorts, linkedin, x_twitter).
  • clip_file_path: The secure URL or internal path to the rendered video file for the specific clip. This is where you can download or link to the final video.
  • clip_duration_seconds: The exact duration of the generated clip in seconds.
  • voiceover_text: The exact branded CTA added by ElevenLabs (e.g., "Try it free at PantheraHive.com").
  • ps_seo_landing_page_url: The URL of the matching pSEO landing page that this clip is designed to drive traffic to.
  • recommended_caption: A suggested caption for posting the clip, often including dynamic placeholders based on the original asset title and relevant keywords.
  • recommended_hashtags: A list of relevant hashtags tailored for the content and platform, to maximize discoverability.
  • thumbnail_file_path (Optional but Recommended): The URL or path to a generated thumbnail image for the clip, useful for previews.
  • status: The current status of the clip generation (e.g., generated, ready_for_upload).
  • generated_at: A timestamp indicating when the clip record was created in the database.
  • created_by_user_id: The PantheraHive user who initiated this workflow.
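Put together, one of the nine inserted rows might be assembled as follows (field names follow the list above; the helper and its values are illustrative):

```python
import uuid
from datetime import datetime, timezone

def build_clip_record(original_asset_id, workflow_instance_id, moment_index,
                      platform, clip_file_path, duration_s, landing_page_url,
                      user_id):
    """Assemble one hive_db clip record; field names follow the list above."""
    return {
        "clip_id": f"{uuid.uuid4()}-moment{moment_index}-{platform}",
        "original_asset_id": original_asset_id,
        "workflow_instance_id": workflow_instance_id,
        "engagement_moment_index": moment_index,
        "platform": platform,
        "clip_file_path": clip_file_path,
        "clip_duration_seconds": duration_s,
        "voiceover_text": "Try it free at PantheraHive.com",
        "ps_seo_landing_page_url": landing_page_url,
        "status": "generated",
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "created_by_user_id": user_id,
    }
```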

3. Strategic Importance & Benefits

Storing this data in hive_db is fundamental for maximizing the value of your "Social Signal Automator" workflow:

  • Centralized Content Repository: All your generated social media assets are cataloged in one place, making them easy to find, manage, and repurpose.
  • Enhanced Brand Authority & SEO: By meticulously tracking which clips link to which pSEO pages, you build a clear audit trail of your brand mention strategy, directly supporting Google's 2026 trust signals.
  • Performance Tracking: This data forms the baseline for future analytics. You can track views, clicks, and conversions for each clip and pSEO page.
  • Content Re-use & Evergreen Strategy: Easily identify and redeploy high-performing clips or adapt existing content for new campaigns.
  • Audit Trail & Compliance: Maintains a complete record of all automated content generation, crucial for internal audits and content governance.
  • Foundation for Future Automation: This structured data is essential for subsequent automated steps, such as scheduling posts via PantheraHive's social media scheduler or integrating with third-party publishing tools.

4. Actionable Outcomes for You

With this data now securely stored in your hive_db, you can:

  • Access Generated Assets: Retrieve the clip_file_path for each of the 9 clips, allowing you to download the videos directly or link to them for publishing.
  • Review & Refine Content: Examine the recommended_caption and recommended_hashtags to ensure they align with your current marketing strategy, making any necessary adjustments before publishing.
  • Prepare for Publishing: Use the ps_seo_landing_page_url to ensure your social media posts correctly link back to the intended landing pages, driving referral traffic and building brand authority.
  • Monitor & Analyze Performance: Integrate this data with your analytics dashboards to track the performance of individual clips, platforms, and overall brand mention strategy.
  • Automate Publishing (Next Steps): This structured data is perfectly poised for integration with PantheraHive's content calendar or social media publishing tools, enabling automated scheduling and deployment of your newly created clips.

This completes the "Social Signal Automator" workflow. You now have a comprehensive set of platform-optimized social media clips, complete with branded CTAs and pSEO landing page links, all meticulously cataloged in your hive_db for strategic deployment and analysis.

if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}