Site SEO Auditor
Run ID: 69cb3f9161b1021a29a874e8 · 2026-03-31 · SEO & Growth

Step 3 of 5: Gemini AI Fix Generation (batch_generate)

This document details the execution of Step 3 in your "Site SEO Auditor" workflow: leveraging Gemini AI for intelligent, automated generation of fixes for identified SEO issues. Following the comprehensive crawl and audit performed by our headless crawler (Puppeteer), this step focuses on transforming raw audit findings into actionable solutions.


1. Workflow Step Context: From Audit to Actionable Fixes

After the headless crawler thoroughly scans your website and identifies specific SEO deficiencies across a 12-point checklist (e.g., missing H1s, duplicate meta descriptions, broken links), the raw audit data is fed into this crucial step. Here, Google's Gemini AI acts as an intelligent SEO consultant, analyzing each identified issue and generating precise, implementable solutions.

The primary goal of this batch_generate step is to automate the typically time-consuming and manual process of diagnosing and prescribing fixes for SEO problems, significantly accelerating your site's optimization efforts.


2. Gemini's Role in Intelligent Fix Generation

Gemini's advanced capabilities are employed to understand the context of each SEO issue and propose the most effective remedy. Instead of simply flagging a problem, Gemini provides the exact fix, often in the form of code snippets or specific content recommendations, tailored to the detected problem and its surrounding HTML context.

Key Functions:


3. Detailed Fix Generation Process

The process for generating these fixes is robust and designed for accuracy and practicality:

3.1. Input Data to Gemini

For each identified SEO issue, Gemini receives a structured payload containing:

3.2. Gemini's Analytical Workflow

Upon receiving the input, Gemini performs the following:

  1. Issue Interpretation: Understands the specific SEO guideline being violated and the implications.
  2. Contextual Analysis: Examines the provided HTML and content to grasp the page's structure, purpose, and existing elements.
  3. Best Practice Application: Cross-references the issue with current SEO best practices and technical standards.
  4. Solution Formulation: Generates a precise, actionable fix. This might involve:

* Creating new HTML elements.

* Modifying existing attributes.

* Suggesting alternative content.

* Providing specific instructions for developers.
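The steps above can be sketched as a prompt-assembly function. This is a minimal illustration only: the field names (`issueType`, `pageUrl`, `htmlContext`) and the prompt wording are hypothetical, not the workflow's actual schema or Gemini request format.

```javascript
// Illustrative sketch: assembling a fix-generation prompt from one audit
// finding. Field names and wording are hypothetical examples.
function buildFixPrompt(issue) {
  return [
    'You are an SEO consultant. Propose a concrete fix for the issue below.',
    `Issue type: ${issue.issueType}`,
    `Page URL: ${issue.pageUrl}`,
    'Surrounding HTML context:',
    issue.htmlContext,
    'Respond with a code snippet and one-sentence implementation instructions.',
  ].join('\n');
}

const prompt = buildFixPrompt({
  issueType: 'MISSING_H1',
  pageUrl: 'https://yourwebsite.com/new-service-page',
  htmlContext: '<header><nav>…</nav></header><main><p>Welcome…</p></main>',
});
```

The structured payload keeps the issue type and HTML context together, which is what lets the model tailor the fix to the page rather than emit a generic recommendation.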

3.3. Output: Actionable Fixes

The output from this step is a collection of detailed fix recommendations, structured for easy implementation. Each fix includes:

* Code Snippet: The exact HTML, CSS, or JavaScript code to be added or modified.

* Content Suggestion: Optimized text for meta descriptions, alt attributes, etc.

* Instructional Guidance: Clear, human-readable instructions on how and where to apply the fix.


4. Examples of AI-Generated SEO Fixes

Here are illustrative examples of the type of detailed fixes Gemini provides for common SEO issues:

Example 1: Weak Meta Description

    <!-- Original: <meta name="description" content="Discover our amazing widgets, perfect for every need."> -->
    <!-- Proposed Change: -->
    <meta name="description" content="Explore the innovative Blue Widget: unparalleled performance, sleek design, and essential for modern homes. Shop now!">

Step 1 of 5: Initiating the Site SEO Audit Crawl (Puppeteer)

This document details the successful execution of Step 1: puppeteer → crawl for your Site SEO Auditor workflow. This foundational step involves systematically visiting every page on your website using a headless browser to gather comprehensive, real-time data, which is critical for all subsequent SEO audit points.


1. Introduction: The Foundation of Your SEO Audit

The initial crawl is the most critical phase of your SEO audit. It's where we simulate a real user's browser experience to discover all accessible pages and collect their rendered content. Unlike traditional static crawlers, our approach leverages a headless browser to ensure that dynamic content, JavaScript-rendered elements, and client-side interactions are fully processed and captured, providing an accurate representation of what search engines and users actually see.

2. Technology Stack: Puppeteer Headless Browser

This step utilizes Puppeteer, a Node.js library developed by Google. Puppeteer provides a high-level API to control headless (or full) Chrome or Chromium over the DevTools Protocol.

  • Headless Mode: Puppeteer operates in headless mode, meaning it runs Chrome without a visible user interface. This is essential for efficient, large-scale crawling, allowing it to navigate, interact with, and extract data from web pages just like a standard browser.
  • Why Puppeteer?: It's chosen for its unparalleled ability to:

* Execute JavaScript and render dynamic content, crucial for modern web applications.

* Capture a complete snapshot of the Document Object Model (DOM) after all scripts have run.

* Monitor network requests and responses, providing insights into page load performance.

* Simulate user interactions, ensuring accurate content discovery.

3. Detailed Crawling Process

The crawling process is designed for thoroughness, accuracy, and robustness:

3.1. Seed URL Initialization

The crawl begins with your primary domain (e.g., https://yourwebsite.com) as the initial "seed" URL. This ensures that the audit starts from the main entry point of your site.

3.2. Page Discovery and Traversal

  1. Page Load & Render: Puppeteer navigates to the seed URL, waits for all page resources to load, and executes any client-side JavaScript. This ensures the page is fully rendered as a user would experience it.
  2. Internal Link Extraction: Once loaded, Puppeteer systematically parses the rendered HTML content to identify all valid internal <a> links within your specified domain.
  3. Queue Management: Discovered URLs are added to a prioritized queue for subsequent visits, ensuring efficient traversal across your site.
  4. Scope Adherence: The crawler strictly adheres to your domain, preventing the accidental crawling of external websites and focusing solely on your site's content.

3.3. Dynamic Content Handling

A key advantage of using Puppeteer is its ability to handle dynamic content:

  • JavaScript Execution: It fully executes all JavaScript on each page, ensuring that content loaded asynchronously (e.g., via AJAX, React, Vue, Angular) is present in the DOM and available for analysis.
  • Asynchronous Content: The crawler waits for network idle conditions or specific DOM elements to appear, guaranteeing that even lazily loaded content is captured.

3.4. Resource Collection During Crawl

For each successfully crawled page, Puppeteer collects a rich set of data:

  • Full Rendered HTML/DOM: A complete snapshot of the page's HTML after all JavaScript has executed and the page is fully rendered. This is vital for analyzing all SEO elements.
  • HTTP Status Code: The server response code (e.g., 200 OK, 301 Redirect, 404 Not Found) for the initial request and the final URL after redirects.
  • Final URL: The canonical URL of the page after all redirects have been followed, providing the actual destination.
  • Network Request Log: A detailed log of all network requests made by the page (e.g., images, scripts, stylesheets, API calls), including their URLs, types, status, and timing. This data is crucial for Core Web Vitals analysis.
  • Basic Page Metadata: Initial extraction of elements like <title>, <meta name="description">, and <link rel="canonical"> tags directly from the rendered DOM.
  • Discovered Internal Links: A list of all unique internal links found on the page, used to expand the crawl and analyze internal linking structures.

3.5. Crawl Management and Robustness

  • Concurrency Control: The crawler operates with configurable concurrency settings, allowing multiple browser instances or tabs to process pages in parallel, significantly speeding up the audit while preventing server overload.
  • Rate Limiting: Intelligent rate limiting is implemented to respect your server's capacity and adhere to any Crawl-delay directives specified in your robots.txt file.
  • Error Handling:

* HTTP Errors: Gracefully handles and logs pages returning 4xx (client errors) and 5xx (server errors) status codes.

* Timeouts: Configurable timeouts are in place for pages that take too long to load, preventing the crawl from getting stuck indefinitely.

* JavaScript Errors: Logs any client-side JavaScript errors encountered during page rendering, which can indicate potential issues affecting user experience or content visibility.

* Retry Mechanisms: Implements a retry logic for transient network issues or temporary server unresponsiveness.
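The retry behaviour described above can be sketched as a small wrapper: transient failures are retried with exponential backoff, and persistent failures are surfaced to the error log. The function name and defaults are illustrative, not the crawler's actual implementation.

```javascript
// Sketch: retry a flaky async operation (e.g. a page load) with exponential
// backoff before giving up and propagating the last error.
async function withRetries(fn, { attempts = 3, baseDelayMs = 250 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 250ms, 500ms, 1000ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

A crawl step would wrap its page navigation in this helper, so a single dropped connection does not mark an otherwise healthy URL as failed.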

4. Output & Deliverables of Step 1: Raw Crawled Data

The successful completion of Step 1 produces a comprehensive, structured dataset for every unique internal URL discovered on your website. This data is the raw material for the subsequent SEO audit points.

For each unique URL, the following detailed information is collected and prepared for the next processing stage:

  • url: The absolute URL of the page.
  • httpStatus: The HTTP status code received (e.g., 200, 301, 404).
  • finalUrl: The URL after all redirects have been resolved.
  • htmlContent: The complete, fully rendered HTML content of the page.
  • networkRequests: An array of objects, each representing a network request made during page load, including:

* requestUrl: The URL of the requested resource.

* resourceType: e.g., 'document', 'stylesheet', 'script', 'image'.

* statusCode: HTTP status of the request.

* timing: Detailed timing metrics (e.g., DNS lookup, TCP connect, TTFB, total duration).

  • discoveredLinks: An array of unique internal URLs found on the page.
  • crawlTimestamp: The exact timestamp when the page was successfully crawled.
  • pageMetrics: Initial raw performance metrics captured during the page load (e.g., DOMContentLoaded, Load event timings).
  • crawlErrors: Any specific errors encountered during the crawl of this page (e.g., timeout, JS console errors).
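Put together, one per-URL record from the list above looks roughly like the following. The field names come from the list; every value is made up for illustration.

```javascript
// Illustrative shape of one per-URL crawl record (values are examples only).
const crawlRecord = {
  url: 'https://yourwebsite.com/pricing',
  httpStatus: 301,
  finalUrl: 'https://yourwebsite.com/pricing/',
  htmlContent: '<!DOCTYPE html><html>…</html>',
  networkRequests: [
    {
      requestUrl: 'https://yourwebsite.com/app.js',
      resourceType: 'script',
      statusCode: 200,
      timing: { ttfbMs: 42, totalMs: 180 },
    },
  ],
  discoveredLinks: ['https://yourwebsite.com/', 'https://yourwebsite.com/contact'],
  crawlTimestamp: '2026-03-31T12:00:00.000Z',
  pageMetrics: { domContentLoadedMs: 850, loadEventMs: 1900 },
  crawlErrors: [],
};
```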

5. Transition to Next Steps

This rich dataset is now ready to be processed by the subsequent steps of the SEO Auditor workflow. The htmlContent will be parsed for specific SEO elements (meta titles, descriptions, H1s, alt tags, canonicals, Open Graph, structured data). The networkRequests and pageMetrics will be analyzed for Core Web Vitals. The discoveredLinks will be used for internal link density and broken link checks.

This thorough crawl ensures that no stone is left unturned, providing a robust foundation for identifying and fixing critical SEO issues on your site.

hive_db Output

Step 2 of 5: hive_db → Diff Generation Report

This document details the execution and output of Step 2 of the "Site SEO Auditor" workflow: hive_db → Diff Generation. This crucial step involves retrieving your site's previous and current SEO audit reports from our secure hive_db (MongoDB) and generating a comprehensive "diff" report. This diff highlights all changes, improvements, regressions, and new issues identified between the two audit runs, providing you with a clear, actionable overview of your site's SEO performance evolution.


1. Purpose of Diff Generation

The primary objective of the hive_db → Diff Generation step is to provide a historical perspective and actionable insights into your website's SEO health. Instead of just presenting a snapshot of the current state, the diff report allows you to:

  • Track Progress: Monitor the effectiveness of SEO optimizations implemented since the last audit.
  • Identify Regressions: Quickly detect any new issues or re-emergence of old problems that might negatively impact your search rankings.
  • Prioritize Fixes: Understand which issues have changed status (e.g., resolved, new, worsened) to focus your efforts effectively.
  • Measure Impact: Quantify the before-and-after effects of changes, including the application of Gemini-generated fixes.
  • Maintain SEO Health: Ensure continuous improvement and prevent silent degradation of your site's SEO performance over time.

2. Data Retrieval and Comparison Process

The diff generation process involves a meticulous comparison of two SiteAuditReport documents stored in your dedicated hive_db instance.

2.1. Data Retrieval

  1. Current Audit Report (After State): The system fetches the most recently completed SiteAuditReport from MongoDB. This report contains the audit results from the latest crawl, representing the "after" state.
  2. Previous Audit Report (Before State): The system then retrieves the SiteAuditReport immediately preceding the current one. This report serves as the "before" state for comparison.
  3. Handling First Audit: If this is the very first audit for your site, the "before" state will be empty, and the diff report will effectively serve as the initial comprehensive audit report, marking all identified issues as "new."
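The retrieval logic above amounts to taking the two most recent reports and falling back to an empty "before" state. A minimal sketch, assuming the reports arrive newest-first (as a MongoDB query like `find({ siteId }).sort({ auditDate: -1 }).limit(2)` would return them):

```javascript
// Sketch: pick the "after" and "before" reports from a newest-first array.
// On the very first audit there is no previous report, so every issue in the
// diff will be classified as "new".
function selectReportPair(reportsNewestFirst) {
  const [current, previous = null] = reportsNewestFirst;
  if (!current) throw new Error('No completed audit report found');
  return { current, previous, isFirstAudit: previous === null };
}

const pair = selectReportPair([{ auditDate: '2026-03-31' }]);
// pair.isFirstAudit is true: the diff degenerates to the initial audit report
```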

2.2. Detailed Comparison Logic

Once both reports are retrieved, a sophisticated comparison algorithm is applied:

  • Page-Level Comparison:

* New Pages: Identifies any URLs present in the "after" report that were not found in the "before" report. These new pages will undergo a full SEO audit.

* Removed Pages: Identifies URLs present in the "before" report but no longer found in the "after" report. This helps track content changes or deletions.

* Existing Pages: For pages present in both reports, a detailed, metric-by-metric comparison is performed.

  • Metric-by-Metric Analysis: For each audited page, the system compares the status and values of all 12 SEO checklist points:

* Meta Title Uniqueness: Changes in title content, length, or duplication status.

* Meta Description Uniqueness: Changes in description content, length, or duplication status.

* H1 Presence: Whether an H1 is now present/missing, or if its content has significantly changed.

* Image Alt Coverage: Improvements or regressions in the percentage of images with alt attributes.

* Internal Link Density: Changes in the number of internal links on a page.

* Canonical Tags: Detection of new, missing, incorrect, or changed canonical tags.

* Open Graph Tags: Status changes for essential OG tags (e.g., og:title, og:description, og:image).

* Core Web Vitals (LCP/CLS/FID): Improvements or degradations in performance scores, potentially crossing thresholds (e.g., "Good" to "Needs Improvement").

* Structured Data Presence: Detection of new, missing, or changed structured data (e.g., Schema.org markup).

* Mobile Viewport: Verification of correct viewport meta tag presence and configuration.

  • Issue Status Tracking: The system tracks the status of each identified issue across audits:

* Resolved: An issue present in the "before" report but no longer present in the "after" report.

* New: An issue not present in the "before" report but newly identified in the "after" report.

* Unchanged: An issue that persists in both reports with the same severity.

* Worsened/Improved: For quantifiable metrics (e.g., Core Web Vitals), changes in score that indicate a positive or negative trend.
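The resolved/new/unchanged classification above reduces to a set comparison of a page's issue keys between the two reports. A minimal sketch (worsened/improved applies to numeric metrics and would be handled separately):

```javascript
// Sketch: bucket a page's issues by comparing "before" and "after" issue lists.
function diffIssues(beforeIssues, afterIssues) {
  const before = new Set(beforeIssues);
  const after = new Set(afterIssues);
  return {
    resolved: beforeIssues.filter((issue) => !after.has(issue)),
    new: afterIssues.filter((issue) => !before.has(issue)),
    unchanged: afterIssues.filter((issue) => before.has(issue)),
  };
}

const diff = diffIssues(
  ['MISSING_H1', 'DUPLICATE_META_TITLE'],
  ['DUPLICATE_META_TITLE', 'CANONICAL_MISMATCH']
);
// resolved: MISSING_H1; new: CANONICAL_MISMATCH; unchanged: DUPLICATE_META_TITLE
```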


3. Key Information Captured in the Diff Report

The generated diff report will meticulously detail changes across the 12-point SEO checklist, providing specific insights for each:

  • Meta Titles & Descriptions:

* New duplicate titles/descriptions.

* Resolved duplicate titles/descriptions.

* Pages with titles/descriptions that are now too long/short.

* Content changes in titles/descriptions.

  • H1 Tags:

* Pages now missing an H1.

* Pages that have gained an H1.

* Multiple H1s detected on a page.

  • Image Alt Attributes:

* Pages with a decrease in alt text coverage.

* Pages with an increase in alt text coverage.

* Specific images identified with missing alt text.

  • Internal Linking:

* Pages experiencing a significant drop or increase in internal link count.

* Identification of potential orphaned pages (low internal link density).

  • Canonical Tags:

* Pages with newly missing canonical tags.

* Pages with newly incorrect or self-referencing canonical tags pointing to non-canonical URLs.

* Resolved canonical tag issues.

  • Open Graph Tags:

* Pages with newly missing or incorrectly configured essential Open Graph tags.

* Resolved Open Graph tag issues.

  • Core Web Vitals (LCP, CLS, FID):

* Pages where LCP, CLS, or FID scores have crossed performance thresholds (e.g., from "Good" to "Needs Improvement" or vice-versa).

* Specific numerical changes in these metrics.

  • Structured Data:

* Pages where structured data has been newly added or removed.

* Detection of new syntax errors or validation warnings in existing structured data.

  • Mobile Viewport:

* Pages where the mobile viewport meta tag is now missing or incorrectly configured.
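Detecting that a Core Web Vitals score has "crossed a threshold" means mapping each raw value into a band and comparing bands between audits. The limits below follow Google's published Core Web Vitals thresholds; the function name is illustrative.

```javascript
// Google's published Core Web Vitals thresholds.
const CWV_THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 }, // milliseconds
  cls: { good: 0.1, poor: 0.25 },  // unitless layout-shift score
  fid: { good: 100, poor: 300 },   // milliseconds
};

// Map a raw metric value to its performance band.
function classifyCwv(metric, value) {
  const t = CWV_THRESHOLDS[metric];
  if (value <= t.good) return 'Good';
  if (value <= t.poor) return 'Needs Improvement';
  return 'Poor';
}
```

A regression is then simply `classifyCwv('lcp', before) !== classifyCwv('lcp', after)` with the "after" band being worse, which is how a page like the homepage moving from "Good" to "Needs Improvement" gets flagged.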


4. Structure of the Generated Diff Report

The output of the hive_db → Diff Generation step will be a structured report, designed for clarity and actionability.

4.1. Executive Summary

  • Overall Performance Trend: High-level indicators (e.g., "Overall SEO health improved by X%", "Number of critical issues decreased/increased by Y").
  • Key Highlights: Quick summary of major improvements, regressions, and new critical issues.
  • Comparison Overview: Total pages audited, new pages, removed pages, total issues identified, total issues resolved, total new issues.

4.2. Detailed Changes by Category

This section will break down changes for each of the 12 SEO checklist points.

  • Example: Meta Titles

* Resolved Issues: List of URLs where duplicate or problematic titles have been fixed.

* New Issues: List of URLs with newly identified duplicate or problematic titles.

* Worsened/Improved: Pages where title length or content has changed, potentially impacting SEO.

  • Example: Core Web Vitals

* Improved Pages: URLs where LCP, CLS, or FID scores have moved into a "Good" category or shown significant positive improvement.

* Regressed Pages: URLs where LCP, CLS, or FID scores have moved into "Needs Improvement" or "Poor" categories, or shown significant negative regression.

* Unchanged Critical: Pages that continue to have poor Core Web Vitals scores.

4.3. Page-Level Change Log

For each page that has undergone a change, a specific entry will detail:

  • URL: The affected page.
  • Status: New, Removed, or Existing (with changes).
  • Specific Changes: Bulleted list of all SEO metrics that have changed for that page (e.g., "H1 missing (NEW)", "LCP improved from 3.5s to 2.1s (IMPROVED)", "Meta Description too short (UNCHANGED)").

4.4. Gemini Fixes Status Tracking

This section specifically tracks the impact of fixes previously generated by Gemini:

  • Applied & Resolved: List of issues for which Gemini provided a fix, and the issue is now marked as "Resolved" in the current audit.
  • Applied & Persisting: List of issues for which Gemini provided a fix, but the issue still persists in the current audit (requiring further investigation or refinement of the fix).
  • New Issues with Fix Suggestions: For newly identified broken elements, a link to the Gemini-generated fix suggestions will be provided here.

5. Actionable Insights and Next Steps

The generated diff report is designed to be highly actionable:

  • Prioritize New Critical Issues: Focus immediate attention on any "New Critical" issues identified, as these represent fresh problems impacting your SEO.
  • Verify Resolved Issues: Confirm that previously identified issues have indeed been fixed, especially those where Gemini provided suggestions.
  • Investigate Regressions: Dive deep into pages or metrics that have regressed to understand the root cause and prevent further decline.
  • Leverage Gemini Fixes: For any "New" broken elements, utilize the automatically generated Gemini fixes to expedite resolution.
  • Monitor Trends: Use the historical data to identify long-term trends in your site's SEO performance and adjust your strategy accordingly.

6. Example Diff Scenarios

To illustrate the utility, consider these scenarios:

  • Scenario A: Successful Optimization

* Diff Output: "Pages with improved LCP: 15 (e.g., /product-a, /category-b). All 15 pages previously flagged as 'Poor' or 'Needs Improvement' are now 'Good'. Meta description duplicates resolved: 5 (e.g., /old-blog-post-1, /old-blog-post-2)."

* Action: Validate the applied optimizations and consider replicating strategies on other pages.

  • Scenario B: New Issue Detection

* Diff Output: "New H1 missing issues: 3 pages (e.g., /new-service-page, /landing-page-v2). New duplicate meta title issues: 2 pages (e.g., /temp-promo-page, /blog/latest-article). Core Web Vitals regression: /homepage (LCP moved from Good to Needs Improvement)."

* Action: Immediately address the missing H1s and duplicate titles using Gemini's suggestions. Investigate the homepage LCP regression.

  • Scenario C: Gemini Fix Application

* Diff Output: "Gemini Fixes Applied & Resolved: 7 (e.g., alt text for images on /gallery-page, canonical tag for /old-url). Gemini Fixes Applied & Persisting: 1 (e.g., internal link density for /resource-hub - further investigation needed)."

* Action: Celebrate the resolved issues. Re-evaluate the persisting issue and consider alternative approaches or more detailed fixes.


This hive_db → Diff Generation step ensures you always have a clear, data-driven understanding of your site's SEO journey, enabling proactive management and continuous improvement.

Guidance: "Implement the provided canonical tag in the <head> section of /category/shoes?color=blue to designate https://yourwebsite.com/category/shoes as the preferred version, preventing duplicate content issues."


5. Benefits of AI-Powered Fix Generation

  • Efficiency: Dramatically reduces the time and effort required to identify and prescribe fixes for SEO issues.
  • Accuracy: Gemini's deep understanding of SEO best practices and contextual analysis leads to highly accurate and effective solutions.
  • Actionability: Provides ready-to-implement code snippets and clear instructions, minimizing guesswork for developers and content creators.
  • Scalability: Can process a large number of issues across thousands of pages without manual intervention.
  • Consistency: Ensures fixes adhere to a consistent standard across the entire website.
  • Proactive Optimization: Shifts focus from merely identifying problems to actively providing solutions for continuous improvement.

6. Integration and Storage

The generated fixes are not just ephemeral suggestions. Each fix, along with its corresponding original issue, is meticulously stored within your MongoDB SiteAuditReport. This data forms the "after" state, allowing for a comprehensive before/after diff in the final report. This persistent storage ensures that you have a complete record of all identified issues and their proposed solutions.


7. Next Steps

With the fixes now intelligently generated by Gemini, the workflow proceeds to its final stages:

  • Step 4: MongoDB → Store Report: The complete audit report, including both the raw findings and the Gemini-generated fixes, will be stored in your dedicated MongoDB instance.
  • Step 5: MongoDB → Generate Diff: A detailed before/after diff report will be generated, highlighting the specific changes recommended by Gemini and providing a clear roadmap for your SEO improvements. This final report will be available for review and implementation.

This output details Step 4 of 5 for the "Site SEO Auditor" workflow, focusing on the hive_db → upsert operation. This crucial step ensures that all collected SEO audit data, including AI-generated fixes and performance differentials, is persistently stored and organized within your MongoDB database.


Step 4 of 5: hive_db → upsert

This step is dedicated to the robust and intelligent persistence of your comprehensive SEO audit data into your designated MongoDB database. It ensures that every detail, from individual page diagnostics to AI-generated fixes and historical performance comparisons, is securely stored and readily accessible for analysis and reporting.

Purpose of This Step

The hive_db → upsert step serves several critical functions:

  • Data Persistence: Ensures all collected SEO audit data, including the 12-point checklist results for every page, Core Web Vitals, and detected issues, are permanently stored.
  • Historical Tracking: Creates a chronological record of your site's SEO performance, allowing for long-term trend analysis and tracking of improvements or regressions.
  • Diff Storage: Persists the calculated "before/after diff" at both page and overall site levels, providing immediate insight into changes between audit runs.
  • Foundation for Action: Establishes the data backbone for subsequent reporting, dashboard visualizations, and automated notification systems, enabling proactive SEO management.
  • AI Fix Storage: Stores the exact fixes generated by Gemini for broken elements, ensuring that recommended actions are documented alongside the audit results.
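An upsert keyed on the site and audit timestamp gives these properties directly. A minimal sketch, assuming the official `mongodb` Node.js driver and a `site_audit_reports` collection; the helper name and filter fields are illustrative.

```javascript
// Sketch: build the filter/update/options triple for an idempotent upsert.
// One document per (siteId, auditDate) pair preserves the full history, and
// upsert: true makes re-running the same audit safe.
function buildUpsert(report) {
  return {
    filter: { siteId: report.siteId, auditDate: report.auditDate },
    update: { $set: report },
    options: { upsert: true },
  };
}

const op = buildUpsert({
  siteId: 'www.yourdomain.com',
  auditDate: '2026-03-31T12:00:00Z',
  status: 'completed',
});
// e.g. await db.collection('site_audit_reports')
//        .updateOne(op.filter, op.update, op.options);
```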

Data Model: SiteAuditReport

The SiteAuditReport is the central document schema used to store the audit results in your MongoDB database. It is meticulously structured to capture all facets of the audit, including detailed page-level data and overall site performance metrics.

SiteAuditReport Document Structure:

  • _id: (ObjectId) MongoDB's default unique identifier for the document.
  • siteId: (String, Indexed) A unique identifier for the audited website (e.g., www.yourdomain.com).
  • auditDate: (ISODate, Indexed) The timestamp indicating when this specific audit was completed. This serves as a key component for historical tracking.
  • status: (String) The overall status of the audit run (e.g., "completed", "failed", "partial").
  • totalPagesAudited: (Number) The total count of unique pages successfully crawled and audited during this run.
  • overallScore: (Number, Optional) An aggregated, high-level score (e.g., 0-100) reflecting the overall SEO health of the site based on the audit findings.
  • pages: (Array of Objects) A detailed array containing the audit results for each individual page.

* url: (String, Indexed) The full URL of the audited page.

* statusCode: (Number) The HTTP status code returned by the page (e.g., 200, 404, 301).

* seoChecks: (Object) Comprehensive results for the 12-point SEO checklist.

* metaTitle:

* value: (String) The extracted meta title.

* length: (Number) Length of the meta title in characters.

* isUnique: (Boolean) true if the title is unique across all audited pages, false otherwise.

* issues: (Array of Strings, Optional) e.g., ["Too Long (70 chars)", "Missing Title"].

* metaDescription:

* value: (String) The extracted meta description.

* length: (Number) Length of the meta description in characters.

* isUnique: (Boolean) true if unique, false otherwise.

* issues: (Array of Strings, Optional) e.g., ["Too Short (50 chars)", "Duplicate Description"].

* h1Presence:

* exists: (Boolean) true if an H1 tag is found.

* value: (String, Optional) The content of the first H1 tag found.

* issues: (Array of Strings, Optional) e.g., ["Missing H1", "Multiple H1 Tags"].


Step 5 of 5: hive_db → conditional_update - Site SEO Audit Report Storage

This final step in the "Site SEO Auditor" workflow is dedicated to securely storing the comprehensive SEO audit results and generated fixes within your dedicated MongoDB instance (hive_db). This ensures all historical and current audit data is readily accessible, allowing for powerful trend analysis and tracking of SEO improvements over time.


Execution Summary: Site Audit Report Storage

Upon completion of the headless crawl, detailed SEO audit, and AI-powered fix generation, all collected data is compiled into a SiteAuditReport document. This document is then processed for storage or update in your MongoDB database.

Key Actions Performed:

  1. Data Consolidation: All audit findings, including page-specific metrics, identified issues, and Gemini-generated fixes, are aggregated into a structured JSON document.
  2. Conditional Update Logic: The system performs a conditional check.

* If a previous SiteAuditReport for the same site exists, the current report is compared against it to generate a detailed "before/after diff". This diff highlights specific changes, improvements, or new issues since the last audit. The existing document might be updated with new metrics, or a new document is created with a reference to the previous one and the diff.

* If no previous report exists, a new SiteAuditReport document is created.

  3. Database Ingestion: The finalized SiteAuditReport document is ingested into the site_audit_reports collection within your hive_db instance.
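The conditional branch described above can be sketched as a single decision function: attach a diff and a back-reference when a previous report exists, otherwise store a fresh report. The function and parameter names are illustrative, and the diff computation itself is elided.

```javascript
// Sketch: decide between "fresh report" and "report with diff" before storage.
function prepareReportForStorage(currentReport, previousReport, computeDiff) {
  if (!previousReport) {
    return { ...currentReport, diffFromPreviousReport: null }; // first audit
  }
  return {
    ...currentReport,
    diffFromPreviousReport: {
      previousReportId: previousReport._id, // back-reference to the prior run
      changes: computeDiff(previousReport, currentReport),
    },
  };
}

const stored = prepareReportForStorage(
  { siteUrl: 'https://www.yourwebsite.com' },
  { _id: 'abc123' },
  () => [{ metric: 'issues', type: 'FIXED', description: 'MISSING_H1 issue resolved.' }]
);
```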

Detailed Output: SiteAuditReport Structure and Content

The SiteAuditReport document stored in MongoDB is meticulously structured to provide a comprehensive, page-by-page breakdown of your site's SEO health.

Document Structure (SiteAuditReport Schema):


{
  "_id": ObjectId, // Unique identifier for the report
  "siteUrl": "https://www.yourwebsite.com", // The root URL of the audited site
  "auditDate": ISODate, // Timestamp of when the audit was completed
  "status": "completed" | "failed", // Status of the audit process
  "totalPagesCrawled": Number,
  "reportSummary": {
    "overallScore": Number, // An aggregated score based on all metrics (e.g., 0-100)
    "issuesFound": Number,
    "fixesGenerated": Number,
    "coreWebVitalsAverage": {
      "lcp": Number, // Average LCP across all pages
      "cls": Number, // Average CLS across all pages
      "fid": Number  // Average FID across all pages (or INP if available)
    },
    "metaTitleDescriptionUniqueness": {
      "uniqueCount": Number,
      "duplicateCount": Number
    },
    "h1PresenceCoverage": {
      "presentCount": Number,
      "missingCount": Number
    },
    "imageAltCoverage": {
      "coveredCount": Number,
      "missingCount": Number
    },
    // ... other aggregated summaries
  },
  "pages": [
    {
      "url": "https://www.yourwebsite.com/page-1",
      "statusCode": Number,
      "pageTitle": String,
      "metaDescription": String,
      "h1": String | null,
      "canonicalTag": String | null,
      "hasMobileViewport": Boolean,
      "coreWebVitals": {
        "lcp": Number,
        "cls": Number,
        "fid": Number
      },
      "ogTags": {
        "ogTitle": String | null,
        "ogDescription": String | null,
        "ogImage": String | null,
        // ... other Open Graph tags
      },
      "structuredDataDetected": [String], // Array of detected schema types (e.g., ["Article", "BreadcrumbList"])
      "internalLinks": {
        "count": Number,
        "anchors": [String] // List of internal link anchor texts
      },
      "issues": [
        {
          "type": "META_TITLE_TOO_LONG",
          "severity": "high" | "medium" | "low",
          "description": "Meta title exceeds 60 characters.",
          "currentValue": "Your very long meta title here...",
          "recommendedFix": "Shorten the meta title to be concise and within character limits."
        },
        {
          "type": "MISSING_H1",
          "severity": "high",
          "description": "No H1 tag found on the page.",
          "recommendedFix": "Add a descriptive H1 tag that accurately reflects the page content."
        },
        {
          "type": "MISSING_IMAGE_ALT",
          "severity": "medium",
          "description": "Image is missing an alt attribute.",
          "elementSelector": "img[src='/path/to/image.jpg']",
          "currentValue": "<img src='/path/to/image.jpg'>",
          "geminiFix": {
            "prompt": "Generate a concise alt text for an image showing a 'red sports car' on a 'scenic mountain road'.",
            "generatedText": "A sleek red sports car drives along a winding mountain road under a clear sky."
          }
        },
        // ... other issues with Gemini fixes where applicable
      ]
    },
    // ... data for other crawled pages
  ],
  "diffFromPreviousReport": {
    "previousReportId": ObjectId, // Reference to the _id of the previous report
    "changes": [
      {
        "pageUrl": "https://www.yourwebsite.com/page-1",
        "metric": "metaDescription",
        "oldValue": "Old meta description.",
        "newValue": "New, optimized meta description."
      },
      {
        "pageUrl": "https://www.yourwebsite.com/page-2",
        "metric": "issues",
        "type": "FIXED",
        "description": "MISSING_H1 issue resolved."
      },
      {
        "pageUrl": "https://www.yourwebsite.com/page-3",
        "metric": "issues",
        "type": "NEW",
        "description": "NEW_CANONICAL_MISMATCH issue detected."
      }
      // ... other detailed changes
    ]
  }
}
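The `diffFromPreviousReport` section above can be produced by comparing two consecutive reports. Below is a minimal sketch in Node.js, assuming the page and issue shapes shown in the schema; the function and variable names are illustrative, not part of the stored document:

```javascript
// Sketch: compute a change list between two SiteAuditReport documents.
// Assumes each report has a `pages` array shaped like the schema above.
function computeDiff(prevReport, currReport) {
  const changes = [];
  const prevPages = new Map(prevReport.pages.map(p => [p.url, p]));

  for (const page of currReport.pages) {
    const prev = prevPages.get(page.url);
    if (!prev) continue; // newly crawled pages could be reported separately

    // Scalar metric changes (e.g. pageTitle, metaDescription)
    for (const metric of ['pageTitle', 'metaDescription', 'h1', 'canonicalTag']) {
      if (prev[metric] !== page[metric]) {
        changes.push({ pageUrl: page.url, metric,
                       oldValue: prev[metric], newValue: page[metric] });
      }
    }

    // Issues present before but gone now -> FIXED
    const currTypes = new Set(page.issues.map(i => i.type));
    for (const issue of prev.issues) {
      if (!currTypes.has(issue.type)) {
        changes.push({ pageUrl: page.url, metric: 'issues', type: 'FIXED',
                       description: issue.type + ' issue resolved.' });
      }
    }

    // Issues absent before but present now -> NEW
    const prevTypes = new Set(prev.issues.map(i => i.type));
    for (const issue of page.issues) {
      if (!prevTypes.has(issue.type)) {
        changes.push({ pageUrl: page.url, metric: 'issues', type: 'NEW',
                       description: issue.type + ' issue detected.' });
      }
    }
  }
  return changes;
}
```

Keying pages by URL keeps the comparison O(n) and tolerant of pages appearing in a different crawl order between runs.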

Key Data Points Captured for Each Page:

  • URL & Status Code: The specific page URL and its HTTP status.
  • Page Title & Meta Description: Current values and uniqueness check.
  • H1 Presence: Whether an H1 tag exists and its content.
  • Image Alt Coverage: Identification of images missing alt attributes.
  • Internal Link Density: Count of internal links and their anchor texts.
  • Canonical Tags: Presence and correctness of the canonical URL.
  • Open Graph Tags: Presence and content of key Open Graph meta tags (e.g., og:title, og:description, og:image).
  • Core Web Vitals: Measured LCP, CLS, and FID scores for performance insights.
  • Structured Data Presence: Detection of Schema.org markup types.
  • Mobile Viewport: Verification of the <meta name="viewport"> tag for mobile responsiveness.
  • Identified Issues: A list of all detected SEO issues, each categorized by type and severity, with a human-readable description.
  • Gemini Fixes: For each broken element or issue where an automated fix is possible, the prompt sent to Gemini and its generated, actionable solution are stored.
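As one illustration of how the aggregated summary fields can be consumed, the coverage counters pair naturally with a simple percentage calculation. This is a sketch: the field names follow the schema above, but the helper name is ours:

```javascript
// Sketch: turn a summary's count pair into a coverage percentage.
// Works for any { presentCount|coveredCount, missingCount } pair.
function coveragePercent(present, missing) {
  const total = present + missing;
  return total === 0 ? 100 : Math.round((present / total) * 100);
}

// Example values in the shape of the report's summary block.
const summary = {
  h1PresenceCoverage: { presentCount: 45, missingCount: 5 },
  imageAltCoverage: { coveredCount: 180, missingCount: 20 },
};

const h1Pct = coveragePercent(summary.h1PresenceCoverage.presentCount,
                              summary.h1PresenceCoverage.missingCount);  // 90
const altPct = coveragePercent(summary.imageAltCoverage.coveredCount,
                               summary.imageAltCoverage.missingCount);   // 90
```

The zero-total guard treats a site with no applicable elements (e.g. no images at all) as fully covered rather than dividing by zero.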

Access and Automation

  • Automated Execution: This workflow runs automatically every Sunday at 2 AM, ensuring regular and consistent monitoring of your site's SEO health without manual intervention.
  • On-Demand Audits: You can trigger an audit at any time through your PantheraHive dashboard, providing immediate insights after site updates or content deployments.
  • Report Access: The stored SiteAuditReport documents can be accessed directly from your MongoDB instance for in-depth analysis or integrated into custom dashboards and reporting tools via the PantheraHive API. The platform will also provide a user-friendly interface to view these reports, including the "before/after diffs".

Customer Value and Next Steps

By storing this rich, historical data, the "Site SEO Auditor" provides immense value:

  • Historical Tracking: Monitor SEO improvements and regressions over time.
  • Data-Driven Decisions: Use concrete data to prioritize SEO efforts and validate the impact of changes.
  • Actionable Insights: Leverage Gemini's generated fixes to directly address technical SEO issues.
  • Comprehensive Overview: Gain a holistic understanding of your site's SEO performance at a granular, page-by-page level.

You can now review the latest SiteAuditReport in your MongoDB hive_db or through the PantheraHive UI to identify areas for improvement and implement the suggested fixes.
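When working through a report, a common first pass is to flatten the per-page issues into a single work list ordered by severity. A minimal sketch against the report shape shown earlier; the ordering and helper name are our choices, not part of the platform:

```javascript
// Sketch: flatten all issues in a SiteAuditReport into a
// severity-ordered work list, tagging each issue with its page URL.
const SEVERITY_ORDER = { high: 0, medium: 1, low: 2 };

function prioritizedIssues(report) {
  return report.pages
    .flatMap(page => page.issues.map(issue => ({ url: page.url, ...issue })))
    .sort((a, b) => SEVERITY_ORDER[a.severity] - SEVERITY_ORDER[b.severity]);
}
```

Running this over a fresh report puts high-severity items (missing H1s, overlong titles) at the top, so fixes with the largest expected impact land first.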
