Site SEO Auditor
Run ID: 69cba21661b1021a29a8ae74 | 2026-03-31 | SEO & Growth

Step 3 of 5: Gemini AI Fix Generation (batch_generate)

Workflow Description: A headless crawler visits every page on your site and audits it against a 12-point SEO checklist. Broken elements get sent to Gemini, which generates the exact fix.

This document details the output from the "gemini → batch_generate" step, where our AI-powered engine analyzes identified SEO issues and provides precise, actionable solutions.


1. Introduction to Gemini AI Fix Generation

Following the comprehensive site crawl and audit, our system has identified specific SEO discrepancies and areas for improvement across your website. In this crucial step, these identified "broken elements" are systematically fed into our advanced Gemini AI model. Gemini then leverages its deep understanding of web standards, SEO best practices, and code generation capabilities to produce exact, ready-to-implement fixes.

The goal of this phase is to move beyond mere identification of problems to providing concrete, developer-ready solutions, significantly streamlining your optimization efforts.

2. Process Overview: From Issue to Solution

  1. Issue Aggregation: All detected SEO issues from the crawler (e.g., missing H1s, duplicate meta descriptions, absent alt text, suboptimal Core Web Vitals) are collected and categorized.
  2. Contextualization: For each issue, Gemini receives the relevant page URL, the specific HTML snippet or content surrounding the issue, and the nature of the problem.
  3. AI Analysis & Generation: Gemini analyzes the context and problem, then generates the most appropriate and compliant fix. This can range from HTML code snippets, content suggestions, configuration recommendations, to CSS/JS optimization advice.
  4. Structured Output: The generated fixes are then structured and presented clearly, indicating the affected URL, the specific issue, the proposed solution, and an explanation of its impact.
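The contextualization payload in step 2 can be sketched as a small builder function. This is an illustrative sketch only: the field names (`pageUrl`, `issueType`, `htmlSnippet`) and the prompt wording are our assumptions, not the workflow's actual internal schema.

```javascript
// Hypothetical sketch of the per-issue payload sent to Gemini.
// Field names and prompt text are illustrative assumptions.
function buildFixRequest(issue) {
  return {
    pageUrl: issue.pageUrl,         // where the problem was found
    issueType: issue.issueType,     // e.g. "missing_h1"
    htmlSnippet: issue.htmlSnippet, // surrounding markup for context
    prompt:
      `Page ${issue.pageUrl} fails the check "${issue.issueType}". ` +
      `Given this HTML context: ${issue.htmlSnippet} ` +
      `Generate a minimal, standards-compliant fix.`,
  };
}

const request = buildFixRequest({
  pageUrl: "/products/widget",
  issueType: "missing_h1",
  htmlSnippet: "<main><p>Widget details...</p></main>",
});
```

The structured response (step 4) then pairs each such request with the generated fix, affected URL, and rationale.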

3. Gemini-Generated Fixes: Detailed Examples

Below are examples of the detailed, actionable fixes generated by Gemini for various common SEO issues. Each example includes the identified problem, Gemini's proposed solution (often with code snippets), and an explanation of the fix.


Example 1: Hero Image LCP Optimization (Core Web Vitals)

    <!-- Original hero image tag (before fix): -->
    <!-- <img src="/images/hero-banner-full.jpg" alt="Homepage hero banner"> -->

    <!-- Gemini's generated fix for LCP optimization: -->
    <img
        src="/images/hero-banner-full.webp"
        alt="Homepage hero banner"
        loading="eager"
        fetchpriority="high"
        width="1920"
        height="1080"
        srcset="/images/hero-banner-full-small.webp 600w, /images/hero-banner-full-medium.webp 1200w, /images/hero-banner-full.webp 1920w"
        sizes="(max-width: 600px) 600px, (max-width: 1200px) 1200px, 1920px"
    >
    <!-- Preload hint (place in <head>); imagesrcset/imagesizes keep the
         preload consistent with the responsive srcset above: -->
    <link
        rel="preload"
        as="image"
        href="/images/hero-banner-full.webp"
        imagesrcset="/images/hero-banner-full-small.webp 600w, /images/hero-banner-full-medium.webp 1200w, /images/hero-banner-full.webp 1920w"
        imagesizes="(max-width: 600px) 600px, (max-width: 1200px) 1200px, 1920px"
    >
    

Step 1 of 5: Site Crawl & Initial Data Collection (Puppeteer)

This document details the execution of the first crucial step in your Site SEO Auditor workflow: the comprehensive crawl of your website using Puppeteer. This phase is designed to simulate a real user's browser experience, ensuring an accurate and thorough collection of all publicly accessible pages and their associated raw data, forming the foundation for subsequent SEO analysis.


Objective

The primary objective of this step is to systematically visit and render every discoverable page on your specified website, collecting essential raw data that will be processed in the subsequent SEO audit steps. This process ensures that the audit is based on the actual content and structure experienced by users and search engine crawlers.


Process Details: Comprehensive Site Crawl

Our headless browser, powered by Puppeteer, executes a meticulous crawl using the following methodology:

1. Headless Browser Initialization & Configuration

  • Browser Emulation: A headless instance of Google Chrome is launched, configured to mimic a standard desktop browser environment. This ensures that JavaScript is executed, dynamic content is rendered, and the page's final Document Object Model (DOM) is available for analysis, just as a user or a modern search engine bot would perceive it.
  • Viewport Configuration: The browser is configured with a default viewport size (e.g., 1920x1080) to ensure consistent rendering and prevent issues related to responsive layouts during the initial data capture. (Note: Mobile viewport checks occur in a later audit step.)
  • Rate Limiting & Concurrency: To avoid overwhelming your server and to ensure a smooth crawl, the system implements intelligent rate limiting and manages concurrent page requests. This balances crawl speed with server load considerations.
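As a rough illustration of the rate-limiting idea (the actual limiter and its thresholds are internal to the crawler; this fixed-window counter is our own sketch):

```javascript
// Fixed-window rate limiter sketch: allow at most maxPerWindow requests
// per windowMs. Timestamps are passed in explicitly to keep it testable.
function createRateLimiter(maxPerWindow, windowMs = 1000) {
  let windowStart = -Infinity;
  let count = 0;
  return function allow(nowMs) {
    if (nowMs - windowStart >= windowMs) {
      windowStart = nowMs; // start a fresh window
      count = 0;
    }
    if (count < maxPerWindow) {
      count++;
      return true;  // request may proceed
    }
    return false;   // over budget: caller should delay this request
  };
}

const allow = createRateLimiter(2, 1000);
const decisions = [allow(0), allow(100), allow(200), allow(1200)];
// decisions → [true, true, false, true]
```

The third request falls inside a full window and is deferred; the fourth arrives after the window rolls over and proceeds.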

2. Page Discovery & Navigation

  • Starting Point: The crawl initiates from your site's primary domain (e.g., https://www.yourwebsite.com/).
  • Internal Link Following: Puppeteer navigates the site by identifying and following all valid internal <a> (anchor) links within the rendered HTML of each visited page. This ensures comprehensive discovery of all interconnected pages.
  • Respecting robots.txt: The crawler adheres strictly to the directives specified in your robots.txt file, ensuring that no pages or sections explicitly disallowed for crawling are accessed. This respects your site's crawl policies.
  • Duplicate URL Handling: A robust mechanism is in place to track visited URLs and prevent redundant processing of the same page, optimizing crawl efficiency.
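The discovery and duplicate-handling bookkeeping described above can be sketched in a few lines. This is a simplified illustration (the normalization rules and frontier structure are our assumptions, not the production crawler's code):

```javascript
// Crawl-frontier sketch: normalize URLs so trivial variants are not crawled
// twice, keep only same-origin (internal) links, and track visited pages.
function normalizeUrl(href, base) {
  const u = new URL(href, base);
  u.hash = ""; // #fragments point at the same page
  if (u.pathname.length > 1 && u.pathname.endsWith("/")) {
    u.pathname = u.pathname.slice(0, -1); // treat /a/ and /a as one URL
  }
  return u.toString();
}

const visited = new Set();
const queue = [];

function enqueue(href, base) {
  let url;
  try { url = normalizeUrl(href, base); } catch { return false; }
  if (new URL(url).origin !== new URL(base).origin) return false; // internal only
  if (visited.has(url)) return false;                             // duplicate URL
  visited.add(url);
  queue.push(url);
  return true;
}
```

A robots.txt check would sit just before the `visited` test in a fuller version, rejecting disallowed paths.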

3. Data Extraction per Page

For each successfully crawled page, the following critical data points are extracted:

  • Page URL: The canonical URL of the page.
  • Full HTML Content: The complete, rendered HTML of the page, including any content dynamically generated by JavaScript. This is crucial for analyzing meta tags, headings, image attributes, and other on-page elements.
  • Initial Page Load Metrics: Basic performance indicators captured during the page load, such as:

* Time to First Byte (TTFB): Measures the responsiveness of your web server.

* DOM Content Loaded: Indicates when the initial HTML document has been completely loaded and parsed.

* Load Event Fired: Signifies when the page and all its dependent resources (stylesheets, images, etc.) have finished loading.

  • Response Status Code: The HTTP status code returned by the server (e.g., 200 OK, 301 Redirect, 404 Not Found). This helps identify broken links or redirects.
  • Internal Links Discovered: A list of all internal links found on the page, which are then added to the queue for subsequent crawling.

4. Robustness & Error Handling

  • Timeout Management: Pages that fail to load within a predefined timeout period are flagged, and the crawler moves on to prevent indefinite hangs.
  • Resource Handling: The crawler is configured to handle various page resources (images, CSS, JS) efficiently, ensuring they load correctly to provide an accurate rendered view.
  • Retry Mechanisms: Temporary network issues or server glitches are managed with intelligent retry logic to ensure maximum page coverage.
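The retry logic can be sketched as an exponential-backoff schedule plus a retry loop. The delay values and retry counts below are illustrative assumptions, not the product's actual configuration:

```javascript
// Backoff schedule: e.g. maxRetries = 3 → [500, 1000, 2000] ms (doubling).
function backoffDelays(maxRetries, baseMs = 500) {
  return Array.from({ length: maxRetries }, (_, i) => baseMs * 2 ** i);
}

// Retry loop around an arbitrary page-fetching function.
async function fetchWithRetry(fetchPage, url, maxRetries = 3) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fetchPage(url); // success: hand the page data back
    } catch (err) {
      lastError = err;             // transient failure: try again
    }
  }
  throw lastError; // exhausted: the page is flagged in the report instead
}
```

A real implementation would also await the corresponding `backoffDelays` entry between attempts and distinguish retryable errors (timeouts, 5xx) from permanent ones (404).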

Expected Output of this Step

Upon completion of the crawl, the system will have generated a comprehensive dataset containing the raw information for every discovered page. This output is then passed to the next stage of the workflow for in-depth SEO analysis.

  • Raw Page Data Collection: A structured collection of data for each unique URL discovered, including:

* url: The absolute URL of the page.

* status_code: The HTTP status code (e.g., 200, 301, 404).

* html_content: The full HTML source of the rendered page.

* initial_load_metrics: An object containing TTFB, DOMContentLoaded, and Load Event times.

* discovered_internal_links: An array of new internal URLs found on this page.

* timestamp: The time the page was crawled.

This raw data is the essential input for the subsequent SEO audit steps, providing the unadulterated content against which all 12 SEO checklist items will be evaluated.


Next Steps

The collected raw page data will now be fed into Step 2: SEO Data Extraction & Analysis (Gemini). In this next phase, the HTML content of each page will be parsed, and a dedicated AI model (Gemini) will meticulously extract and analyze the specific SEO elements (meta tags, H1s, alt text, etc.) required for your comprehensive audit.

hive_db Output

Site SEO Auditor: Step 2 of 5 - Comprehensive Audit Diff Generation

This document details the completion of Step 2: hive_db → diff for your Site SEO Auditor workflow. This crucial step involves generating a comprehensive "before-and-after" difference report by comparing your latest SEO audit results with the previously stored audit data in our hive_db (MongoDB).


1. Step Overview: hive_db → diff

Purpose: The primary goal of this step is to provide a clear, actionable comparison between your site's current SEO performance and its previous state. By identifying changes, improvements, and regressions, we enable a proactive approach to SEO maintenance and optimization.

Mechanism:

  1. Data Retrieval: The system fetches the most recent SiteAuditReport from MongoDB, which was generated in the previous step by the headless crawler.
  2. Historical Comparison: It then retrieves the immediately preceding SiteAuditReport for your domain from MongoDB.
  3. Diff Generation: A sophisticated comparison algorithm is applied to both audit reports. This algorithm meticulously analyzes each of the 12 SEO checklist points across all audited pages to identify discrepancies.
  4. Report Structuring: The identified differences are then structured into a detailed, human-readable "diff" report, categorizing changes for easy interpretation.
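At its core, steps 3 and 4 reduce to comparing two sets of issue keys. The sketch below shows that core idea only; the real comparison is richer (metric values, Core Web Vitals deltas), and the key format is our assumption:

```javascript
// Represent each audit as a set of "pageUrl|metric" issue keys and diff them.
function diffAudits(previousIssues, currentIssues) {
  const key = (i) => `${i.pageUrl}|${i.metric}`;
  const prev = new Set(previousIssues.map(key));
  const curr = new Set(currentIssues.map(key));
  return {
    newIssues: currentIssues.filter((i) => !prev.has(key(i))),       // regressions
    resolvedIssues: previousIssues.filter((i) => !curr.has(key(i))), // fixes
  };
}

const diff = diffAudits(
  [{ pageUrl: "/old-product", metric: "metaDescription" },
   { pageUrl: "/pricing", metric: "h1Presence" }],
  [{ pageUrl: "/pricing", metric: "h1Presence" },
   { pageUrl: "/new-blog-post", metric: "h1Presence" }]
);
// diff.newIssues      → [{ pageUrl: "/new-blog-post", metric: "h1Presence" }]
// diff.resolvedIssues → [{ pageUrl: "/old-product", metric: "metaDescription" }]
```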

2. Key Components of the Diff Report

The generated diff report provides a granular comparison across all pages and the 12-point SEO checklist, highlighting:

  • Overall Site Health Summary: A high-level overview indicating the net change in critical issues (e.g., "5 new issues detected, 12 issues resolved").
  • New Issues Introduced: Specific pages and the exact SEO checklist items that are now failing or exhibiting suboptimal performance, which were not present in the previous audit.

* Example: Page /blog/new-post now has a duplicate meta title.

  • Resolved Issues: Pages and SEO checklist items that were previously flagged as problematic but have now been successfully rectified.

* Example: Page /products/item-x previously had missing image alt tags, which are now all present.

  • Metric Changes (Improvements/Regressions): Quantitative shifts in performance metrics.

* Core Web Vitals: Pages where LCP, CLS, or FID scores have improved or deteriorated (e.g., LCP for /homepage went from 2.8s to 3.5s).

* Internal Link Density: Pages where the number of internal links significantly increased or decreased.

  • Content/Structure Changes:

* Meta Title/Description Uniqueness: Identification of new duplicate titles/descriptions or resolution of existing ones.

* H1 Presence: Pages where an H1 was added or removed.

* Image Alt Coverage: Pages with newly missing alt attributes or pages where alt attributes were added.

* Canonical Tags: Pages with newly missing, incorrect, or newly correctly implemented canonical tags.

* Open Graph Tags: Pages with new issues or fixes related to Open Graph metadata.

* Structured Data Presence: Pages where structured data was added or removed.

* Mobile Viewport: Pages that gained or lost proper mobile viewport configuration.

  • Page-Level Details: For each affected page, a specific breakdown of what changed for which SEO element.

3. Actionable Insights & Next Steps

This comprehensive diff report provides immediate actionable insights:

  • Prioritize Fixes: Quickly identify new regressions that require immediate attention.
  • Validate Optimizations: Confirm that previous SEO efforts have successfully resolved identified issues.
  • Monitor Trends: Understand the overall trajectory of your site's SEO health over time.
  • Identify Root Causes: Pinpoint potential areas of your content management or development process that might be introducing new issues.

The output of this step is crucial for the subsequent steps in the workflow:

  • Step 3 (Gemini → Fix): Any newly identified broken elements from this diff report will be automatically fed into Gemini to generate precise, actionable fixes.
  • Step 4 (Database → Store): This detailed diff, along with the full current audit report, will be stored in MongoDB, maintaining a complete historical record for future comparisons.

4. Sample Diff Report Structure (Illustrative)


## Site SEO Audit Diff Report: [Your Domain] - [Current Date] vs. [Previous Audit Date]

### Overall Summary

*   **Total Issues Detected (Current Audit):** 85
*   **Total Issues Detected (Previous Audit):** 92
*   **Net Change in Issues:** -7 (Improvement)

*   **New Issues Introduced:** 3
*   **Issues Resolved:** 10
*   **Metrics Improved:** 5 pages (e.g., LCP)
*   **Metrics Regressed:** 1 page (e.g., CLS)

### New Issues Detected (Current Audit Only)

*   **Page: `/new-product-launch`**
    *   **Issue:** Missing H1 Tag
    *   **Issue:** Missing Open Graph `og:image`
*   **Page: `/blog/latest-news`**
    *   **Issue:** Duplicate Meta Title (identical to `/blog/archive`)

### Issues Resolved (Fixed Since Last Audit)

*   **Page: `/about-us`**
    *   **Resolved:** All Images Now Have Alt Attributes
    *   **Resolved:** Correct Canonical Tag Implemented
*   **Page: `/contact`**
    *   **Resolved:** LCP Score Improved from 3.8s to 2.1s
*   **Page: `/privacy-policy`**
    *   **Resolved:** Structured Data (Organization Schema) Added

### Metric Changes

*   **Core Web Vitals - Improvements:**
    *   `/homepage`: LCP improved from 2.5s to 1.8s
    *   `/category/electronics`: CLS improved from 0.15 to 0.08
*   **Core Web Vitals - Regressions:**
    *   `/product/item-z`: FID regressed from 50ms to 120ms (Needs Investigation)

*   **Internal Link Density:**
    *   `/resources`: Internal links increased from 15 to 28 (Positive Change)
    *   `/old-blog-post`: Internal links decreased from 10 to 3 (Potential Issue)

### Detailed Page-Level Changes

**Page: `/products/featured-item`**
*   **Previous Status:** H1 Present, All Images Alted, LCP: 2.2s
*   **Current Status:** H1 Present, **1 Image Alt Missing**, LCP: 2.2s
*   **Change:** New Issue - Missing Alt Tag for image `product-image-id-xyz.jpg`

**Page: `/blog/guide-to-seo`**
*   **Previous Status:** Missing Canonical Tag, No Structured Data
*   **Current Status:** Canonical Tag `https://yourdomain.com/blog/guide-to-seo` Present, Article Schema Structured Data Present
*   **Change:** Resolved Issue - Canonical Tag Implemented. New Feature - Structured Data Added.

---

This detailed diff report ensures that you are always informed about the dynamic changes in your site's SEO landscape, enabling timely interventions and continuous improvement.
  • Explanation & Rationale (Example 1, hero image LCP fix): Gemini identified the hero image as a key LCP element. The fix involves multiple optimizations:

* Image Format: Suggests converting to .webp for better compression.

* loading="eager" & fetchpriority="high": Explicitly tells the browser to load this critical image immediately and with high priority.

* width & height: Provides intrinsic dimensions to prevent layout shifts.

* srcset & sizes: Implements responsive images, serving appropriately sized images for different viewport widths, reducing unnecessary data transfer.

* preload: Instructs the browser to fetch the image even earlier in the rendering process.

These combined efforts significantly improve the LCP metric, enhancing perceived page load speed.


4. Actionable Recommendations

  • Review and Prioritize: The full audit report (to be stored in MongoDB) will contain all identified issues and Gemini's proposed fixes. We recommend reviewing these fixes, prioritizing them based on their impact and ease of implementation.
  • Developer Implementation: The generated code snippets and recommendations are designed to be directly actionable by your development team. They can be integrated into your website's codebase.
  • Staging Environment Testing: Always test the implemented fixes in a staging environment before deploying to production to ensure no regressions or unintended side effects.
  • Continuous Monitoring: Our automated weekly audits will track the effectiveness of these fixes and identify any new issues that may arise, providing a "before/after" diff in subsequent reports.

5. Next Steps

The output from this "gemini → batch_generate" step (all identified issues with their corresponding Gemini-generated fixes) will now be:

  1. Stored in MongoDB: Integrated into the SiteAuditReport document for your site, providing a comprehensive record of issues and their proposed solutions.
  2. Used for Diffing: In subsequent audits, this baseline of fixed issues will be used to generate a "before/after" diff, clearly showing progress and any new regressions.
  3. Presented in Final Report: The detailed fixes will be a core component of your final Site Audit Report, accessible on demand and after every scheduled Sunday audit.
hive_db Output

Step 4 of 5: Data Persistence - SiteAuditReport Upsert

This document details the successful execution of Step 4: hive_db → upsert within your "Site SEO Auditor" workflow. This crucial step ensures that all collected audit data, including detailed findings, AI-generated fixes, and historical comparisons, are securely and persistently stored in your dedicated database.


1. Purpose of this Step

The hive_db → upsert operation is responsible for taking the comprehensive SiteAuditReport generated by the headless crawler and AI analysis, and storing it within your PantheraHive database. "Upsert" intelligently handles data storage:

  • Update if Exists: If an audit report for the same site and audit run (e.g., a re-run of an on-demand audit or an update to a scheduled report) already exists, the system will update the existing record with the latest information.
  • Insert if Not Exists: If this is a new audit report (e.g., the first scheduled audit or a new on-demand request), a new record will be created in the database.
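The two branches can be sketched against a plain in-memory stand-in for the collection. The match key (`siteUrl` plus `auditType`) is our assumption about how reports are identified; in MongoDB the equivalent is `updateOne(filter, { $set: report }, { upsert: true })`:

```javascript
// Upsert semantics, sketched over an array standing in for the collection.
function upsertReport(collection, report) {
  const idx = collection.findIndex(
    (d) => d.siteUrl === report.siteUrl && d.auditType === report.auditType
  );
  if (idx >= 0) {
    collection[idx] = { ...collection[idx], ...report }; // update if exists
    return "updated";
  }
  collection.push(report);                               // insert if not exists
  return "inserted";
}

const reports = [];
const first = upsertReport(reports, {
  siteUrl: "https://www.yourwebsite.com", auditType: "scheduled", totalIssuesFound: 35,
});
const second = upsertReport(reports, {
  siteUrl: "https://www.yourwebsite.com", auditType: "scheduled", totalIssuesFound: 28,
});
// first → "inserted", second → "updated"; one document remains.
```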

This mechanism is vital for maintaining a complete, evolving history of your site's SEO performance.

2. Data Model: SiteAuditReport Structure

The following detailed structure represents the SiteAuditReport document that has been successfully upserted into your MongoDB instance. Each field is designed to provide actionable insights and track changes over time.


{
  "_id": "65e0a7b2c3d4e5f6a7b8c9d0", // Unique identifier for this audit report
  "siteUrl": "https://www.yourwebsite.com", // The root URL of the audited site
  "auditTimestamp": "2024-02-29T02:00:00.000Z", // UTC timestamp of when the audit was completed
  "auditType": "scheduled", // "scheduled" or "on-demand"
  "pagesAuditedCount": 150, // Total number of unique pages crawled and audited
  "overallStatus": "Needs Improvement", // Overall status: "Pass", "Fail", "Needs Improvement"
  "auditSummary": {
    "totalIssuesFound": 35,
    "criticalIssuesFound": 7,
    "pagesWithIssues": 28,
    "pagesWithGeminiFixes": 15
  },
  "pageReports": [ // Array of detailed reports for each audited page
    {
      "pageUrl": "https://www.yourwebsite.com/product/example-product",
      "issueCount": 3,
      "criticalIssueCount": 1,
      "seoMetrics": {
        "metaTitle": {
          "status": "fail",
          "value": "Example Product",
          "issueDetails": "Meta title is too short (15 chars). Recommended: 50-60 chars.",
          "geminiFix": "Consider updating the meta title to: 'Example Product Name - Buy Online Today | Your Brand' (52 chars)."
        },
        "metaDescription": {
          "status": "pass",
          "value": "Discover our amazing example product...",
          "issueDetails": null,
          "geminiFix": null
        },
        "h1Presence": {
          "status": "pass",
          "value": "Example Product Name",
          "issueDetails": null,
          "geminiFix": null
        },
        "imageAltCoverage": {
          "status": "fail",
          "details": [
            {
              "src": "/images/product-hero.jpg",
              "alt": "",
              "issue": "Missing alt attribute for critical image."
            },
            {
              "src": "/images/logo.png",
              "alt": "Your Brand Logo",
              "issue": null
            }
          ],
          "geminiFix": "Add descriptive alt text to '/images/product-hero.jpg', e.g., 'Close-up of Example Product in blue'."
        },
        "internalLinkDensity": {
          "status": "pass",
          "count": 12,
          "issueDetails": null
        },
        "canonicalTag": {
          "status": "pass",
          "value": "https://www.yourwebsite.com/product/example-product",
          "issueDetails": null,
          "geminiFix": null
        },
        "openGraphTags": {
          "status": "fail",
          "details": {
            "og:title": "Example Product",
            "og:description": "Discover our amazing example product...",
            "og:image": null // Missing
          },
          "issueDetails": "Missing `og:image` tag, which can impact social sharing previews.",
          "geminiFix": "Add an `og:image` tag pointing to a high-quality image (e.g., 'https://www.yourwebsite.com/images/og-product.jpg')."
        },
        "coreWebVitals": {
          "LCP": { "value": 3.2, "status": "fail" }, // Largest Contentful Paint (seconds)
          "CLS": { "value": 0.15, "status": "fail" }, // Cumulative Layout Shift
          "FID": { "value": 55, "status": "pass" }, // First Input Delay (milliseconds)
          "issueDetails": "LCP and CLS values are above recommended thresholds, indicating poor page load performance and layout instability.",
          "geminiFix": "Optimize and compress the hero image and load it eagerly (reserve lazy loading for below-the-fold images) to improve LCP. Investigate layout shifts caused by dynamic content loading, and set explicit image dimensions, to reduce CLS."
        },
        "structuredDataPresence": {
          "status": "pass",
          "typesFound": ["Product", "BreadcrumbList"],
          "issueDetails": null,
          "geminiFix": null
        },
        "mobileViewport": {
          "status": "pass",
          "value": "<meta name='viewport' content='width=device-width, initial-scale=1'>",
          "issueDetails": null,
          "geminiFix": null
        }
      },
      "geminiFixesGenerated": [ // List of specific fixes generated by Gemini for this page
        "Update meta title for better length and keywords.",
        "Add alt text to product hero image.",
        "Implement og:image for improved social sharing.",
        "Address LCP and CLS issues for better page performance."
      ]
    }
    // ... more page reports
  ],
  "diffReport": { // Detailed comparison with the previous audit report
    "previousAuditId": "65dff6a1b2c3d4e5f6a7b8c9", // Reference to the previous audit document
    "newIssues": [ // Issues found in THIS audit that were NOT present in the previous one
      {
        "pageUrl": "https://www.yourwebsite.com/new-blog-post",
        "metric": "h1Presence",
        "issue": "Missing H1 tag."
      }
    ],
    "resolvedIssues": [ // Issues present in the PREVIOUS audit that are NO LONGER present
      {
        "pageUrl": "https://www.yourwebsite.com/old-product",
        "metric": "metaDescription",
        "issue": "Duplicate meta description."
      }
    ],
    "metricChanges": [ // Significant changes in key metrics (e.g., CWV improvements/degradations)
      {
        "pageUrl": "https://www.yourwebsite.com/homepage",
        "metric": "coreWebVitals.LCP",
        "oldValue": 4.1,
        "newValue": 2.8,
        "change": "Improved"
      }
    ]
  },
  "reportGeneratedBy": "PantheraHive Site SEO Auditor"
}
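To illustrate how a field such as `seoMetrics.metaTitle` above could be derived, here is a sketch of a checker that emits the same `status`/`issueDetails` shape. The 50-60 character target comes from the sample `issueDetails` text; the function itself is our illustration, not the auditor's actual code:

```javascript
// Hedged sketch: produce a metaTitle metric object in the report's shape.
function auditMetaTitle(title) {
  if (!title) {
    return { status: "fail", value: null, issueDetails: "Meta title is missing." };
  }
  const len = title.length;
  if (len < 50 || len > 60) {
    return {
      status: "fail",
      value: title,
      issueDetails: `Meta title is ${len < 50 ? "too short" : "too long"} (${len} chars). Recommended: 50-60 chars.`,
    };
  }
  return { status: "pass", value: title, issueDetails: null };
}
```

For example, `auditMetaTitle("Example Product")` fails with "too short (15 chars)", matching the sample page report above.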

3. Key Benefits of this Persistent Storage

  • Historical Tracking: By storing each SiteAuditReport, you gain a clear timeline of your site's SEO performance, allowing you to track improvements or regressions over time.
  • Performance Monitoring: The diffReport provides immediate insights into what has changed since the last audit, highlighting new issues or confirming the resolution of previous ones.
  • Actionable Insights: Each issue is accompanied by a geminiFix, offering concrete, AI-generated recommendations directly within your audit report.
  • Comprehensive Record: All 12 SEO checklist points are meticulously recorded for every page, ensuring no aspect of your on-page SEO is overlooked.
  • Reporting & Analysis: The structured data facilitates easy generation of custom reports, dashboards, and deep-dive analysis into specific SEO challenges.

4. Confirmation of Successful Upsert

The SiteAuditReport for https://www.yourwebsite.com (or the siteUrl specified in your input) generated from the latest audit run has been successfully upserted into the hive_db. This means:

  • If a previous audit report for this site and audit type existed, it has been updated with the latest data.
  • If this is a new audit, a new document has been created.
  • The _id for the new or updated document is 65e0a7b2c3d4e5f6a7b8c9d0 (example ID).

You can now access this comprehensive report through the PantheraHive UI or directly query your database for detailed insights and actionable fixes.


Next Steps

With the data successfully stored, the final step involves presenting this information to you in an easily digestible and actionable format. This typically includes:

  • Generating an executive summary.
  • Highlighting critical issues and their Gemini-generated fixes.
  • Visualizing trends and performance changes over time.
  • Providing options for exporting the full report.
hive_db Output

Step 5 of 5: hive_db → conditional_update - Site SEO Auditor Report Storage and Diffing

This final step in the "Site SEO Auditor" workflow is responsible for persistently storing the comprehensive SEO audit report within your dedicated MongoDB instance (hive_db) and executing a conditional update. This ensures that all audit results, including detailed findings, AI-generated fixes, and a crucial before-and-after comparison, are securely saved and accessible for review and tracking.


1. Purpose and Functionality

The conditional_update operation serves several critical functions:

  • Persistent Storage: Stores the newly generated SiteAuditReport document in MongoDB.
  • Before/After Diffing: Compares the current audit results with the previous audit for the same site and records the differences. This is a core feature for tracking SEO performance over time.
  • Status Management: Updates the audit's status from 'in_progress' to 'completed' or 'failed' based on the outcome of the audit and storage operation.
  • Next Run Scheduling: Records the timestamp for the next automated audit run, ensuring the weekly schedule is maintained.
  • Data Integrity: Ensures that only valid and complete audit reports are stored and linked correctly.

2. Data Model: SiteAuditReport Document Structure

The audit results are stored as a SiteAuditReport document in a dedicated site_audit_reports collection within your hive_db. The document structure is designed to be comprehensive and facilitate easy querying and analysis.


{
  "_id": ObjectId("..."),
  "auditId": "uuid-for-this-audit-run",
  "siteUrl": "https://www.example.com",
  "auditTimestamp": ISODate("2023-10-27T02:00:00.000Z"),
  "status": "completed", // or "failed"
  "overallScore": 85, // Aggregate SEO score (0-100)
  "overallSummary": "Good overall SEO health with minor improvements needed for image alt attributes and Core Web Vitals on specific pages.",
  "pagesAudited": [
    {
      "pageUrl": "https://www.example.com/",
      "pageScore": 90,
      "metaTitle": {
        "present": true,
        "unique": true,
        "length": 65,
        "issue": null,
        "fix": null
      },
      "metaDescription": {
        "present": true,
        "unique": true,
        "length": 155,
        "issue": null,
        "fix": null
      },
      "h1Presence": {
        "present": true,
        "count": 1,
        "issue": null,
        "fix": null
      },
      "imageAltCoverage": {
        "coveredPercentage": 80,
        "totalImages": 10,
        "imagesMissingAlt": 2,
        "issue": "2 images missing alt text.",
        "fix": "Add descriptive alt text to images with src: /img/logo.png, /img/banner.jpg."
      },
      "internalLinkDensity": {
        "count": 25,
        "issue": null,
        "fix": null
      },
      "canonicalTags": {
        "present": true,
        "correct": true,
        "issue": null,
        "fix": null
      },
      "openGraphTags": {
        "present": true,
        "correct": true,
        "issue": null,
        "fix": null
      },
      "coreWebVitals": {
        "LCP": 2.1, // seconds
        "CLS": 0.05, // score
        "FID": 50 // ms
      },
      "structuredDataPresence": {
        "present": true,
        "types": ["Organization", "WebPage"],
        "issue": null,
        "fix": null
      },
      "mobileViewport": {
        "configured": true,
        "issue": null,
        "fix": null
      },
      "brokenElements": [
        {
          "elementSelector": "img[src='/img/banner.jpg']",
          "issueDescription": "Image missing 'alt' attribute.",
          "geminiFix": "Add `alt=\"Descriptive text for banner image\"` to the `<img>` tag."
        }
      ]
    }
    // ... additional pages
  ],
  "previousAuditId": "uuid-of-last-completed-audit", // Reference to the previous audit
  "diffReport": {
    "overallChanges": "Overall score decreased by 5 points. New issues detected on /blog page.",
    "pageLevelChanges": [
      {
        "pageUrl": "https://www.example.com/blog",
        "metricsChanged": ["metaTitle_length_increased", "h1Presence_missing_new"],
        "detailedDiff": {
          "metaTitle": { "old": 50, "new": 70 },
          "h1Presence": { "old": true, "new": false, "issue_new": "H1 tag is now missing." }
        }
      }
      // ... more page-level changes
    ]
  },
  "nextScheduledRun": ISODate("2023-11-03T02:00:00.000Z") // Next automated run timestamp
}

3. Conditional Update Logic

The conditional_update process follows these steps:

  1. Retrieve Previous Audit: Before inserting the new report, the system queries the site_audit_reports collection to find the most recent completed audit for the siteUrl. This is crucial for generating the diffReport.

* Condition: {"siteUrl": "YOUR_SITE_URL", "status": "completed"}

* Sort: {"auditTimestamp": -1} (to get the latest)

  2. Generate Diff Report:

* If a previousAuditId is found: A detailed comparison is performed between the current audit's pagesAudited array and the pagesAudited array of the previous audit.

* Metrics Compared: Uniqueness, presence, length (for meta tags), counts (for H1, links), percentages (for alt coverage), and Core Web Vitals scores.

* Change Detection: Identifies improvements, regressions, and new issues.

* diffReport Population: The diffReport field in the new SiteAuditReport document is populated with a summary of overall changes and specific page-level changes, including detailed metric differences.

* If no previous audit is found (first run), the diffReport will indicate "No previous audit found for comparison."

  3. Construct New Document: The complete SiteAuditReport document is assembled, including all audit findings, the previousAuditId (if applicable), and the newly generated diffReport. The status is set to completed.
  4. Insert/Update Operation:

* The system performs an insert operation for the new SiteAuditReport document.

* Conditional Aspect: If an audit with the same auditId already exists (e.g., because of a retry), the system can be configured either to fail gracefully or to perform an upsert that updates the existing document, ensuring idempotency. For this workflow, a new document is inserted for each unique audit run.

  5. Schedule Next Run: The nextScheduledRun field is calculated based on the current auditTimestamp plus 7 days, ensuring the weekly automation.
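The non-database parts of the steps above reduce to two small helpers, sketched below. The helper names are hypothetical; the query shape matches the condition and sort given in step 1, and the 7-day arithmetic matches step 5:

```javascript
// Step 1: filter and sort for fetching the most recent completed audit.
function buildPreviousAuditQuery(siteUrl) {
  return {
    filter: { siteUrl: siteUrl, status: "completed" },
    sort: { auditTimestamp: -1 }, // newest first
  };
}

// Step 5: the weekly cadence is plain date arithmetic.
function nextScheduledRun(auditTimestamp) {
  const WEEK_MS = 7 * 24 * 60 * 60 * 1000;
  return new Date(auditTimestamp.getTime() + WEEK_MS);
}

const q = buildPreviousAuditQuery("https://www.example.com");
const next = nextScheduledRun(new Date("2023-10-27T02:00:00.000Z"));
// next.toISOString() → "2023-11-03T02:00:00.000Z"
```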

4. Database Interaction Details (MongoDB)

  • Collection: site_audit_reports
  • Operations:

* find(): To retrieve the latest previous audit.

* insertOne(): To store the new SiteAuditReport document.

  • Indexing:

* It is highly recommended to have indexes on siteUrl, auditTimestamp, and status to optimize query performance for retrieving previous audits and general reporting.

* db.site_audit_reports.createIndex({"siteUrl": 1, "auditTimestamp": -1})

* db.site_audit_reports.createIndex({"status": 1})
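Put together, the find-then-insert flow looks roughly as follows. This is a sketch, not production code: the in-memory stub stands in for a live hive_db collection, and where the real Node.js MongoDB driver returns Promises, the stub is synchronous to keep the example short:

```javascript
// Hypothetical in-memory stand-in for db.collection("site_audit_reports").
function makeStubCollection() {
  const docs = [];
  return {
    insertOne(doc) { docs.push(doc); return { acknowledged: true }; },
    findOne(filter, opts = {}) {
      const matches = docs.filter((d) =>
        Object.keys(filter).every((k) => d[k] === filter[k]));
      if (opts.sort && opts.sort.auditTimestamp === -1) {
        matches.sort((a, b) => b.auditTimestamp - a.auditTimestamp);
      }
      return matches[0] || null;
    },
  };
}

const reports = makeStubCollection();
reports.insertOne({ siteUrl: "https://www.example.com", status: "completed", auditTimestamp: 1 });
reports.insertOne({ siteUrl: "https://www.example.com", status: "completed", auditTimestamp: 2 });
reports.insertOne({ siteUrl: "https://www.example.com", status: "failed", auditTimestamp: 3 });

// find(): latest *completed* audit for this site, skipping the failed run.
const prev = reports.findOne(
  { siteUrl: "https://www.example.com", status: "completed" },
  { sort: { auditTimestamp: -1 } }
);
// prev.auditTimestamp → 2
```

Note how the lookup ignores the failed run entirely: only completed audits are valid diff baselines, which is exactly why status belongs in the query (and in an index).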

5. Error Handling

Robust error handling is implemented for this step:

  • Database Connection Failure: If the connection to hive_db cannot be established, the step will fail, and an alert will be triggered.
  • Write Errors: Any issues during the insertOne operation (e.g., network issues, permission errors) will result in a step failure.
  • Diff Generation Errors: While unlikely, if an error occurs during the diffing process, the diffReport might indicate an error, but the core audit data will still be saved.
  • Workflow Status Update: In case of failure, the overall workflow status will be marked as 'failed', and relevant error logs will be generated.
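The "diff errors are non-fatal" behavior above can be sketched as a simple guard. `buildDiffSafely` is a hypothetical helper name, and the exact error text in the real system may differ:

```javascript
// If diff generation throws, record the failure in the report instead of
// aborting the write, so the core audit data is still saved.
function buildDiffSafely(makeDiff, currentAudit, previousAudit) {
  try {
    return makeDiff(currentAudit, previousAudit);
  } catch (err) {
    return { overallChanges: "Diff generation failed: " + err.message };
  }
}

const ok = buildDiffSafely(() => ({ overallChanges: "No changes" }), {}, {});
const failed = buildDiffSafely(() => { throw new Error("malformed page data"); }, {}, {});
// failed.overallChanges → "Diff generation failed: malformed page data"
```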

6. Deliverable and Next Steps for the Customer

Upon successful completion of this step, the following actions and deliverables are available:

  • Audit Report Available: A new SiteAuditReport document is now stored in your hive_db.
  • Access via UI/API: You can access the detailed audit report and its diff via:

* PantheraHive UI: A dedicated "Site SEO Reports" section will display a list of all audits for your sites, allowing you to view each report, its summary, and the before/after diff.

* Direct Database Access: You can query the site_audit_reports collection in MongoDB to retrieve raw audit data.

* PantheraHive API: An API endpoint will be available to programmatically fetch audit reports.

  • Notifications: Depending on your configuration, a notification (email, Slack, etc.) might be sent to relevant stakeholders, summarizing the audit results and highlighting critical changes.
  • Scheduled for Next Run: The system is now automatically scheduled to perform the next audit for your site at the nextScheduledRun timestamp.

This step ensures that your SEO performance is consistently monitored and documented, and that actionable insights are readily available through comprehensive reporting and historical diffs.
