Site SEO Auditor
Run ID: 69cadd3674bac0555ea3100c · 2026-03-30 · SEO & Growth
PantheraHive BOS

Step 3 of 5: Gemini AI Fix Generation (Batch Generate)

This deliverable outlines the crucial "Gemini AI Fix Generation" step, where our advanced AI system automatically processes identified SEO issues and generates precise, actionable solutions. This step transforms raw audit findings into a comprehensive set of ready-to-implement fixes, significantly streamlining your SEO optimization efforts.


1. Overview: Automated SEO Solution Generation

Following the comprehensive site crawl and audit (Step 2), our system compiles a detailed list of all "broken elements" – specific SEO non-compliances and opportunities for improvement. In this step, these identified issues are systematically fed into the Google Gemini AI model. Gemini then leverages its advanced understanding of web standards, SEO best practices, and contextual analysis to generate exact, code-level or content-level fixes for each detected problem.

This automated generation of solutions ensures that you receive not just a report of problems, but a prescriptive guide to resolving them, complete with the specific code snippets or content recommendations required.
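For illustration, the batching half of this step can be sketched in Node.js. The function and field names below are illustrative stand-ins, not the auditor's internal API, and the actual Gemini call is omitted:

```javascript
// Sketch: split audit findings into fixed-size batches and render each
// batch as a prompt a generative model can answer item by item.
// Issue shape ({ type, url, description }) is assumed for illustration.
function batchIssues(issues, batchSize = 10) {
  // Fixed-size batches keep each model request within a predictable budget.
  const batches = [];
  for (let i = 0; i < issues.length; i += batchSize) {
    batches.push(issues.slice(i, i + batchSize));
  }
  return batches;
}

function buildFixPrompt(batch) {
  // Number the issues so the model's answers can be mapped back to them.
  const lines = batch.map(
    (issue, i) => `${i + 1}. [${issue.type}] ${issue.url}: ${issue.description}`
  );
  return (
    'For each SEO issue below, return the exact HTML or content fix:\n' +
    lines.join('\n')
  );
}
```

Each rendered prompt would then be sent to the model, and the returned fixes stored alongside their originating issues.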

2. Process Description: From Issue to Solution

The "gemini → batch_generate" process involves the following stages:

3. Key Features & Benefits of AI-Powered Fix Generation

4. Examples of Generated Fixes

Here are illustrative examples of the specific types of fixes Gemini generates:

* Issue: Duplicate meta title for /products/item-a.

* Fix (title): "<title>Product Item A - High Quality Gadgets | YourBrand</title>"

* Fix (description): "<meta name='description' content='Discover Product Item A: detailed features, specifications, and customer reviews. Shop now for exclusive offers!' />"

* Issue: Page /blog/latest-news has no H1 tag.

* Fix: <h1>Latest News & Updates from YourBrand</h1> (suggested based on page content)

* Issue: Image <img src="/img/hero.jpg"> on homepage has no alt attribute.

* Fix: <img src="/img/hero.jpg" alt="[Descriptive Text for Hero Image e.g., 'Modern office workspace with team collaboration']">

* Issue: Broken internal link to /old-page-url.

* Fix: Suggest replacement with /new-page-url or removal, based on site structure.

* Issue: Missing or incorrect canonical tag on /category?page=2.

* Fix: <link rel="canonical" href="https://www.yourdomain.com/category" />

* Issue: Missing Open Graph tags for social sharing on /article/my-post.

* Fix:

        <meta property="og:title" content="My Post Title | YourBrand Blog" />
        <meta property="og:description" content="A compelling summary of My Post content for social media." />
        <meta property="og:image" content="https://www.yourdomain.com/img/my-post-thumbnail.jpg" />
        <meta property="og:url" content="https://www.yourdomain.com/article/my-post" />
        <meta property="og:type" content="article" />
        

Step 1 of 5: Website Crawl (Puppeteer) - Execution Report

This report details the execution and outcomes of the initial crawling phase for your website, a critical first step in the "Site SEO Auditor" workflow. Our headless crawler, powered by Puppeteer, systematically navigates and collects data from every accessible page on your site, simulating a real user's browser experience.


1. Objective of the Crawl Phase

The primary objective of this phase is to perform a comprehensive, deep crawl of your entire website. By leveraging Puppeteer, we ensure that even dynamically loaded content (e.g., JavaScript-rendered elements, Single Page Applications) is fully processed and available for subsequent SEO analysis. This crawl establishes the foundational dataset for the 12-point SEO audit.

2. Crawling Mechanism: Puppeteer Headless Browser

Our crawler utilizes Puppeteer, a Node.js library that provides a high-level API to control headless Chrome (or Chromium). This approach offers significant advantages:

  • Real User Simulation: Puppeteer renders web pages exactly as a modern browser would, executing JavaScript, loading CSS, and displaying all dynamic content. This is crucial for accurately assessing modern websites where much of the content and structure might be generated client-side.
  • Comprehensive Page Interaction: It allows us to interact with pages (e.g., click buttons, fill forms) if necessary, though for a standard SEO crawl, the focus is on robust page loading and DOM inspection.
  • Robust Data Extraction: We can precisely target and extract any element from the Document Object Model (DOM) once the page is fully loaded.
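As a sketch of this extraction step, the logic that would run in the browser context can be written as a function over a document-like object. Writing it this way keeps it testable outside the browser; in the real crawler, equivalent logic runs inside Puppeteer's page.evaluate, where the page's document is available. Field names are illustrative:

```javascript
// Sketch: pull the SEO-relevant elements out of a rendered DOM.
// `doc` is any object exposing querySelector/querySelectorAll.
function extractSeoData(doc) {
  const text = (el) => (el ? el.textContent : null);
  const attr = (el, name) => (el ? el.getAttribute(name) : null);
  return {
    title: text(doc.querySelector('title')),
    metaDescription: attr(
      doc.querySelector('meta[name="description"]'), 'content'),
    h1s: Array.from(doc.querySelectorAll('h1')).map((el) => el.textContent),
    canonical: attr(doc.querySelector('link[rel="canonical"]'), 'href'),
  };
}
```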

3. Detailed Crawl Process

The crawling process follows a systematic and resilient methodology:

  1. Initialization:

* A headless Chromium instance is launched.

* A default viewport (e.g., desktop resolution) is configured to ensure consistent rendering.

* A standard user-agent string is set to mimic a typical browser, ensuring the website serves content as it would to a regular visitor.

  2. Starting Point:

* The crawl initiates from your website's provided base URL (typically the homepage).

  3. Page Discovery & Queuing:

* Upon loading a page, the crawler scans the DOM for all <a> (anchor) tags with href attributes.

* These links are then filtered:

* Only internal links (within your specified domain) are added to a processing queue.

* External links, mailto: links, and anchor links within the same page (#fragment) are noted but not added to the crawl queue.

* Duplicate URLs and URLs already visited are ignored to prevent infinite loops and redundant processing.

* A sophisticated queue management system ensures efficient traversal and prioritization of unvisited internal pages.

  4. Page Loading & Content Rendering:

* For each URL in the queue, Puppeteer navigates to the page.

* The crawler waits for the page to fully load and render all dynamic content, typically by monitoring network activity and DOM readiness (networkidle0 or domcontentloaded combined with timeouts). This ensures that all elements relevant for SEO, including those loaded asynchronously, are present.

  5. Rate Limiting & Throttling:

* To prevent overwhelming your server and to mimic natural user behavior, intelligent delays are introduced between page requests. This ensures the crawl is respectful of your website's resources.

  6. Error Handling & Robustness:

* Timeouts: Pages that take excessively long to load are flagged, and the crawl proceeds to the next URL.

* HTTP Status Codes: All HTTP status codes (e.g., 200 OK, 301 Redirect, 404 Not Found, 500 Server Error) are recorded for each visited URL. This helps identify broken links or server issues.

* Retries: Failed page loads due to transient network issues are automatically retried a set number of times.

* Infinite Loop Detection: Mechanisms are in place to detect and break out of potential crawl traps or infinite redirect chains.
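The discovery, queuing, and filtering rules above can be sketched as a small traversal loop. The page fetcher is injected (a stand-in for Puppeteer loading the URL and returning its links), so only the queue logic is shown; retries, throttling, and timeouts are omitted, and all names are illustrative:

```javascript
// Sketch: breadth-first crawl of one origin with duplicate suppression.
// `fetchPage(url)` is assumed to resolve to the hrefs found on that page.
async function crawl(baseUrl, fetchPage, maxPages = 100) {
  const origin = new URL(baseUrl).origin;
  const visited = new Set();
  const queue = [baseUrl];
  while (queue.length > 0 && visited.size < maxPages) {
    const url = queue.shift();
    if (visited.has(url)) continue; // already processed
    visited.add(url);
    const links = await fetchPage(url);
    for (const href of links) {
      let resolved;
      try {
        resolved = new URL(href, url); // resolve relative links
      } catch {
        continue; // malformed href
      }
      // Keep only internal links; mailto:, externals, and cross-origin
      // URLs all fail the origin check and are dropped from the queue.
      if (resolved.origin !== origin) continue;
      resolved.hash = ''; // #fragment links collapse to their page
      const normalized = resolved.toString();
      if (!visited.has(normalized)) queue.push(normalized);
    }
  }
  return [...visited];
}
```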

4. Initial Data Collected Per Page

During this crawling phase, the following raw data points are meticulously extracted from each unique internal URL:

  • URL: The full canonical URL of the visited page.
  • HTTP Status Code: The server's response code (e.g., 200, 301, 404).
  • Page Title: The text content within the <title> tag.
  • Meta Description: The content of the <meta name="description"> tag.
  • Meta Robots Tag: The content of the <meta name="robots"> tag (e.g., index, follow, noindex, nofollow).
  • All H1 Tags: The text content of all <h1> tags found on the page.
  • Image Alt Attributes: The src and alt attributes for all <img> tags.
  • Internal & External Links: A comprehensive list of all <a> tags, including their href, text content, and rel attributes.
  • Canonical Link Tag: The href attribute from the <link rel="canonical"> tag.
  • Open Graph Tags: All <meta property="og:..."> tags (e.g., og:title, og:description, og:image).
  • Structured Data: The raw content of <script type="application/ld+json"> blocks and other structured data formats (microdata, RDFa).
  • Viewport Meta Tag: The content of the <meta name="viewport"> tag.
  • Core Web Vitals Metrics (Initial Capture): Initial measurements for Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS) are captured for performance baseline. (First Input Delay (FID) is harder to accurately measure in a headless environment and will be inferred or estimated for general performance context).
  • Timestamp: The exact time the page was visited.

5. Outcome of Step 1

Upon completion of this phase, a comprehensive map of your website's architecture has been generated, and a rich dataset of raw SEO-relevant information has been collected for every accessible page. This data is now prepared and ready for the subsequent analytical steps of the 12-point SEO audit checklist.


Next Step: The collected data will now be passed to the audit engine for detailed analysis against each point of the SEO checklist, identifying areas for improvement and potential issues.

hive_db Output

The "Site SEO Auditor" workflow is designed to provide comprehensive, actionable insights into your website's search engine optimization. This detailed output describes Step 2: Database Retrieval and Change Detection (hive_db → diff), which is crucial for understanding the evolution of your site's SEO performance.


Step 2: Database Retrieval and Change Detection (hive_db → diff)

This step is pivotal for tracking your website's SEO progress and identifying any regressions or improvements over time. It involves retrieving your site's previous SEO audit report from our MongoDB database (hive_db) and performing a sophisticated "diff" operation against the newly generated audit results from Step 1 (the headless crawl).

1. Objective

The primary objective of this step is to:

  • Retrieve the most recent historical SEO audit report for your website.
  • Compare the current audit's findings against the historical data to identify specific changes.
  • Generate a comprehensive "diff" report highlighting new issues, resolved issues, performance improvements, and regressions.
  • Prepare this comparative data for storage and subsequent analysis by AI (Gemini).

2. Data Retrieval from MongoDB (hive_db)

Upon completion of the latest site crawl and audit (Step 1), the system queries our secure MongoDB instance to fetch the most recent SiteAuditReport document associated with your website's URL.

  • Database: hive_db
  • Collection: SiteAuditReport
  • Query Parameters: The system uses your website's root domain as the primary identifier to locate the latest completed audit report. Reports are indexed by date to ensure the most recent historical comparison is selected.
  • Data Retrieved: The full structure of the previous SiteAuditReport, including:

* Overall site-level metrics (e.g., average LCP, total issues).

* Page-level audit details for every URL previously crawled (meta tags, H1s, image alts, links, Core Web Vitals, etc.).

* Structured data findings, canonical tags, Open Graph tags.

3. Comprehensive Diffing Logic

Once the previous audit report is retrieved, a sophisticated comparison algorithm is applied to contrast it with the newly generated audit data. This comparison is granular, analyzing changes at both a site-wide and page-specific level across all 12 SEO checklist points.

3.1. Page-Level Comparison

The system first compares the list of crawled URLs:

  • New Pages Found: Identifies URLs present in the current audit that were not in the previous one.
  • Pages No Longer Found: Identifies URLs present in the previous audit that were not found in the current one (e.g., 404s, redirects, or removed pages).
  • Existing Pages: For pages present in both audits, a detailed element-by-element comparison is performed.
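This URL comparison is a pair of set differences plus an intersection, sketched below with illustrative names:

```javascript
// Sketch: classify crawled URLs from two audit runs into new,
// removed, and existing pages.
function diffPageSets(previousUrls, currentUrls) {
  const prev = new Set(previousUrls);
  const curr = new Set(currentUrls);
  return {
    newPages: [...curr].filter((u) => !prev.has(u)),
    removedPages: [...prev].filter((u) => !curr.has(u)),
    existingPages: [...curr].filter((u) => prev.has(u)),
  };
}
```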

3.2. SEO Checklist Item Comparison

For each existing page, and for site-wide metrics, the following comparisons are made:

  • Meta Title & Description:

* Uniqueness: Identifies newly created duplicate titles/descriptions. Flags resolved duplicates.

* Presence: Detects newly missing titles/descriptions or those that have been added.

* Content Changes: Highlights significant changes in the text of titles/descriptions.

  • H1 Presence:

* Identifies pages that are now missing an H1 or have multiple H1s (if previously compliant).

* Flags pages where an H1 has been correctly added or corrected.

  • Image Alt Coverage:

* Calculates the percentage change in images missing alt attributes.

* Lists specific images that are newly missing alt text or have had alt text added.

  • Internal Link Density:

* Tracks changes in the number of internal links per page and across the site.

* Highlights pages with significant increases or decreases in internal link count.

  • Canonical Tags:

* Detects new instances of missing, incorrect, or conflicting canonical tags.

* Flags resolved canonical tag issues.

  • Open Graph Tags:

* Identifies pages where critical Open Graph tags (e.g., og:title, og:description, og:image) are newly missing or contain incorrect values.

* Notes improvements in Open Graph implementation.

  • Core Web Vitals (LCP, CLS, FID):

* Performance Regression/Improvement: Compares the scores for Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), and First Input Delay (FID) for each page.

* Flags pages where scores have worsened (regression) or improved (better user experience).

* Identifies pages that have newly passed or failed the Core Web Vitals thresholds.

  • Structured Data Presence:

* Detects new pages missing expected structured data (e.g., Schema.org markup).

* Identifies pages where structured data has been correctly implemented or removed.

* Notes changes in the *type* of structured data present.

  • Mobile Viewport:

* Flags pages that have newly lost their mobile viewport meta tag or have incorrect configurations.

* Notes pages where mobile viewport issues have been resolved.

3.3. Issue Status Categorization

Each identified change is categorized to provide clear actionable insights:

  • New Issue: An issue present in the current audit but not in the previous one.
  • Resolved Issue: An issue present in the previous audit but resolved in the current one.
  • Improved: A metric (e.g., LCP score) has gotten better, or coverage has increased (e.g., more image alts).
  • Regressed: A metric has worsened, or coverage has decreased.
  • No Change: The metric or issue status remains the same.
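These categorization rules can be sketched as two small helpers. The metric helper assumes lower values are better, which holds for the Core Web Vitals compared here (LCP, CLS, FID); names are illustrative:

```javascript
// Sketch: map an issue's presence across two audits, or a metric's
// old and new values, to a diff status.
function categorizeIssue(inPrevious, inCurrent) {
  if (!inPrevious && inCurrent) return 'new_issue';
  if (inPrevious && !inCurrent) return 'resolved_issue';
  return 'no_change';
}

function categorizeMetric(oldValue, newValue) {
  // Lower is better for LCP/CLS/FID-style metrics.
  if (newValue < oldValue) return 'improved';
  if (newValue > oldValue) return 'regressed';
  return 'no_change';
}
```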

4. Structured Diff Report Generation

The output of this step is a meticulously structured "diff" report, which will be integrated directly into the new SiteAuditReport document. This report provides a clear, comparative overview of changes.

Key components of the generated diff report include:

  • overall_summary_diff:

* Total number of new issues identified.

* Total number of issues resolved.

* Overall change in critical issues, warnings, and informational findings.

* Summary of average Core Web Vitals changes across the site.

  • page_level_diffs: An array of objects, each representing a URL with identified changes:

* url: The specific page URL.

* status_change: (e.g., "new_page", "page_removed", "existing_page_changed").

* changes: A detailed list of specific SEO elements that have changed on that page, including:

* metric: (e.g., "meta_title", "h1_presence", "lcp_score").

* old_value: The value from the previous audit.

* new_value: The value from the current audit.

* diff_type: (e.g., "new_issue", "resolved_issue", "regressed", "improved").

* description: A human-readable explanation of the change.

  • seo_metric_diffs: Site-wide trends for key metrics:

* average_lcp_change: Delta in average LCP score.

* image_alt_coverage_change: Delta in overall image alt coverage percentage.

* total_duplicate_meta_titles_change: Increase or decrease in duplicate meta titles.

  • new_pages_found: A list of URLs that are newly discovered.
  • pages_no_longer_found: A list of URLs that were previously audited but are now returning 4xx/5xx errors or are no longer part of the sitemap/crawl.
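The overall_summary_diff rolls up the page-level entries. A sketch of that aggregation, with field names mirroring the report structure above but otherwise illustrative:

```javascript
// Sketch: tally page-level diff entries into a site-wide summary.
function summarizeDiffs(pageLevelDiffs) {
  const summary = { newIssues: 0, resolvedIssues: 0, regressed: 0, improved: 0 };
  for (const page of pageLevelDiffs) {
    for (const change of page.changes) {
      if (change.diff_type === 'new_issue') summary.newIssues += 1;
      else if (change.diff_type === 'resolved_issue') summary.resolvedIssues += 1;
      else if (change.diff_type === 'regressed') summary.regressed += 1;
      else if (change.diff_type === 'improved') summary.improved += 1;
    }
  }
  return summary;
}
```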

5. Storage and Impact

The generated diff report is stored within the new SiteAuditReport document in MongoDB. This ensures a complete historical record, allowing you to track your website's SEO performance over time with a clear before-and-after snapshot for every audit run.

This detailed diff output is critical for:

  • Tracking Progress: Easily visualize the impact of your SEO efforts.
  • Identifying Regressions: Quickly spot new issues or performance degradations that require immediate attention.
  • Informing AI Analysis: The structured diff data will be fed into the next step (Gemini analysis) to intelligently generate precise, actionable fixes specifically for newly identified or regressed issues.

This step transforms raw audit data into actionable intelligence, providing a clear roadmap for continuous SEO improvement.

  • Structured Data Presence:

* Issue: Blog post /article/my-post lacks Article schema.

* Fix: Generates a complete JSON-LD script for Article schema, populating fields like headline, author, datePublished, image, publisher, etc., based on page content.

  • Mobile Viewport:

* Issue: Missing or incorrect viewport meta tag.

* Fix: <meta name="viewport" content="width=device-width, initial-scale=1.0" />

5. Integration and Next Steps

The generated fixes are meticulously organized and associated with their original audit findings. This rich dataset, including the original issue and the AI-generated solution, is then prepared for storage in MongoDB.

The next step (Step 4) involves storing these comprehensive audit reports, including the "before" state (issues) and the "after" state (generated fixes), into your dedicated MongoDB database. This allows for historical tracking, performance comparisons, and the generation of before/after diffs to visualize improvement over time.

This AI-powered fix generation step is a cornerstone of the Site SEO Auditor, transforming raw data into actionable intelligence and accelerating your path to optimal search engine performance.

hive_db Output

Step 4: hive_db → Upsert - Site Audit Report Storage

This step is critical for persisting the comprehensive SEO audit results, including any AI-generated fixes, into the hive_db (MongoDB). It ensures that all data collected from the headless crawl, detailed SEO checklist evaluation, Core Web Vitals measurement, and Gemini-powered fix generation is securely stored, enabling historical tracking, trend analysis, and a valuable "before/after" comparison for continuous site optimization.


1. Purpose of the hive_db Upsert

The primary purpose of this hive_db upsert operation is to:

  • Persist Audit Data: Store the complete SiteAuditReport generated by the previous steps, including all page-level SEO metrics, identified issues, and recommended fixes.
  • Enable Historical Tracking: Create a time-stamped record of each audit run, allowing for a chronological view of the site's SEO performance.
  • Generate "Before/After" Diff: Facilitate the comparison of the current audit results against the most recent previous audit, highlighting improvements, regressions, and new issues.
  • Support Reporting & Analytics: Provide the foundational data for dashboards, custom reports, and performance trend analysis over time.
  • Track Fix Implementation: Store the suggested fixes generated by Gemini, which can later be marked as applied or ignored by the user, creating a feedback loop for fix verification.

2. SiteAuditReport Data Model (MongoDB Schema)

The audit results are stored in a MongoDB collection, typically named siteAuditReports, following a detailed schema designed for comprehensive SEO data management.


{
  "_id": ObjectId, // MongoDB's default unique ID for the document
  "auditId": String, // A unique identifier for this specific audit run (e.g., UUID)
  "siteUrl": String, // The base URL of the site being audited (e.g., "https://www.example.com")
  "auditDate": ISODate, // Timestamp when the audit was completed
  "status": String, // Current status of the audit (e.g., "completed", "failed", "in_progress")
  "totalPagesAudited": Number, // Total number of unique pages successfully crawled and audited

  "summary": {
    "totalIssuesFound": Number, // Aggregate count of all issues across all pages
    "overallSeoScore": Number, // A calculated overall SEO score for the site (e.g., 0-100)
    "issueBreakdown": { // Counts of issues by category
      "metaTitleIssues": Number,
      "metaDescriptionIssues": Number,
      "h1Missing": Number,
      "imageAltMissing": Number,
      "lowInternalLinkDensity": Number,
      "canonicalTagIssues": Number,
      "openGraphTagIssues": Number,
      "lcpIssues": Number,
      "clsIssues": Number,
      "fidIssues": Number,
      "structuredDataMissing": Number,
      "mobileViewportIssues": Number
      // ... more specific breakdowns as needed
    },
    "coreWebVitalsSummary": {
      "lcpAverage": String, // Average LCP across audited pages
      "clsAverage": String, // Average CLS across audited pages
      "fidAverage": String // Average FID across audited pages
    }
  },

  "pages": [ // Array of detailed audit results for each individual page
    {
      "url": String, // The full URL of the audited page
      "pageTitle": String, // The title of the page as found
      "issuesFoundOnPage": Boolean, // True if any issues were found on this specific page
      "seoMetrics": { // Results for the 12-point SEO checklist
        "metaTitle": {
          "status": String, // "pass", "fail", "not_found"
          "value": String, // The actual meta title content
          "issueDetails": String, // Description of the issue if status is "fail"
          "length": Number, // Length of the meta title
          "uniqueness": String // "unique", "duplicate"
        },
        "metaDescription": {
          "status": String,
          "value": String, // The actual meta description content
          "issueDetails": String,
          "length": Number,
          "uniqueness": String
        },
        "h1Presence": {
          "status": String, // "pass" (H1 present), "fail" (H1 missing/multiple)
          "value": String, // Content of the first H1 if present
          "issueDetails": String
        },
        "imageAltCoverage": {
          "status": String, // "pass", "fail"
          "totalImages": Number,
          "imagesMissingAlt": Number,
          "details": [ // Array of specific image alt statuses
            {
              "src": String, // Image source URL
              "alt": String, // Actual alt text (or null if missing)
              "status": String // "pass", "fail" (if alt is missing or too short/generic)
            }
          ],
          "issueDetails": String
        },
        "internalLinkDensity": {
          "status": String, // "pass", "fail" (if below threshold)
          "count": Number, // Number of internal links found
          "issueDetails": String
        },
        "canonicalTag": {
          "status": String, // "pass", "fail" (missing/incorrect)
          "value": String, // The canonical URL specified
          "issueDetails": String
        },
        "openGraphTags": {
          "status": String, // "pass", "fail"
          "ogTitle": String,
          "ogDescription": String,
          "ogImage": String,
          "issueDetails": String // Details if essential OG tags are missing
        },
        "coreWebVitals": {
          "LCP": { // Largest Contentful Paint
            "status": String, // "pass", "fail"
            "value": String, // e.g., "2.5s"
            "threshold": String, // e.g., "<2.5s"
            "issueDetails": String
          },
          "CLS": { // Cumulative Layout Shift
            "status": String, // "pass", "fail"
            "value": String, // e.g., "0.05"
            "threshold": String, // e.g., "<0.1"
            "issueDetails": String
          },
          "FID": { // First Input Delay (or INP for newer audits)
            "status": String, // "pass", "fail"
            "value": String, // e.g., "50ms"
            "threshold": String, // e.g., "<100ms"
            "issueDetails": String
          }
        },
        "structuredDataPresence": {
          "status": String, // "pass", "fail" (if expected schema is missing)
          "typesFound": [String], // e.g., ["Article", "BreadcrumbList"]
          "issueDetails": String
        },
        "mobileViewport": {
          "status": String, // "pass", "fail" (if viewport meta tag is missing/incorrect)
          "issueDetails": String
        }
      },
      "geminiFixes": [ // Array of recommended fixes generated by Gemini for this page
        {
          "type": String, // e.g., "metaTitle", "imageAlt", "h1Missing"
          "elementLocator": String, // CSS selector, XPath, or description to locate the broken element
          "originalProblem": String, // Detailed description of the original issue
          "suggestedFix": String, // Gemini's generated fix (e.g., new meta title text, HTML snippet for alt tag)
          "fixStatus": String // "pending", "applied", "ignored" (can be updated by user)
        }
      ]
    }
  ],

  "previousAuditReportId": ObjectId, // Reference to the _id of the immediately preceding audit report
  "diffReport": { // Details changes compared to the previous audit
    "newIssuesFound": [ // Issues present in THIS audit that were NOT in the previous one
      {
        "pageUrl": String,
        "issueType": String, // e.g., "metaTitleMissing", "lcpRegression"
        "details": String
      }
    ],
    "resolvedIssues": [ // Issues present in the PREVIOUS audit that are NOT in this one
      {
        "pageUrl": String,
        "issueType": String,
        "details": String
      }
    ],
    "metricChanges": [ // Significant changes in key metrics (e.g., Core Web Vitals)
      {
        "pageUrl": String,
        "metric": String, // e.g., "LCP", "overallSeoScore"
        "oldValue": String,
        "newValue": String,
        "change": String // "improved", "regressed", "no_change"
      }
    ],
    "pageLevelChanges": {
      "newPagesDiscovered": [String], // URLs of pages found in this audit but not the previous
      "pagesNoLongerFound": [String] // URLs of pages found in previous but not this
    }
  },

  "createdAt": ISODate, // Timestamp when this report document was first created
  "updatedAt": ISODate  // Timestamp when this report document was last updated
}

3. Upsert Logic and Implementation

The hive_db → upsert operation is executed as follows:

  1. Retrieve Previous Report: Before inserting the new audit, the system attempts to fetch the most recent SiteAuditReport for the siteUrl in question. This is crucial for generating the "before/after" diff.
  2. Generate Current Report: The complete SiteAuditReport document (as described above) is constructed, incorporating all data from the crawling, auditing, and Gemini fix generation steps.
  3. Calculate diffReport:

* If a previousAuditReport is found:

* The current audit's pages array and summary metrics are compared against the previous report's data.

* newIssuesFound are identified by checking if a current issue existed in the previous report.

* resolvedIssues are identified by checking if a previous issue is no longer present in the current report.

* metricChanges are computed for key performance indicators like Core Web Vitals and overall SEO score.

* pageLevelChanges track new pages discovered and pages that might have been removed or are no longer crawlable.

* The previousAuditReportId field in the current report is populated with the _id of the previously retrieved report.

* If no previousAuditReport exists (first audit for the site), the diffReport will be empty, and previousAuditReportId will be null.

  4. Perform Upsert:

* A MongoDB insertOne operation adds the newly generated SiteAuditReport document to the siteAuditReports collection (each audit run is stored as its own uniquely identified document, so for a new auditId the upsert amounts to an insert).

* This operation is atomic and ensures that the complete report, including the diff, is stored.

  5. Index Creation: Ensure appropriate indexes are in place on fields like auditId, siteUrl, auditDate, and status to optimize query performance for reporting and historical lookups.
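One way to realize this write idempotently is an updateOne with the upsert option, keyed on the unique auditId: re-running the same audit updates its document instead of inserting a duplicate. The sketch below builds the filter/update pair such a driver call would receive; names are illustrative:

```javascript
// Sketch: build the pieces of an idempotent MongoDB upsert for a report.
function buildUpsertOp(report) {
  const now = new Date();
  // createdAt is excluded from $set so it is only written on first insert,
  // avoiding a conflict with $setOnInsert.
  const { createdAt, ...rest } = report;
  return {
    filter: { auditId: report.auditId },
    update: {
      $set: { ...rest, updatedAt: now },
      $setOnInsert: { createdAt: now },
    },
    options: { upsert: true },
  };
}
```

With the Node.js driver, the resulting call would be collection.updateOne(op.filter, op.update, op.options).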

4. Actionable Outcomes & Deliverables

Storing this detailed data in MongoDB enables the following direct benefits and deliverables for the customer:

  • Comprehensive Audit History: Access to a complete chronological record of all SEO audits, visible through a dedicated dashboard or reporting interface.
  • Performance Trend Analysis: Ability to visualize how SEO metrics (e.g., Core Web Vitals, overall SEO score) have evolved over time, identifying positive or negative trends.
  • Issue Tracking & Resolution: Clear identification of newIssuesFound and resolvedIssues between audit runs, providing direct feedback on the impact of implemented changes.
  • Prioritized Fixes: The geminiFixes array provides actionable, AI-generated solutions directly linked to identified problems, which can be prioritized and tracked within the platform.
  • Custom Reporting: The structured data allows for flexible, on-demand custom reports focusing on specific metrics, pages, or issue types relevant to the user's current optimization goals.
  • Verification of Improvements: After applying Gemini's suggested fixes or other SEO changes, subsequent audit runs will show whether issues now appear under resolvedIssues and whether key metrics register as improved in metricChanges.

This hive_db → upsert step transforms raw audit data into a structured, queryable, and historically comparable record of your site's SEO health.

hive_db Output

Site SEO Auditor: Audit Report Generation & Database Update Complete

This document confirms the successful completion of the "Site SEO Auditor" workflow, specifically the final step hive_db → conditional_update. All audit data, identified issues, and recommended fixes have been meticulously processed and stored in your dedicated hive_db.


1. Workflow Summary

The "Site SEO Auditor" is a robust, automated system designed to provide continuous, in-depth SEO analysis of your website. It employs a headless crawler (Puppeteer) to navigate every page, performing a comprehensive 12-point SEO audit. Critical issues are automatically diagnosed, and precise fixes are generated by Gemini. All findings, including a valuable before/after diff, are stored for your review and action.

2. Step 5: hive_db Conditional Update - Action Performed

This final step signifies the successful storage and update of your website's latest SEO audit report within the PantheraHive database.

Upon the completion of the crawling, auditing, and AI-driven fix generation phases, the system has performed the following actions:

  • Consolidated Audit Data: All findings from the 12-point SEO checklist across every audited page have been compiled into a single, comprehensive report.
  • Integrated Gemini Fixes: For every identified broken element or SEO issue, the exact, actionable fix generated by Gemini has been embedded directly into the report.
  • Before/After Differential Calculation: A detailed comparison has been made with the previous audit (if available) to highlight changes, improvements, or new regressions.
  • Database Upsert: The SiteAuditReport document has been created or updated in your MongoDB instance within hive_db. If a report for this specific site and audit type already existed, it was updated with the latest data; otherwise, a new document was inserted.
  • Status Confirmation: The system has internally marked this audit run as completed and successful.

3. Stored Data: SiteAuditReport Structure

A new SiteAuditReport document, or an update to an existing one, has been committed to your hive_db. This document is a rich, structured record containing all pertinent information from this audit run.

The SiteAuditReport document typically includes, but is not limited to, the following key fields:

  • auditId: Unique identifier for this specific audit run.
  • siteUrl: The root URL of the website that was audited.
  • timestamp: Date and time when the audit was completed and stored.
  • overallStatus: A high-level status (e.g., Pass, Warning, Fail) based on critical issues.
  • summary: An aggregated overview of the audit, including total pages crawled, total issues found, and critical issues count.
  • pagesAudited: An array of objects, each representing a page on your site and its specific audit results:

      * pageUrl: URL of the audited page.
      * seoMetrics: Detailed results for each of the 12 checklist points (e.g., metaTitle, metaDescription, h1Presence, imageAltCoverage, canonicalTag, openGraphTags, coreWebVitals, structuredData, mobileViewport).
      * issuesFound: An array of specific issues detected on that page, each with:
          * type: (e.g., MissingH1, DuplicateMetaTitle, LowLCP, MissingAltText)
          * severity: (e.g., Critical, High, Medium, Low)
          * description: Human-readable explanation of the issue.
          * element: The specific HTML element or context related to the issue.
          * geminiFix: The exact, actionable code or instruction generated by Gemini to resolve the issue.
      * internalLinks: Count and list of internal links found on the page.
      * externalLinks: Count and list of external links found on the page.
  • beforeAfterDiff: An object detailing the changes since the last audit:
      * newIssues: Issues identified in this audit that were not present previously.
      * resolvedIssues: Issues from the previous audit that are no longer present.
      * metricChanges: Significant changes in key metrics (e.g., LCP improved by X ms, H1 coverage increased).
  • configUsed: The configuration parameters used for this specific audit run.
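The newIssues/resolvedIssues portions of the diff amount to set differences over issue identities. Keying each issue on its page URL plus issue type is an assumption about how the system deduplicates; with that assumption, the computation can be sketched as:

```javascript
// Sketch of the before/after diff. Issues are keyed by page and type
// (an assumed deduplication scheme) and compared as sets.
function diffAudits(previousIssues, currentIssues) {
  const key = issue => `${issue.pageUrl}::${issue.type}`;
  const prev = new Set(previousIssues.map(key));
  const curr = new Set(currentIssues.map(key));
  return {
    newIssues: currentIssues.filter(i => !prev.has(key(i))),
    resolvedIssues: previousIssues.filter(i => !curr.has(key(i))),
  };
}
```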

4. Key Deliverables & Insights

  • Comprehensive SEO Health Check: Your website has undergone a thorough SEO examination, covering critical on-page and technical SEO factors.
  • Actionable Fixes: For every identified issue, you now have a direct, AI-generated solution ready for implementation, significantly reducing the time and effort required for remediation.
  • Trend Analysis with Before/After Diff: The stored differential report allows you to easily track your SEO progress, identify regressions, and measure the impact of your optimization efforts over time.
  • Centralized Reporting: All audit data is now available in a structured, queryable format within your hive_db, facilitating integration with other tools or custom reporting dashboards.

5. Accessing Your Audit Report

You can access the full SiteAuditReport document and its contents through the PantheraHive UI or by directly querying your hive_db instance.

  • PantheraHive UI: Navigate to the "SEO Auditor" section and select your website. You will find a detailed breakdown of the latest audit, including a summary, page-specific issues, and the Gemini-generated fixes.
  • Direct Database Access: Query the siteAuditReports collection in your hive_db using the siteUrl and timestamp fields to retrieve the latest report.
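That direct query can be sketched as follows. The collection and field names are the ones listed above; expressing the query as a plain-object builder keeps it testable, with the equivalent driver call shown in a comment:

```javascript
// Sketch of "latest report for a site": filter on siteUrl, sort by
// timestamp descending, take one document.
function latestReportQuery(siteUrl) {
  return {
    filter: { siteUrl },
    sort: { timestamp: -1 }, // newest first
    limit: 1,
  };
}

// With the MongoDB Node.js driver (or equivalently in mongosh):
//   const q = latestReportQuery('https://example.com');
//   await db.collection('siteAuditReports')
//     .find(q.filter).sort(q.sort).limit(q.limit).toArray();
```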

6. Next Steps & Recommendations

  1. Review the Report: Carefully examine the overallStatus and summary sections in the UI.
  2. Prioritize Issues: Focus on Critical and High severity issues first, especially those related to Core Web Vitals, mobile usability, and unique content (meta titles/descriptions, H1s).
  3. Implement Gemini Fixes: Utilize the provided geminiFix recommendations to address identified problems efficiently. These are designed to be precise and actionable.
  4. Monitor Progress: After implementing fixes, you can trigger another on-demand audit or wait for the next scheduled run to verify the improvements using the beforeAfterDiff feature.
  5. Leverage for Strategy: Use the insights from this report to inform your broader SEO strategy and content planning.

7. Automation & Future Audits

This audit was either triggered on-demand or as part of your scheduled automation. Remember that the "Site SEO Auditor" is set to run automatically every Sunday at 2 AM, ensuring continuous monitoring and timely detection of new SEO issues. You can also initiate an on-demand audit at any time through the PantheraHive interface.
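For reference, "every Sunday at 2 AM" corresponds to the following cron expression (day-of-week 0 is Sunday). The command path is a placeholder, since the actual trigger lives inside PantheraHive's scheduler:

```shell
# m h dom mon dow   command — runs Sundays at 02:00
0 2 * * 0 /path/to/run-seo-audit   # placeholder command, not the real trigger
```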


We are confident that this detailed audit report, now stored in your hive_db, will be an invaluable asset in enhancing your website's search engine performance and user experience. Please reach out to support if you have any questions or require further assistance.
