Site SEO Auditor
Run ID: 69cbbe4861b1021a29a8bed1 · 2026-03-31 · SEO & Growth
PantheraHive BOS

As part of the "Site SEO Auditor" workflow, Step 2 focuses on generating a comprehensive diff report. This crucial step compares the latest SEO audit results against the previously stored audit report in the hive_db (MongoDB). The objective is to provide a clear, actionable overview of changes, improvements, and regressions across your website's SEO performance.


Step 2 of 5: hive_db → diff Report Generation

This step is dedicated to analyzing the differences between your site's current SEO status and its previous state, providing a historical perspective on your SEO health.

1. Purpose of the diff Report

The diff report serves several critical functions:

  • Detects regressions early, so declines in SEO health can be addressed before rankings suffer.
  • Verifies that implemented fixes actually resolved the issues they targeted.
  • Tracks new and removed pages, so site-structure changes never go unnoticed.
  • Builds a historical record of SEO health for trend analysis and strategic planning.

2. Inputs for diff Generation

To generate an accurate comparison, this step utilizes two primary data sources:

  • Current audit results: the page-level findings produced by this run's crawl and audit steps.
  • Previous audit report: the most recent SiteAuditReport document stored for this site in hive_db (MongoDB).

3. diff Generation Process

The system performs a detailed, page-by-page and metric-by-metric comparison:

  1. URL Mapping:

* Identifies all URLs present in both the current and previous reports.

* Flags New Pages: URLs found in the current report but not the previous one.

* Flags Removed Pages: URLs found in the previous report but no longer present in the current one (potentially due to deletion, redirects, or changes in site structure).

  2. Metric-Level Comparison (for matching URLs): For each common URL, the system compares all 12 SEO checklist points:

* Meta Title & Description: Checks for changes in content, length, and uniqueness status.

* H1 Presence: Detects if an H1 tag was added, removed, or modified.

* Image Alt Coverage: Compares the percentage of images with alt text, highlighting specific images that gained or lost alt attributes.

* Internal Link Density: Monitors changes in the number of internal links on a page.

* Canonical Tags: Verifies if canonical tags were added, removed, or changed.

* Open Graph Tags: Checks for presence, changes, or removal of essential social sharing tags.

* Core Web Vitals (LCP, CLS, FID): Compares performance scores, noting improvements or regressions and identifying new threshold breaches.

* Structured Data Presence: Detects additions, removals, or changes in the type of structured data (e.g., Schema.org markup).

* Mobile Viewport: Confirms the consistent presence and correctness of the mobile viewport meta tag.

  3. Change Categorization: Each identified change is categorized for clarity:

* Improvement: A metric's status has moved from "failing" to "passing," or a performance score has significantly improved.

* Regression: A metric's status has moved from "passing" to "failing," or a performance score has significantly worsened.

* New Issue: A page that previously passed a specific check now fails it.

* Resolved Issue: A page that previously failed a specific check now passes it.

* Content Change: A textual element (like meta title) has been altered without necessarily impacting its "pass/fail" status.

* No Change: The metric remains identical between audits.

  4. Site-Wide Aggregation: A high-level summary is generated, tallying the total number of improvements, regressions, new issues, and resolved issues across the entire website.
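The four steps above can be sketched as pure functions. The types, metric names, and status strings below are illustrative assumptions for demonstration, not the production hive_db schema; score-based "Improvement"/"Regression" handling is omitted for brevity.

```typescript
type MetricStatus = "passing" | "failing";
type PageMetrics = Record<string, MetricStatus>; // metric name -> status

// Step 1: map URLs across the two reports.
function mapUrls(current: Set<string>, previous: Set<string>) {
  return {
    newPages: [...current].filter((u) => !previous.has(u)),
    removedPages: [...previous].filter((u) => !current.has(u)),
    common: [...current].filter((u) => previous.has(u)),
  };
}

// Steps 2-3: categorize one metric's before/after status.
function categorizeChange(before: MetricStatus, after: MetricStatus) {
  if (before === after) return "NoChange";
  return before === "failing" ? "ResolvedIssue" : "NewIssue";
}

// Step 4: site-wide aggregation into the diffSummary shape.
function diffSummary(
  current: Map<string, PageMetrics>,
  previous: Map<string, PageMetrics>,
) {
  const urls = mapUrls(new Set(current.keys()), new Set(previous.keys()));
  let totalNewIssues = 0;
  let totalResolvedIssues = 0;
  for (const url of urls.common) {
    const before = previous.get(url)!;
    const after = current.get(url)!;
    for (const metric of Object.keys(after)) {
      const change = categorizeChange(before[metric], after[metric]);
      if (change === "NewIssue") totalNewIssues++;
      if (change === "ResolvedIssue") totalResolvedIssues++;
    }
  }
  return {
    totalNewIssues,
    totalResolvedIssues,
    newPagesFound: urls.newPages.length,
    pagesNoLongerFound: urls.removedPages.length,
  };
}
```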

4. Output of the diff Report

The generated diff report is a structured JSON object, which will be embedded directly within the new SiteAuditReport document in hive_db. This ensures that each audit report contains its own historical comparison.

The report structure includes:

```json
{
  "auditId": "current_audit_id",
  "previousAuditId": "previous_audit_id",
  "diffSummary": {
    "totalImprovements": 15,
    "totalRegressions": 3,
    "totalNewIssues": 2,
    "totalResolvedIssues": 10,
    "newPagesFound": 2,
    "pagesNoLongerFound": 1
  },
  "pageLevelChanges": [
    {
      "url": "https://www.yourdomain.com/blog/article-1",
      "status": "Mixed (Improvements & Regressions)",
      "changes": [
        {
          "metric": "metaTitle",
          "type": "ContentChange",
          "before": "Old Article Title",
          "after": "New and Improved Article Title"
        },
        {
          "metric": "h1Presence",
          "type": "ResolvedIssue",
          "before": "Missing",
          "after": "Present"
        },
        {
          "metric": "lcpScore",
          "type": "Regression",
          "before": "1.8s (Passing)",
          "after": "3.5s (Failing)"
        }
      ]
    },
    {
      "url": "https://www.yourdomain.com/product/new-item",
      "status": "New Page",
      "changes": [] // No 'before' data for new pages, only current status
    },
    {
      "url": "https://www.yourdomain.com/old-page",
      "status": "Page No Longer Found",
      "changes": []
    }
    // ... more page-level changes
  ],
  "overallMetricChanges": {
    "metaTitleUniqueness": {
      "status": "Improved",
      "details": "Overall unique meta titles increased by 5%"
    },
    "imageAltCoverage": {
      "status": "Regression",
      "details": "Overall image alt coverage decreased by 2% due to 3 new images missing alt text."
    }
    // ... aggregated changes for all 12 metrics
  }
}
```

Workflow Step 1 of 5: Crawl Initialization (Puppeteer)

Workflow: Site SEO Auditor

Step: puppeteer → crawl

This document details the execution and outcomes of the initial crawling phase for your website's SEO audit. This critical first step utilizes a headless browser to meticulously discover and prepare all accessible pages for subsequent in-depth analysis.


1. Purpose of Step 1: Crawl Initialization

The primary objective of this step is to systematically visit every discoverable page on your website, simulating a real user's interaction. This ensures that all content, including dynamically loaded elements and JavaScript-rendered components, is identified and captured. Without an exhaustive and accurate crawl, subsequent SEO auditing would be incomplete or flawed.

2. Mechanism: Headless Browser Crawling (Puppeteer)

We employ Puppeteer, a Node.js library, to control a headless Chrome or Chromium browser. This sophisticated approach offers significant advantages over traditional HTTP GET request-based crawlers:

  • Real User Simulation: Puppeteer opens a full browser instance, rendering pages exactly as a user would see them. This is crucial for modern websites that rely heavily on client-side JavaScript to load content, generate links, or modify the DOM.
  • Dynamic Content Discovery: It ensures that all links, images, and content generated or modified by JavaScript after the initial HTML load are discovered and processed.
  • Accurate DOM Snapshot: For each page, Puppeteer provides a complete snapshot of the Document Object Model (DOM) after all scripts have executed, which is essential for accurate SEO element extraction in the next steps.
  • Core Web Vitals Readiness: By operating a full browser, this step lays the groundwork for accurately measuring Core Web Vitals (LCP, CLS, FID) later in the audit process, as these metrics require a rendered page context.

3. Crawl Scope and Configuration

The crawl process is configured to be comprehensive yet respectful of your server resources:

  • Starting Point: The crawl begins from the primary URL provided for your website (e.g., https://www.yourdomain.com).
  • Internal Link Discovery: The crawler navigates your site by identifying and following all valid internal <a> (anchor) links within the rendered DOM of each page.
  • Scope Limitation: The crawl is strictly confined to your primary domain and its subdomains (if specified). External links are identified but not followed, ensuring the audit focuses solely on your site's SEO health.
  • Robots.txt Adherence: The crawler respects your robots.txt file, ensuring that pages or sections you've excluded from crawling are not accessed.
  • Sitemap Integration (Optional): If a sitemap.xml is present and discoverable, it will be used as an additional source to discover URLs, ensuring maximum coverage, especially for pages that might not be easily discoverable via internal linking alone.
  • Rate Limiting & Politeness: To prevent overwhelming your server, the crawler incorporates intelligent delays between requests and limits concurrent page fetches. This ensures a thorough audit without impacting your site's performance during the crawl.
  • User-Agent String: Requests are made with a distinct User-Agent string (e.g., SiteSEOAuditor/1.0 (+https://pantherahive.com/seo-auditor)) to clearly identify our crawler in your server logs.
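The scope limitation described above comes down to a per-link check against the crawl root. The sketch below uses the standard WHATWG URL API; the helper name `isInScope` and the subdomain policy are illustrative assumptions.

```typescript
// Decide whether a discovered href belongs to the crawl scope.
function isInScope(href: string, rootUrl: string, includeSubdomains = false): boolean {
  let target: URL;
  try {
    // Resolve relative hrefs (e.g. "/about") against the crawl root.
    target = new URL(href, rootUrl);
  } catch {
    return false; // malformed href: skip it
  }
  // Skip mailto:, tel:, javascript: and other non-HTTP schemes.
  if (target.protocol !== "http:" && target.protocol !== "https:") return false;
  const root = new URL(rootUrl);
  if (target.hostname === root.hostname) return true;
  return includeSubdomains && target.hostname.endsWith("." + root.hostname);
}
```

External links fail this check and are recorded but never queued for crawling.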

4. Data Collected During Crawl (Pre-Audit)

During this initial phase, the following foundational data points are gathered for each discovered URL:

  • Unique URL List: A comprehensive list of every unique, discoverable internal URL on your website.
  • HTTP Status Code: The HTTP response code (e.g., 200 OK, 301 Redirect, 404 Not Found) for each URL. This helps identify broken links or redirection issues early.
  • Initial Response Time: Basic timing metrics for the initial server response, providing a preliminary indicator of page load performance.
  • Redirect Chains: Identification and mapping of any redirect chains encountered (e.g., A -> B -> C), noting the final destination URL.
  • Content Type: The Content-Type header (e.g., text/html, image/jpeg) to filter out non-HTML resources from the main audit.
  • DOM Snapshot (for audit): The complete, rendered HTML content (DOM) of each page after all scripts have executed. This is the primary input for the subsequent auditing steps.
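A hypothetical shape for the per-URL record gathered in this phase, plus a small resolver that follows a recorded redirect map (A -> B -> C) to its final destination, might look like this. Both the interface and the function are assumptions for illustration, not the stored schema.

```typescript
interface CrawlRecord {
  url: string;
  statusCode: number;    // e.g. 200, 301, 404
  responseTimeMs: number;
  contentType: string;   // e.g. "text/html"
  redirectedTo?: string; // present when statusCode is a 3xx redirect
}

// Follow a map of "from -> to" redirects from a starting URL.
function resolveRedirectChain(start: string, redirects: Map<string, string>): string[] {
  const chain = [start];
  const seen = new Set([start]);
  let next = redirects.get(start);
  while (next !== undefined && !seen.has(next)) { // guard against redirect loops
    chain.push(next);
    seen.add(next);
    next = redirects.get(next);
  }
  return chain; // the last entry is the final destination URL
}
```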

5. Outputs of Step 1

Upon completion, this step generates the following critical outputs:

  • Comprehensive URL Inventory: A definitive list of all internal pages identified on your website.
  • Crawl Map/Graph: An internal representation of your site's structure, showing how pages link to each other.
  • Initial Page Data: A dataset containing the URL, HTTP status code, and initial response time for each discovered page.
  • Error Log: A detailed log of any network errors, timeouts, or unhandled exceptions encountered during the crawling process, indicating pages that could not be successfully accessed or rendered.

6. Next Steps

The output from this crawling phase serves as the direct input for Step 2: Page Auditing. Each URL and its corresponding rendered DOM snapshot will be individually analyzed against the 12-point SEO checklist, extracting specific elements and evaluating their compliance with best practices.


This thorough crawling process ensures that your SEO audit is built upon a complete and accurate understanding of your website's accessible content, providing a solid foundation for identifying and rectifying any SEO deficiencies.

5. Actionable Insights from the diff Report

The generated diff report provides immediate value:

  • Prioritize Regressions: Focus on pages and metrics flagged with "Regression" or "New Issue" as these represent a decline in SEO health that needs urgent attention.
  • Verify Optimizations: Use "Resolved Issue" and "Improvement" flags to confirm the positive impact of your SEO efforts.
  • Monitor New Content: Ensure "New Pages" adhere to all SEO best practices from the outset.
  • Address Removed Pages: Investigate "Pages No Longer Found" to ensure they were intentionally removed or properly redirected to avoid broken links and lost SEO value.
  • Strategic Planning: Utilize the overallMetricChanges to understand broader trends and inform your long-term SEO strategy.

This detailed diff report empowers you with the knowledge to maintain and continuously improve your website's search engine visibility and performance.

gemini Output

Step 3 of 5: AI-Powered Fix Generation (Gemini Batch Processing)

This crucial step leverages the advanced capabilities of Google's Gemini AI to transform identified SEO issues into precise, actionable solutions. Once the headless crawler (Puppeteer) has completed its comprehensive audit and flagged any "broken elements" or non-compliant SEO attributes across your site, these issues are systematically batched and fed into our Gemini AI engine.

The primary objective of this phase is to move beyond mere problem identification, providing you with exact, ready-to-implement fixes that address the root cause of each SEO deficiency.


Overview of the gemini → batch_generate Process

  1. Issue Aggregation & Categorization: All identified SEO issues from the crawler's audit (e.g., missing alt tags, duplicate meta descriptions, incorrect structured data, sub-optimal Open Graph tags) are collected and categorized by type and affected URL.
  2. Contextual Data Enrichment: For each issue, Gemini receives not just the problem statement but also relevant contextual data from the page. This includes:

* The specific URL of the affected page.

* The problematic HTML snippet or content area.

* The surrounding HTML structure or content.

* The detected SEO rule violation (e.g., "Image on line X has no alt attribute").

* The page's overall content and theme (to inform content-based suggestions).

  3. Intelligent Fix Generation (Batch Mode): Gemini analyzes each issue within its context. Utilizing its vast understanding of web standards, SEO best practices, and code generation, it formulates the most appropriate and efficient "exact fix." This process is executed in a batch, allowing for efficient processing of numerous issues concurrently across your entire site.
  4. Output of Exact Fixes: For every detected issue, Gemini produces a detailed, specific recommendation or a direct code snippet that can be applied to rectify the problem. These fixes are designed to be immediately actionable by your development or content team.

Key Fix Generation Capabilities by Issue Type

Gemini is engineered to provide precise solutions for a wide range of SEO audit findings:

  • Meta Title & Description Uniqueness:

* Input: Duplicate or missing meta titles/descriptions, page content, existing heading structure.

* Output: Uniquely crafted, keyword-optimized meta titles and descriptions (max 60/160 chars) tailored to the page's specific content and intent, ensuring differentiation and improved CTR.

  • H1 Presence & Optimization:

* Input: Missing H1 tags, multiple H1s, or poorly optimized H1s; page content.

* Output: A single, clear, and keyword-relevant H1 tag suggestion that accurately reflects the page's primary topic.

  • Image Alt Coverage:

* Input: Images missing alt attributes, image URLs, surrounding text.

* Output: Descriptive and keyword-rich alt text for each identified image, improving accessibility and search engine understanding.

  • Internal Link Density & Anchor Text:

* Input: Pages with low internal link density, relevant content sections on other pages.

* Output: Suggestions for new internal linking opportunities, including specific source text (anchor text) and target URLs, to enhance site navigation and authority flow.

  • Canonical Tag Implementation:

* Input: Missing, incorrect, or self-referencing canonical tags, potential duplicate content issues.

* Output: The correct rel="canonical" tag, pointing to the definitive version of the page, preventing duplicate content issues.

  • Open Graph (OG) Tag Optimization:

* Input: Missing or incomplete Open Graph tags (e.g., og:title, og:description, og:image, og:type, og:url).

* Output: Complete and optimized Open Graph meta tags to ensure your content displays beautifully and effectively when shared on social media platforms.

  • Core Web Vitals (LCP/CLS/FID) - Actionable Recommendations:

* Input: Performance metrics from Lighthouse/CrUX, problematic script/CSS, layout shifts.

* Output: While direct code rewrite for complex performance issues is challenging, Gemini provides highly specific, actionable recommendations, such as:

* "Implement lazy loading for images and iframes below the fold."

* "Prioritize critical CSS for faster Largest Contentful Paint (LCP)."

* "Identify and defer non-critical JavaScript to improve First Input Delay (FID)."

* "Add width and height attributes to images to prevent Cumulative Layout Shift (CLS)."

* "Suggest specific CSS properties to stabilize layout elements causing CLS."

  • Structured Data Presence:

* Input: Pages lacking appropriate schema markup, page content (e.g., product details, article body, FAQ sections).

* Output: Relevant JSON-LD structured data snippets (e.g., Article, Product, FAQPage, LocalBusiness schema) accurately reflecting the page's content, ready for direct insertion into the HTML.

  • Mobile Viewport Configuration:

* Input: Missing or incorrect viewport meta tag.

* Output: The standard and recommended <meta name="viewport" content="width=device-width, initial-scale=1.0"> tag for optimal mobile responsiveness.
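Several of the capabilities above hinge on the same validation gate: a generated meta title or description is only accepted if it fits the stated limits. A minimal sketch of that check, assuming the 60/160-character limits from the spec above (the function name and status values are illustrative):

```typescript
// Validate a meta title (max 60 chars) or description (max 160 chars).
function checkMetaLength(
  kind: "title" | "description",
  value: string | null,
): { status: "pass" | "warning" | "fail"; issue?: string } {
  const max = kind === "title" ? 60 : 160;
  if (value === null || value.trim().length === 0) {
    return { status: "fail", issue: "Missing" };
  }
  if (value.length > max) {
    return { status: "warning", issue: `Too long (${value.length}/${max} chars)` };
  }
  return { status: "pass" };
}
```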


Example Scenario: Fixing a Missing Image Alt Tag

Detected Issue:


```
URL: https://www.yourdomain.com/products/example-product
Issue: Image at <img src="/images/product-hero.jpg"> is missing an 'alt' attribute.
Context: This image is the main hero shot for the "Eco-Friendly Water Bottle" product page.
```

Gemini's Generated Fix:


```html
<!-- Original HTML (Problematic) -->
<img src="/images/product-hero.jpg" class="product-image">

<!-- Gemini's Suggested Fix -->
<img src="/images/product-hero.jpg" class="product-image" alt="Eco-Friendly Reusable Water Bottle - Stainless Steel 750ml">
```

(Gemini intelligently uses the product name and context to generate a descriptive and SEO-friendly alt tag.)
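Before any prompting happens, the flagged issues are grouped into batches, as described in the process overview. A sketch of that grouping step, assuming a simplified `SeoIssue` shape and a default batch size of 20 (both are illustrative, not the production values):

```typescript
interface SeoIssue {
  url: string;
  type: string;        // e.g. "missingAlt", "duplicateMetaDescription"
  htmlSnippet: string; // the problematic element
  context: string;     // surrounding content or page theme
}

// Split issues into batches, grouped by type so each batch can share
// a single instruction preamble in the eventual prompt.
function batchIssues(issues: SeoIssue[], batchSize = 20): SeoIssue[][] {
  const byType = new Map<string, SeoIssue[]>();
  for (const issue of issues) {
    let group = byType.get(issue.type);
    if (!group) byType.set(issue.type, (group = []));
    group.push(issue);
  }
  const batches: SeoIssue[][] = [];
  for (const group of byType.values()) {
    for (let i = 0; i < group.length; i += batchSize) {
      batches.push(group.slice(i, i + batchSize));
    }
  }
  return batches;
}
```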


Deliverable & Next Steps

The output of this gemini → batch_generate step is a comprehensive list of all identified issues, each paired with its precise, AI-generated fix. This data will be structured for easy consumption and integration:

  • Detailed Report: A clear breakdown of each issue, its location (URL, line number if applicable), and the corresponding Gemini-generated solution.
  • Code Snippets: Direct HTML or JSON-LD snippets where applicable, ready for copy-pasting.
  • Actionable Instructions: Clear guidance for issues requiring strategic implementation rather than direct code changes.

This output will then be stored in MongoDB as part of the SiteAuditReport, enabling a clear "before" and "after" diff and providing your team with an efficient roadmap for optimizing your site's SEO.

hive_db Output

Step 4 of 5: hive_db → Upsert Site Audit Report

This step is critical for persisting the comprehensive SEO audit findings and their corresponding remediation strategies into our secure database. Upon successful completion of the headless crawling and AI-powered fix generation, all data is meticulously structured, compared against previous audits, and then stored or updated in MongoDB as a SiteAuditReport document.


1. Overview: Data Persistence and Historical Tracking

Following the execution of the headless crawler (Puppeteer) and the AI-driven analysis and fix generation (Gemini), this upsert operation ensures that all collected data is securely stored within the hive_db. This process involves:

  • Consolidating Audit Data: Gathering all page-level SEO metrics, performance data, and identified issues.
  • Generating AI-Powered Fixes: Incorporating the precise, actionable fixes generated by Gemini for any detected broken or non-compliant elements.
  • Creating a Before/After Diff: Comparing the current audit's findings against the most recent prior audit to highlight improvements, regressions, or new issues.
  • Storing in MongoDB: Persisting the complete SiteAuditReport document, including the historical diff, into the site_audit_reports collection.

This mechanism not only provides a snapshot of your site's SEO health but also builds a valuable historical record, enabling you to track progress, measure the impact of implemented fixes, and continuously optimize your online presence.


2. SiteAuditReport Data Structure

The SiteAuditReport document is a comprehensive record designed to capture every detail of your site's SEO performance. Each report is uniquely identified and contains the following key sections:

  • _id (Unique Identifier): A unique MongoDB ObjectId, often combined with siteId and auditTimestamp for easy lookup.
  • siteUrl (String): The root URL of the website that was audited.
  • auditTimestamp (Date): The exact date and time when the audit was performed.
  • status (String): Indicates the overall status of the audit (e.g., completed, partial, failed).
  • triggeredBy (String): Specifies how the audit was initiated (automatic or on_demand).

2.1. Overall Site Metrics (overallMetrics)

A high-level summary of the audit's findings:

  • totalPagesAudited (Number): The total number of unique pages successfully crawled and audited.
  • criticalIssuesCount (Number): The total number of high-priority SEO issues identified across the site.
  • warningIssuesCount (Number): The total number of medium-priority SEO warnings identified.
  • infoIssuesCount (Number): The total number of informational SEO notices.
  • overallSeoScore (Number): A calculated score reflecting the site's overall SEO health (e.g., out of 100).
  • averageLCP (Number): The average Largest Contentful Paint across all audited pages (in ms).
  • averageCLS (Number): The average Cumulative Layout Shift across all audited pages.
  • averageFID (Number): The average First Input Delay across all audited pages (in ms).
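The Core Web Vitals averages above are straightforward roll-ups of the page-level data. A sketch of that aggregation, where the field names mirror the spec but the rounding policy is an assumption:

```typescript
interface PageVitals {
  lcpMs: number; // Largest Contentful Paint, in ms
  cls: number;   // Cumulative Layout Shift (unitless)
  fidMs: number; // First Input Delay, in ms
}

function aggregateVitals(pages: PageVitals[]) {
  const n = pages.length || 1; // avoid division by zero on an empty crawl
  const sum = pages.reduce(
    (acc, p) => ({ lcp: acc.lcp + p.lcpMs, cls: acc.cls + p.cls, fid: acc.fid + p.fidMs }),
    { lcp: 0, cls: 0, fid: 0 },
  );
  return {
    totalPagesAudited: pages.length,
    averageLCP: Math.round(sum.lcp / n),
    averageCLS: Number((sum.cls / n).toFixed(3)), // CLS stays fractional
    averageFID: Math.round(sum.fid / n),
  };
}
```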

2.2. Page-Level Audit Details (pagesAudited)

An array of objects, where each object represents a single audited page and its specific findings:

  • url (String): The full URL of the audited page.
  • statusCode (Number): The HTTP status code returned by the page (e.g., 200, 301, 404).
  • pageLoadTime (Number): The total time it took to load the page (in ms).
  • seoChecks (Object): A detailed breakdown of the 12-point SEO checklist for the specific page:

* metaTitle:

* value (String): The extracted meta title.

* isUnique (Boolean): True if unique across the site, false otherwise.

* length (Number): Length of the meta title.

* status (String): pass, fail, warning.

* issue (String, optional): Description of the issue (e.g., "Too long", "Duplicate").

* geminiFix (String, optional): AI-generated recommendation for fixing the meta title.

* metaDescription:

* value (String): The extracted meta description.

* isUnique (Boolean): True if unique across the site, false otherwise.

* length (Number): Length of the meta description.

* status (String): pass, fail, warning.

* issue (String, optional): Description of the issue (e.g., "Missing", "Too short").

* geminiFix (String, optional): AI-generated recommendation for fixing the meta description.

* h1Presence:

* found (Boolean): True if an H1 tag is present.

* value (String, optional): The text content of the H1 tag.

* status (String): pass, fail.

* issue (String, optional): Description of the issue (e.g., "Missing H1", "Multiple H1s").

* geminiFix (String, optional): AI-generated recommendation for fixing the H1.

* imageAltCoverage:

* totalImages (Number): Total images on the page.

* missingAlts (Number): Number of images missing alt attributes.

* coverageRatio (Number): Percentage of images with alt attributes.

* status (String): pass, fail, warning.

* issue (String, optional): Description of the issue (e.g., "Missing alt attributes").

* geminiFixes (Array of Strings, optional): AI-generated recommendations for specific image alt texts.

* internalLinkDensity:

* internalLinks (Number): Count of internal links.

* externalLinks (Number): Count of external links.

* density (Number): Ratio of internal links to total links.

* status (String): pass, info.

* issue (String, optional): Informational note if density is unusually high/low.

* canonicalTag:

* present (Boolean): True if a canonical tag is found.

* value (String, optional): The URL specified in the canonical tag.

* isSelfReferencing (Boolean, optional): True if canonical points to itself.

* status (String): pass, fail, warning.

* issue (String, optional): Description of the issue (e.g., "Missing", "Incorrect URL").

* geminiFix (String, optional): AI-generated recommendation for canonical tag.

* openGraphTags:

* present (Boolean): True if Open Graph tags are found.

* title (String, optional): OG title.

* description (String, optional): OG description.

* image (String, optional): OG image URL.

* status (String): pass, fail, warning.

* issue (String, optional): Description of the issue (e.g., "Missing crucial OG tags").

* geminiFixes (Array of Strings, optional): AI-generated recommendations for fixing/adding OG tags.

* coreWebVitals:

* LCP (Number): Largest Contentful Paint (in ms).

* CLS (Number): Cumulative Layout Shift.

* FID (Number): First Input Delay (in ms).

* status (String): pass, fail, warning.

* issue (String, optional): Description of the issue (e.g., "Poor LCP performance").

* geminiFixes (Array of Strings, optional): AI-generated recommendations for improving Core Web Vitals.

* structuredData:

* present (Boolean): True if structured data (Schema.org) is detected.

* types (Array of Strings, optional): Detected schema types (e.g., Article, Product).

* status (String): pass, fail, warning.

* issue (String, optional): Description of the issue (e.g., "Missing required properties").

* geminiFix (String, optional): AI-generated recommendation for structured data.

* mobileViewport:

* present (Boolean): True if a viewport meta tag is correctly configured for mobile.

* status (String): pass, fail.

* issue (String, optional): Description of the issue (e.g., "Viewport not configured").

* geminiFix (String, optional): AI-generated recommendation for viewport configuration.
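As a concrete example of how one seoChecks entry is derived, the imageAltCoverage fields above can be computed from two counts. The 90% warning threshold in this sketch is an illustrative assumption, not the production rule:

```typescript
// Build the imageAltCoverage entry for a single page.
function evaluateAltCoverage(totalImages: number, missingAlts: number) {
  const covered = totalImages - missingAlts;
  const coverageRatio = totalImages === 0 ? 100 : Math.round((covered / totalImages) * 100);
  const status = missingAlts === 0 ? "pass" : coverageRatio >= 90 ? "warning" : "fail";
  const result: {
    totalImages: number;
    missingAlts: number;
    coverageRatio: number;
    status: string;
    issue?: string;
  } = { totalImages, missingAlts, coverageRatio, status };
  if (missingAlts > 0) result.issue = `${missingAlts} image(s) missing alt attributes`;
  return result;
}
```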

2.3. Previous Audit Reference and Diff Report

  • previousAuditId (ObjectId, optional): A reference to the _id of the immediately preceding SiteAuditReport for the same site. This is crucial for calculating the diff.
  • diffReport (Object, optional): This section is generated by comparing the current audit's findings with the previousAuditId audit:

* newIssues (Array of Objects): Details of issues identified in the current audit that were *not* present in the previous one. Each object includes url, issueType, description, and geminiFix.

* resolvedIssues (Array of Objects): Details of issues present in the previous audit that are *no longer found* in the current audit, indicating successful remediation. Each object includes url, issueType, description.

* changedMetrics (Array of Objects): Key performance indicators or overall scores that have significantly changed (e.g., overallSeoScore improved by 5 points, averageLCP decreased by 100ms). Each object includes metricName, oldValue, newValue, change.

* pageChanges (Array of Objects): Specific, granular changes identified on individual pages (e.g., "Meta Title updated on /page-x", "H1 added to /page-y"). Each object includes url, changeType, details.


3. Upsert Logic and Database Interaction

The hive_db → upsert step performs the following actions:

  1. Retrieve Previous Audit: Before storing the current report, the system attempts to retrieve the most recent SiteAuditReport for your site from the site_audit_reports collection. This previous audit is essential for generating the diffReport.
  2. Generate Diff Report: The current audit's data is systematically compared against the retrieved previous audit. All changes, new issues, and resolved issues are meticulously logged into the diffReport section of the new SiteAuditReport document.
  3. Construct SiteAuditReport Document: The complete document, structured as described above, is assembled, including all raw audit data, Gemini's fixes, and the historical diff.
  4. Upsert into MongoDB: Using the upsert operation, the system attempts to:

* Insert: If no prior audit exists for the site (e.g., first-time audit), a new SiteAuditReport document is inserted into the site_audit_reports collection.

* Update: If a prior audit exists, the system intelligently updates the relevant fields or inserts a new document while linking it to the previousAuditId, ensuring a continuous historical chain. This ensures that the latest audit is always available while preserving historical data.
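Steps 1 and 3 of the upsert logic reduce to finding the latest prior report for the site and linking the new document to it. A minimal sketch under a simplified `AuditDoc` shape (the real document carries the full report):

```typescript
interface AuditDoc {
  _id: string;
  siteUrl: string;
  auditTimestamp: Date;
  previousAuditId?: string;
}

// Link the new report to the most recent prior audit for the same site.
function linkToPreviousAudit(newDoc: AuditDoc, history: AuditDoc[]): AuditDoc {
  const prior = history
    .filter((d) => d.siteUrl === newDoc.siteUrl)
    .sort((a, b) => b.auditTimestamp.getTime() - a.auditTimestamp.getTime())[0];
  // First-time audit: no prior report, so previousAuditId (and the diff) are omitted.
  return prior ? { ...newDoc, previousAuditId: prior._id } : newDoc;
}
```

The write itself would then be a standard driver call such as `collection.insertOne(linkedDoc)`, or `collection.updateOne(filter, update, { upsert: true })` if a "latest report" document is maintained alongside the history.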


4. Customer Benefits and Actionable Insights

By meticulously storing this data in MongoDB, we empower you with:

  • Comprehensive SEO Visibility: A single, consolidated view of your site's SEO health, broken down by page and specific SEO factors.
  • Actionable Fixes: Direct access to Gemini's AI-generated, precise fixes for every identified issue, ready for implementation.
hive_db Output

Workflow Step: hive_db → conditional_update

This document outlines the final and critical step in the "Site SEO Auditor" workflow, focusing on the persistence and intelligent management of your SEO audit data within our secure MongoDB database (hive_db).


Purpose of this Step

The hive_db → conditional_update step serves as the robust data management layer for your SEO audit reports. Its primary objectives are:

  1. Data Persistence: Securely store all generated SEO audit reports, including the detailed 12-point checklist results, identified issues, and Gemini-generated fixes, in a structured and queryable format within MongoDB.
  2. Historical Tracking: Maintain a chronological record of all audits performed for your site. This enables long-term performance monitoring and trend analysis.
  3. Before/After Diff Generation: Crucially, this step compares the newly generated audit report with the most recent previous report. It intelligently identifies and records changes, providing a clear "before" and "after" snapshot of your site's SEO health. This diff is vital for understanding the impact of implemented changes and identifying new regressions.

Input Data for Update

This step receives a comprehensive, newly generated SiteAuditReport object as its primary input. This object encapsulates all findings from the preceding workflow steps, including:

  • Audit Metadata: Timestamp, site URL, and overall audit status.
  • Page-Level Details: For each crawled page:

* URL and HTTP status.

* Detailed results for each of the 12 SEO checklist points (e.g., meta title uniqueness, H1 presence, image alt coverage, Core Web Vitals scores, canonical tags, Open Graph data, structured data, mobile viewport configuration).

* Specific issues identified, including the failing metric, a detailed description of the problem, and its severity.

* The precise, actionable fix generated by Gemini for each identified broken element.

  • Aggregated Metrics: Overall site health scores or summaries derived from the page-level data.

Additionally, this step implicitly accesses the hive_db to retrieve the most recent prior SiteAuditReport for your specific website to facilitate the "before/after diff" comparison.


Process Details: Conditional Database Update

The conditional_update process is executed with the following logic to ensure data integrity, historical accuracy, and efficient storage:

  1. Retrieve Previous Report: Upon receiving the new SiteAuditReport, the system queries hive_db to fetch the latest existing audit report for your website (identified by its URL).
  2. Generate "Before/After" Diff:

* If a previous report is found, the system performs a deep comparison between the new report and the old report.

* It meticulously identifies changes in SEO metrics, new issues that have appeared, issues that have been resolved, and any changes in Core Web Vitals scores or other quantifiable metrics.

* This comparison results in a structured diffFromPrevious object, which is then embedded within the *new* SiteAuditReport. This diff highlights what has improved, what has deteriorated, and what has remained consistent.

  3. Database Operation:

* Insert (First Audit): If no previous audit report exists for your site (i.e., this is the very first audit), the new SiteAuditReport (without a diffFromPrevious object) is inserted as a new document into the SiteAuditReports collection.

* Update/Insert (Subsequent Audits): For all subsequent audits, the new SiteAuditReport (now containing the diffFromPrevious object) is inserted as a new document. This approach ensures an immutable historical record of each audit. The "conditional" aspect primarily refers to the conditional generation of the diff and the decision to always add a new document rather than overwriting, preserving the full history.

  4. Indexing: Appropriate indexes are maintained on fields like siteUrl and auditTimestamp to ensure rapid retrieval of historical data and the latest reports.
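The index and the latest-report lookup can be expressed as plain specifications. The field names match the example document below; the collection name SiteAuditReports comes from this document, while the exact index shape is an assumption about a reasonable implementation:

```python
# Compound index: equality on siteUrl, newest-first on auditTimestamp,
# so "latest report for this site" is an index-covered lookup.
AUDIT_INDEX = [("siteUrl", 1), ("auditTimestamp", -1)]

def latest_report_query(site_url):
    """Filter and sort arguments for fetching the most recent report."""
    return {"siteUrl": site_url}, [("auditTimestamp", -1)]

# With a live pymongo collection these would be applied as:
#   collection.create_index(AUDIT_INDEX)
#   filt, sort = latest_report_query(url)
#   previous = collection.find_one(filt, sort=sort)
```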

Example Data Structure (SiteAuditReport in MongoDB):


{
  "_id": ObjectId("..."),
  "siteUrl": "https://www.yourwebsite.com",
  "auditTimestamp": ISODate("2023-10-29T02:00:00.000Z"),
  "overallStatus": "Needs Attention", // e.g., "Pass", "Fail", "Needs Attention"
  "pages": [
    {
      "url": "https://www.yourwebsite.com/homepage",
      "seoMetrics": {
        "metaTitleUnique": true,
        "metaDescriptionUnique": false, // Issue
        "h1Presence": true,
        "imageAltCoverage": { "total": 10, "covered": 8, "percentage": 80 },
        // ... other 12-point checklist items
      },
      "issuesFound": [
        {
          "metric": "Meta Description Uniqueness",
          "description": "Duplicate meta description found on 3 other pages.",
          "severity": "Warning",
          "suggestedFix": "Update the meta description for '/homepage' to be unique and descriptive, focusing on keywords relevant to this specific page's content. Example: '<meta name=\"description\" content=\"Discover our unique services...\" />'",
          "fixStatus": "Pending"
        }
      ]
    }
    // ... more pages
  ],
  "diffFromPrevious": { // Only present if a previous report exists
    "summary": {
      "issuesResolved": 2,
      "newIssuesDetected": 1,
      "coreWebVitalsImproved": ["LCP"],
      "coreWebVitalsDegraded": [],
      "overallChange": "Mixed" // "Improved", "Degraded", "No Change"
    },
    "pageChanges": [
      {
        "url": "https://www.yourwebsite.com/product-page",
        "changes": [
          {
            "metric": "H1 Presence",
            "oldValue": false,
            "newValue": true,
            "status": "Improved"
          }
        ]
      }
    ]
  }
}

Outcome and Deliverables

The successful execution of this step provides the following direct deliverables:

  • Persistent Audit Record: A complete, time-stamped SiteAuditReport document is stored in your hive_db for every audit run. This ensures that no data is lost and all historical context is preserved.
  • Actionable Historical Insights: The embedded diffFromPrevious object within each report provides immediate context on changes since the last audit. This allows you to quickly identify if recent website updates have positively or negatively impacted your SEO.
  • Foundation for Automated Reporting: The structured data stored in MongoDB forms the bedrock for any automated reporting, dashboards, or alerts that you may integrate or that PantheraHive provides, allowing for powerful visualization and analysis of your SEO performance over time.

Customer Benefits

This final step is crucial for delivering tangible value to you:

  • Continuous Improvement Tracking: Easily monitor the effectiveness of your SEO efforts. See which issues have been resolved and track the progress of your Core Web Vitals over weeks and months.
  • Proof of SEO Progress: Generate reports showing clear improvements in your site's SEO health, demonstrating the ROI of your optimization strategies.
  • Targeted Remediation: The "before/after" diff helps pinpoint new regressions immediately, allowing for rapid intervention before they significantly impact your search rankings.
  • Compliance and Accountability: Maintain a clear, auditable history of your site's SEO performance, which can be valuable for internal reporting or external compliance requirements.
  • Reduced Manual Effort: The automatic storage and diff generation eliminate the need for manual tracking and comparison of audit reports, saving time and reducing human error.
site_seo_auditor.txt
Download source file
Copy all content
Full output as text
Download ZIP
IDE-ready project ZIP
Copy share link
Permanent URL for this run
Get Embed Code
Embed this result on any website
Print / Save PDF
Use browser print dialog
\n\n\n"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react'\nimport ReactDOM from 'react-dom/client'\nimport App from './App'\nimport './index.css'\n\nReactDOM.createRoot(document.getElementById('root')!).render(\n \n \n \n)\n"); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react'\nimport './App.css'\n\nfunction App(){\n return(\n
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n
\n )\n}\nexport default App\n"); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e}\n.app{min-height:100vh;display:flex;flex-direction:column}\n.app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px}\nh1{font-size:2.5rem;font-weight:700}\n"); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\n## Open in IDE\nOpen the project folder in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "type": "module",\n "scripts": {\n "dev": "vite",\n "build": "vue-tsc -b && vite build",\n "preview": "vite preview"\n },\n "dependencies": {\n "vue": "^3.5.13",\n "vue-router": "^4.4.5",\n "pinia": "^2.3.0",\n "axios": "^1.7.9"\n },\n "devDependencies": {\n "@vitejs/plugin-vue": "^5.2.1",\n "typescript": "~5.7.3",\n "vite": "^6.0.5",\n "vue-tsc": "^2.2.0"\n }\n}\n'); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite'\nimport vue from '@vitejs/plugin-vue'\nimport { resolve } from 'path'\n\nexport default defineConfig({\n plugins: [vue()],\n resolve: { alias: { '@': resolve(__dirname,'src') } }\n})\n"); 
zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]}\n'); zip.file(folder+"tsconfig.app.json",'{\n "compilerOptions":{\n "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"],\n "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true,\n "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue",\n "strict":true,"paths":{"@/*":["./src/*"]}\n },\n "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"]\n}\n'); zip.file(folder+"env.d.ts","/// \n"); zip.file(folder+"index.html","\n\n\n \n \n "+slugTitle(pn)+"\n\n\n
\n \n\n\n"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue'\nimport { createPinia } from 'pinia'\nimport App from './App.vue'\nimport './assets/main.css'\n\nconst app = createApp(App)\napp.use(createPinia())\napp.mount('#app')\n"); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue","\n\n\n\n\n"); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547}\n"); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\nOpen in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "scripts": {\n "ng": "ng",\n "start": "ng serve",\n "build": "ng build",\n "test": "ng test"\n },\n "dependencies": {\n "@angular/animations": "^19.0.0",\n "@angular/common": "^19.0.0",\n "@angular/compiler": "^19.0.0",\n "@angular/core": "^19.0.0",\n "@angular/forms": "^19.0.0",\n "@angular/platform-browser": "^19.0.0",\n "@angular/platform-browser-dynamic": "^19.0.0",\n "@angular/router": "^19.0.0",\n "rxjs": "~7.8.0",\n "tslib": "^2.3.0",\n "zone.js": "~0.15.0"\n },\n "devDependencies": {\n 
"@angular-devkit/build-angular": "^19.0.0",\n "@angular/cli": "^19.0.0",\n "@angular/compiler-cli": "^19.0.0",\n "typescript": "~5.6.0"\n }\n}\n'); zip.file(folder+"angular.json",'{\n "$schema": "./node_modules/@angular/cli/lib/config/schema.json",\n "version": 1,\n "newProjectRoot": "projects",\n "projects": {\n "'+pn+'": {\n "projectType": "application",\n "root": "",\n "sourceRoot": "src",\n "prefix": "app",\n "architect": {\n "build": {\n "builder": "@angular-devkit/build-angular:application",\n "options": {\n "outputPath": "dist/'+pn+'",\n "index": "src/index.html",\n "browser": "src/main.ts",\n "tsConfig": "tsconfig.app.json",\n "styles": ["src/styles.css"],\n "scripts": []\n }\n },\n "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"}\n }\n }\n }\n}\n'); zip.file(folder+"tsconfig.json",'{\n "compileOnSave": false,\n "compilerOptions": {"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]},\n "references":[{"path":"./tsconfig.app.json"}]\n}\n'); zip.file(folder+"tsconfig.app.json",'{\n "extends":"./tsconfig.json",\n "compilerOptions":{"outDir":"./dist/out-tsc","types":[]},\n "files":["src/main.ts"],\n "include":["src/**/*.d.ts"]\n}\n'); zip.file(folder+"src/index.html","\n\n\n \n "+slugTitle(pn)+"\n \n \n \n\n\n \n\n\n"); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser';\nimport { appConfig } from 
'./app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch(err => console.error(err));\n"); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; }\nbody { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; }\n"); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = '"+pn+"';\n}\n"); zip.file(folder+"src/app/app.component.html","
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}