Site SEO Auditor
Run ID: 69caeb34c8ebe3066ba6f751 · 2026-03-30 · SEO & Growth

Step 1 of 5: Puppeteer Crawl - Initial Site Discovery and Data Collection

This document details the execution and output of the "Puppeteer Crawl" step, which is the foundational phase of your Site SEO Auditor workflow. This step systematically discovers and collects raw data from every page on your website, preparing it for a comprehensive SEO audit.


1. Step Overview: Puppeteer Crawl

Objective: The primary goal of this step is to act as a headless crawler, navigating your website to identify all discoverable pages and capture their complete, rendered HTML content and associated metadata. This raw data forms the essential input for the subsequent SEO audit checks.

Technology: We leverage Puppeteer, a powerful Node.js library that provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer is ideal for this task as it can render pages identically to a real browser, including executing JavaScript, which is crucial for modern, dynamically-generated websites.

Key Functionality:


2. Execution Details: How the Crawl Works

The crawling process is designed for thoroughness and efficiency, ensuring that all accessible pages are identified and their content retrieved.
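The traversal described in the subsections below can be sketched as a breadth-first queue over discovered URLs. This is a minimal illustration, not the production crawler: `fetchPage` is a hypothetical stand-in for the Puppeteer-backed page loader (the real version would be asynchronous), and the page shape mirrors the raw data structure shown later in this document.

```javascript
// Breadth-first crawl sketch. fetchPage(url) is a hypothetical stand-in for
// the Puppeteer-backed loader and must return { statusCode, internalLinks }.
function crawlSite(startUrl, fetchPage, maxPages = 500) {
  const visited = new Set();   // URLs already processed (dedupe)
  const queue = [startUrl];    // URLs waiting to be crawled
  const results = [];
  while (queue.length > 0 && results.length < maxPages) {
    const url = queue.shift();
    if (visited.has(url)) continue;
    visited.add(url);
    const page = fetchPage(url); // real version: await a Puppeteer page load
    results.push({ url, ...page });
    for (const link of page.internalLinks) {
      if (!visited.has(link)) queue.push(link); // discover new pages
    }
  }
  return results;
}
```

The `maxPages` cap is one simple politeness/safety measure; the real crawler would also throttle request rate and handle navigation errors.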

2.1. Initialization and Configuration

2.2. URL Discovery and Traversal

2.3. Page Rendering and Data Extraction

2.4. Politeness and Error Handling


3. Output of This Step: Raw Page Data Collection

Upon completion of the crawling process, this step generates a structured collection of raw data for every successfully discovered and processed URL. This data is then passed as input to the subsequent "Audit" step.

For each unique URL identified on your website, the following key data points are collected:

Example Raw Data Structure (per URL):

{
  "url": "https://www.yourwebsite.com/product/example-product",
  "finalUrl": "https://www.yourwebsite.com/product/example-product",
  "statusCode": 200,
  "htmlContent": "<!DOCTYPE html><html lang=\"en\">... (full HTML content) ...</html>",
  "timestamp": "2023-10-27T08:30:00.000Z",
  "internalLinks": [
    "https://www.yourwebsite.com/category/electronics",
    "https://www.yourwebsite.com/contact-us"
  ],
  "metaRobots": "index, follow"
}

This comprehensive dataset provides the necessary foundation for the detailed 12-point SEO checklist audit in the next phase.
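As a rough illustration of how fields like internalLinks and metaRobots can be derived from htmlContent, here is a regex-based sketch. It is an assumption about the extraction approach, not the production parser: in practice Puppeteer would query the rendered DOM rather than the raw HTML string, and statusCode would come from the navigation response.

```javascript
// Illustrative extraction of internalLinks and metaRobots from raw HTML.
// Regex-based sketch only; the real crawler queries the rendered DOM.
function extractPageData(url, htmlContent) {
  const internalLinks = [];
  const origin = new URL(url).origin;
  const hrefRe = /<a\s[^>]*href=["']([^"']+)["']/gi;
  let m;
  while ((m = hrefRe.exec(htmlContent)) !== null) {
    const abs = new URL(m[1], url).href;          // resolve relative links
    if (abs.startsWith(origin)) internalLinks.push(abs); // keep same-origin only
  }
  const robots = htmlContent.match(
    /<meta\s[^>]*name=["']robots["'][^>]*content=["']([^"']+)["']/i
  );
  return {
    url,
    statusCode: 200, // supplied by the crawler's navigation response in practice
    internalLinks,
    metaRobots: robots ? robots[1] : null,
  };
}
```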


4. Next Steps in the Workflow

The raw data collected in this "Puppeteer Crawl" step will now be passed to Step 2: SEO Audit Engine. In this subsequent step, the htmlContent and other collected metadata for each page will be systematically analyzed against the defined SEO checklist criteria (meta tags, H1s, alt text, canonicals, etc.) to identify any issues and generate actionable recommendations.

hive_db Output

Step 2 of 5: Data Comparison and Diff Generation (hive_db → diff)

This step is crucial for transforming raw audit data into actionable intelligence by comparing the latest crawl results against your site's previous SEO performance. The "diff" operation identifies changes, improvements, and regressions, providing a clear historical perspective on your site's SEO health.


Objective of the Diff Operation

The primary objective of the hive_db → diff step is to generate a comprehensive "before and after" comparison report. This report serves several critical purposes:

  • Track Progress: Monitor the impact of SEO changes implemented on your site over time.
  • Identify Regressions: Promptly detect any new issues or re-emergence of old problems that could negatively affect your search rankings.
  • Prioritize Fixes: Highlight the most significant changes or new critical errors, allowing for efficient resource allocation.
  • Validate Improvements: Confirm that previous fixes have been successfully implemented and are reflected in the current audit.
  • Historical Context: Build a detailed historical record of your site's SEO performance, enabling long-term trend analysis.

Mechanism of Diff Generation

Upon completion of the headless crawl and initial audit (Step 1), the newly generated SEO data is stored in your dedicated MongoDB instance (hive_db). This step then performs the following operations:

  1. Retrieve Previous Audit: The system queries MongoDB to retrieve the most recent SiteAuditReport for your domain. If no previous report exists (e.g., first-time audit), the current audit will serve as the baseline for future comparisons.
  2. Current vs. Previous Comparison: Each audited page and its respective SEO metrics are systematically compared against their counterparts from the previous report. This comparison is performed on a page-by-page and metric-by-metric basis.
  3. Identify Changes: A sophisticated algorithm analyzes the differences, categorizing them into:

* New Issues: Problems identified in the current audit that were not present in the previous one.

* Resolved Issues: Problems identified in the previous audit that are no longer present in the current one.

* Regressions: Metrics that have worsened since the last audit (e.g., Core Web Vitals scores degrading).

* Improvements: Metrics that have improved since the last audit (e.g., Core Web Vitals scores improving).

* Unchanged: Metrics that remain consistent between audits.

  4. Populate Diff Field: The identified changes are then structured and stored within the diff field of the new SiteAuditReport document in MongoDB. This ensures that every audit report contains not just the current state, but also a clear overview of how it has changed from the previous state.
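The categorisation in the steps above can be sketched as a simple set comparison. This is a minimal illustration under the assumption that each issue is keyed by page URL plus metric name (the production comparison is richer, covering metric values as well as issue presence):

```javascript
// Minimal sketch of issue categorisation between two audits.
// Issues are keyed by pageUrl + metric so the same problem can be matched.
function diffIssues(previousIssues, currentIssues) {
  const key = (i) => `${i.pageUrl}::${i.metric}`;
  const prev = new Set(previousIssues.map(key));
  const curr = new Set(currentIssues.map(key));
  return {
    newIssues: currentIssues.filter((i) => !prev.has(key(i))),       // appeared
    resolvedIssues: previousIssues.filter((i) => !curr.has(key(i))), // disappeared
    unchanged: currentIssues.filter((i) => prev.has(key(i))),        // persisted
  };
}
```

Regressions and improvements would be derived similarly, by comparing metric values (e.g., LCP scores) rather than issue presence.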

Key Metrics and Elements Undergoing Diff Analysis

The diff operation meticulously compares all 12 points of the SEO checklist for every page crawled:

  • Meta Title Uniqueness & Content:

* Changes in title content.

* New duplicate titles detected.

* Resolved duplicate titles.

  • Meta Description Uniqueness & Content:

* Changes in description content.

* New duplicate descriptions detected.

* Resolved duplicate descriptions.

  • H1 Presence and Uniqueness:

* New pages missing H1s.

* Resolved H1 issues.

* Changes in H1 content.

  • Image Alt Coverage:

* New images missing alt text.

* Resolved missing alt text.

* Changes in alt text content.

  • Internal Link Density:

* Significant increases or decreases in internal link count per page.

* Identification of new orphaned pages (if link count drops to zero).

  • Canonical Tags:

* Changes in canonical URL.

* New pages missing canonical tags.

* Resolved incorrect canonical tags.

  • Open Graph (OG) Tags:

* New pages missing essential OG tags (e.g., og:title, og:description, og:image).

* Changes in OG tag content.

  • Core Web Vitals (LCP/CLS/FID):

* Largest Contentful Paint (LCP): Changes in LCP score (e.g., moving from "Good" to "Needs Improvement").

* Cumulative Layout Shift (CLS): Changes in CLS score.

* First Input Delay (FID): Changes in FID score.

  • Structured Data Presence:

* New pages missing detected structured data (e.g., Schema.org markup).

* Changes in structured data type or content.

  • Mobile Viewport Meta Tag:

* Detection of new pages missing the viewport meta tag.

* Resolution of missing tags.


Structure of the Diff Output in SiteAuditReport

The diff information will be stored within the SiteAuditReport document in MongoDB, typically as a structured object or array of changes. This structure allows for easy programmatic access and visualization.

Example Structure (Conceptual):


{
  "_id": "report_id_current_date",
  "domain": "yourwebsite.com",
  "auditDate": "2023-10-27T10:00:00Z",
  "previousAuditId": "report_id_previous_date", // Reference to the previous report
  "summary": {
    "totalPagesAudited": 150,
    "newIssuesFound": 15,
    "issuesResolved": 8,
    "criticalRegressions": 2,
    "overallHealthScore": 85 // Example metric
  },
  "pages": [
    // ... current audit data for each page
  ],
  "diff": {
    "overall": {
      "cwv_lcp_status_change": "Good -> Needs Improvement",
      "total_broken_elements_change": "+7"
    },
    "pageChanges": [
      {
        "url": "https://yourwebsite.com/blog/new-article",
        "type": "New Page Audited",
        "issuesFound": [
          {"metric": "h1_presence", "status": "missing"},
          {"metric": "meta_description_uniqueness", "status": "duplicate"}
        ]
      },
      {
        "url": "https://yourwebsite.com/product/item-a",
        "type": "Changes Detected",
        "changes": [
          {"metric": "meta_title", "previous": "Old Title", "current": "New Title"},
          {"metric": "image_alt_coverage", "status": "improved", "details": "2 images now have alt text"},
          {"metric": "cwv_lcp", "status": "regressed", "previousScore": 2.5, "currentScore": 3.8}
        ],
        "newIssues": [
          {"metric": "canonical_tag", "status": "missing"}
        ],
        "resolvedIssues": [
          {"metric": "open_graph_image", "status": "fixed"}
        ]
      },
      {
        "url": "https://yourwebsite.com/about-us",
        "type": "No Significant Changes"
      }
    ],
    "globalChanges": [
      {
        "metric": "total_duplicate_meta_titles",
        "previousCount": 10,
        "currentCount": 12,
        "changeType": "regression"
      },
      {
        "metric": "average_cwv_lcp_score",
        "previousValue": 2.8,
        "currentValue": 2.2,
        "changeType": "improvement"
      }
    ]
  }
}

Actionable Insights from the Diff Report

The generated diff report provides direct, actionable insights for your SEO strategy:

  • Immediate Alert for Regressions: Quickly identify pages or metrics that have worsened, allowing for prompt investigation and remediation before they impact rankings.
  • Validation of SEO Efforts: Clearly see which implemented fixes have successfully resolved issues, confirming the effectiveness of your SEO work.
  • Prioritized Task List: Focus on "New Issues" and "Regressions" as immediate priorities for your development and content teams.
  • Trend Analysis: Over multiple audit cycles, the diff reports will allow you to observe long-term trends in your site's performance, helping you refine your SEO strategy.
  • Content Management Oversight: Detect unintended changes made by content editors or developers that might inadvertently harm SEO (e.g., removing H1s, changing canonicals).

Next Steps

With the diff generated and stored in the SiteAuditReport, the workflow will proceed to Step 3. This next step will leverage the identified "broken elements" (especially new issues and regressions highlighted by the diff) and send them to Gemini. Gemini will then generate precise, actionable fixes, further empowering you to maintain and improve your site's SEO health.

gemini Output

Step 3 of 5: Gemini AI - Automated Fix Generation (batch_generate)

This crucial step leverages the advanced capabilities of Gemini AI to automatically generate precise, actionable fixes for every SEO issue identified during the comprehensive site crawl and audit. Rather than simply highlighting problems, our system provides you with the exact code snippets and content recommendations needed to resolve them, streamlining your SEO remediation process.


1. Purpose of This Step

Following the exhaustive site audit (Step 2), a detailed list of SEO violations and areas for improvement is compiled. This step takes that raw audit data and feeds it into the Gemini AI model. Gemini's purpose here is to interpret each detected issue, understand its context within the page's HTML and content, and then synthesize the most effective, ready-to-implement solution. This ensures that you receive more than just a report; you receive a solution blueprint.

2. Gemini's Intelligent Remediation Process

For each identified SEO issue, Gemini receives a rich context, including:

  • The specific audit rule violated: E.g., "Missing H1 tag," "Duplicate Meta Description," "Image missing Alt text."
  • The problematic HTML snippet or content segment: The exact code or text where the issue resides.
  • The full page content (if necessary): For context-aware suggestions like alt text generation or structured data.
  • The URL of the affected page: To maintain navigational and structural awareness.

Gemini then processes this information through its sophisticated natural language understanding and code generation capabilities to produce highly targeted fixes.
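The per-issue context described above can be pictured as a small request object assembled before the model call. This is a hedged sketch: the field names (rule, snippet, pageContext) and the needsFullPage flag are illustrative assumptions, not the exact production schema or the Gemini API itself.

```javascript
// Illustrative assembly of the per-issue context sent to the model.
// Field names are assumptions; this is not the Gemini API payload format.
function buildFixRequest(issue, page) {
  return {
    rule: issue.type,                 // e.g. "MissingH1"
    description: issue.description,   // human-readable audit finding
    snippet: issue.element || null,   // problematic HTML, if captured
    pageUrl: page.url,                // navigational/structural awareness
    pageContext: issue.needsFullPage ? page.htmlContent : null, // optional full page
  };
}
```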

3. Comprehensive Fixes Across All 12 Audit Points

Gemini generates specific, actionable fixes for every item on our 12-point SEO checklist:

  • Meta Title/Description Uniqueness & Optimization:

* Fixes: Provides optimized <title> and <meta name='description'> tag content, often including keyword suggestions, character count adherence, and ensuring uniqueness across the site.

  • H1 Presence & Optimization:

* Fixes: If an H1 is missing, Gemini suggests appropriate H1 content based on page context. If multiple H1s are present, it recommends which existing heading tag to promote/demote or how to consolidate.

  • Image Alt Coverage & Quality:

* Fixes: For images missing alt attributes or having unoptimized ones, Gemini analyzes the image context (filename, surrounding text, visual cues if available) and generates descriptive, SEO-friendly alt text.

  • Internal Link Density & Broken Links:

* Fixes: Identifies pages with low internal link density and suggests relevant anchor text and target pages for new, contextually appropriate internal links. For broken links, it suggests correct target URLs.

  • Canonical Tags:

* Fixes: Generates the correct <link rel='canonical'> tag, ensuring proper URL normalization and preventing duplicate content issues by pointing to the preferred version of a page.

  • Open Graph Tags (Social Sharing):

* Fixes: Provides complete and correctly formatted Open Graph (og:title, og:description, og:image, og:url, og:type, etc.) tags, crucial for optimal appearance and performance when content is shared on social media platforms.

  • Core Web Vitals (LCP/CLS/FID):

* Fixes: Offers specific, technical code-level and configuration suggestions to improve:

* LCP (Largest Contentful Paint): E.g., preloading critical images, optimizing server response times, reducing render-blocking resources.

* CLS (Cumulative Layout Shift): E.g., setting explicit width/height attributes for images/videos, pre-allocating space for dynamically injected content.

* FID (First Input Delay): E.g., deferring non-critical JavaScript, breaking up long tasks, optimizing third-party script loading.

  • Structured Data Presence & Validity:

* Fixes: If structured data (e.g., Schema.org for Article, Product, LocalBusiness, FAQPage) is missing or malformed, Gemini will generate valid JSON-LD snippets tailored to the specific content and type of the page.

  • Mobile Viewport Configuration:

* Fixes: Provides the correct <meta name='viewport'> tag to ensure proper rendering across various mobile devices, guaranteeing a responsive and user-friendly experience.

4. Batch Generation & Efficiency

Gemini processes all identified issues in a highly efficient batch mode. This means that whether your site has dozens or thousands of pages with issues, the fixes are generated comprehensively and simultaneously. This approach ensures that you receive a complete set of solutions without manual intervention for each problem, significantly accelerating your SEO improvement timeline.
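Batch mode can be illustrated by grouping issues into fixed-size chunks so each model call carries many issues at once. The batch size here is an arbitrary assumption for the sketch:

```javascript
// Group issues into fixed-size batches for batch_generate-style processing.
// batchSize of 20 is an illustrative assumption.
function chunkIssues(issues, batchSize = 20) {
  const batches = [];
  for (let i = 0; i < issues.length; i += batchSize) {
    batches.push(issues.slice(i, i + batchSize)); // one model call per batch
  }
  return batches;
}
```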

5. Integration with SiteAuditReport

The generated fixes are meticulously integrated into your SiteAuditReport. For each identified issue, the report will now include:

  • The original problematic element/context.
  • The Gemini-generated fix: Presented as a clear code snippet, content recommendation, or configuration suggestion.
  • An explanation of why the fix is necessary: Reinforcing SEO best practices.

This allows for a direct "before" (the issue) and "after" (the proposed fix) comparison, making it easy to understand and implement the recommended changes.
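The integration step can be sketched as matching each generated fix back to its issue in the report, producing the "before"/"after" pairing described above. The match key (pageUrl plus issue type) and the geminiFix field shape are assumptions for illustration:

```javascript
// Attach generated fixes to their matching issues in the report.
// Match key and field names are illustrative assumptions.
function attachFixes(report, fixes) {
  const byKey = new Map(fixes.map((f) => [`${f.pageUrl}::${f.type}`, f]));
  for (const page of report.pagesAudited) {
    for (const issue of page.issuesDetected) {
      const fix = byKey.get(`${page.pageUrl}::${issue.type}`);
      if (fix) {
        // "before" is the issue itself; "after" is the proposed fix
        issue.geminiFix = { suggestedFix: fix.suggestedFix, explanation: fix.explanation };
      }
    }
  }
  return report;
}
```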

6. Actionable Deliverables

Upon completion of this step, the SiteAuditReport stored in MongoDB will be enriched with all the necessary information for remediation. You will receive:

  • A comprehensive list of every identified SEO issue.
  • For each issue, the exact, AI-generated solution (code or content).
  • A clear pathway to implement improvements across your entire website.

This deliverable empowers your team with specific, ready-to-use solutions, transforming a diagnostic report into an actionable plan for SEO excellence.

hive_db Output

Step 4 of 5: hive_db → upsert - Site Audit Report Persistence

This document details the critical fourth step in your "Site SEO Auditor" workflow, focusing on the secure and structured storage of your comprehensive SEO audit results within our dedicated hive_db (MongoDB) instance. This upsert operation ensures that every audit report is meticulously recorded, providing both historical context and actionable insights.


1. Introduction to the hive_db Upsert Step

Following the exhaustive crawling and analysis of your website and the AI-powered generation of fixes for identified issues, this step is responsible for persisting all gathered data. The upsert operation is a robust database command that intelligently either updates an existing SiteAuditReport document if one is found for the current audit run, or inserts a brand new document if no previous report exists. This mechanism is crucial for maintaining a complete, chronological record of your site's SEO performance and facilitating the "before/after" differential analysis.


2. Purpose of the hive_db → upsert Step

The primary objectives of this step are:

  • Data Persistence: Securely store all raw audit data, identified issues, performance metrics, and AI-generated fixes.
  • Historical Tracking: Create a chronological record of your site's SEO health, allowing for long-term trend analysis and performance monitoring.
  • "Before/After" Diff Generation: Facilitate the comparison of current audit results against previous ones, highlighting improvements, regressions, or new issues.
  • Centralized Reporting: Establish a single source of truth for all SEO audit reports, making it easy to access and analyze data.
  • Foundation for Reporting & Notifications: Provide the structured data necessary for generating user-friendly reports and triggering automated notifications.

3. Data Model: SiteAuditReport Document Structure

All audit results are stored as a SiteAuditReport document within a dedicated MongoDB collection. Each document is uniquely identified and contains a rich set of data points derived from the 12-point SEO checklist, Core Web Vitals, and Gemini-generated fixes.

Below is a detailed breakdown of the SiteAuditReport document structure:


{
  "_id": ObjectId("..."), // MongoDB's unique document ID
  "auditId": "uuid-v4-string", // Unique ID for this specific audit run
  "siteUrl": "https://www.yourwebsite.com", // The root URL of the audited site
  "auditTimestamp": ISODate("2023-10-27T02:00:00.000Z"), // Timestamp of when the audit was completed

  "pagesAudited": [
    {
      "pageUrl": "https://www.yourwebsite.com/page-1",
      "seoMetrics": {
        "metaTitle": {
          "content": "Your Page Title Here",
          "isUnique": true,
          "length": 25,
          "issues": [] // e.g., ["Too long", "Duplicate"]
        },
        "metaDescription": {
          "content": "A detailed description of your page.",
          "isUnique": true,
          "length": 120,
          "issues": [] // e.g., ["Too short", "Missing"]
        },
        "h1": {
          "present": true,
          "content": "Main Heading of Page 1",
          "issues": [] // e.g., ["Multiple H1s", "Missing H1"]
        },
        "canonicalTag": {
          "present": true,
          "value": "https://www.yourwebsite.com/page-1",
          "issues": [] // e.g., ["Self-referencing issue", "Missing"]
        },
        "openGraph": {
          "ogTitle": { "present": true, "content": "OG Title" },
          "ogDescription": { "present": true, "content": "OG Description" },
          "ogImage": { "present": true, "content": "URL to image" },
          "ogType": { "present": true, "content": "website" },
          "issues": [] // e.g., ["Missing og:image", "Incorrect og:type"]
        },
        "structuredData": {
          "present": true,
          "schemasDetected": ["Article", "BreadcrumbList"],
          "issues": [] // e.g., ["Missing Article schema", "Invalid JSON-LD"]
        },
        "mobileViewport": {
          "configured": true,
          "issues": [] // e.g., ["Missing viewport meta tag"]
        }
      },
      "imageAltCoverage": {
        "totalImages": 10,
        "imagesWithAlt": 8,
        "coveragePercentage": 80,
        "missingAltImages": [
          { "src": "https://www.yourwebsite.com/img1.jpg", "issues": ["Missing alt attribute"] }
        ]
      },
      "internalLinkDensity": {
        "totalInternalLinks": 15,
        "uniqueInternalLinks": 10,
        "issues": [] // e.g., ["Low link density", "Broken internal links"]
      },
      "coreWebVitals": {
        "lcp": { "score": 2.5, "status": "good" }, // Largest Contentful Paint (seconds)
        "cls": { "score": 0.02, "status": "good" }, // Cumulative Layout Shift
        "fid": { "score": 50, "status": "good" },   // First Input Delay (milliseconds)
        "issues": [] // e.g., ["High LCP", "High CLS"]
      },
      "issuesDetected": [
        {
          "type": "MissingH1",
          "severity": "High",
          "description": "H1 tag is missing on this page.",
          "element": null, // Can store HTML snippet or selector
          "geminiFix": {
            "prompt": "Generate an H1 tag for a page about...",
            "suggestedFix": "<h1>Welcome to Our Product Page</h1>",
            "confidence": "High"
          }
        },
        {
          "type": "ImageMissingAlt",
          "severity": "Medium",
          "description": "Image at /img2.jpg is missing an alt attribute.",
          "element": "<img src='/img2.jpg'>",
          "geminiFix": {
            "prompt": "Suggest an alt text for an image showing...",
            "suggestedFix": "alt=\"Descriptive text for image 2\"",
            "confidence": "Medium"
          }
        }
      ]
    }
    // ... more pages
  ],

  "overallSummary": {
    "totalPagesAudited": 100,
    "totalIssuesFound": 50,
    "criticalIssues": 5,
    "warnings": 20,
    "goodScores": 75,
    "metaTitleIssues": { "duplicate": 5, "missing": 2, "tooLong": 3 },
    "h1Issues": { "missing": 4, "multiple": 1 },
    // ... aggregated summaries for all 12 points
  },

  "diffReport": {
    "previousAuditId": "uuid-v4-string-of-previous-audit",
    "changes": [
      {
        "pageUrl": "https://www.yourwebsite.com/page-1",
        "field": "seoMetrics.metaTitle.content",
        "oldValue": "Old Title",
        "newValue": "New Title (Improved!)",
        "changeType": "updated" // or "added", "removed"
      },
      {
        "pageUrl": "https://www.yourwebsite.com/page-2",
        "field": "issuesDetected",
        "changeType": "removed",
        "description": "Fixed: ImageMissingAlt on /img3.jpg"
      },
      {
        "pageUrl": "https://www.yourwebsite.com/page-3",
        "field": "coreWebVitals.lcp.score",
        "oldValue": 3.5,
        "newValue": 2.1,
        "changeType": "improved"
      }
    ]
  },

  "status": "completed", // or "failed", "in_progress"
  "errorMessage": null // If audit failed
}

Key Fields Explained:

  • auditId: A unique identifier for each specific audit run, crucial for historical tracking.
  • siteUrl: The base URL of the website being audited.
  • auditTimestamp: The exact time the audit was completed.
  • pagesAudited: An array containing detailed reports for each individual page crawled, including:

* pageUrl: The URL of the specific page.

* seoMetrics: Object containing results for meta tags, H1, canonicals, Open Graph, structured data, and mobile viewport.

* imageAltCoverage: Statistics and details on images missing alt text.

* internalLinkDensity: Metrics on internal linking.

* coreWebVitals: LCP, CLS, and FID scores.

* issuesDetected: An array of specific problems found on the page, each including:

* type, severity, description.

* element: (Optional) The problematic HTML element or selector.

* geminiFix: The AI-generated suggested fix for the issue, including the prompt used and confidence level.

  • overallSummary: Aggregated statistics and issues across the entire site, providing a high-level overview.
  • diffReport: This critical section stores the before/after comparison. It contains:

* previousAuditId: Reference to the auditId of the last successful audit for this site.

* changes: An array detailing specific differences (updates, additions, removals, improvements) between the current and previous audit. This is generated during the upsert process.


4. Upsert Mechanism and Logic

The upsert operation within MongoDB is executed with the following logic:

  1. Identification:

* Upon completion of an audit, the system attempts to find the most recent previous SiteAuditReport document for the given siteUrl that has a status of "completed".

* This is typically done by querying the siteUrl and sorting by auditTimestamp in descending order, then taking the first result.

  2. Differential Generation (if previous report found):

* If a previousAuditReport is found:

* The system performs a deep comparison between the newly generated audit data (currentAuditReport) and the previousAuditReport.

* This comparison specifically looks for changes in all 12 SEO checklist points, Core Web Vitals, and the presence/absence of issues.

* Any identified differences (e.g., meta title changed, LCP improved, a new issue appeared, an old issue was resolved) are meticulously recorded in the diffReport.changes array within the currentAuditReport.

* The currentAuditReport.diffReport.previousAuditId field is populated with the auditId of the previousAuditReport.

  3. Upsert Execution:

* The currentAuditReport document (which now potentially includes the diffReport) is then sent to MongoDB.

* The upsert operation targets the collection based on an index (e.g., siteUrl and a unique audit identifier if multiple audits could run concurrently for the same site, though typically auditId is generated before this step).

* If no document with the same auditId exists: A new SiteAuditReport document is inserted.

* If a document with the same auditId already exists (e.g., a retry or update): The existing document is updated with the new currentAuditReport data.

For this workflow, typically a new auditId is generated for each run, so it's primarily an insert operation, but the term upsert is used to denote the intelligent handling of potential re-runs or updates to a specific audit. The "before/after diff" logic relies on finding a previous report, not updating the current one.
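The insert-or-update behaviour can be sketched with an in-memory stand-in for the collection. This is a simplification for illustration only: the real step runs against MongoDB (an updateOne with the upsert option set, matching on auditId), while here `collection` is a plain array.

```javascript
// In-memory sketch of upsert-by-auditId. The real step uses MongoDB;
// here `collection` is a plain array standing in for the collection.
function upsertReport(collection, report) {
  const idx = collection.findIndex((r) => r.auditId === report.auditId);
  if (idx === -1) {
    collection.push(report);                           // no match: insert
    return { inserted: true, updated: false };
  }
  collection[idx] = { ...collection[idx], ...report }; // retry/update path
  return { inserted: false, updated: true };
}
```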


5. Key Benefits for You (the Customer)

This hive_db → upsert step provides significant value:

  • Comprehensive Historical Record: Every audit run is saved, building a rich history of your site's SEO evolution.
  • Clear Progress Visualization: The diffReport explicitly highlights what has changed since the last audit, making it easy to see the impact of your SEO efforts.
  • Actionable Insights: By comparing current and past performance, you can quickly identify trends, newly emerging issues, and areas that require immediate attention or further optimization.
  • Proof of ROI: Demonstrate the effectiveness of SEO changes by showing measurable improvements in specific metrics and resolution of issues.
  • Data-Driven Decisions: Access to detailed historical data empowers you to make informed decisions about your SEO strategy and resource allocation.
  • Auditing Compliance: Maintain an auditable trail of your website's SEO health over time.

6. Next Steps

Upon successful completion of the hive_db → upsert step, the stored SiteAuditReport becomes the foundation for the final stage of the workflow:

  • Reporting: The stored data will be used to generate comprehensive, user-friendly reports that summarize findings, highlight changes, and present actionable recommendations.
  • Notifications: Automated alerts can be triggered based on critical issues detected or significant changes identified in the diffReport.
  • Dashboard Integration: The data will populate your custom SEO performance dashboard, providing real-time insights and historical trends.
hive_db Output

Step 5 of 5: hive_db → conditional_update - Site SEO Auditor Report Persistence

This final step in the "Site SEO Auditor" workflow is critical for ensuring that all collected SEO data, identified issues, and AI-generated fixes are securely and systematically stored in your PantheraHive database (MongoDB). This enables historical tracking, detailed reporting, and "before/after" comparisons to monitor your site's SEO performance over time.


1. Purpose of this Step

The hive_db → conditional_update operation performs the following key functions:

  • Data Persistence: Stores the comprehensive SEO audit report for your website, including all page-level details, metric statuses, and identified issues.
  • AI Fix Integration: Embeds the exact fixes generated by Gemini directly alongside each broken element, making the report immediately actionable.
  • Historical Tracking: Records each audit as a distinct SiteAuditReport document, allowing for a chronological view of your site's SEO health.
  • Delta Reporting (Before/After Diff): Automatically calculates and stores a detailed comparison against the most recent previous audit, highlighting new issues, resolved issues, and changes in metric status.

2. SiteAuditReport Data Structure

A new SiteAuditReport document is created in your MongoDB database for each audit run. This document is meticulously structured to provide a granular and comprehensive overview of your site's SEO status.

Key Fields within a SiteAuditReport document:

  • _id: Unique MongoDB ObjectId for the report.
  • auditId: A unique, human-readable identifier for this specific audit run.
  • siteUrl: The root URL of the website that was audited (e.g., https://www.example.com).
  • timestamp: The exact UTC date and time when the audit was completed.
  • status: Overall status of the audit (e.g., completed_with_issues, completed_no_issues, failed).
  • overallSummary: An aggregated summary across all audited pages:

* totalPagesCrawled: Total number of unique pages visited by the crawler.

* pagesWithIssues: Count of pages where at least one SEO issue was detected.

* totalIssuesFound: Total count of all individual SEO issues identified across the site.

* issueBreakdown: An object categorizing issues by type (e.g., {'metaTitleMissing': 5, 'h1Missing': 2, 'imageAltMissing': 10}).

* criticalIssuesCount: Number of issues flagged as critical (e.g., missing canonicals, severe Core Web Vitals).

  • pagesAudited: An array of objects, where each object represents the detailed audit results for a specific page:
      * pageUrl: The full URL of the audited page.
      * issuesFoundOnPage: Boolean indicating whether any issues were found on this specific page.
      * totalIssuesOnPage: Count of issues found on this page.
      * seoMetrics: An object containing the results for each of the 12 SEO checklist points:
          * metaTitle:
              * status: (pass, fail, not_applicable)
              * value: The actual meta title found.
              * issues: Array of specific issues (e.g., too_long, duplicate).
              * fixSuggestion: Gemini-generated recommendation for fixing the issue.
          * metaDescription: (Same structure as metaTitle.)
          * h1Presence:
              * status: (pass, fail)
              * value: The actual H1 text (if present).
              * issues: Array of issues (e.g., missing, multiple).
              * fixSuggestion: Gemini-generated fix.
          * imageAltCoverage:
              * status: (pass, fail)
              * details: Array of objects, one per image (e.g., { imgSrc: '...', altPresent: true/false, issue: '...', fixSuggestion: '...' }).
              * issues: Aggregated issues (e.g., missing_alts_found).
              * fixSuggestion: Gemini-generated fix.
          * internalLinkDensity:
              * status: (pass, fail, warning)
              * count: Number of internal links found.
              * issues: (e.g., low_density).
              * fixSuggestion: Gemini-generated fix.
          * canonicalTag:
              * status: (pass, fail)
              * value: The canonical URL found.
              * issues: (e.g., missing, incorrect, self_referencing_error).
              * fixSuggestion: Gemini-generated fix.
          * openGraphTags:
              * status: (pass, fail)
              * details: Array of objects, one per OG tag (e.g., { property: 'og:title', value: '...', issue: '...', fixSuggestion: '...' }).
              * issues: Aggregated issues (e.g., missing_required_tags).
              * fixSuggestion: Gemini-generated fix.
          * coreWebVitals:
              * lcp: { value: '...', status: 'pass/fail', issues: [], fixSuggestion: '...' }
              * cls: { value: '...', status: 'pass/fail', issues: [], fixSuggestion: '...' }
              * fid: { value: '...', status: 'pass/fail', issues: [], fixSuggestion: '...' }
          * structuredDataPresence:
              * status: (pass, fail)
              * schemasFound: Array of detected schema types (e.g., ['Article', 'BreadcrumbList']).
              * issues: (e.g., missing_expected_schema, invalid_schema_markup).
              * fixSuggestion: Gemini-generated fix.
          * mobileViewport:
              * status: (pass, fail)
              * issues: (e.g., missing_viewport_meta, non_responsive).
              * fixSuggestion: Gemini-generated fix.

  • diffFromPrevious: An object detailing changes compared to the most recent prior audit for the same siteUrl:
      * newIssues: Array of issues detected in this audit that were not present in the previous one (e.g., { pageUrl: '...', metric: 'metaTitle', issue: 'duplicate_title' }).
      * resolvedIssues: Array of issues that were present in the previous audit but are no longer detected in this one.
      * changedMetrics: Array of metrics whose status changed (e.g., pass to fail, or fail to pass) between audits.
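To make the relationship between pagesAudited and overallSummary concrete, here is a minimal sketch of how the summary counters could be derived from the per-page results. The field names mirror the schema above; the helper itself, and the list of "critical" issue labels, are illustrative assumptions, not part of the PantheraHive API (nested coreWebVitals sub-metrics are skipped for brevity).

```javascript
// Sketch: derive overallSummary fields from a pagesAudited array.
// criticalLabels is an assumed set of issue labels treated as critical.
function buildOverallSummary(pagesAudited, criticalLabels = ['missing', 'incorrect']) {
  const summary = {
    totalPagesCrawled: pagesAudited.length,
    pagesWithIssues: 0,
    totalIssuesFound: 0,
    issueBreakdown: {},
    criticalIssuesCount: 0,
  };
  for (const page of pagesAudited) {
    if (page.issuesFoundOnPage) summary.pagesWithIssues += 1;
    summary.totalIssuesFound += page.totalIssuesOnPage || 0;
    for (const [metric, result] of Object.entries(page.seoMetrics || {})) {
      for (const issue of result.issues || []) {
        const key = metric + '_' + issue; // e.g. "metaTitle_too_long"
        summary.issueBreakdown[key] = (summary.issueBreakdown[key] || 0) + 1;
        if (criticalLabels.includes(issue)) summary.criticalIssuesCount += 1;
      }
    }
  }
  return summary;
}
```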


3. Conditional Update Logic and Diff Generation

Upon completion of the audit and AI fix generation, the conditional_update logic executes:

  1. Retrieve Previous Report: The system queries MongoDB for the most recent SiteAuditReport document associated with the siteUrl being audited.
  2. Generate Delta: If a previous report is found, the current audit's detailed results (pagesAudited and their seoMetrics) are compared field by field against the corresponding data from the previous report. This comparison identifies:
      * New problems: Issues that have appeared since the last audit.
      * Resolved problems: Issues that were present previously but have now been fixed.
      * Metric status changes: Any SEO metric that has changed from pass to fail (or vice versa), or from warning to pass/fail.
     These differences are then populated into the diffFromPrevious field of the new SiteAuditReport document.
  3. Insert New Report: The newly generated SiteAuditReport document, complete with all audit data, AI fixes, and the diffFromPrevious summary, is inserted into the SiteAuditReports collection in your MongoDB database.
  4. Indexing: Relevant indexes (e.g., on siteUrl and timestamp) are maintained to ensure efficient querying and retrieval of audit history.
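The delta comparison described above can be sketched as a pure function over the previous and current pagesAudited arrays. Field names follow the schema in this document; the function itself is an illustrative sketch, not the product's actual implementation.

```javascript
// Sketch: compute the diffFromPrevious lists from two pagesAudited arrays.
function diffAudits(prevPages, currPages) {
  const byUrl = (pages) => new Map(pages.map((p) => [p.pageUrl, p]));
  const prev = byUrl(prevPages);
  const diff = { newIssues: [], resolvedIssues: [], changedMetrics: [] };

  for (const page of currPages) {
    const before = prev.get(page.pageUrl);
    for (const [metric, curr] of Object.entries(page.seoMetrics || {})) {
      const old = before && before.seoMetrics ? before.seoMetrics[metric] : undefined;
      const oldIssues = new Set(old ? old.issues || [] : []);
      const currIssues = new Set(curr.issues || []);
      // Issues present now but not before -> new problems.
      for (const issue of currIssues) {
        if (!oldIssues.has(issue)) diff.newIssues.push({ pageUrl: page.pageUrl, metric, issue });
      }
      // Issues present before but not now -> resolved problems.
      for (const issue of oldIssues) {
        if (!currIssues.has(issue)) diff.resolvedIssues.push({ pageUrl: page.pageUrl, metric, issue });
      }
      // Status flips (pass/fail/warning) -> changed metrics.
      if (old && old.status !== curr.status) {
        diff.changedMetrics.push({ pageUrl: page.pageUrl, metric, from: old.status, to: curr.status });
      }
    }
  }
  return diff;
}
```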

4. AI-Generated Fixes Integration

For every broken element or identified issue, the output from Gemini (containing the exact fix) is stored directly within the fixSuggestion field of the respective metric or sub-detail. This direct integration provides immediate, actionable guidance, allowing your team to implement solutions without needing to consult external documentation or tools.

Example of an AI Fix being stored:


"metaTitle": {
    "status": "fail",
    "value": "My Website - Home",
    "issues": ["too_short"],
    "fixSuggestion": "The meta title 'My Website - Home' is too short (18 characters). For better SEO, consider expanding it to include relevant keywords, aiming for 50-60 characters. Example: 'My Website - Your Premier Source for [Industry] Solutions'."
}
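The storage pattern above can be expressed as a small helper that merges the AI output into a failing metric before the report is persisted. The function name and shapes are hypothetical, for illustration only.

```javascript
// Illustrative helper: attach a generated suggestion to a failing metric
// result. Passing metrics are returned unchanged.
function attachFixSuggestion(metric, suggestionText) {
  if (metric.status === 'fail' && suggestionText) {
    return { ...metric, fixSuggestion: suggestionText };
  }
  return metric;
}
```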

5. Access and Reporting

Once stored, your SiteAuditReport documents are readily accessible:

  • PantheraHive UI: You can view detailed audit reports, track trends, and review "before/after" diffs directly within the PantheraHive user interface.
  • API Access: All audit data is available via the PantheraHive API, allowing for integration with your existing reporting dashboards, analytics platforms, or internal workflows.
  • Historical Analysis: The chronological storage of reports enables you to analyze SEO performance trends, identify patterns of improvement or regression, and validate the impact of your SEO efforts.
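For API consumers, a historical query amounts to filtering on siteUrl and sorting by timestamp descending, which the maintained compound index serves efficiently. The sketch below builds the query shape and applies it to a MongoDB collection handle (e.g., from the official Node.js driver); the database name, projection, and helper names are assumptions for illustration.

```javascript
// Sketch: the query shape for retrieving recent audit reports for a site.
// Served efficiently by an index on { siteUrl: 1, timestamp: -1 }.
function historyQuery(siteUrl, limit = 10) {
  return {
    filter: { siteUrl },
    sort: { timestamp: -1 }, // newest first
    limit,
    projection: { auditId: 1, timestamp: 1, status: 1, overallSummary: 1 },
  };
}

// Apply the query to a SiteAuditReports collection handle
// (e.g., client.db('pantherahive').collection('SiteAuditReports')).
async function getAuditHistory(collection, siteUrl, limit = 10) {
  const q = historyQuery(siteUrl, limit);
  return collection
    .find(q.filter)
    .sort(q.sort)
    .limit(q.limit)
    .project(q.projection)
    .toArray();
}
```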

6. Workflow Automation and On-Demand Execution

This conditional_update step is the final stage of every "Site SEO Auditor" run, whether it's:

  • Automated Schedule: Executing automatically every Sunday at 2 AM UTC, ensuring consistent weekly monitoring.
  • On-Demand Trigger: Executing immediately when you manually initiate an audit, providing instant insights after critical site changes.

Conclusion

The hive_db → conditional_update step concludes the "Site SEO Auditor" workflow by persisting all generated SEO audit data into your PantheraHive database. This gives you a comprehensive, actionable, and historically trackable record of your website's SEO health, complete with AI-generated fixes to guide your optimization efforts. Your latest Site Audit Report is now available.
