Site SEO Auditor

Deliverable: Site SEO Auditor - Step 2: Database Difference Analysis

This document details the execution and output of Step 2 of the "Site SEO Auditor" workflow: hive_db → diff. This step compares your latest site audit results against previous reports stored in our database, providing a clear, actionable overview of changes over time.


Step Overview: hive_db → diff

This step performs a comprehensive comparison between your site's most recent SEO audit report and its preceding audit report, both securely stored in our MongoDB SiteAuditReport collection. The primary goal is to identify and highlight all significant changes, improvements, and regressions across your website's SEO health metrics.


Objective of this Step

The core objectives of the hive_db → diff step are to:

  1. Identify Trends: Understand the direction of your site's SEO performance (improving, stable, or declining).
  2. Pinpoint Changes: Clearly show what has changed since the last audit, at both a site-wide and page-specific level.
  3. Measure Impact: Assess the effect of recent website updates, content changes, or SEO optimization efforts.
  4. Prioritize Actions: Highlight new issues or regressions that require immediate attention.
  5. Track Progress: Provide a historical context for your SEO efforts, demonstrating the "before" and "after" state for each audit cycle.

Process Description

Upon completion of the site crawling and auditing process (Step 1), a new SiteAuditReport is generated and stored in MongoDB. The hive_db → diff step then executes the following sequence:

  1. Retrieve Current Report: The newly generated SiteAuditReport (from the current audit run) is fetched from the database.
  2. Retrieve Previous Report: The immediately preceding SiteAuditReport for your site is retrieved from the database.
  3. Execute Deep Comparison: A sophisticated differencing algorithm is applied to compare the two reports. This comparison is performed recursively across all audited pages and their respective SEO metrics.

* Page-Level Comparison: Each URL present in both reports is compared point-by-point for every SEO checklist item.

* New/Removed Pages: The system identifies pages that are new to the current audit or pages that were present in the previous audit but are no longer found.

  4. Categorize Changes: Identified differences are categorized into:

* Improvements: Metrics that have moved from a "failing" or "poor" state to a "passing" or "good" state.

* Regressions: Metrics that have moved from a "passing" or "good" state to a "failing" or "poor" state.

* New Issues: Problems identified in the current audit that were not present or detected in the previous one.

* Resolved Issues: Problems from the previous audit that are no longer present in the current one.

* Unchanged: Metrics that remain consistent between audits.

  5. Generate Diff Output: A structured diff report is compiled, summarizing these changes at both a site-wide level and for individual pages.
  6. Store Diff with Current Report: The generated diff report is then embedded within the current SiteAuditReport document in MongoDB, ensuring that each audit report contains its historical comparison data.
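The categorization step above can be sketched as a small pure function. This is an illustrative simplification, not the production differencing algorithm: metric states are reduced to "pass" / "fail" (the report's "passing/good" vs. "failing/poor"), with `undefined` standing in for a metric that was not audited.

```javascript
// Sketch of per-metric categorization ("Categorize Changes"). Function and
// state names are illustrative assumptions, not the production code.
function categorizeChange(before, after) {
  if (before === undefined && after === 'fail') return 'new_issue';   // not seen previously
  if (before === 'fail' && after === undefined) return 'resolved';    // no longer present
  if (before === 'fail' && after === 'pass') return 'improvement';
  if (before === 'pass' && after === 'fail') return 'regression';
  return 'unchanged';
}

// Compare two page-level checklist result objects metric by metric.
function diffPage(prevMetrics, currMetrics) {
  const diff = {};
  const keys = new Set([...Object.keys(prevMetrics), ...Object.keys(currMetrics)]);
  for (const key of keys) {
    diff[key] = categorizeChange(prevMetrics[key], currMetrics[key]);
  }
  return diff;
}
```

Running `diffPage` over every URL present in either report yields the page-level portion of the diff; site-wide totals are then aggregated from those results.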

Key Data Points for Comparison

The diffing process meticulously compares the results of the 12-point SEO checklist for every audited page, including the Core Web Vitals metrics:

* Largest Contentful Paint (LCP): Performance score (Good, Needs Improvement, Poor).

* Cumulative Layout Shift (CLS): Performance score (Good, Needs Improvement, Poor).

* First Input Delay (FID): Performance score (Good, Needs Improvement, Poor).


Output and Results

The output of this step is a comprehensive "Before/After Diff Report" integrated directly into your latest SiteAuditReport. This report will be presented in a clear, hierarchical format, allowing for quick identification of critical changes.

High-Level Summary (Site-Wide)

* Newly discovered pages.

* Pages no longer found (removed or redirected).

* Pages with improvements.

* Pages with regressions.

Detailed Page-Level Differences

For each page that has experienced a change, the report will provide:

* Meta Title: Before: "Old Title" -> After: "New Title" (if changed), or Status: Missing -> Present.

* H1 Tag: Status: Present (OK) -> Missing (ERROR).

* Image Alt Coverage: Before: 80% -> After: 95% (IMPROVEMENT).

* LCP Score: Before: Poor -> After: Good (IMPROVEMENT).

* Broken Elements: List of specific broken elements (e.g., https://example.com/broken-img.jpg) that are New or Resolved.

* Canonical Tag: Before: Missing -> After: Present (IMPROVEMENT).

Example Output Snippet (Illustrative)

### Site Audit Diff Report: Current (YYYY-MM-DD HH:MM) vs. Previous (YYYY-MM-DD HH:MM)

**Overall Site Health Summary:**
*   **Overall Score:** +5% (Improved)
*   **New Issues:** 12
*   **Resolved Issues:** 25
*   **Pages with Improvements:** 15
*   **Pages with Regressions:** 3
*   **New Pages Discovered:** 2
*   **Pages No Longer Found:** 1

---

**Page-Level Changes:**

**1. URL: `https://yourdomain.com/product-category/new-widget`**
    *   **Status:** **New Page Discovered**
    *   **Key Issues:** Missing Meta Description, LCP: Needs Improvement

**2. URL: `https://yourdomain.com/blog/article-about-seo`**
    *   **Status:** **Improved**
    *   **Meta Description:** `Before: Missing -> After: Present (IMPROVEMENT)`
    *   **Image Alt Coverage:** `Before: 70% -> After: 100% (IMPROVEMENT)`
    *   **LCP Score:** `Before: Needs Improvement -> After: Good (IMPROVEMENT)`
    *   **Structured Data:** `Before: Invalid Schema -> After: Valid Schema (IMPROVEMENT)`

**3. URL: `https://yourdomain.com/homepage`**
    *   **Status:** **Regression**
    *   **H1 Tag:** `Before: Present (OK) -> After: Missing (ERROR)`
    *   **CLS Score:** `Before: Good -> After: Needs Improvement (REGRESSION)`
    *   **Broken Elements:** `New Broken Link: https://yourdomain.com/old-page (ERROR)`

**4. URL: `https://yourdomain.com/contact`**
    *   **Status:** **No Significant Change** (All metrics within acceptable thresholds, no new or resolved issues)

... (Additional pages listed as necessary)

Workflow Step: 1 of 5 - Puppeteer-Driven Site Crawl

This document details the execution and output of the initial crawling phase for your "Site SEO Auditor" workflow. This foundational step is critical for accurately assessing your website's SEO performance, as it systematically visits and captures data from every accessible page on your site using a headless browser.


Purpose of This Step

The primary objective of this step is to:

  1. Discover and Catalog: Identify all unique, internal URLs within your specified domain.
  2. Render Pages Accurately: Fully load and render each page as a modern web browser would, including executing all client-side JavaScript. This is crucial for websites that rely on dynamic content loading (e.g., Single Page Applications, React, Angular, Vue frameworks).
  3. Collect Raw Page Data: Capture comprehensive raw data from each rendered page, which will serve as the input for the subsequent SEO auditing steps.

Technology Utilized: Puppeteer

We leverage Puppeteer, a Node.js library developed by Google, to control a headless instance of the Chrome browser.

  • Why Puppeteer is Essential:

* JavaScript Execution: Unlike traditional HTTP crawlers that only fetch static HTML, Puppeteer renders pages in a full browser environment. This ensures that all content generated or modified by JavaScript (e.g., dynamic product listings, blog comments, interactive elements) is fully present and visible, accurately reflecting what search engines and users see.

* Real User Simulation: It simulates a real user's browser experience, allowing us to capture how your site behaves and appears under actual browsing conditions.

* Comprehensive Data Capture: Puppeteer enables us to extract not just the raw HTML, but also the fully constructed DOM, network requests, performance metrics, and even screenshots if required for debugging.

Crawling Methodology

The crawl is executed with precision and care to ensure thoroughness while maintaining server politeness.

  1. Starting Point: The crawl begins from the root URL of your website (e.g., https://www.yourdomain.com/).
  2. Recursive Link Discovery:

* Each page is loaded by Puppeteer and allowed to fully render, waiting for network idle conditions to ensure all dynamic content has settled.

* The fully rendered DOM is then parsed to extract all internal <a> (anchor) tags linking to unique, unvisited URLs within your domain.

* These newly discovered URLs are added to an intelligent queue for subsequent processing. External links are noted but not followed to keep the audit focused on your domain.

* Sitemap Integration: For enhanced coverage, the crawler also consults your sitemap.xml (if available) to ensure all declared pages are included in the crawl, even if they might not be immediately discoverable via internal linking alone.

  3. Rate Limiting & Politeness: To prevent overloading your server, the crawler incorporates configurable delays between page requests and concurrent page limits. This ensures a smooth, non-disruptive operation.
  4. Error Handling & Logging: Robust mechanisms are in place to handle various crawl-time issues:

* HTTP Errors: Identification and logging of 404 (Not Found), 500 (Server Error), and other HTTP status codes.

* Network Timeouts: Handling of pages that fail to load within a specified timeframe.

* JavaScript Errors: Capture of any client-side JavaScript errors or warnings emitted in the browser console during page rendering.

* All errors are logged with their associated URLs for later review and potential remediation.
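The internal-link filtering described in the discovery steps above can be sketched as follows. This is a hypothetical helper under stated assumptions (same-origin check, fragment stripping, a shared visited set); the production crawler's logic may differ.

```javascript
// Keep only same-origin URLs that have not been queued yet; external links
// are skipped (noted elsewhere), malformed hrefs are ignored.
function filterDiscoveredLinks(hrefs, rootUrl, visited) {
  const root = new URL(rootUrl);
  const internal = [];
  for (const href of hrefs) {
    let url;
    try {
      url = new URL(href, rootUrl); // resolve relative links against the root
    } catch {
      continue; // skip malformed hrefs
    }
    url.hash = ''; // a fragment points at the same document
    if (url.origin !== root.origin) continue; // external: note, don't follow
    if (visited.has(url.href)) continue;      // already queued or crawled
    visited.add(url.href);
    internal.push(url.href);
  }
  return internal;
}
```

URLs declared in sitemap.xml would be fed through the same filter so the queue never holds duplicates.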

Data Collected During Crawl

For every successfully visited and rendered unique URL, the following detailed data points are collected:

  • Page URL: The canonical URL of the page.
  • Final Rendered HTML: The complete HTML source code of the page after all JavaScript has executed and the DOM has been fully constructed.
  • DOM Snapshot: A structured representation of the Document Object Model, reflecting the exact layout and content as seen by a browser at the point of data capture.
  • Network Requests Log: A comprehensive list of all resources (CSS, JavaScript files, images, fonts, API calls) initiated and loaded by the page, including their URLs, sizes, and response times.
  • Initial Core Web Vitals Metrics: Preliminary data points for critical user experience metrics such as:

* Largest Contentful Paint (LCP): The render time of the largest image or text block visible within the viewport.

* Cumulative Layout Shift (CLS): A score quantifying unexpected layout shifts during the page's lifecycle.

  • Console Logs & Errors: Any messages, warnings, or errors outputted to the browser's console during page load.
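The raw Core Web Vitals readings collected here are later bucketed into the Good / Needs Improvement / Poor labels used throughout the reports. A minimal sketch using Google's published thresholds (LCP in seconds, CLS unitless, FID in milliseconds); the function name is illustrative:

```javascript
// Map raw Core Web Vitals readings to report buckets using Google's
// published thresholds: LCP 2.5s/4.0s, CLS 0.1/0.25, FID 100ms/300ms.
function classifyVitals({ lcp, cls, fid }) {
  const bucket = (value, good, poor) =>
    value <= good ? 'Good' : value <= poor ? 'Needs Improvement' : 'Poor';
  return {
    lcp: bucket(lcp, 2.5, 4.0),
    cls: bucket(cls, 0.1, 0.25),
    fid: bucket(fid, 100, 300),
  };
}
```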

Output of This Step

Upon completion, this step delivers a structured dataset comprising:

  • Discovered URLs List: A comprehensive list of all unique internal URLs successfully crawled.
  • Raw Page Data Archive: A collection of all collected raw data (HTML, DOM, network logs, initial CWV metrics) indexed by URL. This data is stored in a temporary, secure location awaiting the next processing step.
  • Crawl Error Report: A detailed log of any pages that could not be successfully crawled, including the URL, the type of error encountered (e.g., 404, timeout, JS error), and relevant timestamps.

What Happens Next

The collected raw data from this Puppeteer-driven crawl is immediately passed to Step 2: Auditor → Analyze. In this subsequent step, the raw HTML, DOM, and performance metrics will be meticulously parsed and analyzed against the 12-point SEO checklist, identifying specific issues and preparing them for automated fix generation.

Customer Benefits

  • Holistic Site Coverage: Guarantees that every accessible page on your website is thoroughly reviewed, irrespective of its content rendering method.
  • Authentic SEO Audit: Provides an audit based on how search engines (especially Google) and real users actually experience your site, including all dynamic content.
  • Foundation for Precision: Gathers all necessary raw data, enabling a highly accurate, deep, and actionable SEO audit in the subsequent stages.

Actionability

This detailed diff report provides you with immediate insights into your site's SEO evolution. You can use this information to:

  • Validate Optimizations: Confirm that recent SEO efforts have yielded positive results.
  • Prioritize Fixes: Focus on pages showing regressions or newly introduced critical issues.
  • Monitor Development Impact: Quickly identify if new deployments or content updates have inadvertently introduced SEO problems.
  • Report Progress: Use the "before/after" data to demonstrate the value of ongoing SEO work.

Next Steps in Workflow

The generated diff report, now stored within your latest SiteAuditReport, will be used in the subsequent steps of the workflow:

  • Step 3 (gemini → fix): Identified "broken elements" or critical regressions will be sent to Gemini for AI-driven generation of exact fixes.
  • Step 4 (report → notify): A comprehensive report, including this diff analysis and any generated fixes, will be compiled and delivered to you via your preferred notification channels.
  • Step 5 (store → archive): The final SiteAuditReport with the embedded diff will be archived for long-term historical tracking.

Site SEO Auditor: Step 3 of 5 - AI-Powered Fix Generation (gemini → batch_generate)

This document details the successful execution and deliverables for Step 3 of the Site SEO Auditor workflow: AI-Powered Fix Generation using Gemini (batch_generate).


Overview of Step 3: AI-Powered Fix Generation

Following the comprehensive crawl and audit of your website in Step 2, our system has identified specific SEO elements that are either missing, incorrect, or sub-optimal according to our 12-point SEO checklist. Step 3 leverages Google's advanced AI model, Gemini, to meticulously analyze each identified issue and generate precise, actionable fixes.

This step transforms raw audit findings into concrete, implementable solutions, significantly streamlining the process of improving your site's SEO health.


Detailed Process & Gemini's Role

  1. Input Collection:

* From the extensive SiteAuditReport generated in Step 2, our system extracts every identified "broken element" or "recommendation." This includes specific details such as:

* Page URL: The exact URL where the issue was found.

* SEO Element Type: (e.g., meta_title, meta_description, H1_tag, image_alt, canonical_tag, open_graph_tag, structured_data, viewport).

* Problem Description: A clear explanation of the issue (e.g., "Meta title missing," "H1 tag not found," "Image missing alt attribute," "Canonical tag points to self-referencing non-canonical URL," "CLS score too high").

* Contextual Data: Relevant surrounding HTML, page content snippets, or performance metrics that provide Gemini with the necessary context.

  2. Batch Processing by Gemini:

* The collected issues are then batched and fed to the Gemini AI model.

* Intelligent Analysis: Gemini processes each issue by:

* Understanding SEO Best Practices: Applying its vast knowledge of current SEO guidelines, search engine algorithms, and user experience principles.

* Contextual Understanding: Analyzing the specific page content, existing metadata, and identified problems to ensure fixes are relevant and effective for that particular page.

* Problem-Solving Logic: Determining the root cause of the issue and formulating the most appropriate corrective action.

  3. Fix Generation:

* For each identified issue, Gemini generates an exact, actionable fix. These fixes are designed to be easily understood and implemented by your development or content team.

* Example Fixes Generated:

* Missing Meta Title: Suggests a concise, keyword-rich title (e.g., <title>Product Name - Category | Your Brand</title>).

* Missing Meta Description: Crafts a compelling, informative description encouraging clicks (e.g., <meta name="description" content="Discover our wide range of [product category]. Shop now for [key benefits] and [unique selling points].">).

* Missing H1 Tag: Proposes a clear, descriptive H1 based on page content (e.g., <h1>Main Product Category Page</h1>).

* Image Missing Alt Attribute: Suggests descriptive alt text based on the image's context and surrounding text (e.g., <img src="product.jpg" alt="Blue denim jacket for men">).

* Incorrect Canonical Tag: Recommends the correct canonical URL (e.g., <link rel="canonical" href="https://www.yourdomain.com/product-page/">).

* Missing Open Graph Tags: Provides complete OG tags for social sharing (e.g., <meta property="og:title" content="Page Title">, <meta property="og:image" content="https://...">).

* Core Web Vitals Improvement (LCP/CLS): Suggests specific code-level or configuration changes (e.g., "Preload largest contentful paint image: <link rel="preload" href="path/to/hero-image.jpg" as="image">", "Identify and optimize layout-shifting elements: CSS property aspect-ratio or explicit width/height attributes for images/iframes").

* Missing Mobile Viewport: Inserts the standard viewport meta tag (e.g., <meta name="viewport" content="width=device-width, initial-scale=1">).
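The batching stage above can be sketched as plain request construction. This is an assumption-laden illustration: the request fields mirror the details listed under "Input Collection" (page URL, element type, problem description, context), while the function names, field names, and batch size are hypothetical.

```javascript
// Build one fix request per identified issue, carrying the context the
// model needs to generate a targeted fix (field names are illustrative).
function buildFixRequest(issue) {
  return {
    page_url: issue.pageUrl,
    seo_element: issue.elementType,
    problem_description: issue.issue,
    context_html: issue.context ?? null,
  };
}

// Group requests into fixed-size batches for submission.
function batchIssues(issues, batchSize = 20) {
  const requests = issues.map(buildFixRequest);
  const batches = [];
  for (let i = 0; i < requests.length; i += batchSize) {
    batches.push(requests.slice(i, i + batchSize));
  }
  return batches;
}
```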


Output & Deliverables of This Step

The primary deliverable of this step is a comprehensive collection of Generated Fixes, formatted for clarity and direct implementation.

  • Format: Each fix is provided in a structured JSON or similar format, containing:

* page_url: The URL where the fix should be applied.

* seo_element: The specific SEO element being addressed.

* problem_description: A restatement of the original issue.

* suggested_fix: The exact code snippet or actionable instruction generated by Gemini.

* fix_type: (e.g., add_tag, update_attribute, insert_code, configuration_change).

* confidence_score: (Optional) An AI-generated confidence score for the proposed fix.

  • Integration with SiteAuditReport: These generated fixes are immediately associated with their corresponding issues within the SiteAuditReport document in MongoDB. This means that for every "before" state (identified issue), there is now a proposed "after" state (the generated fix).
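Before a generated fix is attached to the SiteAuditReport, it can be validated against the format described above. A minimal sketch; the field names come from the list above, and the allowed fix_type values mirror the examples given:

```javascript
// fix_type values taken from the examples in the format description.
const FIX_TYPES = new Set(['add_tag', 'update_attribute', 'insert_code', 'configuration_change']);

// Check the required fields of a generated fix record; confidence_score is
// optional and, when present, is assumed to lie in [0, 1].
function isValidFix(fix) {
  return typeof fix.page_url === 'string' &&
    typeof fix.seo_element === 'string' &&
    typeof fix.suggested_fix === 'string' &&
    FIX_TYPES.has(fix.fix_type) &&
    (fix.confidence_score === undefined ||
      (fix.confidence_score >= 0 && fix.confidence_score <= 1));
}
```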

Key Benefits of AI-Powered Fix Generation

  • Precision & Accuracy: Gemini's advanced understanding ensures fixes are technically correct and align with current SEO best practices.
  • Scalability: Automatically generates fixes for hundreds or thousands of pages, a task that would be incredibly time-consuming and error-prone manually.
  • Actionable Recommendations: Provides exact code snippets or clear instructions, eliminating guesswork for your development team.
  • Time & Cost Savings: Significantly reduces the manual effort required to diagnose problems and devise solutions, accelerating your SEO improvement timeline.
  • Consistency: Ensures a consistent approach to SEO fixes across your entire site.

Next Steps

The generated fixes are now stored alongside the audit results in MongoDB. In Step 4, we will finalize the SiteAuditReport document, incorporating these fixes to create a complete before/after diff for each identified issue. This comprehensive report will then be ready for your review and implementation, enabling you to track the impact of these improvements on your site's SEO performance.


Step 4 of 5: Data Persistence - Audit Report Upsert to MongoDB

This step is critical for securely storing the comprehensive SEO audit findings and their corresponding remediation strategies within your dedicated hive_db. We are performing an upsert operation, which intelligently handles both initial data insertion and subsequent updates to ensure your audit history is accurately maintained.


Overview of Operation

Following the successful crawling, SEO checklist validation, Core Web Vitals assessment, and Gemini's generation of precise fixes for identified issues, all this valuable data is consolidated into a SiteAuditReport document. This document is then persisted into your MongoDB instance. The upsert operation ensures that:

  1. New Audit Reports: If an audit report for a specific site and timestamp combination does not yet exist, a new document is created.
  2. Updating Existing Reports: If an audit report for the same site and timestamp (or a designated unique identifier) already exists, it is updated with the latest findings, including the "before/after" diff. This is particularly useful for re-running audits or updating partial reports.

Detailed Operation: hive_db Upsert

1. Target Database & Collection

  • Database: hive_db (Your dedicated MongoDB database instance).
  • Collection: site_audit_reports (A new collection specifically designed to store your SEO audit history).

2. Data Model: SiteAuditReport Document Structure

The following detailed structure outlines the SiteAuditReport document that will be stored in the site_audit_reports collection. This comprehensive model ensures all aspects of your SEO audit are captured and easily retrievable.


{
  "_id": ObjectId("..."), // MongoDB's unique document ID
  "siteUrl": "https://www.example.com", // The root URL of the site audited
  "auditTimestamp": ISODate("2023-10-27T02:00:00.000Z"), // Timestamp of when the audit was initiated
  "auditId": "uuid-for-this-audit-run", // Unique identifier for each specific audit run
  "auditStatus": "completed", // e.g., "completed", "in_progress", "failed", "cancelled"
  "overallScore": 85, // An aggregated SEO score (optional, can be derived)
  "pagesAuditedCount": 150, // Total number of unique pages crawled and audited

  "auditDetails": [ // Array of detailed results for each audited page
    {
      "pageUrl": "https://www.example.com/some-page",
      "crawlStatus": "success", // e.g., "success", "error", "skipped"
      "statusCode": 200, // HTTP status code of the page
      "redirectedTo": null, // If redirected, the final URL

      "seoChecklistResults": { // Results for the 12-point SEO checklist
        "metaTitle": {
          "present": true,
          "unique": true,
          "length": 55,
          "value": "Your Page Title - Keyword"
        },
        "metaDescription": {
          "present": true,
          "unique": true,
          "length": 150,
          "value": "Detailed description of your page content."
        },
        "h1Presence": {
          "present": true,
          "count": 1,
          "value": "Main Heading of the Page"
        },
        "imageAltCoverage": {
          "totalImages": 10,
          "imagesWithAlt": 8,
          "missingAltImages": [
            {"src": "/img/broken1.jpg", "reason": "Missing alt text"},
            {"src": "/img/broken2.png", "reason": "Empty alt text"}
          ],
          "coveragePercentage": 80
        },
        "internalLinkDensity": {
          "totalInternalLinks": 25,
          "uniqueInternalLinks": 18,
          "densityScore": 75 // A calculated score or count
        },
        "canonicalTag": {
          "present": true,
          "valid": true,
          "value": "https://www.example.com/some-page"
        },
        "openGraphTags": {
          "ogTitle": {"present": true, "value": "OG Title"},
          "ogDescription": {"present": true, "value": "OG Description"},
          "ogImage": {"present": true, "value": "https://.../og-image.jpg"},
          "allPresent": true // True if all essential OG tags are found
        },
        "structuredData": {
          "present": true,
          "schemaTypes": ["Article", "BreadcrumbList"],
          "validationIssues": [] // Array of issues found by schema validator
        },
        "mobileViewport": {
          "present": true,
          "valid": true,
          "content": "width=device-width, initial-scale=1.0"
        }
        // ... other checklist items
      },

      "coreWebVitals": { // Performance metrics
        "LCP": 2.5, // Largest Contentful Paint (seconds)
        "CLS": 0.05, // Cumulative Layout Shift
        "FID": 50, // First Input Delay (milliseconds)
        "performanceScore": 90 // An aggregated Lighthouse/performance score
      },

      "brokenElements": [ // Issues requiring fixes, identified by the crawler
        {
          "type": "image",
          "selector": "img[src='/img/broken1.jpg']",
          "issue": "Missing alt text",
          "severity": "medium",
          "context": "<img src='/img/broken1.jpg' />"
        },
        {
          "type": "h1",
          "selector": "body",
          "issue": "No H1 tag found",
          "severity": "high",
          "context": "<body>...</body>"
        }
      ],

      "geminiFixes": [ // Exact fixes generated by Gemini for broken elements
        {
          "issueType": "image_alt_text",
          "originalElement": "<img src='/img/broken1.jpg' />",
          "suggestedFix": {
            "action": "update_attribute",
            "selector": "img[src='/img/broken1.jpg']",
            "attribute": "alt",
            "value": "Descriptive alt text for image 1"
          },
          "explanation": "Adding descriptive alt text improves accessibility and SEO for screen readers and image search engines."
        },
        {
          "issueType": "missing_h1",
          "originalElement": "<body>...</body>",
          "suggestedFix": {
            "action": "insert_element",
            "selector": "body",
            "position": "after_opening_tag",
            "element": "<h1>Main Title of the Page</h1>"
          },
          "explanation": "A unique and descriptive H1 tag is crucial for SEO, indicating the main topic of the page to search engines."
        }
      ]
    }
    // ... more page audit details
  ],

  "previousAuditId": "uuid-of-previous-audit-run", // Link to the previous audit report for diffing
  "diffSummary": { // High-level summary of changes since the last audit
    "newIssuesFound": 5,
    "issuesResolved": 3,
    "scoreChange": "+5", // e.g., "+5", "-2"
    "pageCountChange": "+10",
    "changedPages": ["https://www.example.com/page-a", "https://www.example.com/page-b"]
  },

  "createdAt": ISODate("2023-10-27T02:00:00.000Z"), // Timestamp of document creation
  "updatedAt": ISODate("2023-10-27T02:05:00.000Z") // Timestamp of last document update
}

3. Upsert Logic

The upsert operation uses a unique key to identify whether a document already exists. For SiteAuditReport documents, the combination of siteUrl and auditId (or auditTimestamp if auditId is not used for primary identification) will serve as the unique identifier.

  • Query: { "siteUrl": "https://www.example.com", "auditId": "uuid-for-this-audit-run" }
  • Update: The entire SiteAuditReport document described above.
  • Option: upsert: true

This ensures that each specific audit run for a given site is either created or updated, providing a clear and traceable history of your site's SEO performance.
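The upsert described above maps directly onto the arguments one would pass to the Node.js MongoDB driver's `collection.updateOne(filter, update, options)`. Building them as plain objects keeps the logic testable without a live database; the helper name is illustrative:

```javascript
// Construct the updateOne arguments for upserting a SiteAuditReport,
// keyed on the siteUrl + auditId combination described above.
function buildUpsert(report) {
  return {
    filter: { siteUrl: report.siteUrl, auditId: report.auditId },
    update: { $set: report },
    options: { upsert: true },
  };
}
```

In the workflow this would be applied as `db.collection('site_audit_reports').updateOne(filter, update, options)`.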


Key Benefits of this Step

  • Historical Tracking: Enables a comprehensive historical record of your site's SEO performance, allowing you to track improvements or regressions over time.
  • "Before/After" Diffing: By storing each audit report, the system can easily compare the current report with the previousAuditId to generate a meaningful "before/after" diff, highlighting changes.
  • Actionable Fix Persistence: Gemini's generated fixes are permanently stored alongside the issues, creating a direct link between problem and solution within your audit data.
  • Data Integrity & Reliability: MongoDB's robust nature ensures your audit data is stored reliably and can be scaled as your site grows or as you audit more properties.
  • Reporting & Analytics Foundation: This structured data forms the foundation for future reporting, dashboards, and advanced analytics on your site's SEO health.

Confirmation & Next Steps

Upon successful completion of this upsert operation, the full SiteAuditReport will be securely stored in your hive_db. This concludes the data processing and storage phase.

The final step (Step 5 of 5) will involve presenting these results in a user-friendly format, potentially triggering notifications, and providing access to the detailed audit report and generated fixes.


Step 5 of 5: hive_db → conditional_update for Site SEO Auditor

This final step in the "Site SEO Auditor" workflow is critical for persistent storage, historical tracking, and delivering actionable insights to you. It involves intelligently updating your MongoDB database (hive_db) with the comprehensive SEO audit results and the generated fixes, including a detailed before/after comparison.


Purpose of this Step

The conditional_update step serves several key purposes:

  1. Persistent Storage: Securely stores all audit data, including the 12-point checklist results, Core Web Vitals, and Gemini-generated fixes, in your dedicated MongoDB instance.
  2. Historical Tracking: Enables the system to maintain a complete history of your site's SEO performance over time, allowing for trend analysis and tracking improvement.
  3. Before/After Comparison: Automatically calculates and stores a detailed before/after diff against the previous audit report, highlighting changes and the impact of implemented fixes.
  4. Data Integrity: Ensures that new audits either create a fresh report or intelligently update an existing record, maintaining data consistency.
  5. Foundation for Reporting: The stored data forms the basis for all SEO performance dashboards, alerts, and detailed reports provided to you.

Data Model: SiteAuditReport in MongoDB

All audit results are meticulously structured and stored in a new document within the site_audit_reports collection in MongoDB, following a robust schema designed for comprehensiveness and easy querying.

Key Fields for SiteAuditReport

  • _id: Unique identifier for each audit report.
  • siteUrl: The base URL of the audited website (e.g., https://example.com).
  • auditDate: Timestamp of when the audit was completed.
  • triggerType: Indicates if the audit was scheduled (automatic) or manual (on-demand).
  • previousAuditId: References the _id of the immediately preceding audit report for the same site, crucial for diff generation.
  • overallStatus: A high-level assessment (e.g., Pass, Needs Improvement, Critical Issues).
  • overallScore: A calculated score reflecting the site's overall SEO health.
  • summary: High-level statistics and a brief overview of critical issues.
  • pages: An array of detailed audit results for each page crawled.
  • diffSummary: A high-level summary of changes compared to the previousAuditId.
  • detailedDiff: A granular, page-by-page and metric-by-metric comparison with the previous audit.
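The diffSummary fields listed above could be derived with logic along these lines. This is a sketch, not the production aggregation: field names follow the key-field list (pages, metrics, status, overallScore), while the key scheme and function name are assumptions.

```javascript
// Derive diffSummary counts from two SiteAuditReport documents by keying
// each failing metric as "pageUrl::metricName" and comparing the sets.
function buildDiffSummary(prev, curr) {
  const issueKeys = (report) =>
    new Set(report.pages.flatMap((p) =>
      Object.entries(p.metrics)
        .filter(([, m]) => m.status === 'Fail')
        .map(([name]) => `${p.pageUrl}::${name}`)));
  const before = issueKeys(prev);
  const after = issueKeys(curr);
  const delta = curr.overallScore - prev.overallScore;
  return {
    newIssuesFound: [...after].filter((k) => !before.has(k)).length,
    issuesResolved: [...before].filter((k) => !after.has(k)).length,
    scoreChange: (delta >= 0 ? '+' : '') + delta, // e.g. "+5", "-2"
  };
}
```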

Detailed Structure Example for a SiteAuditReport Document


{
  "_id": "65e8a0b0c1d2e3f4a5b6c7d8",
  "siteUrl": "https://www.yourwebsite.com",
  "auditDate": ISODate("2024-03-07T02:00:00.000Z"),
  "triggerType": "scheduled",
  "previousAuditId": "65e1b2c3d4e5f6a7b8c9d0e1", // Reference to the previous week's audit
  "overallStatus": "Needs Improvement",
  "overallScore": 78,
  "summary": {
    "totalPagesAudited": 150,
    "criticalIssuesDetected": 5,
    "warningsDetected": 12,
    "pagesWithGeminiFixes": 7
  },
  "pages": [
    {
      "pageUrl": "https://www.yourwebsite.com/",
      "status": "Needs Improvement",
      "metrics": {
        "metaTitle": {
          "value": "Your Website - Home Page",
          "status": "Pass",
          "issue": null,
          "geminiFix": null
        },
        "metaDescription": {
          "value": "Welcome to Your Website, offering...",
          "status": "Fail",
          "issue": "Description too short (50 chars). Recommended: 150-160 chars.",
          "geminiFix": "Rewrite: 'Discover our wide range of products and services. We are dedicated to providing high-quality solutions tailored to your needs. Learn more about what makes us stand out.'"
        },
        "h1Presence": {
          "value": "Welcome to Our Site",
          "status": "Pass",
          "issue": null,
          "geminiFix": null
        },
        "imageAltCoverage": {
          "status": "Fail",
          "issue": "2 images missing alt text.",
          "details": [
            { "src": "/img/hero.jpg", "currentAlt": "" },
            { "src": "/img/logo.png", "currentAlt": "" }
          ],
          "geminiFix": "For /img/hero.jpg: Add alt='Hero image depicting [description]'. For /img/logo.png: Add alt='Your Website Logo'."
        },
        "internalLinkDensity": {
          "count": 25,
          "status": "Pass",
          "issue": null
        },
        "canonicalTag": {
          "value": "https://www.yourwebsite.com/",
          "status": "Pass",
          "issue": null,
          "geminiFix": null
        },
        "openGraphTags": {
          "status": "Fail",
          "issue": "og:image and og:description missing.",
          "details": {
            "ogTitle": "Your Website",
            "ogUrl": "https://www.yourwebsite.com/"
          },
          "geminiFix": "Add <meta property='og:image' content='[URL to image]'> and <meta property='og:description' content='[Concise description for social sharing]'>."
        },
        "coreWebVitals": {
          "lcp": 2.8, // seconds
          "cls": 0.05,
          "fid": 0.03, // seconds
          "status": "Pass",
          "issue": null
        },
        "structuredDataPresence": {
          "status": "Pass",
          "details": ["Schema.org/WebSite", "Schema.org/Organization"],
          "issue": null
        },
        "mobileViewport": {
          "status": "Pass",
          "issue": null
        }
      }
    },
    // ... more page objects
  ],
  "diffSummary": {
    "metricsImproved": ["metaDescription", "imageAltCoverage"],
    "metricsDegraded": ["coreWebVitals"],
    "newIssues": ["https://www.yourwebsite.com/blog/new-post - Missing H1"],
    "fixedIssues": ["https://www.yourwebsite.com/about - Missing Canonical Tag"]
  },
  "detailedDiff": {
    "https://www.yourwebsite.com/": {
      "metaDescription": {
        "before": "Old short description.",
        "after": "New longer description.",
        "change": "Improved"
      },
      "coreWebVitals": {
        "lcp": {
          "before": 2.1,
          "after": 2.8,
          "change": "Degraded"
        }
      }
    }
    // ... more detailed diffs for other pages and metrics
  }
}
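
The detailedDiff entries shown above come from a per-page, metric-by-metric comparison of the two reports. The following Python sketch is illustrative only; the diff_metrics name and its exact classification rules are assumptions, not the production differencing algorithm:

```python
def diff_metrics(prev_metrics, curr_metrics):
    """Per-metric before/after comparison for a single page (sketch)."""
    diff = {}
    for name in sorted(set(prev_metrics) | set(curr_metrics)):
        before = prev_metrics.get(name, {}).get("status")
        after = curr_metrics.get(name, {}).get("status")
        if before == after:
            continue                          # unchanged metrics are omitted
        if before is None:
            if after != "Fail":
                continue                      # newly tracked and healthy: ignore
            change = "New Issue"
        elif after == "Pass":
            change = "Improved"
        elif after == "Fail":
            change = "Degraded"
        else:
            change = "No Change"              # e.g. metric no longer reported
        diff[name] = {"before": before, "after": after, "change": change}
    return diff

prev = {"metaDescription": {"status": "Fail"}, "coreWebVitals": {"status": "Pass"}}
curr = {"metaDescription": {"status": "Pass"}, "coreWebVitals": {"status": "Fail"}}
d = diff_metrics(prev, curr)
# metaDescription is recorded as Improved, coreWebVitals as Degraded
```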

Conditional Update Logic

The conditional_update process is intelligently designed to handle both initial audits and subsequent recurring checks:

  1. Retrieve Previous Audit: The system first queries MongoDB to find the most recent SiteAuditReport for the siteUrl being audited.
  2. New Site Audit (No Previous Report):
     • If no prior audit report exists for the given siteUrl, the current audit results are inserted directly as a brand-new SiteAuditReport document.
     • The previousAuditId field will be null.
     • diffSummary and detailedDiff will also be null or empty, as there is nothing to compare against.
  3. Subsequent Site Audits (Previous Report Exists):
     • If a previous audit report is found, the system generates the before/after diff.
     • A new SiteAuditReport document is created for the current audit, containing all current audit results along with the calculated diffSummary and detailedDiff.
     • The previousAuditId field in this new document is populated with the _id of the retrieved previous audit report, establishing a clear historical link.
     • This approach ensures that every audit run creates an immutable snapshot, providing a complete and traceable history of your site's SEO evolution.
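
This branching can be summarized in a small Python sketch. The conditional_update function and the make_diff callback are hypothetical names used for illustration under the assumptions above, not the system's actual implementation:

```python
def conditional_update(current, previous, make_diff):
    """Build the document to insert; make_diff(prev, curr) -> (summary, detail)."""
    doc = dict(current)                     # every run is a new immutable snapshot
    if previous is None:                    # first audit for this siteUrl
        doc.update(previousAuditId=None, diffSummary=None, detailedDiff=None)
    else:                                   # recurring audit: link and diff
        summary, detail = make_diff(previous, current)
        doc.update(previousAuditId=previous["_id"],
                   diffSummary=summary, detailedDiff=detail)
    return doc

# First run: no predecessor, so all diff fields stay null.
first = conditional_update({"_id": "n1"}, None, make_diff=lambda p, c: (None, None))
# Second run: linked to the first via previousAuditId, with a computed diff.
second = conditional_update({"_id": "n2"}, first,
                            lambda p, c: ({"metricsImproved": []}, {}))
```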


Before/After Diff: Generation & Storage

The core value of the conditional_update step lies in its ability to generate a meaningful comparison between consecutive audits.

High-Level Diff (diffSummary)

This section provides a quick overview of significant changes across the entire site:

  • Metrics Improved/Degraded: Lists specific SEO metrics (e.g., metaDescription, h1Presence, LCP) that have shown a notable improvement or degradation across the site.
  • New Issues: Identifies pages and specific issues that were not present in the previous audit.
  • Fixed Issues: Highlights issues from the previous audit that are no longer detected in the current run, confirming the success of applied fixes.
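
One way to derive the newIssues and fixedIssues lists is to set-difference the failing (page, metric) pairs of the two reports. This Python sketch is an assumption about how the summary could be computed, with a hypothetical summarize_issues helper:

```python
def summarize_issues(prev_pages, curr_pages):
    """Site-wide newIssues / fixedIssues from per-page failing metrics."""
    def failing(pages):
        return {(p["pageUrl"], name)
                for p in pages
                for name, m in p["metrics"].items()
                if m.get("status") == "Fail"}

    prev_f, curr_f = failing(prev_pages), failing(curr_pages)
    fmt = lambda pair: f"{pair[0]} - {pair[1]}"
    return {
        "newIssues":   sorted(fmt(x) for x in curr_f - prev_f),   # failing now, not before
        "fixedIssues": sorted(fmt(x) for x in prev_f - curr_f),   # failing before, not now
    }

prev_pages = [{"pageUrl": "https://www.yourwebsite.com/about",
               "metrics": {"canonicalTag": {"status": "Fail"}}}]
curr_pages = [{"pageUrl": "https://www.yourwebsite.com/about",
               "metrics": {"canonicalTag": {"status": "Pass"}}},
              {"pageUrl": "https://www.yourwebsite.com/blog/new-post",
               "metrics": {"h1Presence": {"status": "Fail"}}}]
summary = summarize_issues(prev_pages, curr_pages)
```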

Page-Level Diff (detailedDiff)

This provides granular, page-specific comparisons for every audited metric. For each page and each metric, it records:

  • before value: The state of the metric from the previousAuditReport.
  • after value: The state of the metric from the current audit.
  • change indicator: A qualitative assessment (e.g., Improved, Degraded, No Change, New Issue, Fixed Issue).

Gemini Fixes Integration

The geminiFix suggestions generated in the previous step are stored directly within the pages array, associated with the specific metric and issue they address. This ensures that when you review an audit report, the recommended fix is immediately available alongside the identified problem.

When generating the diff, if a geminiFix from a previous report is no longer needed (because the issue is resolved), this will be noted in the fixedIssues within the diffSummary. If a new issue arises, a new geminiFix will be generated and stored.


Audit Triggering & Report Linkage

The system ensures seamless integration with both scheduled and on-demand audit triggers:

  • Automatic Scheduled Audits: Every Sunday at 2 AM, the workflow is automatically initiated. Upon completion, this conditional_update step stores the report, linking it chronologically to the previous week's report.
  • On-Demand Audits: When you manually trigger an audit, the results are similarly processed and stored, immediately providing an up-to-date snapshot and diff against the last known state.
  • Historical Tracking: By linking each new report to its predecessor via previousAuditId, a comprehensive chain of audit history is established for your site, enabling deep historical analysis and long-term trend monitoring.
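
For illustration, the weekly Sunday 2 AM slot corresponds to a cron expression like 0 2 * * 0 (the actual scheduler is not shown here). A minimal stdlib check, using a hypothetical is_scheduled_slot helper, could look like:

```python
from datetime import datetime

def is_scheduled_slot(ts):
    """True when ts falls in the weekly trigger slot: Sunday at 02:00."""
    return ts.weekday() == 6 and ts.hour == 2 and ts.minute == 0

print(is_scheduled_slot(datetime(2024, 3, 3, 2, 0)))   # 2024-03-03 is a Sunday
```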

Deliverable & Customer Impact

Upon successful completion of this step, the following direct deliverables and benefits are realized:

  • Comprehensive Audit Report: A new, detailed SiteAuditReport document is available in your hive_db (MongoDB) for the audited website, containing all 12 SEO checklist points, Core Web Vitals, and structured data presence.
  • Actionable Fixes: For every identified issue, a precise, AI-generated fix from Gemini is embedded directly within the report, telling you exactly what needs to be done.
  • Clear Before/After Comparison: You gain immediate insight into how your site's SEO has changed since the last audit, with clear indicators of improvements, degradations, new issues, and fixed problems.
  • Historical Record: A complete, version-controlled history of your site's SEO performance is maintained, invaluable for demonstrating ROI and tracking long-term optimization efforts.
  • Foundation for UI/Dashboards: The structured data stored here directly feeds into any reporting dashboards, email summaries, or alert systems, providing you with a unified view of your site's SEO health.

You will typically interact with this data through a user-friendly interface that visualizes these reports, rather than directly querying MongoDB.


Next Steps & Automation

With the data now securely stored and intelligently structured in MongoDB:

  1. Reporting & Visualization: The stored SiteAuditReport documents feed directly into dashboards, email summaries, and alerting systems, giving you an up-to-date, visual view of your site's SEO health without querying MongoDB yourself.