Site SEO Auditor
Run ID: 69cc048f04066a6c4a1689c7 | 2026-03-31 | SEO & Growth

Step 2 of 5: hive_dbdiff - Site Audit Data Comparison and Difference Generation

This crucial step in the "Site SEO Auditor" workflow is responsible for intelligently comparing the newly generated SEO audit data with historical records stored in our PantheraHive database (MongoDB). This comparison allows us to identify changes, track progress, detect new issues, and highlight regressions, providing a dynamic "before-and-after" perspective on your site's SEO health.


1. Purpose of This Step

The primary purpose of the hive_dbdiff step is to:

* Compare the newly generated audit data against the most recent historical audit stored in MongoDB.

* Detect new issues, resolved issues, regressions, and improvements across the 12-point SEO checklist.

* Produce a structured diff report and persist it alongside the new audit.

* Extract the "broken elements" that will be handed to Gemini for fix generation.


2. Process Overview

Upon completion of the headless crawling and initial audit data collection (Step 1), this step performs the following sequence of operations:

  1. Retrieve Previous Audit: Fetches the most recent SiteAuditReport for your site from MongoDB.
  2. Compare Data Sets: Executes a deep comparison between the current audit results and the retrieved previous audit.
  3. Generate Diff Report: Creates a structured "diff" highlighting all significant changes across the 12-point SEO checklist for every page audited.
  4. Store New Audit & Diff: Persists the new SiteAuditReport (including the generated diff) in MongoDB.
  5. Prepare for Gemini: Extracts specific "broken elements" or newly failed checks from the diff to be passed to Gemini for fix generation.

3. Detailed Execution

3.1. Data Retrieval from hive_db (MongoDB)

The step begins by fetching the most recent SiteAuditReport document for your site from MongoDB. Each report contains:

* auditId (UUID)

* siteUrl (e.g., https://www.example.com)

* auditTimestamp (ISO date string)

* overallStatus (e.g., "completed", "failed")

* pagesAudited (Array of page objects)

* Each page object contains url, seoMetrics (title, description, H1, etc.), coreWebVitals, structuredData, openGraph, mobileViewport, and status for each check.

* summaryMetrics (Aggregated site-wide statistics)

3.2. Data Comparison Logic

The comparison engine performs a page-by-page, metric-by-metric analysis between the currentAuditReport and the previousAuditReport.

Page matching:

* Identifies new pages discovered.

* Identifies pages no longer present.

* Matches existing pages by URL.

Per-check comparison (for each matched page):

* Meta Title/Description: Compares content for changes, uniqueness status.

* H1 Presence: Checks for presence/absence, content changes.

* Image Alt Coverage: Quantifies changes in coverage percentage, identifies specific missing/added alt tags.

* Internal Link Density: Compares link count, identifies new/removed internal links.

* Canonical Tags: Checks for changes in canonical URL, presence/absence.

* Open Graph Tags: Compares og:title, og:description, og:image content and presence.

* Core Web Vitals: Compares LCP, CLS, FID scores against thresholds and previous values, highlighting regressions or improvements.

* Structured Data: Detects presence/absence of schema, identifies changes in detected schema types.

* Mobile Viewport: Confirms presence of <meta name="viewport"> tag.

Change classification (applied to every compared item):

* New Failure: A check that passed previously now fails.

* New Pass: A check that failed previously now passes.

* Regression: A metric's value has worsened (e.g., LCP score increased).

* Improvement: A metric's value has improved (e.g., LCP score decreased).

* Unchanged: No significant change in status or value.
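The classification above can be sketched as a small pure function. This is a minimal illustration, not the engine's actual implementation; the metric names and the assumption that lower values are better for timing metrics like LCP are illustrative:

```javascript
// Metrics where a lower value is better (assumption for this sketch).
const LOWER_IS_BETTER = new Set(['LCP', 'CLS', 'FID']);

// Classify one metric comparison into the change types listed above.
// oldEntry/newEntry: { status: 'PASS' | 'FAIL', value?: number }
function classifyChange(metric, oldEntry, newEntry) {
  if (oldEntry.status === 'PASS' && newEntry.status === 'FAIL') return 'NEW_FAILURE';
  if (oldEntry.status === 'FAIL' && newEntry.status === 'PASS') return 'NEW_PASS';
  if (oldEntry.value !== undefined && newEntry.value !== undefined &&
      oldEntry.value !== newEntry.value) {
    const worsened = LOWER_IS_BETTER.has(metric)
      ? newEntry.value > oldEntry.value   // e.g., LCP time increased
      : newEntry.value < oldEntry.value;  // e.g., alt coverage decreased
    return worsened ? 'REGRESSION' : 'IMPROVEMENT';
  }
  return 'UNCHANGED';
}
```

Status flips take precedence over value changes, so a check that newly fails is reported as NEW_FAILURE even if its numeric value also moved.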

3.3. Diff Generation and Storage

The comparison results are assembled into a structured diff document containing:

* overallSummary: High-level changes (e.g., "3 new failures, 2 fixes detected").

* pageChanges: An array detailing changes for each page:

* url: The page URL.

* statusChanges: An array of objects for each metric that changed status (e.g., { metric: "meta_title_uniqueness", oldStatus: "PASS", newStatus: "FAIL" }).

* valueChanges: An array of objects for metrics whose values changed significantly (e.g., { metric: "LCP", oldValue: "2.5s", newValue: "3.1s", changeType: "REGRESSION" }).

* newIssues: Specific details of elements causing new failures (e.g., missing alt text for <img> tags, specific H1 content).

* resolvedIssues: Specific details of elements that are now fixed.

3.4. Preparation for Next Step (Gemini Integration)

For each newly failed check, a brokenElementsForGemini entry is assembled containing:

* The specific page URL.

* The exact SEO metric that failed.

* The nature of the failure (e.g., "missing H1", "duplicate meta title", "LCP regressed").

* (If applicable) The HTML snippet or CSS selector of the problematic element (e.g., <img> tag with missing alt, <title> tag content).
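Extracting these entries from a page diff can be sketched as follows. This is a simplified illustration assuming the diff shape described in 3.3; the field names failedMetric and failureNature are hypothetical:

```javascript
// Build brokenElementsForGemini entries from one page's diff (sketch).
// pageChange: { url, statusChanges: [{ metric, oldStatus, newStatus, issue?, htmlSnippet? }] }
function extractBrokenElements(pageChange) {
  return (pageChange.statusChanges || [])
    // Only checks that newly flipped from PASS to FAIL need a Gemini fix.
    .filter((c) => c.oldStatus === 'PASS' && c.newStatus === 'FAIL')
    .map((c) => ({
      pageUrl: pageChange.url,
      failedMetric: c.metric,
      failureNature: c.issue || `${c.metric} check failed`,
      htmlSnippetContext: c.htmlSnippet || null, // selector/snippet if captured
    }));
}
```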


4. Input Data

This step consumes two inputs: the current audit dataset produced by the Step 1 crawl, and the most recent historical SiteAuditReport for the site retrieved from MongoDB (if one exists).


5. Output Data

An example of the brokenElementsForGemini output (JSON):
    [
      {
        "pageUrl": "https://www.example.com/product/item-x",
        "issueType": "Missing H1",
        "description": "No H1 tag found on the page.",
        "htmlSnippetContext": "<body>...<div id='main-content'>...</div></body>"
      },
      {
        "pageUrl": "https://www.example.com/blog/latest-post",
        "issueType": "Image Alt Text Missing",
        "description": "Image is missing 'alt' attribute.",
        "htmlSnippetContext": "<img src='/images/hero.jpg' class='banner-img'>"
      },
      {
        "pageUrl": "https://www.example.com/about-us",
        "issueType": "Core Web Vitals Regression",
        "metric": "LCP",
        "oldValue": "2.2s",
        "newValue": "3.5s",
        "description": "Largest Contentful Paint (LCP) has regressed significantly."
      }
    ]
    

Site SEO Auditor: Step 1 of 5 - Puppeteer Crawl Execution

This document details the execution of Step 1: puppeteer → crawl for your Site SEO Auditor workflow. This crucial initial phase involves systematically traversing your website to discover all accessible pages and collect their raw content for subsequent in-depth SEO analysis.


1. Step Overview: Initial Site Discovery and Data Collection

The first step of the "Site SEO Auditor" workflow is dedicated to a comprehensive crawl of your website. Utilizing a headless browser powered by Puppeteer, this process simulates how a search engine bot (or a real user) navigates and renders your pages. The primary objective is to identify every discoverable URL within your domain and capture a complete snapshot of its rendered content and associated metadata.

2. Purpose of the Crawl

The Puppeteer crawl serves several critical purposes:

  • Comprehensive Page Discovery: Identify all internal pages accessible via navigation links, ensuring no relevant page is missed from the audit.
  • Realistic Content Capture: Unlike a simple HTTP fetcher, Puppeteer renders pages in a browser environment, capturing dynamically loaded content (e.g., JavaScript-rendered elements) exactly as a user or modern search engine would see it.
  • Foundation for SEO Audit: Collect the raw HTML, DOM structure, and network responses necessary for the subsequent 12-point SEO checklist analysis.
  • Performance Baseline: Gather initial timing metrics and resource loading information that will inform Core Web Vitals calculations later in the workflow.

3. Technical Implementation: Puppeteer Configuration

Our headless crawler is configured for robust and efficient site traversal:

  • Headless Chromium Instance: A dedicated Chromium browser instance is launched for the crawl, ensuring a consistent and isolated environment for each page visit.
  • User-Agent Simulation: The browser's User-Agent string is configured to mimic a common search engine bot (e.g., Googlebot), allowing us to observe how your site responds to typical crawler requests.
  • Page Load Strategy: Each page is loaded with a waitUntil: 'networkidle0' strategy, which waits until there have been no network connections for at least 500 ms before content is extracted. This captures most dynamically loaded content as a user or modern search engine would see it.
  • Resource Optimization: While capturing full page content, the crawler is optimized to manage memory and network resources efficiently, especially for large sites.
  • Error Handling & Retries: Built-in mechanisms for handling page load timeouts, network errors, and other transient issues, including configurable retries to ensure maximum page coverage.
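The retry behaviour mentioned above can be sketched as a generic async wrapper. The attempt count and linear backoff are illustrative defaults, not the crawler's documented configuration:

```javascript
// Retry a page-load (or any async) function on transient failure (sketch).
async function withRetries(fn, { attempts = 3, backoffMs = 500 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Linear backoff between attempts; no wait after the final failure.
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, backoffMs * (i + 1)));
      }
    }
  }
  throw lastError; // all attempts exhausted
}
```

In the crawler this would wrap something like a hypothetical `page.goto(url, { waitUntil: 'networkidle0' })` call so that timeouts and transient network errors do not cost page coverage.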

4. Data Collected During the Crawl

For every unique URL discovered and successfully crawled, the following essential data points are meticulously extracted and temporarily stored:

  • URL Information:

* Original Request URL: The URL as initially discovered.

* Final URL (after redirects): The ultimate destination URL after any 301/302 redirects.

* HTTP Status Code: The server response code (e.g., 200 OK, 301 Moved Permanently, 404 Not Found).

  • Page Content & Metadata:

* Full Rendered HTML: The complete Document Object Model (DOM) of the page after all scripts have executed.

* Page Title: The text content of the <title> tag.

* Meta Description: The content of the <meta name="description"> tag.

* H1 Tags: An array of all <h1> elements and their text content found on the page.

* Image Data: For every <img> tag, its src attribute and the presence/content of its alt attribute.

* Internal Links: A list of all <a> tags with href attributes pointing to other pages within the same domain.

* Canonical Tag: The href attribute of the <link rel="canonical"> tag, if present.

* Open Graph Tags: All <meta> tags with property attributes starting with og: (e.g., og:title, og:description, og:image).

* Structured Data: Any <script type="application/ld+json"> blocks found in the HTML.

* Mobile Viewport Meta Tag: The content of the <meta name="viewport"> tag.

  • Performance Metrics (Initial Capture):

* Page load timings (e.g., domContentLoadedEventEnd, loadEventEnd from the Navigation Timing API).

* Initial layout shift data.

* First Contentful Paint (FCP) time.
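A heavily simplified sketch of the per-page extraction is shown below. The real crawler reads these values from the rendered DOM inside the browser; this regex-based version only illustrates which fields are captured and will miss edge cases (attribute order, nested markup, etc.):

```javascript
// Extract a subset of the SEO fields listed above from raw HTML (sketch only;
// a DOM-based extraction inside the headless browser is the robust approach).
function extractBasicSeoFields(html) {
  const pick = (re) => {
    const m = html.match(re);
    return m ? m[1].trim() : null;
  };
  return {
    title: pick(/<title[^>]*>([^<]*)<\/title>/i),
    metaDescription: pick(/<meta\s+name=["']description["']\s+content=["']([^"']*)["']/i),
    h1s: [...html.matchAll(/<h1[^>]*>([^<]*)<\/h1>/gi)].map((m) => m[1].trim()),
    canonical: pick(/<link\s+rel=["']canonical["']\s+href=["']([^"']*)["']/i),
    hasViewportMeta: /<meta\s+name=["']viewport["']/i.test(html),
  };
}
```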

5. Crawl Scope and Configuration

The crawl is executed with specific parameters to ensure thoroughness while respecting your server's resources:

  • Starting Point: The crawl initiates from the base URL provided (e.g., https://yourwebsite.com/).
  • Internal Link Following: The crawler will follow all internal <a> links discovered on each page, traversing your site's structure.
  • robots.txt Adherence: The crawler strictly respects directives specified in your robots.txt file (e.g., Disallow rules), ensuring that excluded pages are not accessed.
  • Meta robots Directives: Pages containing <meta name="robots" content="noindex"> or nofollow directives are logged but not further traversed or deeply audited for specific SEO elements (as per their directive).
  • Rate Limiting: To prevent server overload, the crawl includes intelligent rate limiting, ensuring a controlled number of requests per second.
  • Configurable Exclusions: Additional URL patterns can be defined to explicitly exclude specific sections or pages from the crawl (e.g., /user-dashboard, /checkout).
  • Crawl Depth: Configurable to ensure either a full site crawl or a limited-depth crawl based on specific needs.
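The scoping rules above (same-origin check, configurable exclusions, depth limit) can be sketched as a single predicate. The prefix-match exclusion is an illustration; real robots.txt parsing is considerably more involved:

```javascript
// Decide whether a discovered URL is in scope for the crawl (sketch).
function shouldCrawl(url, { baseOrigin, excludePrefixes = [], maxDepth = Infinity, depth = 0 }) {
  let parsed;
  try {
    parsed = new URL(url);
  } catch {
    return false; // malformed URL: skip
  }
  if (parsed.origin !== baseOrigin) return false; // internal links only
  if (depth > maxDepth) return false;             // configurable crawl depth
  // Configurable exclusions, e.g. /user-dashboard, /checkout.
  return !excludePrefixes.some((prefix) => parsed.pathname.startsWith(prefix));
}
```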

6. Output of this Step

Upon successful completion of Step 1, the immediate output is a comprehensive, structured dataset. This dataset comprises all discovered URLs, each paired with its extracted raw content and metadata. This raw data is then passed as the primary input to the subsequent "SEO Auditing" step (Step 2 of 5).

This phase ensures that every piece of information required for the 12-point SEO checklist is accurately and completely gathered directly from your live website, providing a reliable foundation for the audit.


6. Key Deliverables and Benefits

  • Actionable Diff Report: A clear, itemized list of what has changed on your site from an SEO perspective, making it easy to prioritize and address issues.
  • Historical Tracking: A robust database of all audit reports, enabling long-term trend analysis and performance monitoring.
  • Automated Issue Identification: Automatically flags new problems and regressions without manual comparison.
  • Input for AI-Powered Fixes: Provides the precise, contextual data necessary for Gemini to generate accurate and effective solutions, streamlining your SEO remediation process.
  • Transparency: You gain full visibility into your site's SEO evolution, understanding exactly where improvements have been made and where new challenges have emerged.

7. Next Steps in Workflow

The brokenElementsForGemini output from this step will be passed directly to Step 3: diffgemini_fix. In this next step, Gemini will analyze these identified issues and generate the exact code or content fixes required to resolve them.


Step 3 of 5: Gemini AI Fix Generation

This phase of the "Site SEO Auditor" workflow leverages the advanced capabilities of the Gemini AI model to meticulously analyze identified SEO deficiencies and generate precise, actionable solutions. Following the comprehensive crawl and audit performed in the previous steps, all detected issues are systematically fed into Gemini for intelligent remediation.

1. Overview & Purpose

Purpose: The primary goal of this step is to transform raw audit findings into concrete, executable fixes. Instead of merely reporting problems, Gemini intelligently understands the context of each SEO issue, consults best practices, and produces the exact code snippets, content recommendations, or configuration adjustments required to resolve them. This significantly reduces the manual effort and expertise typically needed to diagnose and implement SEO improvements.

Process:

  1. Issue Aggregation: All identified SEO violations from the crawler (e.g., missing H1s, duplicate meta descriptions, poor Core Web Vitals scores) are aggregated.
  2. Contextual Input: For each issue, Gemini receives not only the problem description but also relevant contextual data such as the page URL, HTML snippets, surrounding content, and current tag values.
  3. Intelligent Fix Generation: Gemini processes this input, applying its understanding of SEO principles, HTML structure, web performance, and content optimization to formulate specific, targeted solutions.
  4. Batch Output: The generated fixes are compiled into a structured batch output, ready for storage and review.
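The aggregation and batching in steps 1 and 4 can be sketched as below. The batch size and the choice to group similar issue types together (so one prompt can share instructions) are illustrative, not the workflow's documented configuration:

```javascript
// Group identified SEO issues into batches for Gemini (sketch).
function buildGeminiBatches(issues, batchSize = 10) {
  // Sort so similar issue types land in the same batch, letting a single
  // prompt share instructions (e.g., all MISSING_H1 fixes together).
  const sorted = [...issues].sort((a, b) => a.issueType.localeCompare(b.issueType));
  const batches = [];
  for (let i = 0; i < sorted.length; i += batchSize) {
    batches.push(sorted.slice(i, i + batchSize));
  }
  return batches;
}
```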

2. Input to Gemini: Identified SEO Issues

The crawler meticulously collects data points for each of the 12 SEO checklist items. When an issue is detected, a structured data object is created and passed to Gemini. This object provides Gemini with all necessary context to generate an accurate fix.

Data Structure for Each Identified Issue:

  • pageUrl (String): The full URL of the page where the issue was found.
  • issueType (String): A categorical identifier for the SEO problem (e.g., "MISSING_H1", "DUPLICATE_META_DESCRIPTION", "MISSING_IMAGE_ALT", "LCP_OPTIMIZATION_REQUIRED").
  • severity (Enum): An indicator of the issue's impact (e.g., CRITICAL, HIGH, MEDIUM, LOW).
  • problemDescription (String): A human-readable description of the specific problem.
  • htmlSnippet (String, Optional): A relevant HTML fragment from the page where the issue resides, providing immediate context.
  • currentValue (String, Optional): The current problematic value (e.g., the duplicate meta description text, the src of an image with no alt).
  • pageContentContext (String, Optional): A snippet of the page's main content, used by Gemini to generate relevant text-based fixes (e.g., for meta descriptions, H1s, image alt text).
  • crawlerMetrics (Object, Optional): Specific performance metrics if the issue relates to Core Web Vitals (e.g., LCP value, CLS score, relevant resource timings).

Example Input Scenarios:

  • Missing H1 Tag:

    {
      "pageUrl": "https://www.example.com/blog/article-on-ai",
      "issueType": "MISSING_H1",
      "severity": "HIGH",
      "problemDescription": "No H1 tag found on the page.",
      "htmlSnippet": "<body><div class='main-content'><p>Welcome to our detailed article...</p></div></body>",
      "pageContentContext": "Welcome to our detailed article on Artificial Intelligence. In this piece, we explore the latest advancements, ethical considerations, and future predictions...",
      "currentValue": null
    }
  • Duplicate Meta Description:

    {
      "pageUrl": "https://www.example.com/products/widget-a",
      "issueType": "DUPLICATE_META_DESCRIPTION",
      "severity": "MEDIUM",
      "problemDescription": "Meta description is identical to another page (e.g., /products/widget-b).",
      "htmlSnippet": "<head><meta name='description' content='Buy the best widgets for your home and office.'></head>",
      "currentValue": "Buy the best widgets for your home and office.",
      "pageContentContext": "Discover Widget A, a revolutionary device designed for ultimate efficiency and user-friendliness. Available in multiple colors..."
    }
  • Missing Image Alt Text:

    {
      "pageUrl": "https://www.example.com/gallery/nature",
      "issueType": "MISSING_IMAGE_ALT",
      "severity": "LOW",
      "problemDescription": "Image element has no 'alt' attribute.",
      "htmlSnippet": "<img src='/images/sunset.jpg' class='gallery-image'>",
      "currentValue": "/images/sunset.jpg",
      "pageContentContext": "A breathtaking sunset over the mountains, with vibrant colors painting the sky. This image captures the serenity of nature."
    }

3. Gemini's Intelligent Analysis & Fix Generation

Upon receiving an issue, Gemini performs a sophisticated analysis to generate the most appropriate fix.

How Gemini Processes Data:

  1. Contextual Understanding: Gemini first parses the issueType, problemDescription, and severity to understand the core problem.
  2. Content Extraction & Analysis: It then leverages htmlSnippet and pageContentContext to extract relevant keywords, themes, and structural information from the surrounding content. This is crucial for generating content-aware fixes (e.g., relevant H1s, descriptive alt text, unique meta descriptions).
  3. SEO Best Practices Integration: Gemini is pre-trained and continuously updated with the latest SEO guidelines from search engines (Google, Bing), web standards (W3C), and performance optimization techniques. It applies these rules to formulate a compliant and effective solution.
  4. Code & Content Generation: Based on its analysis, Gemini generates specific code snippets (HTML, JSON-LD), content recommendations, or configuration advice. It ensures that the generated output is syntactically correct and semantically appropriate for the identified issue.
  5. Validation & Refinement: Gemini internally cross-references the proposed fix against common pitfalls and potential side effects to ensure the solution is robust and doesn't introduce new problems.

Types of Fixes Generated:

Gemini can generate fixes across the entire 12-point SEO checklist, including but not limited to:

  • Meta Titles & Descriptions: Unique, concise, and keyword-rich suggestions based on page content, adhering to character limits.
  • H1 Tags: Relevant, single H1 suggestions that accurately summarize page content.
  • Image Alt Attributes: Descriptive and contextually appropriate alt text for images.
  • Internal Linking: Suggestions for anchor text and target URLs to improve internal link density and discoverability.
  • Canonical Tags: Correct rel="canonical" tag implementation for pages with duplicate content issues.
  • Open Graph Tags: Comprehensive and accurate og: tags for social media sharing.
  • Core Web Vitals: Specific recommendations for optimizing LCP (e.g., prioritize hero image, server response time), CLS (e.g., specify image dimensions, font display), and FID (e.g., defer non-critical JS). These might involve code changes or configuration advice.
  • Structured Data: JSON-LD snippets for relevant Schema.org types (e.g., Article, Product, FAQPage) based on page content.
  • Mobile Viewport: Correct viewport meta tag for optimal mobile rendering.
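As one concrete example of a structured-data fix, a minimal Schema.org Article JSON-LD block could be assembled from page fields like this. The field mapping is an illustration; in the workflow the actual values come from Gemini's content analysis:

```javascript
// Build a minimal Article JSON-LD snippet from page fields (sketch).
function buildArticleJsonLd({ headline, description, url }) {
  const data = {
    '@context': 'https://schema.org',
    '@type': 'Article',
    headline,
    description,
    mainEntityOfPage: url,
  };
  return `<script type="application/ld+json">${JSON.stringify(data)}</script>`;
}
```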

4. Output from Gemini: Actionable SEO Fixes

The output from Gemini is a structured collection of recommended fixes, designed to be immediately actionable by development or content teams.

Detailed Fix Structure for Each Generated Solution:

  • originalIssueId (String): A unique identifier linking back to the original audit finding.
  • pageUrl (String): The URL of the page to which the fix applies.
  • issueType (String): The type of SEO issue being addressed.
  • recommendedAction (String): A clear, concise instruction for what needs to be done (e.g., "Add H1 tag", "Update meta description", "Implement JSON-LD").
  • fixCodeSnippet (String, Optional): The exact code (HTML, JSON-LD) that needs to be added, modified, or replaced.
  • fixContentSuggestion (String, Optional): Textual content suggestions (e.g., new meta description, alt text, H1 text) if the fix is content-based.
  • reasoning (String): A brief explanation of why this fix is important and how it addresses the SEO problem, often referencing best practices.
  • priority (Enum): Gemini's assessment of the urgency of implementing this fix (CRITICAL, HIGH, MEDIUM, LOW).
  • targetElementSelector (String, Optional): A CSS selector or XPath to pinpoint where the change should be made on the page, if applicable.
  • estimatedImpact (String, Optional): A qualitative assessment of the potential positive impact on SEO metrics.

Examples of Generated Fixes:

  • Fix for Missing H1 Tag:

    {
      "originalIssueId": "AUDIT-12345-H1-001",
      "pageUrl": "https://www.example.com/blog/article-on-ai",
      "issueType": "MISSING_H1",
      "recommendedAction": "Add an H1 tag to the main content area of the page.",
      "fixCodeSnippet": "<h1>The Future of Artificial Intelligence: Trends and Ethics</h1>",
      "fixContentSuggestion": "The Future of Artificial Intelligence: Trends and Ethics",
      "reasoning": "A unique, descriptive H1 tag is crucial for search engines to understand the main topic of your page, improving relevance and user experience.",
      "priority": "HIGH",
      "targetElementSelector": "body > .main-content",
      "estimatedImpact": "Significant improvement in topic relevance and on-page SEO."
    }
  • Fix for Duplicate Meta Description:

    {
      "originalIssueId": "AUDIT-12345-MD-002",
      "pageUrl": "https://www.example.com/products/widget-a",
      "issueType": "DUPLICATE_META_DESCRIPTION",
      "recommendedAction": "Update the meta description to be unique and specific to Widget A.",
      "fixCodeSnippet": "<meta name='description' content='Explore Widget A: revolutionary design, ultimate efficiency, and available in multiple colors. Enhance your daily tasks.'>",
      "fixContentSuggestion": "Explore Widget A: revolutionary design, ultimate efficiency, and available in multiple colors. Enhance your daily tasks.",
      "reasoning": "Unique meta descriptions prevent search engines from choosing alternative text and improve click-through rates by accurately describing page content.",
      "priority": "MEDIUM",
      "targetElementSelector": "head > meta[name='description']",
      "estimatedImpact": "Improved search snippet accuracy and potential for higher CTR."
    }
  • Fix for Missing Image Alt Text:

    {
      "originalIssueId": "AUDIT-12345-ALT-003",
      "pageUrl": "https://www.example.com/gallery/nature",
      "issueType": "MISSING_IMAGE_ALT",
      "recommendedAction": "Add descriptive alt text to the image.",
      "fixCodeSnippet": "<img src='/images/sunset.jpg' class='gallery-image' alt='Breathtaking sunset over mountains with vibrant orange and purple sky.'>",
      "fixContentSuggestion": "Breathtaking sunset over mountains with vibrant orange and purple sky.",
      "reasoning": "Alt text improves accessibility for visually impaired users and helps search engines understand image content, contributing to image SEO.",
      "priority": "LOW",
      "targetElementSelector": "img[src='/images/sunset.jpg']",
      "estimatedImpact": "Enhanced accessibility and improved image search visibility."
    }

5. Batch Processing & Integration

This batch_generate step ensures that all identified issues across the entire audited site are processed efficiently and comprehensively. Gemini handles these requests in parallel where possible, or sequentially in an optimized manner, to quickly produce a complete set of fixes.

Once generated, these detailed and actionable fixes, along with their associated original issues, are prepared for the next stage of the workflow: storage in MongoDB. This storage will include both the "before" state (the detected issue) and the "after" state (the generated fix), enabling before/after comparison in subsequent audit runs.


Step 4 of 5: Data Persistence (hive_db → upsert)

This step is critical for storing the comprehensive SEO audit results and generated fixes, ensuring data integrity, historical tracking, and the ability to monitor improvements over time. The "upsert" operation intelligently handles both initial data insertion and subsequent updates to your site's audit reports within our MongoDB database.


1. Purpose & Overview

The primary goal of this step is to persist the detailed SiteAuditReport generated in the previous steps into the hive_db (our internal MongoDB instance). The "upsert" command ensures that:

  • If an audit report for a specific page (identified by its URL and the audit run ID) does not yet exist, a new record is created (insert).
  • If an audit report for that page and run ID already exists, it is updated with the latest audit data, including the crucial "before/after diff" for performance tracking (update).

This mechanism allows for efficient storage of both initial audit findings and the ongoing monitoring of your site's SEO health.


2. SiteAuditReport Data Model Structure

Each SiteAuditReport document stored in MongoDB will encapsulate a full audit for a specific page at a given point in time. Below is the detailed schema:


{
  "_id": "<MongoDB ObjectId>", // Unique identifier for the report document
  "auditRunId": "string",     // Unique ID for a specific audit run (e.g., UUID or timestamp-based)
  "siteUrl": "string",        // The root URL of the audited site
  "pageUrl": "string",        // The specific URL of the page being audited
  "auditTimestamp": "Date",   // Timestamp when this audit was performed
  "overallStatus": "string",  // "PASS", "FAIL", or "WARNING" based on aggregated issues
  "previousAuditId": "string | null", // Reference to the _id of the previous audit for this page, if any

  "auditDetails": {
    "metaTags": {
      "title": {
        "value": "string | null",
        "status": "PASS | FAIL | WARNING",
        "issue": "string | null", // e.g., "Missing", "Too Long", "Not Unique"
        "fixSuggestion": "string | null" // Gemini-generated fix
      },
      "description": {
        "value": "string | null",
        "status": "PASS | FAIL | WARNING",
        "issue": "string | null", // e.g., "Missing", "Too Short", "Not Unique"
        "fixSuggestion": "string | null"
      },
      "uniquenessAcrossSite": {
        "titleUnique": "boolean",
        "descriptionUnique": "boolean"
      }
    },
    "h1Tag": {
      "present": "boolean",
      "value": "string | null",
      "status": "PASS | FAIL",
      "issue": "string | null", // e.g., "Missing H1", "Multiple H1s"
      "fixSuggestion": "string | null"
    },
    "imageAltText": {
      "totalImages": "number",
      "imagesWithoutAlt": "number",
      "coveragePercentage": "number",
      "missingAltDetails": [
        {
          "src": "string",
          "status": "FAIL",
          "issue": "Missing alt attribute",
          "fixSuggestion": "string | null" // Gemini-generated fix for specific image
        }
      ],
      "status": "PASS | FAIL"
    },
    "internalLinks": {
      "count": "number",
      "density": "number", // Number of internal links / total links
      "status": "PASS | WARNING", // High density is usually good, low might be a warning
      "issue": "string | null" // e.g., "Low Internal Link Density"
    },
    "canonicalTag": {
      "present": "boolean",
      "value": "string | null", // The URL specified in the canonical tag
      "status": "PASS | FAIL",
      "issue": "string | null", // e.g., "Missing", "Incorrect URL", "Self-referencing issue"
      "fixSuggestion": "string | null"
    },
    "openGraphTags": {
      "present": "boolean",
      "ogTitle": "string | null",
      "ogDescription": "string | null",
      "ogImage": "string | null",
      "status": "PASS | FAIL | WARNING",
      "issue": "string | null", // e.g., "Missing essential OG tags", "Incorrect OG image URL"
      "fixSuggestion": "string | null"
    },
    "coreWebVitals": {
      "LCP": {
        "value": "number", // in ms
        "status": "PASS | FAIL",
        "issue": "string | null",
        "fixSuggestion": "string | null"
      },
      "CLS": {
        "value": "number", // score
        "status": "PASS | FAIL",
        "issue": "string | null",
        "fixSuggestion": "string | null"
      },
      "FID": {
        "value": "number", // in ms
        "status": "PASS | FAIL",
        "issue": "string | null",
        "fixSuggestion": "string | null"
      },
      "overallStatus": "PASS | FAIL"
    },
    "structuredData": {
      "present": "boolean",
      "typesFound": ["string"], // e.g., ["Article", "Product"]
      "isValid": "boolean",
      "validationErrors": ["string"], // List of specific errors if invalid
      "status": "PASS | FAIL",
      "issue": "string | null",
      "fixSuggestion": "string | null"
    },
    "mobileViewport": {
      "present": "boolean",
      "config": "string | null", // e.g., "width=device-width, initial-scale=1.0"
      "status": "PASS | FAIL",
      "issue": "string | null", // e.g., "Missing viewport meta tag", "Incorrect configuration"
      "fixSuggestion": "string | null"
    }
  },

  "issuesFound": [
    {
      "category": "string", // e.g., "metaTags", "h1Tag", "coreWebVitals"
      "severity": "CRITICAL | HIGH | MEDIUM | LOW",
      "description": "string", // Human-readable description of the issue
      "suggestedFix": "string", // Gemini-generated exact fix for this specific issue
      "elementSelector": "string | null" // CSS selector for the problematic element, if applicable
    }
  ],

  "diffReport": {
    "previousAuditTimestamp": "Date | null",
    "changesDetected": "boolean",
    "changedMetrics": [
      {
        "metric": "string", // e.g., "metaTags.title.status", "coreWebVitals.LCP.value"
        "oldValue": "any",
        "newValue": "any",
        "improvement": "boolean | null" // true if newValue is better than oldValue
      }
    ]
  }
}

3. Upsert Logic and Operation

The upsert operation is performed using the findOneAndUpdate method in MongoDB with the upsert: true option.

  1. Identification Key: Each SiteAuditReport is uniquely identified by a composite key consisting of pageUrl and auditRunId. This ensures that for a given audit run, each page has exactly one report.
  2. Lookup for Previous Audit: Before performing the upsert, the system attempts to find the most recent previous audit for the pageUrl (excluding the current auditRunId). This is crucial for generating the diffReport.
  3. Diff Generation:

* If a previousAuditId is found, the system compares key metrics and statuses from the current audit (auditDetails) against the auditDetails of the previous audit.

* Any detected changes (e.g., LCP score improved, meta description status changed from FAIL to PASS) are recorded in the diffReport.changedMetrics array.

* The diffReport.changesDetected flag is set accordingly.

  4. Data Construction: The complete SiteAuditReport document, including all audit details, issues, Gemini-generated fixes, and the diffReport, is constructed.
  5. Upsert Execution:

* Query: {"pageUrl": "[current_page_url]", "auditRunId": "[current_audit_run_id]"}

* Update: The entire constructed SiteAuditReport document is used as the update payload ($set).

* Options: {"upsert": true}

* This operation will either insert a new document or update an existing one based on the query.
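The query, update payload, and options described above can be sketched as a pure builder, which keeps the shapes visible and testable; with the official MongoDB Node.js driver, the three pieces would then be passed to collection.findOneAndUpdate(filter, update, options):

```javascript
// Build the findOneAndUpdate arguments for one SiteAuditReport (sketch).
function buildUpsertOperation(report) {
  return {
    // Composite key: exactly one report per page per audit run.
    filter: { pageUrl: report.pageUrl, auditRunId: report.auditRunId },
    // The full constructed document is the $set payload.
    update: { $set: report },
    // upsert: true inserts when no document matches, updates otherwise.
    options: { upsert: true },
  };
}
```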


4. Integration with Gemini Fixes

All "broken elements" or detected issues are passed to Gemini in the previous step (Step 3). Gemini's precise, actionable fixes are directly embedded within the SiteAuditReport document:

  • Specific Fixes: For individual audit points (e.g., metaTags.title.fixSuggestion, h1Tag.fixSuggestion), Gemini's output provides the exact code snippet or instruction for correction.
  • Consolidated Issues List: The issuesFound array provides a consolidated list of all problems, each containing its category, severity, description, and the corresponding suggestedFix from Gemini. This allows for a quick overview of all actionable items.

5. Database Interaction Details

  • Database: MongoDB
  • Collection: SiteAuditReports (or similar, e.g., seo_audit_reports)
  • Indexes:

* {"pageUrl": 1, "auditRunId": 1} (Unique index for efficient upserts)

* {"pageUrl": 1, "auditTimestamp": -1} (For efficient lookup of the latest audit for a page)

* {"auditRunId": 1} (For querying all reports related to a specific run)

  • Transactionality: While individual upserts are atomic, the overall process across multiple pages is not a single transaction. Each page's report is independently upserted.
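The three indexes above could be created in mongosh roughly as follows (the collection name `seo_audit_reports` is one of the names suggested above, used here as an assumption):

```javascript
// Unique composite key: one report per page per audit run (supports upserts).
db.seo_audit_reports.createIndex({ pageUrl: 1, auditRunId: 1 }, { unique: true });
// Latest-audit lookup per page.
db.seo_audit_reports.createIndex({ pageUrl: 1, auditTimestamp: -1 });
// All reports for a given run.
db.seo_audit_reports.createIndex({ auditRunId: 1 });
```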

6. Expected Output and Deliverable

Upon successful completion of this step, the following will be achieved:

  • Persistent Audit Records: A complete SiteAuditReport document for each audited page will be stored in your dedicated MongoDB collection.
  • Historical Data: Each new audit run will generate new reports, or update existing ones, creating a historical log of your site's SEO performance.
  • Actionable Insights: The stored reports will include all identified issues with precise, Gemini-generated fixes, ready for implementation by your development or SEO team.
  • Performance Tracking: The diffReport within each document provides a clear summary of how SEO metrics have changed since the last audit, highlighting improvements or regressions.
  • API Accessibility: The stored data will be immediately accessible via our internal APIs for generating dashboards, alerts, and detailed reports for your review.

This robust data persistence layer ensures that your SEO audit data is secure and well organized, providing the foundation for continuous site optimization.

hive_db Output

Step 5 of 5: hive_db → Conditional Update - Site Audit Report Storage

This final step in the "Site SEO Auditor" workflow ensures that all the valuable data gathered during the crawling and auditing process, along with the AI-generated fixes, is securely and systematically stored in your dedicated MongoDB database. This persistent storage forms the foundation for historical tracking, performance analysis, and continuous SEO improvement.


Workflow Description Recap

The "Site SEO Auditor" is a comprehensive tool designed to provide a 360-degree view of your website's SEO health. It leverages a headless crawler (Puppeteer) to meticulously examine every page against a 12-point SEO checklist, including critical elements like meta titles, H1s, image alt tags, Core Web Vitals, and structured data. Any identified issues are then processed by Gemini to generate precise, actionable fixes. This step focuses on the structured storage of these findings and recommendations.


Data Storage Confirmation and Details

We confirm that a new SiteAuditReport document has been successfully created and stored in your MongoDB instance. This document encapsulates the complete findings of the latest audit run, providing a comprehensive snapshot of your site's SEO performance at the time of the audit.

Each SiteAuditReport document is structured to provide both high-level summaries and granular, page-specific details, including:

  • Audit Metadata: Unique auditId, the siteUrl that was audited, and the timestamp of the audit.
  • Overall Status: A high-level indicator (e.g., "Completed Successfully," "Completed with Critical Issues") of the audit's outcome.
  • Summary Statistics: Total pages crawled, and counts of critical, warning, and informational issues identified across the entire site.
  • Previous Audit Link: A previousAuditId field to link this report to the preceding successful audit, enabling robust "before/after" comparisons.
  • Page-Level Details: An array of objects, where each object represents a distinct page on your site and contains its specific audit results.
  • Audit Configuration: The specific parameters used for this audit run (e.g., crawl depth, scope).
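A minimal sketch of the document shape implied by the bullets above. All concrete values, and field names not listed above (such as `summary.pagesCrawled` or `auditConfig.crawlDepth`), are illustrative placeholders, not the tool's exact schema.

```javascript
// Illustrative SiteAuditReport skeleton; values are placeholders.
const siteAuditReport = {
  auditId: '00000000-0000-0000-0000-000000000000', // unique per audit run
  siteUrl: 'https://www.example.com',
  auditTimestamp: new Date().toISOString(),
  overallStatus: 'Completed Successfully',
  summary: { pagesCrawled: 42, criticalIssues: 3, warnings: 11, info: 7 },
  previousAuditId: null,  // links to the preceding successful audit, if any
  pages: [],              // one entry per audited page
  auditConfig: { crawlDepth: 3, scope: 'same-origin' },
};
```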

Comprehensive Report Structure and Data Points

For each audited page within the SiteAuditReport, the following detailed information is stored:

1. Page Identification & Status

  • pageUrl: The specific URL of the audited page.
  • pageStatus: An indicator of the page's overall SEO health (e.g., "Pass," "Fail," "Warning").

2. SEO Metrics Checklist (12-Point Audit)

Each of the 12 SEO points is individually assessed and stored, including its status (pass/fail), currentValue (the detected content), and any issue description.

  • metaTitle: Presence, uniqueness, and length.
  • metaDescription: Presence, uniqueness, and length.
  • h1Presence: Verification of a single, relevant H1 tag.
  • imageAltCoverage: Percentage of images with alt attributes and a list of missing ones.
  • internalLinkDensity: Count and distribution of internal links.
  • canonicalTag: Correct implementation and self-referencing.
  • openGraphTags: Presence and correctness of key Open Graph meta tags (e.g., og:title, og:image).
  • structuredData: Detection of schema markup (e.g., JSON-LD, Microdata).
  • mobileViewport: Confirmation of a responsive meta viewport tag.
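As one hedged example of how a single checklist point produces the `{status, currentValue, issue}` record described above, here is a sketch of a metaTitle check. The 50–60 character window is a common SEO guideline used as an assumption here, not necessarily the exact thresholds the auditor applies.

```javascript
// Illustrative metaTitle audit point. Length thresholds (50-60 chars)
// are a common SEO guideline, assumed here for demonstration.
function checkMetaTitle(title) {
  const currentValue = (title || '').trim();
  if (!currentValue) {
    return { status: 'fail', currentValue, issue: 'Missing meta title' };
  }
  if (currentValue.length < 50 || currentValue.length > 60) {
    return {
      status: 'warning',
      currentValue,
      issue: `Title length ${currentValue.length} is outside the 50-60 character range`,
    };
  }
  return { status: 'pass', currentValue, issue: null };
}
```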

3. Core Web Vitals

Detailed performance metrics for each page:

  • Largest Contentful Paint (LCP): Loading performance.
  • Cumulative Layout Shift (CLS): Visual stability.
  • First Input Delay (FID): Interactivity (or INP for newer audits).

* Each vital includes its score and a status (e.g., "Good," "Needs Improvement," "Poor").
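The status bands above can be derived from the raw scores. The thresholds in this sketch follow Google's published Core Web Vitals guidance (LCP in seconds, CLS unitless, FID/INP in milliseconds); how the auditor itself maps scores to statuses is not specified here.

```javascript
// Google's published Core Web Vitals thresholds:
// "Good" at or below the first value, "Poor" above the second.
const CWV_THRESHOLDS = {
  lcp: { good: 2.5, poor: 4.0 },  // seconds
  cls: { good: 0.1, poor: 0.25 }, // unitless
  fid: { good: 100, poor: 300 },  // milliseconds
  inp: { good: 200, poor: 500 },  // milliseconds
};

function cwvStatus(metric, score) {
  const t = CWV_THRESHOLDS[metric];
  if (!t) throw new Error(`Unknown metric: ${metric}`);
  if (score <= t.good) return 'Good';
  if (score <= t.poor) return 'Needs Improvement';
  return 'Poor';
}
```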

4. Broken Elements & Gemini Fixes

For every identified issue or "broken element" on a page, a dedicated entry is stored:

  • elementId: A unique identifier for the specific issue instance.
  • type: The category of the issue (e.g., "Missing H1," "Duplicate Meta Title," "Image Missing Alt").
  • description: A human-readable explanation of the problem.
  • location: A precise indicator (e.g., CSS selector, XPath) to locate the element on the page.
  • originalValue: The problematic content or attribute, if applicable.
  • geminiFix: A nested object containing the AI-generated remediation:

* suggestedFix: The exact code or content change recommended.

* rationale: A brief explanation of why the fix is necessary and beneficial.

* confidence: A score indicating Gemini's confidence level in the suggested fix.
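Putting the fields above together, a stored broken-element entry might look like the following. The concrete values, selector, and fix text are placeholders for illustration, not real audit output.

```javascript
// Illustrative brokenElement entry following the fields described above;
// all concrete values are placeholders.
const brokenElement = {
  elementId: 'issue-0001',
  type: 'Missing H1',
  description: 'The page does not contain an H1 heading.',
  location: 'body > main', // CSS selector locating the element
  originalValue: null,     // nothing to quote when the element is absent
  geminiFix: {
    suggestedFix: '<h1>Example Page Title</h1>',
    rationale: 'A single descriptive H1 helps search engines understand the page topic.',
    confidence: 0.92,
  },
};
```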


Before/After Diffing Mechanism

The previousAuditId field within each SiteAuditReport is crucial for enabling powerful "before/after" comparisons. While the diff itself is calculated dynamically when you view reports, the stored data allows you to:

  • Track Progress: Clearly see improvements or regressions in SEO metrics over time.
  • Identify New Issues: Pinpoint issues that have emerged since the last audit.
  • Validate Fixes: Confirm that previously identified issues have been successfully resolved.
  • Measure Impact: Quantify the impact of your SEO efforts on Core Web Vitals and other key indicators.

This historical data provides an invaluable roadmap for continuous optimization.


Accessing Your Audit Reports

All generated SiteAuditReport documents are securely stored and readily accessible. You can view these reports through your designated PantheraHive dashboard, which provides an intuitive interface for:

  • Browsing Historical Reports: Accessing any past audit report by date or auditId.
  • Viewing Detailed Page-Level Data: Drilling down into specific pages to examine individual SEO metrics and identified issues.
  • Reviewing Gemini Fixes: Easily seeing the suggested fixes and their rationales for immediate implementation.
  • Analyzing Before/After Diffs: Utilizing the built-in comparison tools to understand changes between audit runs.

Automated Reporting & Continuous Improvement

This conditional_update step is integral to the automated nature of the "Site SEO Auditor." With reports automatically generated and stored every Sunday at 2 AM (or on-demand), you establish a rhythm of continuous monitoring. The detailed, diff-enabled reports empower you to:

  • Proactively Address Issues: Catch and fix SEO problems before they significantly impact your rankings.
  • Maintain SEO Health: Ensure your site consistently adheres to best practices.
  • Iterate and Optimize: Use the data to inform ongoing SEO strategies and measure their effectiveness.

Conclusion and Next Steps

Your site's latest SEO audit report has been successfully generated and stored. You now have a comprehensive, actionable document detailing your website's SEO performance and precise recommendations for improvement.

Next Steps for You:

  1. Review the Latest Report: Access your PantheraHive dashboard to examine the newly generated SiteAuditReport.
  2. Prioritize Fixes: Focus on critical issues and implement the AI-generated Gemini fixes.
  3. Monitor Progress: Observe the "before/after" diffs in subsequent weekly reports to track your site's SEO evolution.

We are committed to providing you with the insights needed to maintain a high-performing and search-engine-friendly website.
