Site SEO Auditor
Run ID: 69cc51d5b4d97b765147557 | 2026-03-31 | SEO & Growth
PantheraHive BOS

Workflow Step: AI-Powered Fix Generation (Gemini)

This document details Step 3 of 5 in your "Site SEO Auditor" workflow: the AI-powered generation of precise fixes for identified SEO issues using Google's Gemini model.


Introduction to AI-Powered Fix Generation

Following the comprehensive crawl and audit of your website by our headless crawler (Puppeteer), our system identifies specific SEO elements that are either missing, incorrect, or suboptimal. Instead of merely reporting these issues, this crucial step leverages the advanced capabilities of Google's Gemini AI to automatically generate the exact, actionable code snippets or instructions required to fix each identified problem.

This proactive approach significantly streamlines your SEO remediation process, transforming audit reports from problem lists into solution blueprints.

Purpose of Gemini Fix Generation

The primary purpose of this step is to:

  1. Automate Remediation: Eliminate the manual effort of diagnosing and formulating fixes for common SEO issues.
  2. Ensure Accuracy: Provide precise, context-aware solutions tailored to the specific nature of each broken element.
  3. Accelerate Implementation: Deliver ready-to-use code or clear instructions that can be directly applied by your development or content teams.
  4. Enhance Report Value: Transform raw audit data into highly actionable insights, making the Site Audit Report a powerful tool for continuous improvement.

How Broken Elements Are Identified and Fed to Gemini

  1. Crawler Output: The Puppeteer-based crawler thoroughly scans every page, collecting data on the 12-point SEO checklist (including meta tags, H1s, alt texts, internal links, canonicals, OG tags, Core Web Vitals, structured data, and mobile viewport).
  2. Audit Engine Analysis: Our internal audit engine processes this raw data, comparing it against best practices and identifying specific violations or areas for improvement (e.g., duplicate meta titles, missing H1s, broken internal links).
  3. Contextual Data Extraction: For each identified issue, the system extracts all relevant contextual information. For example, for a missing alt tag, it extracts the <img> tag, its surrounding HTML, and the image's purpose if inferable. For a broken link, it extracts the source page URL, the broken link URL, and the anchor text.
  4. Gemini API Call: This contextual data, along with a specific prompt tailored to the SEO issue type, is then sent to the Gemini API for analysis and fix generation.
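As a sketch of step 4, the per-issue contextual data can be flattened into a prompt string before the API call. The field names below (issueType, pageUrl, context) and the function name are illustrative, not the system's actual internal schema:

```javascript
// Illustrative sketch: turn one audited issue into a Gemini prompt string.
// Field names (issueType, pageUrl, context) are assumptions for the example.
function buildFixPrompt(issue) {
  return [
    "You are an SEO remediation assistant.",
    `Issue type: ${issue.issueType}`,
    `Page URL: ${issue.pageUrl}`,
    "Relevant HTML context:",
    issue.context,
    "Respond with the exact code snippet or instruction that fixes the issue.",
  ].join("\n");
}

const fixPrompt = buildFixPrompt({
  issueType: "missing_h1",
  pageUrl: "https://www.yourwebsite.com/blog/latest-article",
  context: '<div class="main-content"><h2>Subheading 1</h2><p>Article text...</p></div>',
});
```

The resulting string would then be sent to the Gemini API together with any model configuration, and the response captured as the suggested fix.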

Gemini's AI-Powered Analysis and Solution Generation

Gemini receives the detailed context of each broken element and generates a precise, actionable fix.

Input to Gemini (Example)

* Page URL: https://www.yourwebsite.com/blog/latest-article

* Page Content Snippet: <div class="main-content"><h2>Subheading 1</h2><p>Article text...</p></div>

* Inferred Page Title/Topic: "Latest Article on AI Trends"

Gemini's Output: Actionable Fixes

Gemini processes this input and generates an output that is typically a code snippet or a clear instruction set, ready for implementation. The output is formatted for direct use by developers or content managers.


Examples of AI-Generated Fixes

Below are examples of the types of fixes Gemini will generate for various common SEO issues:

1. Meta Title/Description Uniqueness & Optimization

    <!-- For: https://www.yourwebsite.com/product/item-a -->
    <title>Product Item A - High Quality Gadget | YourBrand</title>
    <meta name="description" content="Discover the features of Product Item A, a high-quality gadget from YourBrand. Shop now for exclusive deals."/>

    <!-- For: https://www.yourwebsite.com/product/item-b -->
    <title>Product Item B - Innovative Solution | YourBrand</title>
    <meta name="description" content="Explore Product Item B, an innovative solution designed for efficiency. Find out more at YourBrand."/>
    

Step 1 of 5: Execution Report - Puppeteer-Powered Site Crawl

Workflow: Site SEO Auditor

Step: puppeteer → crawl

Description: This initial and foundational step leverages a headless browser (Puppeteer) to systematically visit and render every discoverable page on your website. Its primary objective is to accurately simulate a real user's browser experience, ensuring all dynamically loaded content and JavaScript-rendered elements are captured, which is critical for a comprehensive SEO audit.


1. Objective of This Step

The core objective of the puppeteer → crawl step is to:

  • Discover All Unique URLs: Identify every accessible page within your specified domain, including those linked internally and potentially those listed in sitemaps.
  • Render Pages Accurately: Fully load and execute all JavaScript on each page, mimicking a modern browser environment to capture the complete and final Document Object Model (DOM) as a user would experience it.
  • Collect Raw Page Data: Extract the comprehensive HTML, associated resources, and initial performance metrics necessary for the subsequent detailed SEO audit.
  • Establish a Baseline: Create a snapshot of your site's structure and content at the time of the crawl, serving as the "before" state for future comparisons.

2. Crawling Mechanism: Headless Browser Emulation (Puppeteer)

Unlike traditional HTTP crawlers that only fetch raw HTML, this system employs Puppeteer to control a headless Chromium browser instance. This advanced approach offers several key advantages for SEO auditing:

  • JavaScript Execution: Crucially, Puppeteer executes all client-side JavaScript, ensuring that content rendered dynamically (e.g., Single Page Applications, content loaded via APIs, lazy-loaded images) is fully available in the captured DOM. This provides an accurate representation of what search engine bots capable of rendering JavaScript (like Googlebot) would see.
  • Real-User Simulation: The crawl emulates a genuine user experience, including loading CSS, images, fonts, and other assets, which is essential for measuring Core Web Vitals accurately.
  • Viewport Emulation: The crawler is configured to emulate specific device viewports (e.g., mobile, desktop) as defined in the audit settings, allowing for analysis of responsive design and mobile-specific SEO factors.
  • DOM Snapshotting: At the completion of page loading and rendering, a complete snapshot of the page's final HTML (the fully rendered DOM) is captured, reflecting all content, meta tags, and structural elements.
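The render-then-snapshot behaviour described above can be sketched as a small helper around Puppeteer's page API (page.goto and page.content are real Puppeteer methods; the waitUntil value shown is one common choice, not necessarily this system's exact setting):

```javascript
// Sketch: navigate a Puppeteer-style page and capture the fully rendered DOM.
// `page` is any object exposing goto/content, matching Puppeteer's Page interface.
async function snapshotPage(page, url) {
  // Wait until the network is quiet so JS-rendered content is present.
  const response = await page.goto(url, { waitUntil: "networkidle0", timeout: 60000 });
  return {
    url,
    statusCode: response ? response.status() : null,
    html: await page.content(), // final DOM after all scripts have run
  };
}
```

With a real browser this would be called as `await snapshotPage(await browser.newPage(), url)` for each queued URL.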

3. Detailed Crawl Process and Data Collection

The crawl process is executed systematically to ensure thorough coverage and data integrity:

  1. Initial Seed URL: The crawl begins from the specified starting URL(s) (typically the homepage or URLs provided).
  2. Link Discovery:

* Upon loading each page, Puppeteer scans the fully rendered DOM for all internal <a> tags (hyperlinks).

* Discovered internal links are added to a queue for subsequent crawling, ensuring all navigable pages within the domain are identified.

* External links are noted but not followed to remain within the scope of your site.

  3. Page Loading and Rendering:

* For each URL, a new browser page is opened, and the URL is navigated to.

* The browser waits for network idle conditions and specific DOM events (e.g., load, domcontentloaded) to ensure all resources have loaded and JavaScript has executed.

* Mobile Viewport Emulation: The browser viewport is set to a common mobile device size (e.g., iPhone X, 375x812 pixels) to assess mobile-specific rendering and performance.

  4. Data Extraction: Once the page is fully rendered, the following raw data points are collected:

* Full HTML/DOM Snapshot: The complete HTML content of the page as seen by the browser after all scripts have run.

* HTTP Status Code: The server's response code (e.g., 200 OK, 301 Redirect, 404 Not Found).

* Response Headers: Key HTTP response headers, including Content-Type, Cache-Control, etc.

* Initial Performance Metrics: Basic timing metrics from the browser's Performance API, such as DOMContentLoaded, loadEventEnd, and initial measurements for Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS) if directly available from the browser context.

* Console Logs & Network Requests: Any JavaScript errors or significant network requests made by the page (optional, for advanced debugging).

  5. Error Handling & Retries: The crawler is designed to handle common issues such as:

* Timeouts: Pages that take too long to load are flagged.

* Network Errors: Connection issues or unreachable URLs.

* HTTP Errors: 4xx (client errors) and 5xx (server errors) status codes are recorded.

* A limited number of retries may be attempted for transient errors.

  6. Concurrency Management: The crawl operates with controlled concurrency to avoid overwhelming your server while maximizing efficiency.
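The link-discovery rule above — queue internal links for crawling, note external ones without following them — reduces to a same-origin check. A minimal sketch (the function name is illustrative):

```javascript
// Sketch: classify discovered hrefs as internal (queued for crawling)
// or external (recorded only). Uses the WHATWG URL API built into Node.
function classifyLinks(baseUrl, hrefs) {
  const origin = new URL(baseUrl).origin;
  const internal = [];
  const external = [];
  for (const href of hrefs) {
    const resolved = new URL(href, baseUrl); // resolves relative links
    (resolved.origin === origin ? internal : external).push(resolved.href);
  }
  return { internal, external };
}

const { internal, external } = classifyLinks("https://www.yourwebsite.com/", [
  "/blog/latest-article",
  "https://www.yourwebsite.com/about",
  "https://twitter.com/yourbrand",
]);
```

In the real crawler, the `internal` list would be deduplicated against already-visited URLs before being added to the queue.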

4. Technical Configuration and Best Practices

The Puppeteer instance is configured with best practices to ensure a robust and reliable crawl:

  • User-Agent String: A custom user-agent string is used to identify the crawler (e.g., PantheraHive-SiteAuditor/1.0 (+https://pantherahive.com/seo-auditor)), allowing server-side logging and potential identification.
  • Viewport Settings: Configured to a standard mobile viewport (e.g., 375x812 pixels, deviceScaleFactor: 2) to ensure mobile-first rendering.
  • Resource Blocking (Selective): To optimize crawl speed and resource usage, certain non-critical resources (e.g., third-party analytics scripts, ad networks) may be optionally blocked if configured, without impacting the SEO audit's core requirements.
  • Navigation Timeout: A reasonable timeout is set for page navigation (e.g., 60 seconds) to prevent infinite waits on slow or broken pages.
  • Cookie and Local Storage: A fresh browser context is used for each crawl session to prevent cookie accumulation or session interference from previous runs.
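Taken together, the settings above correspond to options objects passed to Puppeteer's page.setUserAgent, page.setViewport, and page.goto. The values below are the examples given in this section, not guaranteed production settings:

```javascript
// Illustrative crawl configuration mirroring the settings listed above.
// Values are the examples from this document, not fixed product settings.
const crawlConfig = {
  userAgent: "PantheraHive-SiteAuditor/1.0 (+https://pantherahive.com/seo-auditor)",
  viewport: { width: 375, height: 812, deviceScaleFactor: 2 }, // mobile-first
  navigation: { waitUntil: "networkidle0", timeout: 60000 },   // 60-second cap
};

// With Puppeteer, this configuration would be applied roughly as:
//   await page.setUserAgent(crawlConfig.userAgent);
//   await page.setViewport(crawlConfig.viewport);
//   await page.goto(url, crawlConfig.navigation);
```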

5. Outputs of the Crawl Step

Upon completion, this step generates a comprehensive dataset that serves as the foundation for the subsequent audit steps:

  • Discovered URLs List: A definitive list of all unique internal URLs found on your website, along with their HTTP status codes.
  • Raw Page Data Archive: For each discovered URL:

* The complete, fully rendered HTML content.

* The final URL after any redirects.

* Response headers.

* Initial performance metrics (e.g., loadEventEnd, FCP, LCP, CLS values captured from browser's performance timeline API).

  • Crawl Log: A detailed log of any errors encountered during the crawl (e.g., 404s, timeouts, JavaScript errors).
  • Screenshots (Optional): High-resolution screenshots of each page in the emulated mobile viewport, useful for visual verification and debugging.

6. Next Steps in the Workflow

The collected raw data from this crawl step will now be passed to the next stage of the "Site SEO Auditor" workflow:

  • Step 2: Audit & Analysis: The raw HTML and performance metrics for each page will be meticulously analyzed against the 12-point SEO checklist (meta tags, H1s, alt text, internal links, canonicals, Open Graph, structured data, Core Web Vitals, mobile viewport). This detailed analysis will identify specific SEO issues.

This comprehensive crawling process ensures that the subsequent audit is based on the most accurate and complete representation of your website, reflecting the true user and search engine experience.

hive_db Output

Step 2 of 5: hive_db → diff - Comprehensive Audit Difference Analysis

This crucial step in the Site SEO Auditor workflow is dedicated to providing you with a precise, actionable understanding of how your website's SEO performance is evolving. By comparing the results of the latest site crawl and audit against your historical data stored in PantheraHive's dedicated MongoDB instance (hive_db), we generate a detailed "diff" report. This report highlights all changes, improvements, and regressions, ensuring you have a clear, quantifiable perspective on your site's SEO health.

Purpose of This Step

The primary goal of the hive_db → diff step is to:

  • Track Progress: Measure the impact of your SEO efforts and content updates over time.
  • Detect Regressions: Quickly identify any new issues or performance degradations introduced by recent site changes, code deployments, or content updates.
  • Validate Optimizations: Confirm that previously identified issues have been successfully resolved.
  • Inform Prioritization: Provide a clear overview of the most critical changes, helping you prioritize future SEO tasks.

How the Difference Analysis Works

Our system employs a robust comparison engine to meticulously analyze the audit data:

  1. Retrieval of Historical Data:

* Upon completion of a new site crawl and audit, the system first accesses hive_db to retrieve the most recent SiteAuditReport stored for your domain.

* This previous report serves as the baseline for comparison. If no prior report exists (e.g., this is the very first audit), the current audit becomes the initial baseline.

  2. Granular Comparison Engine:

* The system then performs a deep, page-by-page and metric-by-metric comparison between the current audit report and the previous audit report.

* Every data point from the 12-point SEO checklist (e.g., meta titles, H1 presence, image alt coverage, Core Web Vitals scores, canonical tags, etc.) is systematically contrasted.

  3. Categorization of Changes:

* Differences are intelligently categorized to provide immediate clarity on their nature and impact:

* New Issues: Problems detected in the current audit that were not present or flagged in the previous report.

* Resolved Issues: Issues that were present in the previous audit but are now successfully fixed and no longer detected.

* Improved Metrics: Quantitative or qualitative metrics that show a positive change (e.g., faster LCP, increased alt tag coverage, presence of a previously missing H1).

* Regressed Metrics: Quantitative or qualitative metrics that show a negative change (e.g., slower LCP, new instances of missing alt tags, removal of a previously present H1).

* Unchanged: Elements or metrics that remain consistent between the two audit periods.
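The categorisation above can be sketched as two small classifiers: one for checklist statuses (New Issue / Resolved) and one for numeric metrics where lower is better (Improved / Regressed). The status labels mirror the pass/fail/warning values used elsewhere in this document; the function names are illustrative:

```javascript
// Sketch of the change categorisation described above.
// A fuller version would also track warning <-> fail transitions.
function classifyStatusChange(prevStatus, currStatus) {
  const prevOk = prevStatus === "pass";
  const currOk = currStatus === "pass";
  if (prevOk && !currOk) return "New Issue";
  if (!prevOk && currOk) return "Resolved";
  return "Unchanged";
}

// For numeric metrics where lower is better (e.g., LCP in seconds, CLS):
function classifyMetricChange(oldScore, newScore) {
  if (newScore < oldScore) return "Improved";
  if (newScore > oldScore) return "Regressed";
  return "Unchanged";
}
```

Applied to the Core Web Vitals examples below, an LCP of 2.8 s dropping to 1.9 s classifies as Improved, while a CLS of 0.05 rising to 0.12 classifies as Regressed.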

Key Information Provided in the Diff Report

The output of this step is a structured report detailing the identified changes:

  • Page-Level Specificity: Every change is precisely attributed to the exact URL(s) where it was observed, allowing for highly targeted analysis and action.
  • Metric-Specific Details: The report clearly indicates which specific SEO checklist item has changed (e.g., "Meta Title Uniqueness," "Image Alt Tag Coverage," "LCP Score," "Structured Data Presence").
  • Quantifiable Before/After Data: For all relevant metrics, the exact previous value and current value are provided.

* Example for Core Web Vitals: "LCP Score: Before 2.8s, After 1.9s (Improved)" or "CLS Score: Before 0.05, After 0.12 (Regressed)".

* Example for Qualitative Metrics: "H1 Presence on /product-page: Before Missing, After Present (Resolved)" or "Meta Description Uniqueness: Before Unique, After Duplicate (New Issue)".

  • Change Status Indicators: Clear labels (New, Resolved, Improved, Regressed, Unchanged) are applied to each identified difference for quick comprehension.
  • Consolidated View: The report aggregates all changes across your site, providing both a high-level summary and the ability to drill down into specific pages or SEO categories.

Benefits for Your SEO Strategy

The detailed diff report generated in this step offers significant advantages:

  • Proactive Issue Detection: Catch new SEO issues or performance regressions as soon as they appear, preventing them from negatively impacting your rankings or user experience over time.
  • Performance Tracking & ROI Validation: Clearly demonstrate the impact of your SEO efforts. See tangible improvements in metrics, validating the return on investment for your optimization work.
  • Targeted Optimization Efforts: With precise before/after data and page-level granularity, you can focus your resources on the most impactful changes and address regressions immediately.
  • Historical Trend Analysis: Over time, these diff reports build a comprehensive history of your site's SEO health, enabling you to identify long-term trends and patterns in performance.
  • Informed Decision-Making: Provides the data required to make informed decisions about content updates, technical SEO adjustments, and overall website development.

Next Steps

The detailed diff report, highlighting all broken elements and areas of regression, will now be used as input for the subsequent step. Specifically, the identified broken elements will be sent to Gemini for precise, actionable fix generation, further streamlining your SEO maintenance process.

  • Instruction (accompanying a Gemini-generated JSON-LD structured-data fix): "Insert the generated JSON-LD script into the <head> or <body> of the product page. Crucially, review and update the placeholder values (e.g., image URL, description, price, brand, rating) with the actual product data."

Benefits of Automated Fix Generation

  • Time Savings: Drastically reduces the time spent by your team on researching and formulating SEO fixes.
  • Reduced Error Rate: AI-generated fixes are consistent and adhere to best practices, minimizing human error.
  • Faster Remediation Cycle: Move from identification to implementation much quicker, leading to faster SEO improvements.
  • Empowered Teams: Provides clear, actionable steps for developers and content creators, regardless of their SEO expertise.
  • Continuous Improvement: Enables a more agile and responsive approach to maintaining and enhancing your site's SEO health.

Next Steps in the Workflow

Once Gemini has generated these detailed fixes, the workflow proceeds to:

  • Step 4: MongoDB Storage & Diffing: The original audit results ("before" state) and the Gemini-generated fixes ("after" state) will be securely stored in your MongoDB database. This enables historical tracking and a clear "before/after" diff for every issue, allowing you to see the impact of implemented changes over time.
  • Step 5: Reporting & Notifications: A comprehensive Site Audit Report will be compiled, detailing all identified issues, the AI-generated fixes, and the overall SEO health of your site. This report will be made available to you, and notifications will be sent as per your configuration (e.g., email, dashboard alert).

Step 4 of 5: Data Persistence – hive_db Upsert Operation

This step focuses on the crucial persistence of your website's SEO audit results within our secure MongoDB database. Following the comprehensive crawling and analysis by our headless auditor and the AI-powered fix generation by Gemini, all findings are meticulously structured and stored as a SiteAuditReport. This ensures historical tracking, detailed insights, and the ability to monitor your site's SEO health over time.


1. Purpose of the hive_db Upsert

The hive_db upsert operation is responsible for:

  • Storing Comprehensive Audit Data: Persisting all identified SEO issues, performance metrics, and AI-generated fixes for every page audited.
  • Enabling Historical Tracking: Creating a permanent record of each audit run, allowing for trend analysis and performance comparisons over time.
  • Facilitating Before/After Diffs: Storing the necessary data points to highlight changes, improvements, or regressions between consecutive audit reports.
  • Ensuring Data Integrity: Structuring the data in a consistent and query-friendly format within the SiteAuditReport document.

2. SiteAuditReport Data Model Overview

Each time the Site SEO Auditor runs (either on demand or automatically), a new SiteAuditReport document is created in your dedicated MongoDB collection. This document encapsulates the entire audit for a specific website at a given point in time.

Key Fields of the SiteAuditReport Document:

  • _id (ObjectId): Unique identifier for this specific audit report (auto-generated by MongoDB).
  • siteUrl (String): The root URL of the website that was audited (e.g., https://www.yourdomain.com).
  • auditId (String): A unique, human-readable identifier for this audit run (e.g., audit-20231027-0830).
  • timestamp (Date): The exact date and time when the audit was completed.
  • status (String): Overall status of the audit (e.g., completed, failed, partial).
  • overallScore (Number, 0-100): A calculated aggregate SEO score for the entire site, based on the checklist adherence and Core Web Vitals.
  • pagesAudited (Number): The total count of unique pages successfully audited during this run.
  • previousAuditId (String, Optional): References the auditId of the immediately preceding audit for the same siteUrl. This is crucial for diff calculations.
  • summary (Object): A high-level summary of the audit findings.

* totalPages: Total number of pages processed.

* pagesWithIssues: Number of pages with at least one SEO issue.

* criticalIssuesCount: Count of critical issues across all pages.

* warningsCount: Count of warning-level issues across all pages.

* coreWebVitalsAverages: Average LCP, CLS, FID scores across the site.

  • pageReports (Array of Objects): An array containing detailed audit results for each individual page visited by the crawler.

3. Detailed Page-Level Audit (pageReports Array)

Each object within the pageReports array represents a single page's audit results and contains the following structure:

  • url (String): The full URL of the audited page.
  • statusCode (Number): The HTTP status code returned for the page (e.g., 200, 301, 404).
  • pageTitle (String, Optional): The <title> content of the page.
  • auditedAt (Date): Timestamp when this specific page was audited.
  • seoChecklistResults (Array of Objects): Detailed results for each of the 12 SEO checklist items.

* checkName (String): Name of the SEO check (e.g., meta_title_uniqueness, h1_presence, image_alt_coverage).

* status (String): Result of the check (pass, fail, warning).

* details (Object): Specific information about the check's outcome.

* For meta_title_uniqueness: isUnique (Boolean), duplicateOf (Array of URLs if not unique).

* For h1_presence: foundH1 (Boolean), h1Content (String, Optional).

* For image_alt_coverage: totalImages (Number), imagesMissingAlt (Array of image URLs).

* For core_web_vitals: lcpScore, clsScore, fidScore (details below).

* ... and similar specific details for all 12 checks.

* issueDescription (String, Optional): A human-readable description of the identified issue.

* geminiFixSuggestion (String, Optional): The exact, actionable fix generated by Gemini for fail or warning statuses.

  • coreWebVitals (Object): Detailed performance metrics.

* LCP (Largest Contentful Paint): score (ms), status (good, needs_improvement, poor).

* CLS (Cumulative Layout Shift): score (unitless), status (good, needs_improvement, poor).

* FID (First Input Delay): score (ms), status (good, needs_improvement, poor).

  • brokenElements (Array of Objects, Optional): A list of any broken links, images, or scripts found on the page.

* elementType (String): e.g., link, image, script.

* src (String): The href or src attribute of the broken element.

* status (Number): The HTTP status code returned (e.g., 404, 500).

  • geminiFixes (Array of Objects, Optional): Specific, generated fixes by Gemini for broken elements or general page issues.

* issueType (String): e.g., broken_link, missing_h1, duplicate_meta_description.

* elementSelector (String, Optional): CSS selector to locate the problematic element.

* suggestedFix (String): The precise code snippet or instruction to resolve the issue.
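Putting the fields above together, a single pageReports entry might look like the following. All values are illustrative, assembled only from the field names described in this section:

```javascript
// Illustrative pageReports entry built from the fields described above.
const pageReport = {
  url: "https://www.yourwebsite.com/blog/latest-article",
  statusCode: 200,
  pageTitle: "Latest Article on AI Trends",
  auditedAt: new Date("2023-10-27T08:30:00Z"),
  seoChecklistResults: [
    {
      checkName: "h1_presence",
      status: "fail",
      details: { foundH1: false },
      issueDescription: "Page has no <h1> element.",
      geminiFixSuggestion: "<h1>Latest Article on AI Trends</h1>",
    },
  ],
  coreWebVitals: {
    LCP: { score: 1900, status: "good" }, // ms
    CLS: { score: 0.05, status: "good" }, // unitless
    FID: { score: 80, status: "good" },   // ms
  },
  brokenElements: [],
};
```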


4. Before/After Diff Generation and Storage

A critical feature of the SiteAuditReport is the inclusion of a "before/after diff." This is achieved by comparing the newly generated SiteAuditReport with the previousAuditId for the same siteUrl.

  • Diff Calculation: Upon completion of a new audit, the system retrieves the most recent prior SiteAuditReport for your domain. It then performs a deep comparison of key metrics and checklist item statuses across all audited pages.
  • diffReport (Object, within SiteAuditReport): This object is included in the new SiteAuditReport document and highlights changes since the previousAuditId.

* comparedToAuditId (String): The auditId of the previous report used for comparison.

* overallScoreChange (Number): The change in overallScore (e.g., +5, -2).

* newIssuesFound (Array of Objects): Details of issues identified in the current audit that were not present in the previous one.

* pageUrl, checkName, issueDescription, geminiFixSuggestion.

* issuesResolved (Array of Objects): Details of issues that were present in the previous audit but are now pass in the current one.

* pageUrl, checkName, issueDescription (from previous report).

* performanceChanges (Array of Objects): Highlights significant changes in Core Web Vitals.

* pageUrl, metric (LCP, CLS, FID), oldScore, newScore, change.

* statusChanges (Array of Objects): Any checklist items that changed status (e.g., warning to fail, pass to warning).

* pageUrl, checkName, oldStatus, newStatus.


5. Upsert Logic in Action

The hive_db operation supports MongoDB's updateOne method with the upsert: true option. For audit reports, however, we typically create a new document for each run rather than updating an existing one, to maintain a complete historical record.

The logic is as follows:

  1. A new SiteAuditReport document is constructed based on the latest audit results.
  2. The system queries MongoDB for the SiteAuditReport with the most recent timestamp for the given siteUrl.
  3. If a previous report is found, its auditId is stored in the previousAuditId field of the new report, and the diffReport is calculated and embedded.
  4. The new SiteAuditReport document is then inserted into the site_audit_reports collection.

This approach ensures that every audit run is preserved as an immutable snapshot, providing a robust timeline of your site's SEO evolution.
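Steps 1–4 above can be sketched as a pure "linking" function; the actual find/insert calls against the site_audit_reports collection appear only as comments, and `computeDiff` stands in for the real diff engine:

```javascript
// Sketch of the insert-with-baseline logic described above.
// `computeDiff` is an illustrative placeholder for the diff engine.
function linkToPrevious(newReport, previousReport, computeDiff) {
  if (previousReport) {
    newReport.previousAuditId = previousReport.auditId;
    newReport.diffReport = computeDiff(previousReport, newReport);
  }
  return newReport; // then: await collection.insertOne(newReport)
}

// With the MongoDB Node driver, the lookup in step 2 would resemble:
//   const [previous] = await db.collection("site_audit_reports")
//     .find({ siteUrl }).sort({ timestamp: -1 }).limit(1).toArray();
```

On the very first audit, `previousReport` is null, so the new document is inserted without previousAuditId or diffReport, matching the baseline behaviour described above.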


6. Customer Benefits

This sophisticated data storage and diffing mechanism delivers significant value:

  • Clear Progress Tracking: Easily visualize your SEO improvements or identify regressions between audit runs.
  • Actionable Insights: Instantly see what new issues have emerged and which ones have been successfully resolved.
  • Historical Performance Review: Access a complete history of your site's SEO health, allowing for long-term trend analysis.
  • Targeted Remediation: Focus your efforts on newly identified issues or areas of decline, with AI-generated fixes readily available.
  • Accountability & Reporting: Generate comprehensive reports for stakeholders, demonstrating the impact of SEO efforts.
  • Automated Monitoring: Benefit from continuous, automated oversight of your website's SEO health without manual intervention.

By storing your audit results in this detailed and structured manner, we empower you with the data and insights needed to maintain and improve your website's search engine visibility effectively.


Step 5 of 5: hive_db → conditional_update - Database Persistence & Reporting

This final step of the "Site SEO Auditor" workflow is critical for ensuring that all gathered audit data, SEO recommendations, and performance metrics are securely stored and made accessible for your review and action. The conditional_update operation intelligently manages your site's audit history within our MongoDB database, providing a robust foundation for tracking your SEO progress.


1. Operation Summary

Upon successful completion of the site crawl, SEO checklist evaluation, and AI-powered fix generation by Gemini, this step orchestrates the persistence of the comprehensive SiteAuditReport. This involves either inserting a new audit record or updating an existing one, specifically designed to facilitate "before/after" comparisons and maintain a clear historical record of your site's SEO health.


2. Database Target & Collection

  • Database: MongoDB
  • Collection: SiteAuditReport

This collection is specifically designed to store all audit-related data, structured for easy retrieval, analysis, and diff generation.


3. SiteAuditReport Document Structure (Schema)

Each document within the SiteAuditReport collection represents a single, complete audit of your website. The structure is comprehensive, capturing all aspects evaluated by the auditor and the AI-generated fixes.


{
  "_id": "ObjectId", // Unique identifier for this audit report
  "auditId": "string", // A human-readable unique ID for this specific audit run (e.g., UUID)
  "siteUrl": "string", // The root URL of the audited website (e.g., "https://www.example.com")
  "timestamp": "ISODate", // Date and time when this audit was completed
  "auditType": "string", // Type of audit: "scheduled" (for weekly runs) or "on-demand"
  "overallStatus": "string", // Aggregated status: "Pass", "Needs Improvement", "Critical Issues"
  "pagesAudited": "number", // Total count of unique pages successfully crawled and audited

  "summaryMetrics": {
    "totalIssuesFound": "number", // Total number of individual SEO issues across all pages
    "uniqueFixesRecommended": "number", // Total number of distinct fixes recommended by Gemini
    "overallCoreWebVitalsScore": { // Aggregated Core Web Vitals score (e.g., average or worst-case)
      "LCP": "number", // Largest Contentful Paint (ms)
      "CLS": "number", // Cumulative Layout Shift
      "FID": "number"  // First Input Delay (ms) - often approximated for reports
    },
    "metaTitleCoverage": "number", // Percentage of pages with a unique meta title
    "metaDescriptionCoverage": "number", // Percentage of pages with a unique meta description
    "h1Coverage": "number", // Percentage of pages with a valid H1
    "imageAltCoverage": "number", // Percentage of images with alt text
    "canonicalTagCoverage": "number", // Percentage of pages with a canonical tag
    "openGraphTagCoverage": "number", // Percentage of pages with Open Graph tags
    "structuredDataCoverage": "number", // Percentage of pages with structured data
    "mobileViewportCoverage": "number" // Percentage of pages with a mobile viewport meta tag
  },

  "pageDetails": [ // Array of objects, each representing an audited page
    {
      "pageUrl": "string", // URL of the specific page
      "pageStatus": "string", // Status for this page: "Pass", "Warning", "Critical"
      "seoMetrics": {
        "metaTitle": {
          "value": "string",
          "status": "string", // "Pass", "Fail", "Warning"
          "issues": ["string"], // e.g., "Missing", "Duplicate", "Too Long"
          "recommendedFix": "string" // Gemini-generated fix, if applicable
        },
        "metaDescription": { /* ... similar structure ... */ },
        "h1Presence": { /* ... similar structure ... */ },
        "imageAltCoverage": { /* ... similar structure ... */ },
        "internalLinkDensity": { /* ... similar structure ... */ },
        "canonicalTag": { /* ... similar structure ... */ },
        "openGraphTags": { /* ... similar structure ... */ },
        "coreWebVitals": {
          "LCP": "number", "CLS": "number", "FID": "number",
          "status": "string", // "Good", "Needs Improvement", "Poor"
          "issues": ["string"], // e.g., "LCP too high"
          "recommendedFix": "string" // Gemini-generated fix
        },
        "structuredData": { /* ... similar structure ... */ },
        "mobileViewport": { /* ... similar structure ... */ }
      },
      "brokenElements": [ // Array of specific broken elements found on this page (e.g., broken links, missing attributes)
        {
          "type": "string", // e.g., "link", "image", "script"
          "selector": "string", // CSS selector or XPath to locate element
          "attribute": "string", // Specific attribute if applicable (e.g., "href", "alt")
          "issue": "string" // Description of the issue
        }
      ],
      "geminiFixes": [ // Array of all specific fixes generated by Gemini for this page
        {
          "metric": "string", // e.g., "metaTitle", "imageAltCoverage"
          "description": "string", // Original issue description
          "aiRecommendation": "string" // Detailed fix generated by Gemini
        }
      ]
    }
  ],

  "previousAuditId": "string", // `auditId` of the immediately preceding audit for this site, if any
  "diffReport": { // Generated only if `previousAuditId` is present
    "newIssues": [ /* ... array of issues found that were not present previously ... */ ],
    "resolvedIssues": [ /* ... array of issues that were present previously but are now resolved ... */ ],
    "metricChanges": [ // Significant changes in key metrics
      {
        "metric": "string", // e.g., "overallCoreWebVitalsScore.LCP"
        "oldValue": "any",
        "newValue": "any",
        "change": "string" // e.g., "+150ms", "-0.1 CLS"
      }
    ]
  }
}
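As a concrete illustration of the schema above, a small helper can assemble a new report document from the current audit results, linking it to the previous report when one exists. This is a minimal sketch: the helper name and the trimmed field set are assumptions for brevity, not the full production schema.

```javascript
// Sketch: assemble a new SiteAuditReport document for insertion.
// `newReportDocument` and the reduced field set are illustrative only.
function newReportDocument(audit, previousReport) {
  const doc = {
    auditId: audit.auditId,
    siteUrl: audit.siteUrl,
    timestamp: new Date().toISOString(),
    auditType: audit.auditType, // "scheduled" or "on-demand"
    pagesAudited: audit.pageDetails.length,
    summaryMetrics: audit.summaryMetrics,
    pageDetails: audit.pageDetails,
  };
  // previousAuditId is set only when a prior report exists for this site,
  // so an initial audit carries no baseline or diff fields.
  if (previousReport) {
    doc.previousAuditId = previousReport.auditId;
  }
  return doc;
}
```

With the Node.js MongoDB driver, the resulting document would then be written with `db.collection("SiteAuditReport").insertOne(doc)`.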

4. Conditional Update Logic Explained

The conditional_update logic ensures intelligent management of your audit history:

  1. Initial Audit / New Site:

* If no prior SiteAuditReport exists for your siteUrl, a brand new document will be inserted into the SiteAuditReport collection. The previousAuditId and diffReport fields will be omitted as there is no prior baseline.

  2. Subsequent Audits (Scheduled or On-Demand):

* The system first queries the SiteAuditReport collection to find the most recent audit report for your siteUrl.

* Diff Generation: The newly completed audit results are meticulously compared against the data from the previousAuditId report. This comparison generates the diffReport, highlighting:

  * New Issues: Any SEO problems identified in the current audit that were not present or flagged in the previous audit.

  * Resolved Issues: Issues that were present in the previous audit but are now marked as Pass or no longer detected in the current audit.

  * Metric Changes: Quantifiable improvements or degradations in key metrics (e.g., Core Web Vitals, coverage percentages).

* New Report Insertion: A new SiteAuditReport document is then inserted, containing all the current audit data. Crucially, its previousAuditId field will be populated with the auditId of the preceding report, establishing a clear historical link.

* The diffReport field in the new document will contain the detailed comparison, allowing you to instantly see what has changed since the last audit.
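The comparison described above can be sketched as a pure function over two reports. Here each issue is keyed by a flattened "pageUrl::metric" string and only the three diff sections from the schema are produced; that flattened issue map is an assumption made to keep the sketch short, not the stored format.

```javascript
// Sketch of diffReport generation between two consecutive audits.
// The flat "pageUrl::metric" issue map is an illustrative simplification.
function buildDiffReport(previous, current) {
  const prevKeys = new Set(Object.keys(previous.issues));
  const currKeys = new Set(Object.keys(current.issues));
  return {
    // Issues present now but absent from the previous audit.
    newIssues: [...currKeys].filter((k) => !prevKeys.has(k)),
    // Issues present previously but no longer detected.
    resolvedIssues: [...prevKeys].filter((k) => !currKeys.has(k)),
    // Metrics tracked in both audits whose values changed.
    metricChanges: Object.keys(current.metrics)
      .filter((m) => m in previous.metrics && previous.metrics[m] !== current.metrics[m])
      .map((m) => ({
        metric: m,
        oldValue: previous.metrics[m],
        newValue: current.metrics[m],
      })),
  };
}
```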


5. Actionable Outcomes & Benefits for the Customer

  • Comprehensive Audit History: A complete, versioned record of your site's SEO performance over time, stored securely in MongoDB.
  • "Before/After" Comparison: The diffReport provides immediate insights into the impact of your SEO efforts, clearly showing resolved issues and new challenges.
  • Prioritized Fixes: All Gemini-generated fixes are stored alongside the issues, giving you actionable recommendations for improvement directly within the report.
  • Trend Analysis: By reviewing successive audit reports, you can identify long-term trends in your site's SEO health, content quality, and technical performance.
  • Performance Monitoring: Track Core Web Vitals and other critical metrics to ensure your site is delivering an optimal user experience.
  • On-Demand & Automated Reports: Whether triggered manually or by the weekly schedule, each audit contributes to a growing, valuable dataset for your website.

6. Next Steps

Your SiteAuditReport is now successfully stored in the MongoDB database. You can access and visualize this detailed report, including the "before/after" diffs and Gemini's recommended fixes, through your dedicated PantheraHive dashboard or via API integration. The next scheduled audit will automatically run this Sunday at 2 AM (or can be triggered on-demand), continuing to build your site's SEO history.
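For API integration, fetching the latest report for a site is a sort-by-timestamp query (with the Node.js MongoDB driver, roughly `find({ siteUrl }).sort({ timestamp: -1 }).limit(1)`). The selection logic is sketched below as a plain in-memory function; the function name is illustrative.

```javascript
// Sketch: select the most recent SiteAuditReport for a given site,
// mirroring a find({siteUrl}).sort({timestamp:-1}).limit(1) query.
function latestReportFor(reports, siteUrl) {
  return reports
    .filter((r) => r.siteUrl === siteUrl)
    .reduce(
      (best, r) => (best === null || r.timestamp > best.timestamp ? r : best),
      null
    );
}
```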

## Open in Xcode 1. Unzip 2. `File > Open...` → select `Package.swift` 3. Xcode resolves dependencies automatically ## Run - Press ▶ in Xcode or `swift run` in terminal ## Test ```bash swift test ``` "); zip.file(folder+".gitignore",".DS_Store .build/ *.xcuserdata .swiftpm/ "); } /* --- Kotlin (Jetpack Compose Android) --- */ function buildKotlin(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var pkg="com.example."+pn; var srcPath="app/src/main/kotlin/"+pkg.replace(/./g,"/")+"/"; var extracted=extractCode(panelTxt); zip.file(folder+"settings.gradle.kts","pluginManagement { repositories { google() mavenCentral() gradlePluginPortal() } } dependencyResolutionManagement { repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS) repositories { google(); mavenCentral() } } rootProject.name = ""+C+"" include(":app") "); zip.file(folder+"build.gradle.kts","plugins { alias(libs.plugins.android.application) apply false alias(libs.plugins.kotlin.android) apply false alias(libs.plugins.kotlin.compose) apply false } "); zip.file(folder+"gradle.properties","org.gradle.jvmargs=-Xmx2048m -Dfile.encoding=UTF-8 android.useAndroidX=true kotlin.code.style=official android.nonTransitiveRClass=true "); zip.file(folder+"gradle/wrapper/gradle-wrapper.properties","distributionBase=GRADLE_USER_HOME distributionPath=wrapper/dists distributionUrl=https\://services.gradle.org/distributions/gradle-8.9-bin.zip zipStoreBase=GRADLE_USER_HOME zipStorePath=wrapper/dists "); zip.file(folder+"app/build.gradle.kts","plugins { alias(libs.plugins.android.application) alias(libs.plugins.kotlin.android) alias(libs.plugins.kotlin.compose) } android { namespace = ""+pkg+"" compileSdk = 35 defaultConfig { applicationId = ""+pkg+"" minSdk = 24 targetSdk = 35 versionCode = 1 versionName = "1.0" } buildTypes { release { isMinifyEnabled = false proguardFiles(getDefaultProguardFile("proguard-android-optimize.txt")) } } compileOptions { sourceCompatibility = JavaVersion.VERSION_11 
targetCompatibility = JavaVersion.VERSION_11 } kotlinOptions { jvmTarget = "11" } buildFeatures { compose = true } } dependencies { implementation(platform("androidx.compose:compose-bom:2024.12.01")) implementation("androidx.activity:activity-compose:1.9.3") implementation("androidx.compose.ui:ui") implementation("androidx.compose.ui:ui-tooling-preview") implementation("androidx.compose.material3:material3") implementation("androidx.navigation:navigation-compose:2.8.4") implementation("androidx.lifecycle:lifecycle-runtime-ktx:2.8.7") debugImplementation("androidx.compose.ui:ui-tooling") } "); zip.file(folder+"app/src/main/AndroidManifest.xml","<?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android"> <application android:allowBackup="true" android:label="@string/app_name" android:theme="@style/Theme."+C+""> <activity android:name=".MainActivity" android:exported="true" android:theme="@style/Theme."+C+""> <intent-filter> <action android:name="android.intent.action.MAIN"/> <category android:name="android.intent.category.LAUNCHER"/> </intent-filter> </activity> </application> </manifest> "); var hasMain=Object.keys(extracted).some(function(k){return k.indexOf("MainActivity")>=0;}); if(!hasMain) zip.file(folder+srcPath+"MainActivity.kt","package "+pkg+" import android.os.Bundle import androidx.activity.ComponentActivity import androidx.activity.compose.setContent import androidx.activity.enableEdgeToEdge import androidx.compose.foundation.layout.* import androidx.compose.material3.* import androidx.compose.runtime.* import androidx.compose.ui.Alignment import androidx.compose.ui.Modifier import androidx.compose.ui.unit.dp class MainActivity : ComponentActivity() { override fun onCreate(savedInstanceState: Bundle?) 
{ super.onCreate(savedInstanceState) enableEdgeToEdge() setContent { "+C+"Theme { Scaffold(modifier = Modifier.fillMaxSize()) { padding -> Box(Modifier.fillMaxSize().padding(padding), contentAlignment = Alignment.Center) { Column(horizontalAlignment = Alignment.CenterHorizontally, verticalArrangement = Arrangement.spacedBy(16.dp)) { Text(""+slugTitle(pn)+"", style = MaterialTheme.typography.headlineLarge) Text("Built with PantheraHive BOS", style = MaterialTheme.typography.bodyMedium) } } } } } } } "); zip.file(folder+"app/src/main/res/values/strings.xml","<?xml version="1.0" encoding="utf-8"?> <resources> <string name="app_name">"+slugTitle(pn)+"</string> </resources> "); zip.file(folder+"app/src/main/res/values/themes.xml","<?xml version="1.0" encoding="utf-8"?> <resources> <style name="Theme."+C+"" parent="Theme.Material3.DayNight.NoActionBar"/> </resources> "); Object.keys(extracted).forEach(function(p){ var fp=p.indexOf("app/src")>=0?p:srcPath+p; if(!fp.endsWith(".kt")&&!fp.endsWith(".xml"))fp=srcPath+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Open in IDE 1. Open **Android Studio** 2. `File > Open...` → select the root folder 3. 
Let Gradle sync complete ## Run - Click ▶ in Android Studio ## Build Release ```bash ./gradlew assembleRelease ``` "); zip.file(folder+".gitignore","*.iml .gradle/ /local.properties /.idea/ .DS_Store /build/ /captures .externalNativeBuild/ .cxx/ *.apk "); } /* --- React (Vite + TypeScript) --- */ function buildReact(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); var allT=code+" "+panelTxt; var usesTS=allT.indexOf(".tsx")>=0||allT.indexOf("interface ")>=0||allT.indexOf(": React.")>=0; var ext=usesTS?"tsx":"jsx"; zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "build": "tsc -b && vite build", "preview": "vite preview" }, "dependencies": { "react": "^19.0.0", "react-dom": "^19.0.0", "react-router-dom": "^7.1.5", "axios": "^1.7.9" }, "devDependencies": { "@eslint/js": "^9.17.0", "@types/react": "^19.0.2", "@types/react-dom": "^19.0.2", "@vitejs/plugin-react": "^4.3.4", "typescript": "~5.7.2", "vite": "^6.0.5" } } '); zip.file(folder+"vite.config."+ext,"import { defineConfig } from 'vite' import react from '@vitejs/plugin-react' export default defineConfig({ plugins: [react()], resolve: { alias: { '@': '/src' } } }) "); zip.file(folder+"tsconfig.json",'{ "files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}] } '); zip.file(folder+"tsconfig.app.json",'{ "compilerOptions":{ "target":"ES2020","useDefineForClassFields":true,"lib":["ES2020","DOM","DOM.Iterable"], "module":"ESNext","skipLibCheck":true,"moduleResolution":"bundler", "allowImportingTsExtensions":true,"isolatedModules":true,"moduleDetection":"force", "noEmit":true,"jsx":"react-jsx","strict":true,"paths":{"@/*":["./src/*"]} }, "include":["src"] } '); zip.file(folder+"index.html","<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>"+slugTitle(pn)+"
"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react' import ReactDOM from 'react-dom/client' import App from './App' import './index.css' ReactDOM.createRoot(document.getElementById('root')!).render( ) "); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react' import './App.css' function App(){ return(

"+slugTitle(pn)+"

Built with PantheraHive BOS

) } export default App "); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e} .app{min-height:100vh;display:flex;flex-direction:column} .app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px} h1{font-size:2.5rem;font-weight:700} "); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` ## Open in IDE Open the project folder in VS Code or WebStorm. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "build": "vue-tsc -b && vite build", "preview": "vite preview" }, "dependencies": { "vue": "^3.5.13", "vue-router": "^4.4.5", "pinia": "^2.3.0", "axios": "^1.7.9" }, "devDependencies": { "@vitejs/plugin-vue": "^5.2.1", "typescript": "~5.7.3", "vite": "^6.0.5", "vue-tsc": "^2.2.0" } } '); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite' import vue from '@vitejs/plugin-vue' import { resolve } from 'path' export default defineConfig({ plugins: [vue()], resolve: { alias: { '@': resolve(__dirname,'src') } } }) "); zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]} '); zip.file(folder+"tsconfig.app.json",'{ 
"compilerOptions":{ "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"], "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true, "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue", "strict":true,"paths":{"@/*":["./src/*"]} }, "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"] } '); zip.file(folder+"env.d.ts","/// "); zip.file(folder+"index.html"," "+slugTitle(pn)+"
"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue' import { createPinia } from 'pinia' import App from './App.vue' import './assets/main.css' const app = createApp(App) app.use(createPinia()) app.mount('#app') "); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue"," "); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547} "); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` Open in VS Code or WebStorm. 
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "scripts": { "ng": "ng", "start": "ng serve", "build": "ng build", "test": "ng test" }, "dependencies": { "@angular/animations": "^19.0.0", "@angular/common": "^19.0.0", "@angular/compiler": "^19.0.0", "@angular/core": "^19.0.0", "@angular/forms": "^19.0.0", "@angular/platform-browser": "^19.0.0", "@angular/platform-browser-dynamic": "^19.0.0", "@angular/router": "^19.0.0", "rxjs": "~7.8.0", "tslib": "^2.3.0", "zone.js": "~0.15.0" }, "devDependencies": { "@angular-devkit/build-angular": "^19.0.0", "@angular/cli": "^19.0.0", "@angular/compiler-cli": "^19.0.0", "typescript": "~5.6.0" } } '); zip.file(folder+"angular.json",'{ "$schema": "./node_modules/@angular/cli/lib/config/schema.json", "version": 1, "newProjectRoot": "projects", "projects": { "'+pn+'": { "projectType": "application", "root": "", "sourceRoot": "src", "prefix": "app", "architect": { "build": { "builder": "@angular-devkit/build-angular:application", "options": { "outputPath": "dist/'+pn+'", "index": "src/index.html", "browser": "src/main.ts", "tsConfig": "tsconfig.app.json", "styles": ["src/styles.css"], "scripts": [] } }, "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"} } } } } '); zip.file(folder+"tsconfig.json",'{ "compileOnSave": false, "compilerOptions": 
{"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]}, "references":[{"path":"./tsconfig.app.json"}] } '); zip.file(folder+"tsconfig.app.json",'{ "extends":"./tsconfig.json", "compilerOptions":{"outDir":"./dist/out-tsc","types":[]}, "files":["src/main.ts"], "include":["src/**/*.d.ts"] } '); zip.file(folder+"src/index.html"," "+slugTitle(pn)+" "); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser'; import { appConfig } from './app/app.config'; import { AppComponent } from './app/app.component'; bootstrapApplication(AppComponent, appConfig) .catch(err => console.error(err)); "); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; } body { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; } "); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core'; import { RouterOutlet } from '@angular/router'; @Component({ selector: 'app-root', standalone: true, imports: [RouterOutlet], templateUrl: './app.component.html', styleUrl: './app.component.css' }) export class AppComponent { title = '"+pn+"'; } "); zip.file(folder+"src/app/app.component.html","

"+slugTitle(pn)+"

Built with PantheraHive BOS

"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1} "); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core'; import { provideRouter } from '@angular/router'; import { routes } from './app.routes'; export const appConfig: ApplicationConfig = { providers: [ provideZoneChangeDetection({ eventCoalescing: true }), provideRouter(routes) ] }; "); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router'; export const routes: Routes = []; "); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install ng serve # or: npm start ``` ## Build ```bash ng build ``` Open in VS Code with Angular Language Service extension. 
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join(" "):"# add dependencies here "; zip.file(folder+"main.py",src||"# "+title+" # Generated by PantheraHive BOS print(title+" loaded") "); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+" "; zip.file(folder+"package.json",pkgJson); var fallback="const express=require("express"); const app=express(); app.use(express.json()); app.get("/",(req,res)=>{ res.json({message:""+title+" API"}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log("Server on port "+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:" "+title+" "+code+" "; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/**(.+?)**/g,"$1"); hc=hc.replace(/ {2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}