Site SEO Auditor
Run ID: 69cc98253e7fb09ff16a35ae | 2026-04-01 | SEO & Growth

Step 2: Data Comparison and Change Detection (hive_db → diff)

This crucial step in the Site SEO Auditor workflow is responsible for comparing the newly generated SEO audit data with the historical data stored in your dedicated MongoDB instance (hive_db). The primary objective is to identify, quantify, and categorize all changes, improvements, and regressions across your website's SEO landscape since the last audit. This "diff" process provides a clear, actionable overview of your site's evolving SEO health.

Purpose and Value

The "hive_db → diff" step is fundamental to a proactive, data-driven SEO strategy: it transforms raw audit data into actionable insights by surfacing newly introduced issues, confirming resolved ones, and flagging metric regressions before they affect search visibility.

Process Details

The "hive_db → diff" process involves a systematic comparison of the current audit snapshot against the most recent comprehensive audit report stored in MongoDB.

1. Current Audit Data Ingestion

Upon completion of Step 1 (the headless crawling and initial audit), a complete set of SEO metrics for every crawled page is available. This data includes:

* Meta Title (content, length, uniqueness)

* Meta Description (content, length, uniqueness)

* H1 Tag (presence, content, uniqueness)

* Image Alt Attributes (coverage, missing alts)

* Internal Link Density (count, distribution)

* Canonical Tags (presence, correctness)

* Open Graph Tags (presence, correctness for social sharing)

* Core Web Vitals (LCP, CLS, FID scores and status)

* Structured Data (presence, type, validation status)

* Mobile Viewport (correct meta tag presence)

2. Historical Data Retrieval

The system queries your hive_db (MongoDB) to retrieve the most recent SiteAuditReport document that corresponds to your site. This historical report serves as the baseline for comparison. If no prior report exists, the current audit is stored as the initial baseline, and no diff is generated for this first run.

3. Granular Comparison Engine

A sophisticated comparison algorithm then performs a detailed "diff" operation at multiple levels:

* New Pages: Identifies any URLs found in the current crawl that were not present in the previous audit.

* Removed Pages: Detects URLs present in the previous audit but no longer found or accessible in the current crawl.

* Existing Pages: For pages present in both audits, a deeper metric-by-metric comparison is initiated.

* Boolean Checks: Has an H1 appeared/disappeared? Is a canonical tag now present/missing?

* Content Checks: Has the meta title/description content changed? Has an image alt text been added/modified?

* Quantitative Checks: Has the internal link count changed? Have Core Web Vitals scores improved or regressed?

* Uniqueness Checks: Are meta titles/descriptions that were previously unique now duplicated, or vice-versa?
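The page-level portion of this comparison can be sketched as a set operation over the URLs of both snapshots, followed by a field-by-field walk for pages present in both. A minimal Node.js illustration, assuming flat per-page metric objects (the metric names `metaTitle` and `h1Present` are hypothetical, not the actual schema):

```javascript
// Sketch of the page-level diff. `prev` and `curr` map URL -> audit metrics.
// The metric names used in the test data (metaTitle, h1Present) are illustrative.
function diffAudits(prev, curr) {
  const prevUrls = new Set(Object.keys(prev));
  const currUrls = new Set(Object.keys(curr));

  // URLs only in the current crawl are new; only in the previous one, removed.
  const newPages = [...currUrls].filter((u) => !prevUrls.has(u));
  const removedPages = [...prevUrls].filter((u) => !currUrls.has(u));

  // For pages in both audits, compare metric by metric.
  const modified = [];
  for (const url of currUrls) {
    if (!prevUrls.has(url)) continue;
    const metrics = new Set([...Object.keys(prev[url]), ...Object.keys(curr[url])]);
    const changes = [];
    for (const metric of metrics) {
      const oldValue = prev[url][metric];
      const newValue = curr[url][metric];
      if (oldValue !== newValue) changes.push({ metric, oldValue, newValue });
    }
    if (changes.length) modified.push({ url, changes });
  }
  return { newPages, removedPages, modified };
}
```

The boolean, content, and quantitative checks listed above all fall out of the same per-metric walk; categorizing each change (new issue, resolved, regressed) is a second pass over `changes`.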

4. Change Identification and Categorization

The comparison engine categorizes each identified change, for example as a new issue, a resolved issue, a content change, a performance improvement, or a regression, so that the resulting report is clear and actionable.

Output of this Step

The primary output of the "hive_db → diff" step is a structured diff object, which is then integrated directly into the SiteAuditReport document being prepared for storage in MongoDB. This diff object provides a concise summary of all critical changes.

Example of diff output structure (simplified):

{
  "auditId": "current_audit_id_xyz",
  "previousAuditId": "previous_audit_id_abc",
  "summary": {
    "totalPagesAudited": 500,
    "newIssuesDetected": 25,
    "issuesResolved": 15,
    "performanceImprovements": 8,
    "performanceRegressions": 3,
    "pagesWithSignificantChanges": 40
  },
  "pageChanges": [
    {
      "url": "https://www.yourdomain.com/product-page-a",
      "type": "modified",
      "changes": [
        {
          "metric": "metaTitle",
          "status": "changed",
          "oldValue": "Old Product Title",
          "newValue": "New Optimized Product Title",
          "issueType": "content_change"
        },
        {
          "metric": "h1Tag",
          "status": "new_issue",
          "description": "H1 tag is now missing.",
          "issueType": "missing_h1"
        },
        {
          "metric": "coreWebVitals.LCP",
          "status": "improved",
          "oldValue": "3.5s",
          "newValue": "2.2s",
          "delta": "-1.3s",
          "issueType": "performance_improvement"
        }
      ]
    },
    {
      "url": "https://www.yourdomain.com/blog/new-article",
      "type": "new_page",
      "initialStatus": {
        "metaTitle": "OK",
        "h1Tag": "OK",
        "imageAltCoverage": "80% (2 missing)",
        "coreWebVitals.LCP": "3.1s (Needs Improvement)"
      }
    },
    {
      "url": "https://www.yourdomain.com/old-promotion",
      "type": "removed_page",
      "status": "404 Not Found"
    }
  ],
  "globalIssues": [
    {
      "metric": "duplicateMetaTitles",
      "status": "new_issue",
      "description": "5 new instances of duplicate meta titles found across the site.",
      "affectedPages": ["/page1", "/page2", "/page3"]
    }
  ]
}
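The site-wide uniqueness check behind a `duplicateMetaTitles` entry like the one above can be sketched as grouping pages by normalized title and reporting any group with more than one member. The normalization rules here (trim and lower-case) are illustrative assumptions:

```javascript
// Group pages by meta title and flag titles shared by more than one URL.
// Titles are trimmed and lower-cased so trivial whitespace/case variants
// still count as duplicates; adjust normalization as needed.
function findDuplicateTitles(pages) {
  const byTitle = new Map(); // normalized title -> [urls]
  for (const { url, metaTitle } of pages) {
    const key = (metaTitle || '').trim().toLowerCase();
    if (!key) continue; // a missing title is a separate issue type
    if (!byTitle.has(key)) byTitle.set(key, []);
    byTitle.get(key).push(url);
  }
  return [...byTitle.entries()]
    .filter(([, urls]) => urls.length > 1)
    .map(([title, urls]) => ({ title, affectedPages: urls }));
}
```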

Step 1 of 5: Site Crawl Initiation (Puppeteer-driven)

This marks the crucial first phase of your Site SEO Auditor workflow. In this step, our headless crawler, powered by Puppeteer, systematically navigates and captures the content of every discoverable page on your website. This foundational crawl ensures that subsequent audit steps have a complete and accurate dataset to analyze.


1. Step Overview: Comprehensive Site Crawl

The primary objective of this step is to meticulously traverse your entire website, identify all unique, crawlable URLs, and collect the fully rendered HTML content for each. This process simulates a real user's browser experience, which is vital for accurately assessing modern, JavaScript-heavy websites.

2. Purpose of the Crawl

  • Complete Page Discovery: Identify every page on your site that is accessible and linked, ensuring no critical pages are missed from the SEO audit.
  • Accurate Content Capture: Obtain the fully rendered HTML and DOM structure, including content generated by JavaScript, which is essential for auditing elements like meta tags, H1s, image alt attributes, and internal links that might be dynamically injected.
  • Foundation for Audit: Provide the raw data (URLs and HTML) that the subsequent SEO audit steps will process against the 12-point checklist.
  • Initial Performance Metrics: Capture initial Core Web Vitals metrics (e.g., First Contentful Paint, Largest Contentful Paint, Cumulative Layout Shift) directly during the page load process.

3. Crawling Mechanism: Headless Puppeteer

We leverage Puppeteer, a Node.js library, to control a headless Chrome or Chromium instance. This choice is deliberate and offers significant advantages over traditional HTTP request-based crawlers:

  • Full Browser Emulation: Puppeteer launches a genuine web browser (headless), allowing it to execute JavaScript, render CSS, load images, and interact with the DOM exactly as a user's browser would. This is critical for sites that rely heavily on client-side rendering or dynamic content.
  • Accurate SEO Data: Ensures that meta tags, structured data, H1s, and other elements that might be dynamically generated or modified by JavaScript are captured in their final, rendered state, mirroring what search engine bots (especially Google's evergreen crawler) would see.
  • Core Web Vitals Measurement: Enables the direct measurement of user-centric performance metrics like Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS) as the page loads, providing real-world performance data.
  • Screenshot Capability: Allows for capturing a visual representation of each page, useful for verification and debugging.

4. Crawl Configuration and Execution Details

The crawl is executed with the following parameters and methodologies to ensure thoroughness and accuracy:

  • Starting Point:

* The crawl initiates from your provided primary domain URL (e.g., https://www.yourdomain.com).

* Before starting the full crawl, an attempt is made to locate and parse your sitemap.xml file(s) (e.g., https://www.yourdomain.com/sitemap.xml). URLs from the sitemap are prioritized and added to the crawl queue.

  • Page Discovery:

* Internal Link Following: As each page is visited, Puppeteer extracts all valid internal <a> tags (hyperlinks) within the same domain. These newly discovered URLs are added to a unique queue for subsequent crawling.

* Sitemap Integration: URLs discovered from the sitemap are cross-referenced with those found via internal linking to ensure comprehensive coverage and prevent redundant crawls.

  • Scope Definition:

* The crawler is strictly confined to the specified domain. It will not follow external links unless explicitly configured otherwise (which is not the default for an SEO site audit).

* Specific URL patterns (e.g., ?utm_source=, _ga=) are ignored via configurable parameters to prevent crawling duplicate content and to reduce crawl time.

  • Browser Emulation:

* User Agent: The crawler will emulate a standard desktop user agent by default. Optionally, it can simulate a mobile user agent and viewport to assess mobile-specific rendering and content.

* Viewport: A standard desktop viewport (e.g., 1920x1080) is used for rendering unless mobile emulation is activated.

  • Concurrency and Rate Limiting:

* To prevent overwhelming your server, the crawler operates with controlled concurrency (e.g., X simultaneous browser instances/pages).

* Rate limiting is applied to introduce small delays between page requests, further reducing server load. These parameters are adjustable based on site size and server capacity.

  • Data Collection per Page: For each successfully crawled URL, the following data points are collected:

* URL: The canonical URL of the page.

* Status Code: The HTTP status code received (e.g., 200, 301, 404).

* Redirect Chain: If a redirect occurs, the full chain of URLs from the initial request to the final destination.

* Raw HTML: The complete HTML content of the page after all JavaScript has executed and the DOM is stable.

* DOM Snapshot: A representation of the final DOM structure.

* Initial Core Web Vitals: First Contentful Paint (FCP), Largest Contentful Paint (LCP), and Cumulative Layout Shift (CLS) are measured during the page load event.

* Console Logs & Network Requests: Captured for debugging and advanced analysis (e.g., identifying broken resources or JavaScript errors).
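Several of the configuration rules above, domain confinement and ignored URL parameters in particular, reduce to a single normalization predicate applied to every discovered link before it enters the crawl queue. A minimal sketch using the standard `URL` class (the exact parameter list is an assumption based on the examples given):

```javascript
// Decide whether a discovered link belongs in the crawl queue:
// it must share the audited hostname, and tracking parameters are
// stripped so parameter-only variants collapse into one canonical URL.
const IGNORED_PARAMS = ['utm_source', 'utm_medium', 'utm_campaign', '_ga'];

function normalizeForCrawl(href, baseHostname) {
  let url;
  try {
    url = new URL(href);
  } catch {
    return null; // not an absolute, parseable URL
  }
  if (url.hostname !== baseHostname) return null; // external link: skip
  for (const p of IGNORED_PARAMS) url.searchParams.delete(p);
  url.hash = ''; // fragments never change server-rendered content
  return url.toString();
}
```

A `null` return means "do not queue"; any non-null result is deduplicated against the set of already-queued URLs.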

5. Output and Deliverables of this Step

Upon completion of the crawl, this step generates a structured dataset that serves as the input for the subsequent audit phases:

  • Comprehensive URL List: A unique list of all discoverable and successfully crawled URLs on your website.
  • Page Snapshots: For each URL:

* The full, rendered HTML content.

* The HTTP status code and any redirect paths.

* Initial Core Web Vitals metrics (FCP, LCP, CLS values).

* A timestamp of when the page was crawled.

  • Crawl Log: A detailed log of the crawl process, including any warnings or errors encountered (e.g., broken links, timeout issues, pages skipped due to robots.txt).

6. Robustness and Error Handling

The crawler is designed with resilience in mind:

  • Timeout Management: Configurable timeouts for page loading ensure that the crawl doesn't get stuck on unresponsive pages.
  • Retry Mechanisms: Pages that fail to load or return transient errors (e.g., 500 server errors) are retried a configurable number of times.
  • HTTP Error Handling: All HTTP status codes (2xx, 3xx, 4xx, 5xx) are recorded. Specific handling for 4xx (client errors) and 5xx (server errors) ensures these are flagged for review.
  • Resource Management: Efficiently manages browser resources to prevent memory leaks and ensure stable operation over large crawls.
  • Robots.txt Adherence: The crawler respects your robots.txt file, ensuring that pages disallowed for crawling are not accessed.
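The retry behaviour described above can be wrapped around any page-load call. A sketch with exponential backoff, where the attempt count and base delay are illustrative defaults rather than the crawler's actual settings:

```javascript
// Retry `fn` up to `maxAttempts` times, doubling the wait between tries.
// Only transient failures (e.g., 500s, timeouts) should reach this point;
// 4xx responses are recorded as findings, not retried.
async function withRetries(fn, maxAttempts = 3, baseDelayMs = 200) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn(attempt);
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        const wait = baseDelayMs * 2 ** (attempt - 1); // 200ms, 400ms, ...
        await new Promise((resolve) => setTimeout(resolve, wait));
      }
    }
  }
  throw lastError; // all attempts exhausted: flag the page in the crawl log
}
```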

7. Next Steps

The comprehensive dataset generated in this step will be passed to Step 2: Data Comparison and Change Detection (hive_db → diff), where each page's findings against the 12-point SEO checklist are compared with the previous audit report stored in MongoDB.


This detailed crawl provides the robust and accurate foundation necessary for a truly insightful and actionable SEO audit of your website.

Impact on Subsequent Workflow Steps

The output of the "hive_db → diff" step is critical for driving the subsequent stages of the Site SEO Auditor workflow:

  • Gemini AI Fix Generation (Step 3): Only newly detected issues or regressed metrics identified in the diff will be sent to Gemini for generating specific, actionable fixes. This ensures that Gemini focuses its efforts on current problems requiring immediate attention, rather than re-analyzing already resolved or unchanged issues.
  • SiteAuditReport Storage (Step 4): The complete SiteAuditReport, including this detailed diff object, is saved to your hive_db (MongoDB). This creates a persistent record of your site's SEO evolution, allowing for long-term trend analysis and historical lookup.
  • Customer Deliverable (Step 5): The diff report forms a core component of the final, user-facing deliverable. It provides the most valuable insights into what has changed, what has improved, and what needs immediate attention.

Benefits for Your SEO Strategy

By meticulously comparing audit results, this step empowers you with:

  • Dynamic Issue Prioritization: Focus resources on the most pressing new or worsening SEO issues.
  • Performance Trend Analysis: Visually track the impact of your optimizations on Core Web Vitals and other key metrics over time.
  • Proactive Problem Solving: Catch regressions early before they significantly impact search visibility.
  • Clear Accountability: Demonstrate the value of SEO initiatives by showcasing resolved issues and measurable improvements.

This robust diffing capability ensures that your SEO monitoring is not just a snapshot, but a continuous, intelligent evolution of your site's search engine performance.


Workflow Step 3 of 5: Gemini AI Fix Generation (Batch)

This document details the execution of Step 3 in your "Site SEO Auditor" workflow: leveraging Google Gemini's advanced AI capabilities for batch generation of precise, actionable fixes for identified SEO issues.


1. Current Workflow Step: Gemini AI Fix Generation

Following the comprehensive site crawl and audit performed by our headless crawler (Step 1) and the change detection pass (Step 2), this crucial phase focuses on transforming identified SEO deficiencies into concrete, executable solutions. Rather than simply reporting problems, we utilize Google Gemini to intelligently analyze each broken element and automatically generate the exact code-level fixes or content recommendations required. This process is executed in a batch, ensuring efficient and scalable remediation across your entire site.

2. Purpose of Gemini's Role in Fix Generation

The primary objective of this step is to automate and accelerate the remediation of SEO issues. Manually identifying and crafting fixes for numerous pages and diverse SEO problems can be time-consuming and prone to human error. Gemini's role is to:

  • Provide Precise Solutions: Generate exact code snippets, content suggestions, or configuration instructions tailored to each specific issue.
  • Ensure SEO Best Practices: Formulate fixes that adhere to the latest search engine guidelines and industry best practices.
  • Enhance Efficiency: Process a large volume of identified issues concurrently, significantly reducing the time to resolution.
  • Reduce Manual Effort: Minimize the need for human intervention in diagnosing problems and drafting initial solutions, allowing your team to focus on implementation and strategic oversight.

3. Input for Fix Generation: Detailed Audit Findings

For each identified SEO issue, Gemini receives a structured input package containing comprehensive context from the audit. This typically includes:

  • Page URL: The specific URL where the issue was found.
  • Issue Type: The category of the SEO problem (e.g., "Missing H1 Tag," "Duplicate Meta Description," "Image Missing Alt Text," "Low LCP Score").
  • Severity Level: An indication of the impact of the issue (e.g., Critical, High, Medium, Low).
  • Current HTML Snippet/Context: The relevant section of the page's source code where the error occurs, or surrounding content for contextual understanding.
  • Audit Metrics: Specific data points related to the issue (e.g., current LCP value, duplicate meta description text, image source URL).
  • SEO Checklist Reference: The specific point from the 12-point SEO checklist that was violated.

Example Input for a Missing H1:

  • URL: https://www.yourdomain.com/blog/article-title
  • Issue Type: Missing H1 Tag
  • Severity: High
  • Current HTML Snippet: <h2>Article Title</h2><p>...</p>
  • Context: Page content suggests "Article Title" is the main heading.
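Assembling this input package for one finding is mostly a matter of projecting audit fields into a stable shape before the batch call. A sketch (the helper and its field names are hypothetical, chosen to mirror the list above):

```javascript
// Assemble the structured context package sent to Gemini for one finding.
// Field names mirror the input list above; `checklistRef` ties the finding
// back to the 12-point checklist item that was violated.
function buildFixRequest(finding) {
  return {
    pageUrl: finding.url,
    issueType: finding.issueType,        // e.g. "Missing H1 Tag"
    severity: finding.severity,          // Critical | High | Medium | Low
    htmlSnippet: finding.htmlSnippet,    // surrounding markup for context
    auditMetrics: finding.metrics || {}, // e.g. { lcpMs: 3500 }
    checklistRef: finding.checklistItem, // e.g. "h1_presence"
  };
}
```

Batching then reduces to mapping `buildFixRequest` over all findings and submitting the resulting array.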

4. Gemini's AI-Powered Analysis & Fix Strategy

Upon receiving the detailed audit findings, Gemini employs its advanced reasoning and code generation capabilities:

  1. Contextual Understanding: Gemini first analyzes the provided URL, the surrounding HTML, and the page's overall content to understand the context of the issue. For instance, for a missing alt text, it examines the image's filename, adjacent text, and the article's topic.
  2. Problem Diagnosis: It confirms the specific nature of the problem based on the issue type and provided data.
  3. Best Practice Adherence: Gemini then consults its extensive knowledge base of SEO best practices, W3C standards, and web development guidelines to formulate an optimal solution.
  4. Code-Level Generation: For technical issues, Gemini generates precise HTML, CSS, or JavaScript code snippets. For content-related issues, it provides specific textual recommendations.
  5. Batch Processing: Gemini processes multiple issues across different pages in a highly efficient, parallelized manner, ensuring that fixes for your entire site's audit findings are generated promptly.

5. Output: Batch-Generated SEO Fixes

The output from this step is a comprehensive collection of actionable fixes, categorized and structured for ease of implementation. Each fix includes:

  • Original Issue Reference: Links back to the specific audit finding.
  • Proposed Fix Type: HTML, CSS, JavaScript, Content Recommendation, Configuration.
  • Detailed Instructions: Clear, step-by-step guidance on how to apply the fix.
  • Generated Code/Content: The exact code snippet or content suggestion ready for implementation.
  • Rationale (Optional): A brief explanation of why the proposed fix is optimal.

Here are examples of the types of fixes generated for common SEO issues:

  • Meta Title/Description Uniqueness:

* Fix: Suggested unique <title> and <meta name="description"> tags, dynamically generated based on page content and target keywords.

* Example: <title>New Unique Product Name - Shop Now | YourBrand</title>

  • H1 Presence:

* Fix: HTML snippet for a single, descriptive <h1> tag, derived from the page's primary content.

* Example: <h1>Your Blog Post Title Here</h1>

  • Image Alt Coverage:

* Fix: Descriptive alt attribute values for images, generated by analyzing image context and surrounding text.

* Example: <img src="product.jpg" alt="[Descriptive Alt Text for Product Image]">

  • Canonical Tags:

* Fix: Correct <link rel="canonical" href="[Canonical URL]"> tag, ensuring proper indexing.

* Example: <link rel="canonical" href="https://www.yourdomain.com/preferred-version-of-page">

  • Open Graph Tags:

* Fix: Complete set of Open Graph tags (og:title, og:description, og:image, og:type) tailored for social media sharing.

* Example: <meta property="og:title" content="Page Title for Social">

  • Core Web Vitals (LCP/CLS/FID):

* Fix: Code snippets for image optimization (e.g., loading="lazy", srcset), font preloading (<link rel="preload">), specifying image/video dimensions, or deferring non-critical JavaScript.

* Example (CLS): <img src="image.jpg" width="600" height="400" alt="description">

* Example (LCP/FID): <link rel="preload" href="/font.woff2" as="font" type="font/woff2" crossorigin>

  • Structured Data Presence:

* Fix: Valid JSON-LD schema markup (e.g., Article, Product, LocalBusiness, FAQPage) generated based on page content, ready to be embedded.

* Example:


        <script type="application/ld+json">
        {
          "@context": "https://schema.org",
          "@type": "Article",
          "headline": "...",
          "author": { "@type": "Person", "name": "..." }
        }
        </script>
  • Mobile Viewport:

* Fix: Correct <meta name="viewport" content="width=device-width, initial-scale=1"> tag.

All generated fixes are meticulously validated by Gemini to minimize syntax errors and ensure compatibility.

6. Actionability & Integration

The output from this batch_generate step is directly consumable by your development or content teams. These fixes are then stored within your MongoDB SiteAuditReport as part of the "after" state, allowing for a clear "before/after" diff. This historical record enables you to track the impact of implemented changes over time and measure the effectiveness of the remediation efforts.

7. Next Steps

The generated fixes are now ready for review and implementation. In the subsequent steps of the workflow, these fixes will be presented in a consolidated report, allowing your team to prioritize and apply them to your website, ultimately enhancing your site's SEO performance.


Step 4 of 5: hive_db → upsert - Data Persistence and Reporting

This step is critical for storing all audit findings, generated fixes, and historical data within your dedicated MongoDB instance (hive_db). It ensures that every aspect of the SEO audit, from individual page issues to overall site performance, is meticulously recorded and available for future analysis and reporting.

1. Purpose of this Step

The primary goal of the hive_db → upsert operation is to:

  • Persist Audit Results: Store the comprehensive SEO audit report for the entire site, including all pages scanned and their respective issues.
  • Integrate Gemini Fixes: Embed the exact fixes generated by Gemini directly into the audit report for each identified issue.
  • Track Historical Performance: Enable "before/after" analysis by diffing the current audit results against the previous run, producing a clear change report.
  • Support Reporting: Create a structured dataset that powers your SEO dashboard, historical trends, and issue tracking.

2. Data Model: SiteAuditReport Document

All audit data is consolidated into a single SiteAuditReport document in a dedicated MongoDB collection (e.g., site_audit_reports). This document is designed for comprehensive tracking and easy retrieval.

Key Fields and Structure:

  • _id (ObjectId): MongoDB's unique document identifier.
  • auditId (UUID): A unique identifier for this specific audit run.
  • siteUrl (String): The canonical URL of the website audited (e.g., https://www.example.com).
  • timestamp (Date): The exact date and time when this audit was completed.
  • status (String): Current status of the audit (e.g., completed, processing, failed).
  • totalPagesAudited (Number): The total number of unique pages crawled and audited.
  • overallScore (Number): A calculated aggregate score (0-100) reflecting the overall SEO health based on the 12-point checklist.
  • pages (Array of Objects): An array containing detailed audit results for each individual page.

* pageUrl (String): The URL of the specific page.

* pageStatus (Number): HTTP status code encountered (e.g., 200, 404, 301).

* seoIssues (Array of Objects): Issues identified on this page that failed the 12-point checklist.

* checklistItem (String): The specific SEO checklist item that failed (e.g., meta_title_uniqueness, h1_presence, image_alt_coverage).

* severity (String): The impact level of the issue (critical, high, medium, low, info).

* description (String): A human-readable explanation of the issue.

* elementLocator (String, Optional): CSS selector or XPath to locate the problematic element on the page (e.g., img[src="broken.jpg"]).

* currentValue (String/Object): The value or state found during the audit (e.g., "" for missing meta description, false for no H1).

* recommendedValue (String/Object): The ideal or target value/state (e.g., "Unique, descriptive title").

* geminiFix (String): The exact, actionable code snippet or instruction generated by Gemini to resolve this specific issue.

* coreWebVitals (Object):

* LCP (Number): Largest Contentful Paint (in ms).

* CLS (Number): Cumulative Layout Shift.

* FID (Number): First Input Delay (in ms).

* metaData (Object):

* title (String): The page's <title> tag content.

* description (String): The page's <meta name="description"> content.

* canonical (String): The page's <link rel="canonical"> URL.

* ogTags (Object): Open Graph tags (e.g., og:title, og:description, og:image).

* h1Presence (Boolean): true if an <h1> tag is present, false otherwise.

* imageAltCoverage (Object):

* totalImages (Number): Total images on the page.

* imagesWithAlt (Number): Images with non-empty alt attributes.

* coveragePercentage (Number): Percentage of images with alt attributes.

* internalLinkDensity (Number): Count of internal links found on the page.

* structuredDataPresence (Boolean): true if any structured data (JSON-LD, Microdata, RDFa) is detected, false otherwise.

* mobileViewport (Boolean): true if a <meta name="viewport"> tag is correctly configured, false otherwise.

  • diffReport (Object): This object captures the "before/after" changes compared to the previous audit run for the same siteUrl.

* previousAuditId (UUID): The auditId of the previous audit run used for comparison.

* newIssues (Array of Objects): Issues identified in this audit that were not present in the previous one. Each object contains pageUrl, checklistItem, description, severity, and geminiFix.

* resolvedIssues (Array of Objects): Issues from the previous audit that are no longer present in this audit. Each object contains pageUrl, checklistItem, description.

* changedMetrics (Array of Objects): Significant changes in key metrics (e.g., overall score, Core Web Vitals).

* metric (String): The metric that changed (e.g., overallScore, LCP, CLS).

* pageUrl (String, Optional): If the metric is page-specific.

* previousValue (Number/String): Value from the previous audit.

* currentValue (Number/String): Value from the current audit.

* change (Number): The difference (current - previous).

  • nextScheduledRun (Date): The timestamp for the next automatically scheduled audit for this site.
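The changedMetrics entries above reduce to deltas between matching numeric fields of two report summaries. A sketch, where the significance threshold and metric names are illustrative:

```javascript
// Compare matching numeric metrics between two audit summaries and emit
// one changedMetrics-style entry per metric whose delta exceeds `threshold`.
function computeChangedMetrics(previous, current, threshold = 0) {
  const changes = [];
  for (const metric of Object.keys(current)) {
    if (!(metric in previous)) continue; // no baseline for this metric
    const change = current[metric] - previous[metric]; // current - previous
    if (Math.abs(change) > threshold) {
      changes.push({
        metric,
        previousValue: previous[metric],
        currentValue: current[metric],
        change,
      });
    }
  }
  return changes;
}
```

Setting `threshold` above zero filters out noise, so only "significant" changes (as the field description puts it) reach the report.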

3. Upsert Logic

The upsert operation works as follows:

  1. Check for Existing Data: The system first attempts to find a SiteAuditReport for the given siteUrl that represents the immediately preceding audit run.
  2. Generate Diff: If a previous report is found, a detailed diffReport is generated by comparing the current audit's findings against the previous one. This includes identifying new issues, resolved issues, and changes in key metrics.
  3. Insert or Update:

* If no previous report exists (first audit for this site), a new SiteAuditReport document is inserted.

* If a previous report exists, a new SiteAuditReport document is inserted, incorporating the generated diffReport. This maintains a historical record of each audit run rather than overwriting, allowing for full historical analysis.

  4. Index Optimization: Appropriate indexes are created on fields like siteUrl, timestamp, and status to ensure fast query performance for reporting and historical lookups.

4. Deliverable and Value to Customer

Upon completion of this step, the following value is delivered:

  • Comprehensive Audit Record: A complete, detailed record of your site's SEO performance at a specific point in time, stored securely in hive_db.
  • Actionable Fixes: Every identified issue comes with an exact, Gemini-generated fix, making it incredibly easy for your team to implement improvements.
  • Clear Before/After Comparison: The diffReport provides an immediate understanding of what has changed since the last audit. You can quickly see:

* What new issues have emerged.

* Which previous issues have been successfully resolved.

* How key performance metrics (like Core Web Vitals or overall score) have evolved.

  • Historical Tracking: The hive_db now contains a chronological series of SiteAuditReport documents, enabling long-term trend analysis and demonstrating the ROI of your SEO efforts.
  • Foundation for Reporting: This structured data is the backbone for any future dashboards, custom reports, and alerts you might wish to generate, providing a single source of truth for your site's SEO health.

This stored data is now ready to be consumed by other services for visualization, notification, or further analysis.


Step 5 of 5: hive_db Conditional Update - Site SEO Auditor Report Finalization

This final step in the "Site SEO Auditor" workflow is critical for persisting the comprehensive audit results into your dedicated MongoDB database (hive_db). It ensures that all findings, proposed fixes, and crucial historical comparisons are securely stored and readily accessible for your review.

Step Overview: Data Persistence and Historical Context

The hive_db → conditional_update operation focuses on:

  1. Storing the Latest Audit Results: Persisting all data gathered by the headless crawler and analyzed by Gemini.
  2. Establishing a "Before/After" Diff: Intelligently linking the current audit results with the previous audit's findings to provide a clear comparison and highlight changes over time.
  3. Enabling Trend Analysis: Laying the groundwork for tracking SEO performance and issue resolution across multiple audit cycles.

Data Model and Storage: The SiteAuditReport

Each successful audit run generates a new, comprehensive SiteAuditReport document in your hive_db MongoDB instance. This document is meticulously structured to capture every detail of the audit.

Key Fields within a SiteAuditReport Document:

  • audit_id (String, Unique): A unique identifier for each specific audit run.
  • site_url (String): The URL of the website that was audited.
  • audit_date (Date/Timestamp): The exact date and time the audit was completed.
  • audit_trigger (String): Indicates how the audit was initiated (e.g., "scheduled_weekly", "on_demand").
  • current_audit_results (Object):

* pages_audited_count (Number): Total number of pages successfully crawled and audited.

* seo_checklist_summary (Object): Aggregated pass/fail status for the 12-point checklist.

* page_details (Array of Objects): Detailed findings for each individual page crawled, including:

* page_url

* meta_title (content, length, uniqueness check)

* meta_description (content, length, uniqueness check)

* h1_presence (boolean, content)

* image_alt_coverage (percentage, list of missing alt texts)

* internal_link_density (count, list of links)

* canonical_tag (present, correct URL)

* open_graph_tags (presence, key properties)

* core_web_vitals (LCP, CLS, FID scores)

* structured_data_presence (boolean, type detected)

* mobile_viewport (presence, configuration)

* broken_elements (Array of Objects: type, selector, issue description)

  • gemini_fixes_proposed (Object):

* summary (String): A high-level overview of critical issues and recommended fixes.

* page_specific_fixes (Array of Objects): Detailed, actionable fixes for specific broken elements on individual pages, generated by Gemini. Each object will include:

* page_url

* element_type

* issue_description

* recommended_fix (exact code or content change)

* severity

  • previous_audit_results_ref (String/Object ID, Optional): A reference to the audit_id or a summarized version of the immediately preceding audit report for the same site_url. This is crucial for the diff.
  • audit_diff_summary (Object, Optional):
      ◦ new_issues_identified (Array): List of SEO issues present in the current_audit_results that were not present in the previous_audit_results_ref.
      ◦ issues_resolved (Array): List of SEO issues present in the previous_audit_results_ref that are no longer present in the current_audit_results.
      ◦ performance_changes (Object): Summary of changes in Core Web Vitals (e.g., LCP improved by X ms).
      ◦ overall_score_change (Number): A calculated change in an aggregated SEO health score (if applicable).
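Putting the fields above together, a trimmed SiteAuditReport might look like the sketch below. All values are hypothetical and shown only to illustrate the document's shape; one page and one fix are kept for brevity.

```python
# Illustrative SiteAuditReport document (hypothetical values throughout).
report = {
    "audit_id": "audit_2026_04_01_001",
    "site_url": "https://www.example.com",
    "audit_date": "2026-04-01T06:00:00Z",
    "audit_trigger": "scheduled_weekly",
    "current_audit_results": {
        "pages_audited_count": 128,
        "seo_checklist_summary": {"passed": 10, "failed": 2},
        "page_details": [{
            "page_url": "https://www.example.com/pricing",
            "meta_title": {"content": "Pricing", "length": 7, "unique": True},
            "h1_presence": {"present": False, "content": None},
            "core_web_vitals": {"LCP": 2450, "CLS": 0.04, "FID": 18},
            "broken_elements": [{"type": "h1", "selector": "body",
                                 "issue": "No <h1> tag found"}],
        }],
    },
    "gemini_fixes_proposed": {
        "summary": "1 critical issue: missing H1 on /pricing.",
        "page_specific_fixes": [{
            "page_url": "https://www.example.com/pricing",
            "element_type": "h1",
            "issue_description": "No <h1> tag found",
            "recommended_fix": "<h1>Simple, Transparent Pricing</h1>",
            "severity": "high",
        }],
    },
    "previous_audit_results_ref": "audit_2026_03_25_001",
    "audit_diff_summary": {
        "new_issues_identified": [],
        "issues_resolved": ["meta_description_missing:/about"],
        "performance_changes": {"LCP": -300},  # LCP improved by 300 ms
        "overall_score_change": 4,
    },
}
```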

Conditional Logic Explained: Ensuring a Robust Audit Trail

The "conditional_update" aspect of this step ensures that each new audit report is correctly contextualized:

  1. Retrieve Prior Audit: Before inserting the new SiteAuditReport, the system first queries hive_db to find the most recently completed SiteAuditReport for the site_url being audited.
  2. Conditional previous_audit_results_ref Population:
      ◦ If a prior report is found: The previous_audit_results_ref field in the new SiteAuditReport document is populated with a reference to this immediately preceding audit. A detailed audit_diff_summary is then calculated by comparing current_audit_results with the retrieved previous results.
      ◦ If no prior report exists (first audit): The previous_audit_results_ref and audit_diff_summary fields will be null or empty, indicating this is the baseline audit.
  3. Insert New Report: The fully constructed SiteAuditReport document, complete with current findings, Gemini fixes, and (if applicable) a reference to the previous state and a diff summary, is then inserted as a new document into the SiteAuditReports collection in hive_db. This ensures a complete, immutable history of all audit runs.
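In code, the conditional logic above reduces to a single decision point: populate the reference and diff only when a prior report exists, then insert unconditionally. The Python sketch below is a hypothetical helper, not the production implementation; it assumes each audit's issues have been flattened to (page_url, checklist_item) pairs, which is an implementation detail the schema does not fix.

```python
def build_report(current, prior):
    """Assemble the new SiteAuditReport body (steps 1-2 above).

    current: the current_audit_results object, carrying a flattened
    "issues" list (an assumption for this sketch).
    prior: the most recent stored report for the same site_url,
    or None on the first (baseline) audit.
    """
    doc = {
        "current_audit_results": current,
        "previous_audit_results_ref": None,  # baseline defaults
        "audit_diff_summary": None,
    }
    if prior is not None:
        doc["previous_audit_results_ref"] = prior["audit_id"]

        def key(issue):
            return (issue["page_url"], issue["checklist_item"])

        now = {key(i) for i in current["issues"]}
        before = {key(i) for i in prior["current_audit_results"]["issues"]}
        doc["audit_diff_summary"] = {
            "new_issues_identified": sorted(now - before),
            "issues_resolved": sorted(before - now),
        }
    # Step 3 would then be: reports_collection.insert_one(doc)
    return doc
```

Note that the document is always inserted as new, never updated in place; that append-only behavior is what preserves the immutable audit history described above.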

Actionable Outcomes & Reporting

The detailed storage in hive_db empowers you with several critical capabilities:

  • Comprehensive Issue Identification: Every SEO issue, from missing H1s to poor Core Web Vitals, is documented page-by-page.
  • Exact Fixes: Gemini's generated fixes provide immediate, actionable solutions, often with specific code snippets or content recommendations.
  • Performance Tracking: The "before/after diff" allows you to instantly see the impact of your SEO efforts, identifying areas of improvement and any new regressions.
  • Historical Analysis: Over time, you can track trends in your site's SEO health, identify recurring issues, and measure long-term progress.
  • Reporting & Dashboards: The structured data in hive_db is perfectly suited for integration with reporting tools and custom dashboards, providing a visual overview of your site's SEO performance.

Next Steps: Accessing Your Site Audit Reports

Your SiteAuditReport documents are now securely stored. You can:

  • Access via PantheraHive UI: A dedicated section within your PantheraHive dashboard will present these reports in an intuitive, user-friendly format, highlighting the diffs and Gemini's recommendations.
  • Direct Database Access: For advanced users, you can directly query your hive_db MongoDB instance to retrieve the raw SiteAuditReport documents for custom analysis or integration with your internal systems.
  • Automated Notifications: You will receive notifications (e.g., email, Slack) upon completion of each audit, often including a summary of key changes and a link to the full report.
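For direct database access, the newest report for a site can be fetched with a pymongo-style find_one sorted on audit_date descending. The sketch below keeps the query in a small helper and exercises it against an in-memory stand-in, so no server is needed; the database and collection names follow the structure described above and should be treated as assumptions.

```python
def latest_report(collection, site_url):
    """Most recent SiteAuditReport for site_url, or None if never audited.

    `collection` is anything exposing a pymongo-style find_one();
    in production it would be MongoClient(uri)["hive_db"]["SiteAuditReports"].
    """
    return collection.find_one(
        {"site_url": site_url},
        sort=[("audit_date", -1)],  # newest first
    )


class _InMemoryCollection:
    """Tiny stand-in so the helper can run without a MongoDB server."""

    def __init__(self, docs):
        self._docs = docs

    def find_one(self, query, sort=None):
        hits = [d for d in self._docs
                if all(d.get(k) == v for k, v in query.items())]
        for field, direction in reversed(sort or []):
            hits.sort(key=lambda d: d[field], reverse=(direction == -1))
        return hits[0] if hits else None


reports = _InMemoryCollection([
    {"audit_id": "a1", "site_url": "https://www.example.com",
     "audit_date": "2026-03-01"},
    {"audit_id": "a2", "site_url": "https://www.example.com",
     "audit_date": "2026-04-01"},
])
print(latest_report(reports, "https://www.example.com")["audit_id"])  # → a2
```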

This final step completes a powerful workflow, transforming raw crawl data into actionable intelligence and a clear historical record of your website's SEO health.

\n ```\n* **Mobile Viewport:**\n * **Fix:** Correct `` tag.\n\nAll generated fixes are meticulously validated by Gemini to minimize syntax errors and ensure compatibility.\n\n### 6. Actionability & Integration\n\nThe output from this `batch_generate` step is directly consumable by your development or content teams. These fixes are then stored within your MongoDB `SiteAuditReport` as part of the \"after\" state, allowing for a clear \"before/after\" diff. This historical record enables you to track the impact of implemented changes over time and measure the effectiveness of the remediation efforts.\n\n### 7. Next Steps\n\nThe generated fixes are now ready for review and implementation. In the subsequent steps of the workflow, these fixes will be presented in a consolidated report, allowing your team to prioritize and apply them to your website, ultimately enhancing your site's SEO performance.\n\n## Step 4 of 5: `hive_db` → `upsert` - Data Persistence and Reporting\n\nThis step is critical for storing all audit findings, generated fixes, and historical data within your dedicated MongoDB instance (`hive_db`). It ensures that every aspect of the SEO audit, from individual page issues to overall site performance, is meticulously recorded and available for future analysis and reporting.\n\n### 1. Purpose of this Step\n\nThe primary goal of the `hive_db` → `upsert` operation is to:\n\n* **Persist Audit Results:** Store the comprehensive SEO audit report for the entire site, including all pages scanned and their respective issues.\n* **Integrate Gemini Fixes:** Embed the exact fixes generated by Gemini directly into the audit report for each identified issue.\n* **Track Historical Performance:** Enable \"before/after\" comparisons by comparing the current audit results with the previous run, providing a clear diff report.\n* **Support Reporting:** Create a structured dataset that powers your SEO dashboard, historical trends, and issue tracking.\n\n### 2. 
Data Model: `SiteAuditReport` Document\n\nAll audit data is consolidated into a single `SiteAuditReport` document in a dedicated MongoDB collection (e.g., `site_audit_reports`). This document is designed for comprehensive tracking and easy retrieval.\n\n#### Key Fields and Structure:\n\n* **`_id` (ObjectId):** MongoDB's unique document identifier.\n* **`auditId` (UUID):** A unique identifier for this specific audit run.\n* **`siteUrl` (String):** The canonical URL of the website audited (e.g., `https://www.example.com`).\n* **`timestamp` (Date):** The exact date and time when this audit was completed.\n* **`status` (String):** Current status of the audit (e.g., `completed`, `processing`, `failed`).\n* **`totalPagesAudited` (Number):** The total number of unique pages crawled and audited.\n* **`overallScore` (Number):** A calculated aggregate score (0-100) reflecting the overall SEO health based on the 12-point checklist.\n* **`pages` (Array of Objects):** An array containing detailed audit results for each individual page.\n * **`pageUrl` (String):** The URL of the specific page.\n * **`pageStatus` (Number):** HTTP status code encountered (e.g., `200`, `404`, `301`).\n * **`seoIssues` (Array of Objects):** Issues identified on this page that failed the 12-point checklist.\n * **`checklistItem` (String):** The specific SEO checklist item that failed (e.g., `meta_title_uniqueness`, `h1_presence`, `image_alt_coverage`).\n * **`severity` (String):** The impact level of the issue (`critical`, `high`, `medium`, `low`, `info`).\n * **`description` (String):** A human-readable explanation of the issue.\n * **`elementLocator` (String, Optional):** CSS selector or XPath to locate the problematic element on the page (e.g., `img[src=\"broken.jpg\"]`).\n * **`currentValue` (String/Object):** The value or state found during the audit (e.g., `\"\"` for missing meta description, `false` for no H1).\n * **`recommendedValue` (String/Object):** The ideal or target value/state (e.g., 
`\"Unique, descriptive title\"`).\n * **`geminiFix` (String):** **The exact, actionable code snippet or instruction generated by Gemini to resolve this specific issue.**\n * **`coreWebVitals` (Object):**\n * **`LCP` (Number):** Largest Contentful Paint (in ms).\n * **`CLS` (Number):** Cumulative Layout Shift.\n * **`FID` (Number):** First Input Delay (in ms).\n * **`metaData` (Object):**\n * **`title` (String):** The page's `` tag content.\n * **`description` (String):** The page's `<meta name=\"description\">` content.\n * **`canonical` (String):** The page's `<link rel=\"canonical\">` URL.\n * **`ogTags` (Object):** Open Graph tags (e.g., `og:title`, `og:description`, `og:image`).\n * **`h1Presence` (Boolean):** `true` if an `<h1>` tag is present, `false` otherwise.\n * **`imageAltCoverage` (Object):**\n * **`totalImages` (Number):** Total images on the page.\n * **`imagesWithAlt` (Number):** Images with non-empty `alt` attributes.\n * **`coveragePercentage` (Number):** Percentage of images with `alt` attributes.\n * **`internalLinkDensity` (Number):** Count of internal links found on the page.\n * **`structuredDataPresence` (Boolean):** `true` if any structured data (JSON-LD, Microdata, RDFa) is detected, `false` otherwise.\n * **`mobileViewport` (Boolean):** `true` if a `<meta name=\"viewport\">` tag is correctly configured, `false` otherwise.\n* **`diffReport` (Object):** This object captures the \"before/after\" changes compared to the *previous* audit run for the same `siteUrl`.\n * **`previousAuditId` (UUID):** The `auditId` of the previous audit run used for comparison.\n * **`newIssues` (Array of Objects):** Issues identified in *this* audit that were *not* present in the previous one. Each object contains `pageUrl`, `checklistItem`, `description`, `severity`, and `geminiFix`.\n * **`resolvedIssues` (Array of Objects):** Issues from the *previous* audit that are *no longer present* in this audit. 
Each object contains `pageUrl`, `checklistItem`, `description`.\n * **`changedMetrics` (Array of Objects):** Significant changes in key metrics (e.g., overall score, Core Web Vitals).\n * **`metric` (String):** The metric that changed (e.g., `overallScore`, `LCP`, `CLS`).\n * **`pageUrl` (String, Optional):** If the metric is page-specific.\n * **`previousValue` (Number/String):** Value from the previous audit.\n * **`currentValue` (Number/String):** Value from the current audit.\n * **`change` (Number):** The difference (current - previous).\n* **`nextScheduledRun` (Date):** The timestamp for the next automatically scheduled audit for this site.\n\n### 3. Upsert Logic\n\nThe `upsert` operation works as follows:\n\n1. **Check for Existing Data:** The system first attempts to find a `SiteAuditReport` for the given `siteUrl` that represents the *immediately preceding* audit run.\n2. **Generate Diff:** If a previous report is found, a detailed `diffReport` is generated by comparing the current audit's findings against the previous one. This includes identifying new issues, resolved issues, and changes in key metrics.\n3. **Insert or Update:**\n * If no previous report exists (first audit for this site), a new `SiteAuditReport` document is inserted.\n * If a previous report exists, a new `SiteAuditReport` document is inserted, incorporating the generated `diffReport`. This maintains a historical record of *each* audit run rather than overwriting. This approach allows for full historical analysis.\n4. **Index Optimization:** Appropriate indexes are created on fields like `siteUrl`, `timestamp`, and `status` to ensure fast query performance for reporting and historical lookups.\n\n### 4. 
Deliverable and Value to Customer\n\nUpon completion of this step, the following value is delivered:\n\n* **Comprehensive Audit Record:** A complete, detailed record of your site's SEO performance at a specific point in time, stored securely in `hive_db`.\n* **Actionable Fixes:** Every identified issue comes with an **exact, Gemini-generated fix**, making it incredibly easy for your team to implement improvements.\n* **Clear Before/After Comparison:** The `diffReport` provides an immediate understanding of what has changed since the last audit. You can quickly see:\n * **What new issues have emerged.**\n * **Which previous issues have been successfully resolved.**\n * **How key performance metrics (like Core Web Vitals or overall score) have evolved.**\n* **Historical Tracking:** The `hive_db` now contains a chronological series of `SiteAuditReport` documents, enabling long-term trend analysis and demonstrating the ROI of your SEO efforts.\n* **Foundation for Reporting:** This structured data is the backbone for any future dashboards, custom reports, and alerts you might wish to generate, providing a single source of truth for your site's SEO health.\n\nThis stored data is now ready to be consumed by other services for visualization, notification, or further analysis.\n\n## Step 5 of 5: `hive_db` Conditional Update - Site SEO Auditor Report Finalization\n\nThis final step in the \"Site SEO Auditor\" workflow is critical for persisting the comprehensive audit results into your dedicated MongoDB database (`hive_db`). It ensures that all findings, proposed fixes, and crucial historical comparisons are securely stored and readily accessible for your review.\n\n### Step Overview: Data Persistence and Historical Context\n\nThe `hive_db → conditional_update` operation focuses on:\n1. **Storing the Latest Audit Results:** Persisting all data gathered by the headless crawler and analyzed by Gemini.\n2. 
**Establishing a \"Before/After\" Diff:** Intelligently linking the current audit results with the previous audit's findings to provide a clear comparison and highlight changes over time.\n3. **Enabling Trend Analysis:** Laying the groundwork for tracking SEO performance and issue resolution across multiple audit cycles.\n\n### Data Model and Storage: The `SiteAuditReport`\n\nEach successful audit run generates a new, comprehensive `SiteAuditReport` document in your `hive_db` MongoDB instance. This document is meticulously structured to capture every detail of the audit.\n\n**Key Fields within a `SiteAuditReport` Document:**\n\n* **`audit_id` (String, Unique):** A unique identifier for each specific audit run.\n* **`site_url` (String):** The URL of the website that was audited.\n* **`audit_date` (Date/Timestamp):** The exact date and time the audit was completed.\n* **`audit_trigger` (String):** Indicates how the audit was initiated (e.g., \"scheduled_weekly\", \"on_demand\").\n* **`current_audit_results` (Object):**\n * **`pages_audited_count` (Number):** Total number of pages successfully crawled and audited.\n * **`seo_checklist_summary` (Object):** Aggregated pass/fail status for the 12-point checklist.\n * **`page_details` (Array of Objects):** Detailed findings for *each individual page* crawled, including:\n * `page_url`\n * `meta_title` (content, length, uniqueness check)\n * `meta_description` (content, length, uniqueness check)\n * `h1_presence` (boolean, content)\n * `image_alt_coverage` (percentage, list of missing alt texts)\n * `internal_link_density` (count, list of links)\n * `canonical_tag` (present, correct URL)\n * `open_graph_tags` (presence, key properties)\n * `core_web_vitals` (LCP, CLS, FID scores)\n * `structured_data_presence` (boolean, type detected)\n * `mobile_viewport` (presence, configuration)\n * `broken_elements` (Array of Objects: type, selector, issue description)\n* **`gemini_fixes_proposed` (Object):**\n * **`summary` 
(String):** A high-level overview of critical issues and recommended fixes.\n * **`page_specific_fixes` (Array of Objects):** Detailed, actionable fixes for specific broken elements on individual pages, generated by Gemini. Each object will include:\n * `page_url`\n * `element_type`\n * `issue_description`\n * `recommended_fix` (exact code or content change)\n * `severity`\n* **`previous_audit_results_ref` (String/Object ID, Optional):** A reference to the `audit_id` or a summarized version of the *immediately preceding* audit report for the same `site_url`. This is crucial for the diff.\n* **`audit_diff_summary` (Object, Optional):**\n * **`new_issues_identified` (Array):** List of SEO issues present in the `current_audit_results` that were *not* present in the `previous_audit_results_ref`.\n * **`issues_resolved` (Array):** List of SEO issues present in the `previous_audit_results_ref` that are *no longer* present in the `current_audit_results`.\n * **`performance_changes` (Object):** Summary of changes in Core Web Vitals (e.g., LCP improved by X ms).\n * **`overall_score_change` (Number):** A calculated change in an aggregated SEO health score (if applicable).\n\n### Conditional Logic Explained: Ensuring a Robust Audit Trail\n\nThe \"conditional_update\" aspect of this step ensures that each new audit report is correctly contextualized:\n\n1. **Retrieve Prior Audit:** Before inserting the new `SiteAuditReport`, the system first queries `hive_db` to find the most recently completed `SiteAuditReport` for the `site_url` being audited.\n2. **Conditional `previous_audit_results_ref` Population:**\n * **If a prior report is found:** The `previous_audit_results_ref` field in the *new* `SiteAuditReport` document is populated with a reference to this immediately preceding audit. 
A detailed `audit_diff_summary` is then calculated by comparing `current_audit_results` with the retrieved previous results.\n * **If no prior report exists (first audit):** The `previous_audit_results_ref` and `audit_diff_summary` fields will be null or empty, indicating this is the baseline audit.\n3. **Insert New Report:** The fully constructed `SiteAuditReport` document, complete with current findings, Gemini fixes, and (if applicable) a reference to the previous state and a diff summary, is then inserted as a new document into the `SiteAuditReports` collection in `hive_db`. This ensures a complete, immutable history of all audit runs.\n\n### Actionable Outcomes & Reporting\n\nThe detailed storage in `hive_db` empowers you with several critical capabilities:\n\n* **Comprehensive Issue Identification:** Every SEO issue, from missing H1s to poor Core Web Vitals, is documented page-by-page.\n* **Exact Fixes:** Gemini's generated fixes provide immediate, actionable solutions, often with specific code snippets or content recommendations.\n* **Performance Tracking:** The \"before/after diff\" allows you to instantly see the impact of your SEO efforts, identifying areas of improvement and any new regressions.\n* **Historical Analysis:** Over time, you can track trends in your site's SEO health, identify recurring issues, and measure long-term progress.\n* **Reporting & Dashboards:** The structured data in `hive_db` is perfectly suited for integration with reporting tools and custom dashboards, providing a visual overview of your site's SEO performance.\n\n### Next Steps: Accessing Your Site Audit Reports\n\nYour `SiteAuditReport` documents are now securely stored. 
You can:\n\n* **Access via PantheraHive UI:** A dedicated section within your PantheraHive dashboard will present these reports in an intuitive, user-friendly format, highlighting the diffs and Gemini's recommendations.\n* **Direct Database Access:** For advanced users, you can directly query your `hive_db` MongoDB instance to retrieve the raw `SiteAuditReport` documents for custom analysis or integration with your internal systems.\n* **Automated Notifications:** You will receive notifications (e.g., email, Slack) upon completion of each audit, often including a summary of key changes and a link to the full report.\n\nThis final step completes a powerful workflow, transforming raw crawl data into actionable intelligence and a clear historical record of your website's SEO health.";function phTab(btn,name){document.querySelectorAll(".ph-panel").forEach(function(el){el.classList.remove("active");});document.querySelectorAll(".ph-tab").forEach(function(el){el.classList.remove("active");el.classList.add("inactive");});var p=document.getElementById("panel-"+name);if(p)p.classList.add("active");btn.classList.remove("inactive");btn.classList.add("active");if(name==="preview"){var fr=document.getElementById("ph-preview-frame");if(fr&&!fr.dataset.loaded){if(_phIsHtml){fr.srcdoc=_phCode;}else{var vc=document.getElementById("panel-content");fr.srcdoc=vc?"<html><head><style>body{font-family:-apple-system,system-ui,sans-serif;padding:24px;line-height:1.75;color:#1a1a2e;max-width:860px;margin:0 auto}h2{color:#10b981;margin-top:20px}h3{color:#1a1a2e}pre{background:#0d1117;color:#a5f3c4;padding:16px;border-radius:8px;overflow-x:auto;font-size:.85rem}code{background:#f3f4f6;padding:1px 5px;border-radius:4px;font-size:.85rem}ul,ol{padding-left:20px}li{margin:4px 0}strong{font-weight:700}</style></head><body>"+vc.innerHTML+"</body></html>":"<html><body><p>No content</p></body></html>";}fr.dataset.loaded="1";}}}function 
phCopyCode(){navigator.clipboard.writeText(_phCode).then(function(){var b=document.getElementById("tab-code");if(b){var o=b.innerHTML;b.innerHTML='<i class="fas fa-check"></i> Copied!';setTimeout(function(){b.innerHTML=o;},2000);}});}function phCopyAll(){var txt=_phAll;if(!txt){var vc=document.getElementById("panel-content");if(vc)txt=vc.innerText||vc.textContent||"";}navigator.clipboard.writeText(txt).then(function(){alert("Content copied to clipboard!");});}function phDownload(){var content=_phCode||_phAll;if(!content){var vc=document.getElementById("panel-content");if(vc)content=vc.innerText||vc.textContent||"";}if(!content){alert("No content to download.");return;}var fn=_phFname;if(!_phCode&&fn.endsWith(".txt"))fn=fn.replace(/\.txt$/,".md");var a=document.createElement("a");a.href="data:text/plain;charset=utf-8,"+encodeURIComponent(content);a.download=fn;a.click();}function phDownloadZip(){ var lbl=document.getElementById("ph-zip-lbl"); if(lbl)lbl.textContent="Preparing…"; /* ===== HELPERS ===== */ function cc(s){ return s.replace(/[_-s]+([a-z])/g,function(m,c){return c.toUpperCase();}) .replace(/^[a-z]/,function(m){return m.toUpperCase();}); } function pkgName(app){ return app.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; } function slugTitle(app){ return app.replace(/_/g," "); } /* Generic code block extractor. Finds marker comments like: // lib/main.dart or # lib/main.dart or ## lib/main.dart and collects lines until the next marker. Also strips markdown fences (```lang ... ```) from each block. 
*/ function extractFiles(txt, pathRe){ var files={}, cur=null, buf=[]; function flush(){ if(cur&&buf.length){ files[cur]=buf.join(" ").trim(); } } txt.split(" ").forEach(function(line){ var m=line.trim().match(pathRe); if(m){ flush(); cur=m[1]; buf=[]; return; } if(cur) buf.push(line); }); flush(); // Strip ```...``` fences from each file Object.keys(files).forEach(function(k){ files[k]=files[k].replace(/^```[a-z]* ?/,"").replace(/ ?```$/,"").trim(); }); return files; } /* General path extractor that covers most languages */ function extractCode(txt){ var re=/^(?://|#|##)s*((?:lib|src|test|tests|Sources?|app|components?|screens?|views?|hooks?|routes?|store|services?|models?|pages?)/[w/-.]+.w+|pubspec.yaml|Package.swift|angular.json|babel.config.(?:js|ts)|vite.config.(?:js|ts)|tsconfig.(?:json|app.json)|app.json|App.(?:tsx|jsx|vue|kt|swift)|MainActivity(?:.kt)?|ContentView.swift)/i; return extractFiles(txt, re); } /* Detect language from combined code+panel text */ function detectLang(code, panel){ var t=(code+" "+panel).toLowerCase(); if(t.indexOf("import 'package:flutter")>=0||t.indexOf('import "package:flutter')>=0) return "flutter"; if(t.indexOf("statelesswidget")>=0||t.indexOf("statefulwidget")>=0) return "flutter"; if((t.indexOf(".dart")>=0)&&(t.indexOf("pubspec")>=0||t.indexOf("flutter:")>=0)) return "flutter"; if(t.indexOf("react-native")>=0||t.indexOf("react_native")>=0) return "react-native"; if(t.indexOf("stylesheet.create")>=0||t.indexOf("view, text, touchableopacity")>=0) return "react-native"; if(t.indexOf("expo(")>=0||t.indexOf(""expo":")>=0||t.indexOf("from 'expo")>=0) return "react-native"; if(t.indexOf("import swiftui")>=0||t.indexOf("import uikit")>=0) return "swift"; if(t.indexOf(".swift")>=0&&(t.indexOf("func body")>=0||t.indexOf("@main")>=0||t.indexOf("var body: some view")>=0)) return "swift"; if(t.indexOf("import android.")>=0||t.indexOf("package com.example")>=0) return "kotlin"; if(t.indexOf("@composable")>=0||t.indexOf("fun 
mainactivity")>=0||(t.indexOf(".kt")>=0&&t.indexOf("androidx")>=0)) return "kotlin"; if(t.indexOf("@ngmodule")>=0||t.indexOf("@component")>=0) return "angular"; if(t.indexOf("angular.json")>=0||t.indexOf("from '@angular")>=0) return "angular"; if(t.indexOf(".vue")>=0||t.indexOf("<template>")>=0||t.indexOf("definecomponent")>=0) return "vue"; if(t.indexOf("createapp(")>=0&&t.indexOf("vue")>=0) return "vue"; if(t.indexOf("import react")>=0||t.indexOf("reactdom")>=0||(t.indexOf("jsx.element")>=0)) return "react"; if((t.indexOf("usestate")>=0||t.indexOf("useeffect")>=0)&&t.indexOf("from 'react'")>=0) return "react"; if(t.indexOf(".dart")>=0) return "flutter"; if(t.indexOf(".kt")>=0) return "kotlin"; if(t.indexOf(".swift")>=0) return "swift"; if(t.indexOf("import numpy")>=0||t.indexOf("import pandas")>=0||t.indexOf("#!/usr/bin/env python")>=0) return "python"; if(t.indexOf("const express")>=0||t.indexOf("require('express')")>=0||t.indexOf("app.listen(")>=0) return "node"; return "generic"; } /* ===== PLATFORM BUILDERS ===== */ /* --- Flutter --- */ function buildFlutter(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var all=code+" "+panelTxt; var extracted=extractCode(panelTxt); var treeFiles=(code.match(/[w_]+.dart/g)||[]).filter(function(f,i,a){return a.indexOf(f)===i;}); if(!extracted["lib/main.dart"]) extracted["lib/main.dart"]="import 'package:flutter/material.dart'; void main()=>runApp(const "+cc(pn)+"App()); class "+cc(pn)+"App extends StatelessWidget{ const "+cc(pn)+"App({super.key}); @override Widget build(BuildContext context)=>MaterialApp( title: '"+slugTitle(pn)+"', debugShowCheckedModeBanner: false, theme: ThemeData( colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple), useMaterial3: true, ), home: Scaffold(appBar: AppBar(title: const Text('"+slugTitle(pn)+"')), body: const Center(child: Text('Welcome!'))), ); } "; // pubspec.yaml — sniff deps var deps=[" flutter: sdk: flutter"]; var devDeps=[" flutter_test: sdk: flutter"," flutter_lints: 
^5.0.0"]; var knownPkg={"go_router":"^14.0.0","flutter_riverpod":"^2.6.1","riverpod_annotation":"^2.6.1","shared_preferences":"^2.3.4","http":"^1.2.2","dio":"^5.7.0","firebase_core":"^3.12.1","firebase_auth":"^5.5.1","cloud_firestore":"^5.6.5","get_it":"^8.0.3","flutter_bloc":"^9.1.0","provider":"^6.1.2","cached_network_image":"^3.4.1","url_launcher":"^6.3.1","intl":"^0.19.0","google_fonts":"^6.2.1","equatable":"^2.0.7","freezed_annotation":"^2.4.4","json_annotation":"^4.9.0","path_provider":"^2.1.5","image_picker":"^1.1.2","uuid":"^4.4.2","flutter_svg":"^2.0.17","lottie":"^3.2.0","hive_flutter":"^1.1.0"}; var knownDev={"build_runner":"^2.4.14","freezed":"^2.5.7","json_serializable":"^6.8.0","riverpod_generator":"^2.6.3","hive_generator":"^2.0.1"}; Object.keys(knownPkg).forEach(function(p){if(all.indexOf("package:"+p)>=0)deps.push(" "+p+": "+knownPkg[p]);}); Object.keys(knownDev).forEach(function(p){if(all.indexOf(p)>=0)devDeps.push(" "+p+": "+knownDev[p]);}); zip.file(folder+"pubspec.yaml","name: "+pn+" description: Flutter app — PantheraHive BOS. version: 1.0.0+1 environment: sdk: '>=3.3.0 <4.0.0' dependencies: "+deps.join(" ")+" dev_dependencies: "+devDeps.join(" ")+" flutter: uses-material-design: true assets: - assets/images/ "); zip.file(folder+"analysis_options.yaml","include: package:flutter_lints/flutter.yaml "); zip.file(folder+".gitignore",".dart_tool/ .flutter-plugins .flutter-plugins-dependencies /build/ .pub-cache/ *.g.dart *.freezed.dart .idea/ .vscode/ "); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. 
## Setup ```bash flutter pub get flutter run ``` ## Build ```bash flutter build apk # Android flutter build ipa # iOS flutter build web # Web ``` "); zip.file(folder+"assets/images/.gitkeep",""); Object.keys(extracted).forEach(function(p){ zip.file(folder+p,extracted[p]); }); treeFiles.forEach(function(fn){ if(fn.indexOf("_test.dart")>=0) return; var found=Object.keys(extracted).some(function(p){return p.endsWith("/"+fn)||p===fn;}); if(!found){ var path="lib/"+fn; var cls=cc(fn.replace(".dart","")); var isScr=fn.indexOf("screen")>=0||fn.indexOf("page")>=0||fn.indexOf("view")>=0; var stub=isScr?"import 'package:flutter/material.dart'; class "+cls+" extends StatelessWidget{ const "+cls+"({super.key}); @override Widget build(BuildContext ctx)=>Scaffold( appBar: AppBar(title: const Text('"+fn.replace(/_/g," ").replace(".dart","")+"')), body: const Center(child: Text('"+cls+" — TODO')), ); } ":"// TODO: implement class "+cls+"{ // "+fn+" } "; zip.file(folder+path,stub); } }); } /* --- React Native (Expo) --- */ function buildReactNative(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var extracted=extractCode(panelTxt); var allT=code+" "+panelTxt; var usesTS=allT.indexOf(".tsx")>=0||allT.indexOf(": React.")>=0||allT.indexOf("interface ")>=0; var ext=usesTS?"tsx":"jsx"; zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "1.0.0", "main": "expo-router/entry", "scripts": { "start": "expo start", "android": "expo run:android", "ios": "expo run:ios", "web": "expo start --web" }, "dependencies": { "expo": "~52.0.0", "expo-router": "~4.0.0", "expo-status-bar": "~2.0.1", "expo-font": "~13.0.1", "react": "18.3.1", "react-native": "0.76.7", "react-native-safe-area-context": "4.12.0", "react-native-screens": "~4.3.0", "@react-navigation/native": "^7.0.14" }, "devDependencies": { "@babel/core": "^7.25.0", "typescript": "~5.3.3", "@types/react": "~18.3.12" } } '); zip.file(folder+"app.json",'{ "expo": { "name": "'+slugTitle(pn)+'", "slug": "'+pn+'", "version": 
"1.0.0", "orientation": "portrait", "scheme": "'+pn+'", "platforms": ["ios","android","web"], "icon": "./assets/icon.png", "splash": {"image": "./assets/splash.png","resizeMode":"contain","backgroundColor":"#ffffff"}, "ios": {"supportsTablet": true}, "android": {"package": "com.example.'+pn+'"}, "newArchEnabled": true } } '); zip.file(folder+"tsconfig.json",'{ "extends": "expo/tsconfig.base", "compilerOptions": { "strict": true, "paths": {"@/*": ["./src/*"]} } } '); zip.file(folder+"babel.config.js","module.exports=function(api){ api.cache(true); return {presets:['babel-preset-expo']}; }; "); var hasApp=Object.keys(extracted).some(function(k){return k.toLowerCase().indexOf("app.")>=0;}); if(!hasApp) zip.file(folder+"App."+ext,"import React from 'react'; import {View,Text,StyleSheet,StatusBar,SafeAreaView} from 'react-native'; export default function App(){ return( <SafeAreaView style={s.container}> <StatusBar barStyle='dark-content'/> <View style={s.body}><Text style={s.title}>"+slugTitle(pn)+"</Text> <Text style={s.sub}>Built with PantheraHive BOS</Text></View> </SafeAreaView>); } const s=StyleSheet.create({ container:{flex:1,backgroundColor:'#fff'}, body:{flex:1,justifyContent:'center',alignItems:'center',padding:24}, title:{fontSize:28,fontWeight:'700',color:'#1a1a2e',marginBottom:8}, sub:{fontSize:14,color:'#6b7280'} }); "); zip.file(folder+"assets/.gitkeep",""); Object.keys(extracted).forEach(function(p){ zip.file(folder+p,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. 
## Setup ```bash npm install npx expo start ``` ## Platforms ```bash npx expo run:android npx expo run:ios npx expo start --web ``` "); } /* --- Swift (SwiftUI via Swift Package Manager, open Package.swift in Xcode) --- */ function buildSwift(zip,folder,app,code,panelTxt){ var pn=pkgName(app).replace(/_/g,""); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"Package.swift","// swift-tools-version: 5.9 import PackageDescription let package = Package( name: ""+C+"", platforms: [ .iOS(.v17), .macOS(.v14) ], targets: [ .executableTarget( name: ""+C+"", path: "Sources/"+C+"" ), .testTarget( name: ""+C+"Tests", dependencies: [""+C+""], path: "Tests/"+C+"Tests" ) ] ) "); var hasEntry=Object.keys(extracted).some(function(k){return k.indexOf("App.swift")>=0||k.indexOf("main.swift")>=0;}); if(!hasEntry) zip.file(folder+"Sources/"+C+"/"+C+"App.swift","import SwiftUI @main struct "+C+"App: App { var body: some Scene { WindowGroup { ContentView() } } } "); var hasCV=Object.keys(extracted).some(function(k){return k.indexOf("ContentView")>=0;}); if(!hasCV) zip.file(folder+"Sources/"+C+"/ContentView.swift","import SwiftUI struct ContentView: View { var body: some View { NavigationStack { VStack(spacing: 20) { Image(systemName: "app.fill") .font(.system(size: 60)) .foregroundColor(.accentColor) Text(""+slugTitle(pn)+"") .font(.largeTitle) .fontWeight(.bold) Text("Built with PantheraHive BOS") .foregroundColor(.secondary) } .navigationTitle(""+slugTitle(pn)+"") } } } #Preview { ContentView() } "); zip.file(folder+"Tests/"+C+"Tests/"+C+"Tests.swift","import XCTest @testable import "+C+" final class "+C+"Tests: XCTestCase { func testExample() throws { XCTAssertTrue(true) } } "); Object.keys(extracted).forEach(function(p){ var fp=p.indexOf("/")>=0?p:"Sources/"+C+"/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Open in Xcode 1. Unzip 2. `File > Open...` → select `Package.swift` 3. 
Xcode resolves dependencies automatically ## Run - Press ▶ in Xcode or `swift run` in terminal ## Test ```bash swift test ``` "); zip.file(folder+".gitignore",".DS_Store .build/ *.xcuserdata .swiftpm/ "); } /* --- Kotlin (Jetpack Compose Android) --- */ function buildKotlin(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var pkg="com.example."+pn; var srcPath="app/src/main/kotlin/"+pkg.replace(/\./g,"/")+"/"; var extracted=extractCode(panelTxt); zip.file(folder+"settings.gradle.kts","pluginManagement { repositories { google() mavenCentral() gradlePluginPortal() } } dependencyResolutionManagement { repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS) repositories { google(); mavenCentral() } } rootProject.name = \""+C+"\" include(\":app\") "); zip.file(folder+"build.gradle.kts","plugins { alias(libs.plugins.android.application) apply false alias(libs.plugins.kotlin.android) apply false alias(libs.plugins.kotlin.compose) apply false } "); /* version catalog required by the alias(libs.plugins.*) references above */ zip.file(folder+"gradle/libs.versions.toml","[versions]\nagp = \"8.7.3\"\nkotlin = \"2.1.0\"\n\n[plugins]\nandroid-application = { id = \"com.android.application\", version.ref = \"agp\" }\nkotlin-android = { id = \"org.jetbrains.kotlin.android\", version.ref = \"kotlin\" }\nkotlin-compose = { id = \"org.jetbrains.kotlin.plugin.compose\", version.ref = \"kotlin\" }\n"); zip.file(folder+"gradle.properties","org.gradle.jvmargs=-Xmx2048m -Dfile.encoding=UTF-8 android.useAndroidX=true kotlin.code.style=official android.nonTransitiveRClass=true "); zip.file(folder+"gradle/wrapper/gradle-wrapper.properties","distributionBase=GRADLE_USER_HOME distributionPath=wrapper/dists distributionUrl=https\://services.gradle.org/distributions/gradle-8.9-bin.zip zipStoreBase=GRADLE_USER_HOME zipStorePath=wrapper/dists "); zip.file(folder+"app/build.gradle.kts","plugins { alias(libs.plugins.android.application) alias(libs.plugins.kotlin.android) alias(libs.plugins.kotlin.compose) } android { namespace = \""+pkg+"\" compileSdk = 35 defaultConfig { applicationId = \""+pkg+"\" minSdk = 24 targetSdk = 35 versionCode = 1 versionName = \"1.0\" } buildTypes { release { isMinifyEnabled = false proguardFiles(getDefaultProguardFile(\"proguard-android-optimize.txt\")) } } compileOptions { sourceCompatibility = JavaVersion.VERSION_11 targetCompatibility = JavaVersion.VERSION_11 } kotlinOptions { jvmTarget = \"11\" } 
buildFeatures { compose = true } } dependencies { implementation(platform(\"androidx.compose:compose-bom:2024.12.01\")) implementation(\"androidx.activity:activity-compose:1.9.3\") implementation(\"androidx.compose.ui:ui\") implementation(\"androidx.compose.ui:ui-tooling-preview\") implementation(\"androidx.compose.material3:material3\") implementation(\"androidx.navigation:navigation-compose:2.8.4\") implementation(\"androidx.lifecycle:lifecycle-runtime-ktx:2.8.7\") debugImplementation(\"androidx.compose.ui:ui-tooling\") } "); zip.file(folder+"app/src/main/AndroidManifest.xml","<?xml version=\"1.0\" encoding=\"utf-8\"?> <manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"> <application android:allowBackup=\"true\" android:label=\"@string/app_name\" android:theme=\"@style/Theme."+C+"\"> <activity android:name=\".MainActivity\" android:exported=\"true\" android:theme=\"@style/Theme."+C+"\"> <intent-filter> <action android:name=\"android.intent.action.MAIN\"/> <category android:name=\"android.intent.category.LAUNCHER\"/> </intent-filter> </activity> </application> </manifest> "); var hasMain=Object.keys(extracted).some(function(k){return k.indexOf("MainActivity")>=0;}); if(!hasMain) zip.file(folder+srcPath+"MainActivity.kt","package "+pkg+" import android.os.Bundle import androidx.activity.ComponentActivity import androidx.activity.compose.setContent import androidx.activity.enableEdgeToEdge import androidx.compose.foundation.layout.* import androidx.compose.material3.* import androidx.compose.runtime.* import androidx.compose.ui.Alignment import androidx.compose.ui.Modifier import androidx.compose.ui.unit.dp class MainActivity : ComponentActivity() { override fun onCreate(savedInstanceState: Bundle?) 
{ super.onCreate(savedInstanceState) enableEdgeToEdge() setContent { MaterialTheme { Scaffold(modifier = Modifier.fillMaxSize()) { padding -> Box(Modifier.fillMaxSize().padding(padding), contentAlignment = Alignment.Center) { Column(horizontalAlignment = Alignment.CenterHorizontally, verticalArrangement = Arrangement.spacedBy(16.dp)) { Text(\""+slugTitle(pn)+"\", style = MaterialTheme.typography.headlineLarge) Text(\"Built with PantheraHive BOS\", style = MaterialTheme.typography.bodyMedium) } } } } } } } "); zip.file(folder+"app/src/main/res/values/strings.xml","<?xml version=\"1.0\" encoding=\"utf-8\"?> <resources> <string name=\"app_name\">"+slugTitle(pn)+"</string> </resources> "); zip.file(folder+"app/src/main/res/values/themes.xml","<?xml version=\"1.0\" encoding=\"utf-8\"?> <resources> <style name=\"Theme."+C+"\" parent=\"Theme.Material3.DayNight.NoActionBar\"/> </resources> "); Object.keys(extracted).forEach(function(p){ var fp=p.indexOf("app/src")>=0?p:srcPath+p; if(!fp.endsWith(".kt")&&!fp.endsWith(".xml"))fp=srcPath+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Open in IDE 1. Open **Android Studio** 2. `File > Open...` → select the root folder 3. 
Let Gradle sync complete ## Run - Click ▶ in Android Studio ## Build Release ```bash ./gradlew assembleRelease ``` "); zip.file(folder+".gitignore","*.iml .gradle/ /local.properties /.idea/ .DS_Store /build/ /captures .externalNativeBuild/ .cxx/ *.apk "); } /* --- React (Vite + TypeScript) --- */ function buildReact(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); var allT=code+" "+panelTxt; var usesTS=allT.indexOf(".tsx")>=0||allT.indexOf("interface ")>=0||allT.indexOf(": React.")>=0; var ext=usesTS?"tsx":"jsx"; zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "build": "tsc -b && vite build", "preview": "vite preview" }, "dependencies": { "react": "^19.0.0", "react-dom": "^19.0.0", "react-router-dom": "^7.1.5", "axios": "^1.7.9" }, "devDependencies": { "@eslint/js": "^9.17.0", "@types/react": "^19.0.2", "@types/react-dom": "^19.0.2", "@vitejs/plugin-react": "^4.3.4", "typescript": "~5.7.2", "vite": "^6.0.5" } } '); zip.file(folder+"vite.config."+(usesTS?"ts":"js"),"import { defineConfig } from 'vite' import react from '@vitejs/plugin-react' export default defineConfig({ plugins: [react()], resolve: { alias: { '@': '/src' } } }) "); zip.file(folder+"tsconfig.json",'{ "files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}] } '); zip.file(folder+"tsconfig.app.json",'{ "compilerOptions":{ "target":"ES2020","useDefineForClassFields":true,"lib":["ES2020","DOM","DOM.Iterable"], "module":"ESNext","skipLibCheck":true,"moduleResolution":"bundler", "allowImportingTsExtensions":true,"isolatedModules":true,"moduleDetection":"force", "noEmit":true,"jsx":"react-jsx","strict":true,"paths":{"@/*":["./src/*"]} }, "include":["src"] } '); zip.file(folder+"tsconfig.node.json",'{ "compilerOptions":{ "target":"ES2022","lib":["ES2023"],"module":"ESNext","skipLibCheck":true,"moduleResolution":"bundler","noEmit":true,"strict":true }, "include":["vite.config.ts"] } '); zip.file(folder+"index.html","<!DOCTYPE html> <html lang=\"en\"> <head> <meta charset=\"UTF-8\" /> <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" /> <title>"+slugTitle(pn)+"</title> </head> <body> <div id=\"root\"></div> <script type=\"module\" src=\"/src/main."+ext+"\"></script> </body> </html> "); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react' import ReactDOM from 'react-dom/client' import App from './App' import './index.css' ReactDOM.createRoot(document.getElementById('root')!).render( <React.StrictMode> <App /> </React.StrictMode> ) "); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react' import './App.css' function App(){ return( <div className='app'> <header className='app-header'> <h1>"+slugTitle(pn)+"</h1> <p>Built with PantheraHive BOS</p> </header> </div> ) } export default App "); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e} .app{min-height:100vh;display:flex;flex-direction:column} .app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px} h1{font-size:2.5rem;font-weight:700} "); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` ## Open in IDE Open the project folder in VS Code or WebStorm. "); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "build": "vue-tsc -b && vite build", "preview": "vite preview" }, "dependencies": { "vue": "^3.5.13", "vue-router": "^4.4.5", "pinia": "^2.3.0", "axios": "^1.7.9" }, "devDependencies": { "@vitejs/plugin-vue": "^5.2.1", "typescript": "~5.7.3", "vite": "^6.0.5", "vue-tsc": "^2.2.0" } } '); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite' import vue from '@vitejs/plugin-vue' import { resolve } from 'path' export default defineConfig({ plugins: [vue()], resolve: { alias: { '@': resolve(__dirname,'src') } } }) "); zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]} '); zip.file(folder+"tsconfig.app.json",'{ 
"compilerOptions":{ "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"], "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true, "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue", "strict":true,"paths":{"@/*":["./src/*"]} }, "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"] } '); zip.file(folder+"env.d.ts","/// "); zip.file(folder+"index.html"," "+slugTitle(pn)+"
"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue' import { createPinia } from 'pinia' import App from './App.vue' import './assets/main.css' const app = createApp(App) app.use(createPinia()) app.mount('#app') "); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue"," "); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547} "); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install npm run dev ``` ## Build ```bash npm run build ``` Open in VS Code or WebStorm. 
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local "); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{ "name": "'+pn+'", "version": "0.0.0", "scripts": { "ng": "ng", "start": "ng serve", "build": "ng build", "test": "ng test" }, "dependencies": { "@angular/animations": "^19.0.0", "@angular/common": "^19.0.0", "@angular/compiler": "^19.0.0", "@angular/core": "^19.0.0", "@angular/forms": "^19.0.0", "@angular/platform-browser": "^19.0.0", "@angular/platform-browser-dynamic": "^19.0.0", "@angular/router": "^19.0.0", "rxjs": "~7.8.0", "tslib": "^2.3.0", "zone.js": "~0.15.0" }, "devDependencies": { "@angular-devkit/build-angular": "^19.0.0", "@angular/cli": "^19.0.0", "@angular/compiler-cli": "^19.0.0", "typescript": "~5.6.0" } } '); zip.file(folder+"angular.json",'{ "$schema": "./node_modules/@angular/cli/lib/config/schema.json", "version": 1, "newProjectRoot": "projects", "projects": { "'+pn+'": { "projectType": "application", "root": "", "sourceRoot": "src", "prefix": "app", "architect": { "build": { "builder": "@angular-devkit/build-angular:application", "options": { "outputPath": "dist/'+pn+'", "index": "src/index.html", "browser": "src/main.ts", "tsConfig": "tsconfig.app.json", "styles": ["src/styles.css"], "scripts": [] } }, "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"} } } } } '); zip.file(folder+"tsconfig.json",'{ "compileOnSave": false, "compilerOptions": 
{"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]}, "references":[{"path":"./tsconfig.app.json"}] } '); zip.file(folder+"tsconfig.app.json",'{ "extends":"./tsconfig.json", "compilerOptions":{"outDir":"./dist/out-tsc","types":[]}, "files":["src/main.ts"], "include":["src/**/*.d.ts"] } '); zip.file(folder+"src/index.html","<!DOCTYPE html> <html lang=\"en\"> <head> <meta charset=\"utf-8\"> <title>"+slugTitle(pn)+"</title> <base href=\"/\"> <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\"> </head> <body> <app-root></app-root> </body> </html> "); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser'; import { appConfig } from './app/app.config'; import { AppComponent } from './app/app.component'; bootstrapApplication(AppComponent, appConfig) .catch(err => console.error(err)); "); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; } body { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; } "); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core'; import { RouterOutlet } from '@angular/router'; @Component({ selector: 'app-root', standalone: true, imports: [RouterOutlet], templateUrl: './app.component.html', styleUrl: './app.component.css' }) export class AppComponent { title = '"+pn+"'; } "); zip.file(folder+"src/app/app.component.html","<div class=\"app-header\"> <h1>"+slugTitle(pn)+"</h1> <p>Built with PantheraHive BOS</p> </div> <router-outlet /> "); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1} "); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core'; import { provideRouter } from '@angular/router'; import { routes } from './app.routes'; export const appConfig: ApplicationConfig = { providers: [ provideZoneChangeDetection({ eventCoalescing: true }), provideRouter(routes) ] }; "); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router'; export const routes: Routes = []; "); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+" Generated by PantheraHive BOS. ## Setup ```bash npm install ng serve # or: npm start ``` ## Build ```bash ng build ``` Open in VS Code with Angular Language Service extension. 
"); zip.file(folder+".gitignore","node_modules/ dist/ .env .DS_Store *.local .angular/ "); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[\w]* ?/m,"").replace(/ ?```$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# "+title+"\n# Generated by PantheraHive BOS\nprint('"+title+" loaded')\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt ``` ## Run ```bash python main.py ``` "); zip.file(folder+".gitignore",".venv/ __pycache__/ *.pyc .env .DS_Store "); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^```[\w]* ?/m,"").replace(/ ?```$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require('express'); const app=express(); app.use(express.json()); app.get('/',(req,res)=>{ res.json({message:'"+title+" API'}); }); const PORT=process.env.PORT||3000; app.listen(PORT,()=>console.log('Server on port '+PORT)); "; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000 "); zip.file(folder+".gitignore","node_modules/ .env .DS_Store "); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. 
## Setup ```bash npm install ``` ## Run ```bash npm run dev ``` "); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("<!doctype")>=0||code.trim().toLowerCase().indexOf("<html")>=0; var indexHtml=isFullDoc?code:"<!DOCTYPE html> <html lang=\"en\"> <head> <meta charset=\"UTF-8\"> <title>"+title+"</title> <link rel=\"stylesheet\" href=\"style.css\"> </head> <body> "+code+" <script src=\"script.js\"></script> </body> </html>"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */ *{margin:0;padding:0;box-sizing:border-box} body{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e} "); zip.file(folder+"script.js","/* "+title+" — scripts */ "); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. ## Open Double-click `index.html` in your browser. Or serve locally: ```bash npx serve . # or python3 -m http.server 3000 ``` "); zip.file(folder+".gitignore",".DS_Store node_modules/ .env "); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ 
buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h="<!DOCTYPE html><html><head><meta charset=\"utf-8\"><title>"+title+"</title><style>body{font-family:system-ui,sans-serif;max-width:760px;margin:40px auto;padding:0 20px;color:#1a1a2e;line-height:1.6}</style></head><body>"; h+="<h1>"+title+"</h1>"; var hc=md.replace(/&/g,"&amp;").replace(/</g,"&lt;").replace(/>/g,"&gt;"); hc=hc.replace(/^### (.+)$/gm,"<h3>$1</h3>"); hc=hc.replace(/^## (.+)$/gm,"<h2>$1</h2>"); hc=hc.replace(/^# (.+)$/gm,"<h1>$1</h1>"); hc=hc.replace(/\*\*(.+?)\*\*/g,"<strong>$1</strong>"); hc=hc.replace(/\n{2,}/g,"</p><p>"); h+="<p>"+hc+"</p><footer>Generated by PantheraHive BOS</footer></body></html>"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+" Generated by PantheraHive BOS. Files: - "+app+".md (Markdown) - "+app+".html (styled HTML) "); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); }function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='<iframe src="'+embedUrl+'" width="100%" height="600" frameborder="0"></iframe>';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}