Site SEO Auditor
Run ID: 69caf94d26e01bf7c6786f4e · 2026-03-30 · SEO & Growth
PantheraHive BOS

Step 3 of 5: AI-Powered Fix Generation (gemini → batch_generate)

This step leverages the Gemini AI model to automatically generate precise, actionable fixes for all SEO issues identified during the site audit. By feeding Gemini specific details about each broken element and its context, we ensure that the recommended solutions are tailored, accurate, and ready for immediate implementation.


Purpose of this Step

Following the comprehensive site crawl and audit (Steps 1 & 2), a detailed list of SEO violations and broken elements is compiled. The primary goal of this step is to transform these identified problems into concrete, executable solutions. Gemini acts as an intelligent SEO consultant, providing the "exact fix" for each issue, significantly reducing the manual effort required to diagnose and resolve common SEO problems.

Input Data for Gemini

For each identified SEO violation, Gemini receives a structured input payload containing all necessary context to generate an accurate fix. This typically includes:

  • The page URL and the audit rule that was violated.
  • The current state of the element (e.g., the existing meta title, or null if missing).
  • The relevant HTML snippet surrounding the broken element.
  • A summary of the page's content, where available, for topical context.

Gemini's Fix Generation Process

Gemini processes each input by:

  1. Understanding the Violation: Analyzing the audit rule and the current state to pinpoint the exact nature of the problem.
  2. Contextual Analysis: Examining the provided HTML snippet and, if available, the page content summary to understand the surrounding elements and the page's overall topic and intent.
  3. Formulating the "Exact Fix": Based on its understanding and context, Gemini generates the specific code change, content suggestion, or configuration update required to resolve the issue. This isn't just a general recommendation but a precise instruction or snippet.
  4. Ensuring Best Practices: Gemini is trained on SEO best practices, ensuring that its generated fixes not only resolve the immediate problem but also adhere to current SEO guidelines (e.g., optimal meta description length, descriptive alt text, correct canonical tag syntax).
  5. Batch Processing: Gemini efficiently handles multiple issues in a batch, processing all identified violations concurrently to provide a comprehensive set of fixes in a single operation.
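The batch step above can be sketched as a pure transformation from audit violations to per-issue prompt payloads. This is an illustrative sketch only: the field names (rule, currentValue, htmlSnippet, pageSummary) and the prompt wording are assumptions, not the workflow's actual schema or the real Gemini request format.

```javascript
// Sketch: turn audit violations into a batch of fix-request payloads.
// Field names and prompt text are illustrative assumptions.
function buildFixRequests(violations) {
  return violations.map((v) => ({
    id: `${v.url}::${v.rule}`, // stable key to match fixes back to violations
    prompt: [
      `SEO rule violated: ${v.rule}`,
      `Page URL: ${v.url}`,
      `Current state: ${v.currentValue ?? "missing"}`,
      v.htmlSnippet ? `HTML context:\n${v.htmlSnippet}` : null,
      v.pageSummary ? `Page summary: ${v.pageSummary}` : null,
      "Respond with the exact fix as a code snippet plus a one-line rationale.",
    ].filter(Boolean).join("\n"),
  }));
}
```

Each payload in the returned array would then be submitted in a single batched call, so all fixes come back in one operation.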

Output: Exact Fixes and Actionable Recommendations

The output from Gemini is a structured collection of proposed fixes, presented in a clear, actionable format. Each fix is designed to be directly implementable by a developer or content editor.

Structure of Generated Fixes

Each generated fix object includes:

  • title: A short summary of the fix.
  • rationale / explanation: Why the fix is recommended.
  • codeSnippet: The exact code or content to implement.
  • targetElementSelector: Where in the page the fix applies (e.g., head).
  • originalValue: The current (broken or missing) value.
  • confidenceScore: Gemini's confidence in the proposed fix (e.g., 0.96).

Examples of Generated Fixes

Here are illustrative examples of the "exact fixes" Gemini generates for various common SEO issues:

1. Issue: Missing Meta Description

    <meta name="description" content="Stay hydrated with our durable, BPA-free eco-friendly water bottle. Designed for sustainability and convenience, perfect for daily use and adventures. Shop now!">

Step 1 of 5: Data Acquisition - Headless Crawl with Puppeteer

Overview

This initial step is the foundational phase of your "Site SEO Auditor" workflow. It involves deploying a sophisticated headless crawler powered by Puppeteer to systematically visit and collect comprehensive data from every discoverable page on your website. This process simulates a real user's browser experience, ensuring that the collected data accurately reflects how search engines and visitors perceive your site.

Purpose of This Step

The primary objective of this step is to create a complete and accurate snapshot of your website's current state. By thoroughly crawling your site, we gather all the raw data necessary to perform the subsequent 12-point SEO audit. Without this comprehensive data collection, a detailed and actionable audit would not be possible.

How the Headless Crawler Works

Technology: Puppeteer

We leverage Puppeteer, a Node.js library, to control a headless Chrome or Chromium browser. This allows us to programmatically navigate your website, interact with pages, and extract data just as a full browser would, but without a visible user interface.

Execution: Headless Browser Simulation

  1. Initial Seed URL: The crawler starts with the provided root URL of your website.
  2. Page Loading & Rendering: For each page, Puppeteer initiates a full browser rendering cycle. This is crucial because many modern websites are built with JavaScript frameworks (e.g., React, Angular, Vue), and their content is dynamically loaded. A simple HTTP request would miss this content. Puppeteer ensures all JavaScript executes and the DOM (Document Object Model) is fully constructed.
  3. Navigation & Discovery: As each page loads, the crawler identifies all internal links (<a> tags with href attributes pointing to your domain). These newly discovered links are added to a queue for subsequent crawling, ensuring complete site coverage.
  4. Respecting robots.txt: The crawler is configured to respect your website's robots.txt file. This ensures that any pages or sections you've explicitly disallowed for crawling are not accessed, maintaining your site's intended privacy and server load management.
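The navigation-and-discovery logic in step 3 above can be sketched as a pure helper that classifies extracted hrefs against the site's origin and normalizes internal URLs before queueing. This is a simplified sketch: the real crawler additionally consults robots.txt and deduplicates against already-visited pages.

```javascript
// Sketch of link discovery: classify hrefs into internal vs. external and
// normalize internal URLs before adding them to the crawl queue.
function classifyLinks(hrefs, baseUrl) {
  const base = new URL(baseUrl);
  const internal = new Set();
  const external = new Set();
  for (const href of hrefs) {
    let url;
    try { url = new URL(href, base); } catch { continue; } // skip malformed hrefs
    if (!/^https?:$/.test(url.protocol)) continue;         // skip mailto:, tel:, etc.
    url.hash = "";                                         // fragments don't change the page
    (url.origin === base.origin ? internal : external).add(url.href);
  }
  return { internal: [...internal], external: [...external] };
}
```

In a Puppeteer context, the hrefs themselves would come from the rendered DOM, e.g. `page.$$eval("a[href]", as => as.map(a => a.getAttribute("href")))`.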

Data Collected During Crawl

For every unique URL discovered and successfully crawled, the following critical data points are meticulously collected:

  • Page URL: The canonical URL of the page.
  • HTTP Status Code: The response code received from the server (e.g., 200 OK, 301 Redirect, 404 Not Found, 500 Server Error). This helps identify broken links or server issues.
  • Full HTML Content (DOM Snapshot): The complete rendered HTML of the page after all JavaScript has executed. This includes all elements, content, and attributes required for the subsequent SEO audit.
  • Internal & External Links: A list of all href attributes found on the page, categorized as internal (within your domain) or external (to other domains).
  • Initial Performance Metrics: Basic timing information related to page load, which will contribute to the Core Web Vitals assessment in a later step. This includes metrics like page load time and network requests.
  • Visual Snapshot (Optional): A screenshot of the fully rendered page can be captured (on demand or for specific failures) to visually inspect layout issues or dynamic content rendering.

Scope and Coverage

The crawler is designed to achieve comprehensive coverage of your website. It will recursively follow all internal links until all discoverable pages within your specified domain have been visited or a predefined depth limit (if configured) is reached. This ensures that no corner of your site is left unaudited.

Output of This Step

The immediate output of this "puppeteer → crawl" step is a raw dataset, a collection of all the aforementioned data points for each crawled URL. This structured data is then prepared to be passed to the subsequent step in the workflow, which will perform the actual SEO analysis against the 12-point checklist.

Value Proposition

This meticulous crawling process provides the indispensable foundation for an accurate and comprehensive SEO audit. By simulating a real browser, we ensure that:

  • Accuracy: Data reflects the live user experience, not just static server responses.
  • Completeness: All content, including dynamically loaded elements, is captured.
  • Reliability: Identifies issues that only appear during full page rendering.

Next Steps

The collected raw data from this crawling phase will now be passed to the next stage of the "Site SEO Auditor" workflow. In Step 2, this data will be systematically analyzed against the predefined 12-point SEO checklist to identify specific areas for improvement.

hive_db Output

Workflow Step: hive_db → diff (Site SEO Auditor)

This document details the execution of Step 2 of 5 for the "Site SEO Auditor" workflow, focusing on the critical "diff" operation performed against the hive_db (MongoDB). This step is crucial for understanding changes, identifying regressions, and tracking improvements in your site's SEO performance over time.


1. Introduction to the Diff Operation

Following the comprehensive audit conducted in Step 1 (crawling and SEO checklist evaluation), the "diff" operation compares the current audit results against the most recent prior audit report stored in our hive_db. This comparison provides a clear, actionable overview of what has changed on your site from an SEO perspective, highlighting new issues, resolved problems, and persistent areas for improvement.

The primary goal of this step is to generate a structured "before/after" analysis that will inform subsequent actions, such as generating specific fixes via Gemini and compiling your detailed SiteAuditReport.

2. Data Retrieval from hive_db (MongoDB)

To perform the diff, two key datasets are retrieved or prepared:

  • Current Audit Report (Post-Crawl & Audit): This dataset represents the complete, freshly generated audit results from the current execution. It contains detailed SEO metrics and status for every page discovered during the recent crawl.
  • Previous Audit Report (from hive_db): The system queries the SiteAuditReport collection in your dedicated MongoDB instance to retrieve the most recent successful audit report for your specific site. This report serves as the baseline for comparison.

      ◦ Selection Criteria: The system identifies the previous report by matching the site's unique identifier and selecting the document with the latest auditTimestamp that is older than the current audit's timestamp. If no previous report exists (e.g., a first-time audit), the diff effectively treats all current findings as "new issues."
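The selection criteria above amount to a simple query against the SiteAuditReport collection. A minimal sketch of the query builder follows; the collection and field names (siteUrl, auditTimestamp) are taken from this document, while the helper itself is illustrative.

```javascript
// Sketch: build the MongoDB query/sort/limit for the most recent prior audit.
function previousReportQuery(siteUrl, currentAuditTimestamp) {
  return {
    filter: { siteUrl, auditTimestamp: { $lt: currentAuditTimestamp } },
    sort: { auditTimestamp: -1 }, // newest first
    limit: 1,                     // only the most recent baseline
  };
}
// Usage with the Node.js driver (illustrative):
//   const q = previousReportQuery(site, now);
//   const prev = await db.collection("SiteAuditReport")
//     .find(q.filter).sort(q.sort).limit(q.limit).next();
```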

3. Comprehensive Diffing Mechanism

The diffing mechanism systematically compares the current and previous audit reports at both the page level and the individual SEO metric level.

3.1. Page-Level Comparison

The first layer of comparison identifies changes in the site's structure or discoverability:

  • New Pages Discovered: Pages present in the current audit that were not found in the previous audit. These could be newly published content, previously unlinked pages, or newly sitemapped URLs.
  • Pages No Longer Found: Pages present in the previous audit that are not found in the current audit. This could indicate pages being removed, redirected, experiencing crawl errors, or being temporarily inaccessible.
  • Existing Pages (Unchanged URLs): Pages that were present in both the current and previous audits. These pages are then subjected to a detailed metric-level comparison.
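The page-level partition described above is a straightforward set comparison. A minimal sketch, with output keys matching the diff categories used in this workflow:

```javascript
// Sketch: partition URLs from two audits into new, missing, and existing pages.
function diffPages(currentUrls, previousUrls) {
  const prev = new Set(previousUrls);
  const curr = new Set(currentUrls);
  return {
    newPagesDiscovered: [...curr].filter((u) => !prev.has(u)), // in current only
    pagesNoLongerFound: [...prev].filter((u) => !curr.has(u)), // in previous only
    existingPages: [...curr].filter((u) => prev.has(u)),       // in both audits
  };
}
```

Pages in `existingPages` then proceed to the metric-level comparison.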

3.2. Metric-Level Comparison for Existing Pages

For each existing page, the system compares every point of the 12-point SEO checklist:

  • Meta Title & Description Uniqueness:
      ◦ Change Detection: Identifies if a title/description has changed, or if its uniqueness status (e.g., previously unique, now duplicate) has changed.
      ◦ Issue Status: Tracks if a page's meta title/description has gone from "passing" to "failing" (e.g., missing, too long/short, duplicate) or from "failing" to "passing."
  • H1 Presence & Uniqueness:
      ◦ Change Detection: Notes changes in the H1 content or its presence/absence.
      ◦ Issue Status: Reports if H1 issues (missing, multiple, empty) have emerged or been resolved.
  • Image Alt Coverage:
      ◦ Change Detection: Quantifies the percentage of images with missing alt text.
      ◦ Issue Status: Flags if alt text coverage has worsened or improved.
  • Internal Link Density:
      ◦ Change Detection: Monitors significant changes in the number of internal links on a page.
      ◦ Issue Status: Highlights pages with unusually low or high internal link counts compared to the baseline.
  • Canonical Tags:
      ◦ Change Detection: Identifies changes in the canonical URL or the presence/absence of the tag.
      ◦ Issue Status: Reports on new or resolved canonicalization issues (e.g., self-referencing vs. external, missing).
  • Open Graph Tags (OG):
      ◦ Change Detection: Notes changes in critical OG tags (e.g., og:title, og:description, og:image).
      ◦ Issue Status: Flags missing or incorrect OG tags.
  • Core Web Vitals (LCP, CLS, FID):
      ◦ Change Detection: Provides a direct numerical comparison of LCP, CLS, and FID scores.
      ◦ Issue Status: Clearly indicates if a page's Core Web Vitals have moved from "Good" to "Needs Improvement" or "Poor," or vice versa, with specific delta values.
  • Structured Data Presence:
      ◦ Change Detection: Identifies changes in the type or presence of structured data.
      ◦ Issue Status: Reports if structured data has been added, removed, or if validation errors have emerged or been resolved.
  • Mobile Viewport Configuration:
      ◦ Change Detection: Confirms consistent and correct viewport meta tag configuration.
      ◦ Issue Status: Flags any new or resolved issues related to mobile responsiveness setup.
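Across all of these checklist items, the issue-status comparison reduces to the same small decision: given a metric's status in the previous and current audits, pick the diff bucket it belongs to. A minimal sketch (statuses and bucket names follow this document; treating a first-time audit as "everything new" falls out of passing `undefined` for the previous status):

```javascript
// Sketch: classify one checklist item's status transition into a diff bucket.
// previousStatus may be undefined on a first-time audit.
function classifyChange(previousStatus, currentStatus) {
  if (previousStatus !== "fail" && currentStatus === "fail") return "newIssues";
  if (previousStatus === "fail" && currentStatus !== "fail") return "resolvedIssues";
  if (previousStatus === "fail" && currentStatus === "fail") return "persistentIssues";
  return "unchangedPass"; // still passing; nothing to report
}
```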

4. Output of the Diff Operation: Structured Change Log

The output of this hive_db → diff step is a highly structured JSON object, embedded within the SiteAuditReport, that categorizes all identified changes. This structured output is optimized for subsequent processing by Gemini and for generating your user-facing report.

Key categories within the diff output include:

  • newIssues: A list of specific SEO problems found in the current audit that were not present in the previous audit. Each entry includes the URL, the specific SEO checklist item, and a brief description of the issue.
      ◦ Example: {"url": "/new-product-page", "issue": "Missing H1 Tag"}
  • resolvedIssues: A list of specific SEO problems that were present in the previous audit but are no longer found in the current audit. This highlights successful fixes and improvements.
      ◦ Example: {"url": "/old-blog-post", "issue": "Duplicate Meta Description"}
  • persistentIssues: A list of specific SEO problems that were present in the previous audit and continue to be present in the current audit. These are high-priority items that still require attention.
      ◦ Example: {"url": "/homepage", "issue": "Core Web Vitals: LCP Poor (3.5s)"}
  • newPagesDiscovered: A list of URLs that appeared in the current audit but not in the previous one.
  • pagesNoLongerFound: A list of URLs that were in the previous audit but are absent from the current one.
  • metricChanges: A detailed breakdown of numerical or status changes for key metrics, especially Core Web Vitals.
      ◦ Example: {"url": "/product-category", "metric": "LCP", "before": "2.8s", "after": "1.5s", "statusChange": "Needs Improvement -> Good"}
      ◦ Example: {"url": "/contact-us", "metric": "Image Alt Coverage", "before": "60%", "after": "95%"}

5. Actionability and Next Steps

The detailed diff output generated in this step is critical for the subsequent phases of the "Site SEO Auditor" workflow:

  • Gemini Fix Generation (Step 3): The newIssues and persistentIssues identified here will be directly fed to Gemini, which will then generate precise, actionable fixes for each broken element.
  • Comprehensive Reporting (Step 4 & 5): This diff data forms the core of your SiteAuditReport, providing a clear "before" and "after" comparison. This allows you to easily track the impact of your SEO efforts, identify regressions, and prioritize future optimization tasks.
  • Performance Monitoring: By systematically tracking changes, you gain invaluable insights into your site's SEO health trends, allowing for proactive intervention and strategic decision-making.

This concludes the hive_db → diff step, providing a robust foundation for automated fix generation and insightful reporting.

Additional fields returned with each generated fix (shown here for an og:image fix):

  • targetElementSelector: head
  • originalValue: null
  • explanation: "An og:image tag ensures that a visually appealing image is displayed when your page is shared on social media platforms like Facebook and LinkedIn, increasing engagement."
  • confidenceScore: 0.96

Impact and Benefits

This AI-powered fix generation step offers significant advantages:

  • Efficiency: Automates a traditionally time-consuming manual process of diagnosing and prescribing fixes.
  • Accuracy: Gemini's deep understanding of SEO best practices ensures high-quality, precise recommendations.
  • Actionability: Provides "exact fixes" that can be directly implemented, minimizing guesswork for development teams.
  • Scalability: Handles large batches of issues across sites of any size, making it suitable for large enterprise websites.
  • Cost Reduction: Reduces the need for extensive manual SEO audits and expert consultation for common issues.

Integration and Next Steps

Once Gemini has generated all the batch fixes, this output is then passed to the final steps of the workflow:

  • Storage in MongoDB: The complete set of generated fixes, along with the original audit report, is stored in MongoDB as part of the SiteAuditReport document. This allows for historical tracking and the creation of "before/after" diffs.
  • "Before/After" Diff: The system will use these proposed fixes to simulate a "fixed" state, comparing it against the original audit to demonstrate the potential improvements (Step 4 of 5).
  • Reporting and Notifications: The generated fixes will be presented in a comprehensive report, often coupled with a ticketing system integration or direct notifications to relevant teams for implementation.

Step 4 of 5: hive_db → upsert - Site Audit Report Data Persistence

This document details the execution and outcomes of Step 4 in your "Site SEO Auditor" workflow: the hive_db → upsert operation. This crucial step is responsible for securely storing your site's comprehensive SEO audit results within our robust MongoDB database, making them accessible for analysis, historical tracking, and future comparisons.


1. Purpose of the hive_db → upsert Step

The hive_db → upsert operation serves as the data persistence layer for your SEO audit. After our headless crawler (powered by Puppeteer) meticulously audits every page on your site and Gemini generates precise fixes for identified issues, this step ensures that all collected data is:

  • Stored Reliably: Your audit reports are saved in a structured format in MongoDB.
  • Accessible On-Demand: You can retrieve historical and current audit reports at any time.
  • Trackable: By using an upsert (update if exists, insert if not) mechanism, we can maintain a history of your site's SEO performance over time.
  • Diffable: This step enables the generation of "before/after" diffs, providing clear insights into changes between audit runs.
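The upsert described above can be sketched with the MongoDB Node.js driver's `updateOne` and `{ upsert: true }`. Keying the document on auditId (one document per run) is an assumption here; a real deployment might key on siteUrl plus timestamp instead.

```javascript
// Sketch: persist a SiteAuditReport via update-if-exists, insert-if-not.
async function saveAuditReport(collection, report) {
  return collection.updateOne(
    { auditId: report.auditId }, // match an existing document for this run
    { $set: report },            // overwrite fields with the fresh report
    { upsert: true }             // insert if no document matched
  );
}
```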

2. Data Structure: The SiteAuditReport Document

Each time an audit is completed, a comprehensive SiteAuditReport document is generated and stored. This document is meticulously structured to capture all relevant SEO metrics, issues, and proposed fixes. Below is a detailed overview of the structure and the information it contains:

2.1. Audit Metadata

This section provides general information about the audit run itself.

  • auditId (String): A unique identifier for this specific audit instance.
  • siteUrl (String): The base URL of the website that was audited (e.g., https://www.yourwebsite.com).
  • auditTimestamp (Date): The exact date and time when this audit was completed.
  • auditTrigger (String): Indicates how the audit was initiated (e.g., "scheduled" for Sunday 2 AM runs, or "on-demand" for manual triggers).
  • overallStatus (String): A high-level summary of the audit's completion status (e.g., "completed", "completed_with_issues", "failed").
  • totalPagesAudited (Number): The total number of unique pages successfully crawled and audited.

2.2. Page-Level Audit Details (pagesAudited Array)

This is an array where each object represents the detailed audit findings for a specific page on your site.

  • pageUrl (String): The full URL of the page being reported on.
  • statusCode (Number): The HTTP status code returned when accessing the page (e.g., 200 for OK, 404 for Not Found).
  • auditDetails (Object): A comprehensive object containing results for each of the 12 SEO checklist items for this specific page. Each item typically includes:
      ◦ status (String): "pass", "fail", or "not_applicable".
      ◦ value (String/Object): The actual data found (e.g., the meta title content, canonical URL).
      ◦ issues (Array of Objects): If status is "fail", this array lists specific problems found. Each issue object includes:
          ▪ type (String): e.g., "length", "missing", "duplicate".
          ▪ severity (String): "low", "medium", "high".
          ▪ description (String): A human-readable explanation of the issue.
      ◦ geminiFix (Object): If issues are found, this object contains the precise, AI-generated fix by Gemini:
          ▪ title (String): A summary of the fix.
          ▪ rationale (String): Explanation of why this fix is recommended.
          ▪ codeSnippet (String): The exact code to implement the fix (e.g., <title>Your New Title</title>).

Specific Audit Items Included:

  • metaTitle: Uniqueness, length, presence.
  • metaDescription: Uniqueness, length, presence.
  • h1Presence: Presence of a single H1 tag, content.
  • imageAltCoverage: All images have alt attributes, alt content relevance.
  • internalLinkDensity: Number and distribution of internal links.
  • canonicalTag: Presence, correctness, self-referencing.
  • openGraphTags: Presence and validity of essential OG tags (title, description, image, type, URL).
  • coreWebVitals: Performance metrics for Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), and First Input Delay (FID).
  • structuredDataPresence: Detection of schema.org markup and its validity.
  • mobileViewport: Correct configuration of the viewport meta tag for mobile responsiveness.

  • pageIssuesSummary (Array of Strings): A concise list of all issues identified on this specific page.
  • pageFixesSummary (Array of Strings): A concise list of all Gemini-generated fixes for this specific page.

2.3. Overall Site Summary

This section provides aggregated insights across your entire website.

  • overallSeoScore (Number): A calculated score (e.g., out of 100) reflecting the overall SEO health of your site based on the audit.
  • totalIssuesFound (Number): The cumulative count of all unique issues identified across all audited pages.
  • totalFixesGenerated (Number): The total count of actionable fixes proposed by Gemini for your site.
  • topIssues (Array of Objects): A list of the most frequently occurring issues across your site, helping you prioritize fixes.
  • uniquenessReport (Object): Specific details on uniqueness checks for meta titles and descriptions across the entire site, including lists of duplicate pages.
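How overallSeoScore is computed is not specified in this document; one plausible sketch derives a score out of 100 from severity-weighted issue counts, normalized by site size. The weights and scaling below are purely illustrative assumptions.

```javascript
// Illustrative sketch only: severity-weighted score out of 100.
// Weights and the per-page scaling factor are assumptions, not the
// auditor's actual formula.
function overallSeoScore(issues, pagesAudited) {
  const weights = { low: 1, medium: 3, high: 5 };
  const penalty = issues.reduce((sum, i) => sum + (weights[i.severity] || 1), 0);
  // Normalize by site size so large sites aren't penalized for page count alone.
  const perPage = pagesAudited > 0 ? penalty / pagesAudited : penalty;
  return Math.max(0, Math.round(100 - perPage * 10));
}
```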

2.4. Before/After Diff Analysis (diffReport Object)

This is a powerful feature enabled by the upsert mechanism. When a new audit is run, we compare its results against the most recent previous audit for your site.

  • newIssues (Array of Objects): Details of issues that were not present in the previous audit but are identified in the current one.
  • resolvedIssues (Array of Objects): Details of issues that were present in the previous audit but are now successfully resolved in the current one.
  • changedMetrics (Object): Quantifiable changes in key performance indicators (e.g., LCP improved by X ms, number of missing alt tags reduced by Y).
  • overallScoreChange (Number): The delta (increase or decrease) in your overallSeoScore compared to the previous audit.
  • previousAuditId (String): The auditId of the report used for comparison.

3. The Upsert Mechanism in Action

When Step 4 executes, the following logic is applied:

  1. Retrieve Previous Audit: The system first queries MongoDB to find the most recent SiteAuditReport associated with your siteUrl.
  2. Generate Diff: If a previous report is found, it's used as the "before" state.

Step 5 of 5: hive_db → conditional_update - Site SEO Auditor Report Finalization

This document details the successful completion of the "Site SEO Auditor" workflow, specifically focusing on the final database update step. All audit data, analysis, and generated fixes have been processed and are now being securely stored and made accessible.


1. Introduction to the Final Step

The conditional_update operation is the crucial final stage of the Site SEO Auditor workflow. Following the comprehensive crawling, 12-point SEO analysis, Core Web Vitals assessment, and AI-driven fix generation by Gemini, this step is responsible for persisting all the gathered insights into our secure MongoDB database. This ensures your audit results are stored reliably, are easily retrievable, and can be compared against previous audits to track progress over time.

2. Workflow Context & Preceding Operations

Before reaching this conditional_update step, the following critical operations have been successfully executed:

  • Headless Crawling (Puppeteer): Your website was thoroughly crawled, visiting every accessible page to gather raw data.
  • 12-Point SEO Checklist Audit: Each page was meticulously audited against key SEO metrics, including:
      ◦ Meta Title Presence & Uniqueness
      ◦ Meta Description Presence & Uniqueness
      ◦ H1 Tag Presence & Best Practices
      ◦ Image Alt Attribute Coverage
      ◦ Internal Link Density & Quality
      ◦ Canonical Tag Implementation
      ◦ Open Graph (OG) Tags for Social Sharing
      ◦ Core Web Vitals (LCP, CLS, FID) Performance
      ◦ Structured Data (Schema.org) Presence
      ◦ Mobile Viewport Configuration

  • AI-Powered Fix Generation (Gemini): For every identified issue or broken element, our integrated Gemini AI model generated precise, actionable fix recommendations.
  • Data Aggregation & Structuring: All raw data, audit findings, and Gemini-generated fixes were consolidated into a structured SiteAuditReport object, ready for storage.

3. Database Operation Details: conditional_update

This step performs a sophisticated database operation to manage your SEO audit reports:

  • Report Storage: The newly generated SiteAuditReport document, containing all the detailed audit results for every page, overall summaries, and AI-generated fix suggestions, is now being stored in your dedicated MongoDB collection.
  • Conditional Logic: The conditional_update aspect ensures intelligent handling of your audit history:
      ◦ New Audit: If this is the first audit for your site, a new SiteAuditReport document is created and inserted into the database.
      ◦ Subsequent Audits: If previous audit reports exist, the system retrieves the most recent prior report.
  • Before/After Diff Generation: A critical feature of this step is the automatic generation of a "before/after" diff. This involves comparing the current audit report with the most recent previous report (if available). The diff identifies:
      ◦ New Issues: Problems detected in the current audit that were not present in the previous one.
      ◦ Resolved Issues: Problems identified in the previous audit that are no longer present, indicating successful remediation.
      ◦ Persisting Issues: Issues that remain unresolved across both audits.
      ◦ Metric Changes: Any significant fluctuations in performance metrics (e.g., Core Web Vitals scores).

This diff is then embedded directly into the new SiteAuditReport document, providing immediate context on your SEO progress.

  • Data Structure: The SiteAuditReport document includes comprehensive fields such as:
      ◦ auditId: A unique identifier for this specific audit run.
      ◦ siteUrl: The URL of the audited website.
      ◦ timestamp: The exact date and time the audit was completed.
      ◦ overallSummary: High-level statistics and a summary of critical issues.
      ◦ pagesAudited: An array of detailed audit results for each individual page, including all 12 SEO points, their status (pass/fail), specific issues, and Gemini's fix suggestions.
      ◦ previousAuditId: A reference to the prior audit report, enabling historical tracking.
      ◦ diffReport: The generated "before/after" comparison, highlighting changes from the last audit.

4. Key Outcomes & Deliverables

Upon completion of this step, the following outcomes are delivered to you:

  • Comprehensive SEO Audit Report: A complete and detailed report for your website, covering all 12 critical SEO points.
  • Actionable Fixes: For every identified issue, you receive precise, AI-generated recommendations on how to resolve them.
  • Historical Tracking: The report is linked to previous audits, allowing you to monitor your SEO performance trends over time.
  • Progress Visualization: The integrated "before/after diff" provides an immediate overview of improvements made and new issues that may have arisen since the last audit.
  • Secure & Accessible Data: All your audit data is securely stored in your dedicated MongoDB instance, ready for retrieval and analysis through your PantheraHive dashboard or API.

5. Next Steps & Report Accessibility

Your latest Site SEO Audit Report is now fully processed and stored.

  • Report Access: You can access the detailed report, including all individual page audits, issue summaries, and Gemini's fix suggestions, directly through your PantheraHive dashboard. Look for the "Site SEO Auditor" section to view your latest report and its historical comparisons.
  • Automated Scheduling: Remember, this audit will automatically run every Sunday at 2 AM, providing continuous monitoring of your site's SEO health.
  • On-Demand Audits: You can also trigger a new audit at any time via your PantheraHive interface, for immediate insights after implementing changes or before major launches.

6. Summary

Step 5, the conditional_update to hive_db, marks the successful conclusion of the "Site SEO Auditor" workflow. Your website has been thoroughly audited, issues identified, fixes generated by AI, and all this valuable information has been securely stored. You now have a comprehensive, actionable SEO roadmap and the tools to track your progress effectively.

\n\n\n"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react'\nimport ReactDOM from 'react-dom/client'\nimport App from './App'\nimport './index.css'\n\nReactDOM.createRoot(document.getElementById('root')!).render(\n \n \n \n)\n"); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react'\nimport './App.css'\n\nfunction App(){\n return(\n
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n
\n )\n}\nexport default App\n"); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e}\n.app{min-height:100vh;display:flex;flex-direction:column}\n.app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px}\nh1{font-size:2.5rem;font-weight:700}\n"); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\n## Open in IDE\nOpen the project folder in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "type": "module",\n "scripts": {\n "dev": "vite",\n "build": "vue-tsc -b && vite build",\n "preview": "vite preview"\n },\n "dependencies": {\n "vue": "^3.5.13",\n "vue-router": "^4.4.5",\n "pinia": "^2.3.0",\n "axios": "^1.7.9"\n },\n "devDependencies": {\n "@vitejs/plugin-vue": "^5.2.1",\n "typescript": "~5.7.3",\n "vite": "^6.0.5",\n "vue-tsc": "^2.2.0"\n }\n}\n'); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite'\nimport vue from '@vitejs/plugin-vue'\nimport { resolve } from 'path'\n\nexport default defineConfig({\n plugins: [vue()],\n resolve: { alias: { '@': resolve(__dirname,'src') } }\n})\n"); 
zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]}\n'); zip.file(folder+"tsconfig.app.json",'{\n "compilerOptions":{\n "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"],\n "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true,\n "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue",\n "strict":true,"paths":{"@/*":["./src/*"]}\n },\n "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"]\n}\n'); zip.file(folder+"tsconfig.node.json",'{\n "compilerOptions":{"target":"ES2022","lib":["ES2023"],"module":"ESNext","skipLibCheck":true,"moduleResolution":"bundler","isolatedModules":true,"moduleDetection":"force","noEmit":true,"strict":true},\n "include":["vite.config.ts"]\n}\n'); zip.file(folder+"env.d.ts","/// <reference types=\"vite/client\" />\n"); zip.file(folder+"index.html","<!doctype html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\" />\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n <title>"+slugTitle(pn)+"</title>\n</head>\n<body>\n <div id=\"app\"></div>\n <script type=\"module\" src=\"/src/main.ts\"><\/script>\n</body>\n</html>\n"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue'\nimport { createPinia } from 'pinia'\nimport App from './App.vue'\nimport './assets/main.css'\n\nconst app = createApp(App)\napp.use(createPinia())\napp.mount('#app')\n"); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue","<script setup lang=\"ts\">\nconst title = \""+slugTitle(pn)+"\"\n<\/script>\n\n<template>\n <main class=\"app\">\n <h1>{{ title }}</h1>\n <p>Built with PantheraHive BOS</p>\n </main>\n</template>\n\n<style scoped>\n.app{min-height:100vh;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px}\nh1{font-size:2.5rem;font-weight:700}\n</style>\n"); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547}\n"); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\nOpen in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "scripts": {\n "ng": "ng",\n "start": "ng serve",\n "build": "ng build",\n "test": "ng test"\n },\n "dependencies": {\n "@angular/animations": "^19.0.0",\n "@angular/common": "^19.0.0",\n "@angular/compiler": "^19.0.0",\n "@angular/core": "^19.0.0",\n "@angular/forms": "^19.0.0",\n "@angular/platform-browser": "^19.0.0",\n "@angular/platform-browser-dynamic": "^19.0.0",\n "@angular/router": "^19.0.0",\n "rxjs": "~7.8.0",\n "tslib": "^2.3.0",\n "zone.js": "~0.15.0"\n },\n "devDependencies": {\n 
"@angular-devkit/build-angular": "^19.0.0",\n "@angular/cli": "^19.0.0",\n "@angular/compiler-cli": "^19.0.0",\n "typescript": "~5.6.0"\n }\n}\n'); zip.file(folder+"angular.json",'{\n "$schema": "./node_modules/@angular/cli/lib/config/schema.json",\n "version": 1,\n "newProjectRoot": "projects",\n "projects": {\n "'+pn+'": {\n "projectType": "application",\n "root": "",\n "sourceRoot": "src",\n "prefix": "app",\n "architect": {\n "build": {\n "builder": "@angular-devkit/build-angular:application",\n "options": {\n "outputPath": "dist/'+pn+'",\n "index": "src/index.html",\n "browser": "src/main.ts",\n "tsConfig": "tsconfig.app.json",\n "styles": ["src/styles.css"],\n "scripts": []\n }\n },\n "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"}\n }\n }\n }\n}\n'); zip.file(folder+"tsconfig.json",'{\n "compileOnSave": false,\n "compilerOptions": {"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]},\n "references":[{"path":"./tsconfig.app.json"}]\n}\n'); zip.file(folder+"tsconfig.app.json",'{\n "extends":"./tsconfig.json",\n "compilerOptions":{"outDir":"./dist/out-tsc","types":[]},\n "files":["src/main.ts"],\n "include":["src/**/*.d.ts"]\n}\n'); zip.file(folder+"src/index.html","<!doctype html>\n<html lang=\"en\">\n<head>\n <meta charset=\"utf-8\" />\n <title>"+slugTitle(pn)+"</title>\n <base href=\"/\" />\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n</head>\n<body>\n <app-root></app-root>\n</body>\n</html>\n"); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser';\nimport { appConfig } from
'./app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch(err => console.error(err));\n"); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; }\nbody { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; }\n"); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = '"+pn+"';\n}\n"); zip.file(folder+"src/app/app.component.html","
<div class=\"app\">\n <header class=\"app-header\">\n <h1>"+slugTitle(pn)+"</h1>\n <p>Built with PantheraHive BOS</p>\n </header>\n <router-outlet></router-outlet>\n</div>
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(\""+title+" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("<!doctype")>=0||code.trim().toLowerCase().indexOf("<html")>=0; var indexHtml=isFullDoc?code:"<!doctype html>\n<html lang=\"en\">\n<head>\n<meta charset=\"UTF-8\" />\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n<title>"+title+"</title>\n<link rel=\"stylesheet\" href=\"style.css\" />\n</head>\n<body>\n"+code+"\n<script src=\"script.js\"><\/script>\n</body>\n</html>\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h="<!doctype html>\n<html lang=\"en\">\n<head>\n<meta charset=\"UTF-8\">\n<title>"+title+"</title>\n<style>body{font-family:system-ui,-apple-system,sans-serif;max-width:760px;margin:40px auto;padding:0 20px;line-height:1.6;color:#1a1a2e}</style>\n</head>\n"; h+="<body>\n<h1>"+title+"</h1>\n"; var hc=md.replace(/&/g,"&amp;").replace(/</g,"&lt;").replace(/>/g,"&gt;"); hc=hc.replace(/^### (.+)$/gm,"<h3>$1</h3>"); hc=hc.replace(/^## (.+)$/gm,"<h2>$1</h2>"); hc=hc.replace(/^# (.+)$/gm,"<h1>$1</h1>"); hc=hc.replace(/\*\*(.+?)\*\*/g,"<strong>$1</strong>"); hc=hc.replace(/\n{2,}/g,"</p>\n<p>"); h+="<p>"+hc+"</p>\n<footer>Generated by PantheraHive BOS</footer>\n</body>\n</html>\n
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='<iframe src="'+embedUrl+'" width="100%" height="600" frameborder="0"><\/iframe>';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}