"+slugTitle(pn)+"
\nBuilt with PantheraHive BOS
hive_db → diff - Generating Site SEO Audit Differences

This document details the successful execution and output of Step 2 in the "Site SEO Auditor" workflow. In this phase, the system compares the newly completed SEO audit report against the most recent previous audit report stored in our hive_db (MongoDB). This 'diff' operation is essential for identifying changes, improvements, and regressions in your site's SEO performance over time.
The primary objective of the hive_db → diff step is to provide a clear, actionable, and historical perspective on your site's SEO health. By meticulously comparing the current audit results against the previous baseline, we can pinpoint specific areas that have improved, new issues that have emerged, or persistent problems that still require attention. This comparative analysis transforms raw audit data into insightful intelligence, driving targeted optimization efforts.
This step involves a series of sub-processes to ensure an accurate and comprehensive comparison:
* The system queries hive_db (MongoDB) to retrieve the latest completed SiteAuditReport for your domain. This report represents the current state of your site's SEO.
* Concurrently, the system retrieves the immediately preceding SiteAuditReport for your domain. This serves as the historical baseline for comparison. If no previous report exists (e.g., first-ever audit), the system will treat all current findings as 'new issues'.
* For each URL crawled in the current audit, the system attempts to find a corresponding URL in the previous audit.
* If a URL is new, all its findings are marked as 'new'. If a URL was removed, its previous issues are marked as 'resolved' (contextually, as the page no longer exists).
* For each identified URL, the system iterates through all 12 points of the SEO checklist:
* Meta Title Uniqueness
* Meta Description Uniqueness
* H1 Presence & Uniqueness
* Image Alt Coverage
* Internal Link Density
* Canonical Tag Presence & Correctness
* Open Graph Tag Presence & Correctness
* Core Web Vitals (LCP, CLS, FID scores)
* Structured Data Presence & Validity
* Mobile Viewport Meta Tag Presence
* Broken Elements (e.g., broken links, missing images)
* Robots.txt & Sitemap.xml accessibility (site-wide)
* For each metric, a comparison is made between the current value/status and the previous value/status.
* New Issues: Problems identified in the current audit that were *not* present in the previous audit for a specific page/metric.
* Resolved Issues: Problems identified in the previous audit that are *no longer* present in the current audit for a specific page/metric.
* Persistent Issues: Problems that were present in the previous audit and remain present in the current audit. These are critical areas needing attention.
* Improvements: Positive changes in quantitative metrics (e.g., faster LCP, higher image alt coverage percentage).
* Regressions: Negative changes in quantitative metrics (e.g., slower LCP, lower image alt coverage percentage).
* No Change: Metrics that remain identical between audits.
* A structured SiteAuditDiffReport document is generated, encapsulating all identified changes. This document is then stored in hive_db for historical tracking and subsequent processing.
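The classification rules above can be sketched as a pure function. This is a minimal sketch: the `status`/`value` fields and the `higherIsBetter` flag are illustrative assumptions, not the actual hive_db schema.

```javascript
// Classify one metric's change between two audits, per the categories above.
// prev / curr are { status: 'pass' | 'fail', value?: number }, or null when
// the page or metric is absent from that audit. Field names are illustrative.
function classifyMetricChange(prev, curr, higherIsBetter = true) {
  if (!prev && !curr) return 'no_change';
  if (!prev && curr) return curr.status === 'fail' ? 'new_issue' : 'no_change';
  if (prev && !curr) return prev.status === 'fail' ? 'resolved' : 'no_change';
  if (prev.status === 'pass' && curr.status === 'fail') return 'new_issue';
  if (prev.status === 'fail' && curr.status === 'pass') return 'resolved';
  if (prev.status === 'fail' && curr.status === 'fail') return 'persistent';
  // Both pass: compare quantitative values where available. Whether a larger
  // number is better depends on the metric (alt coverage: yes, LCP: no).
  if (typeof prev.value === 'number' && typeof curr.value === 'number') {
    const delta = curr.value - prev.value;
    if (delta !== 0) {
      return (higherIsBetter ? delta > 0 : delta < 0) ? 'improvement' : 'regression';
    }
  }
  return 'no_change';
}
```

Note how the same numeric delta maps to either category depending on the metric's direction, which is why LCP growing from 2.0s to 3.5s is a regression while alt coverage growing is an improvement.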
The primary output of this step is a comprehensive SiteAuditDiffReport object, stored within hive_db. This report serves as the canonical record of changes between audits and is structured to facilitate further automated actions.
SiteAuditDiffReport Structure (Conceptual){
"_id": "unique_diff_id",
"siteId": "your_site_id",
"currentAuditId": "id_of_current_audit_report",
"previousAuditId": "id_of_previous_audit_report",
"auditDate": "YYYY-MM-DDTHH:MM:SS.sssZ", // Date of the current audit
"summary": {
"totalNewIssues": 15,
"totalResolvedIssues": 8,
"totalPersistentIssues": 30,
"totalImprovements": 5,
"totalRegressions": 2,
"overallHealthChange": "Neutral" // e.g., "Improved", "Declined", "Neutral"
},
"changesByPage": [
{
"url": "https://www.yourdomain.com/example-page-1",
"newIssues": [
{
"metric": "H1_PRESENCE",
"description": "Missing H1 tag.",
"severity": "High"
},
{
"metric": "CORE_WEB_VITALS_LCP",
"description": "LCP increased from 2.0s to 3.5s (poor).",
"severity": "Medium"
}
],
"resolvedIssues": [
{
"metric": "IMAGE_ALT_COVERAGE",
"description": "All images now have alt text.",
"severity": "Low"
}
],
"persistentIssues": [
{
"metric": "META_DESCRIPTION_UNIQUENESS",
"description": "Duplicate meta description.",
"severity": "Medium"
}
],
"improvements": [
{
"metric": "CORE_WEB_VITALS_CLS",
"description": "CLS improved from 0.15 to 0.05.",
"delta": -0.10
}
],
"regressions": [
{
"metric": "INTERNAL_LINK_DENSITY",
"description": "Internal links decreased from 10 to 5.",
"delta": -5
}
]
},
{
"url": "https://www.yourdomain.com/another-page",
// ... similar structure for other pages
}
],
"siteWideChanges": {
"robotsTxt": {
"status": "No Change", // or "New Issue", "Resolved Issue"
"details": "No changes detected in robots.txt content or accessibility."
},
"sitemapXml": {
"status": "New Issue",
"details": "Sitemap.xml is now inaccessible, previously accessible."
}
}
}
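Given the changesByPage array above, the summary block can be derived mechanically. The following is a sketch under the assumption that overall health is judged by comparing resolved-plus-improved counts against new-plus-regressed counts; the real scoring may weight severities differently.

```javascript
// Roll the per-page arrays of a SiteAuditDiffReport up into the summary
// block shown in the conceptual structure above.
function buildSummary(changesByPage) {
  const count = (key) =>
    changesByPage.reduce((n, page) => n + (page[key] ? page[key].length : 0), 0);
  const totals = {
    totalNewIssues: count('newIssues'),
    totalResolvedIssues: count('resolvedIssues'),
    totalPersistentIssues: count('persistentIssues'),
    totalImprovements: count('improvements'),
    totalRegressions: count('regressions'),
  };
  // Simple balance heuristic (assumption): positive movement vs. negative.
  const score = (totals.totalResolvedIssues + totals.totalImprovements) -
                (totals.totalNewIssues + totals.totalRegressions);
  totals.overallHealthChange = score > 0 ? 'Improved' : score < 0 ? 'Declined' : 'Neutral';
  return totals;
}
```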
This document details the successful execution and deliverables for "Step 1: puppeteer → crawl" of your Site SEO Auditor workflow. This crucial initial phase involves comprehensively traversing your website to discover all accessible pages, laying the foundation for the subsequent in-depth SEO analysis.
The primary objective of this step is to act as a headless crawler, systematically visiting every page on your specified website. Utilizing Puppeteer, a Node.js library that provides a high-level API to control headless Chrome or Chromium, we simulate a real user's browser experience. This ensures that not only static HTML but also dynamically rendered content (JavaScript-driven pages) are fully discovered and captured for auditing.
Objective: To generate a complete and accurate inventory of all unique, discoverable URLs on your website, along with their raw HTML content and initial HTTP status codes, serving as the foundational dataset for the 12-point SEO checklist audit.
robots.txt Adherence: The crawler strictly respects your website's robots.txt file, ensuring that only pages permitted for crawling are accessed, maintaining ethical and compliant behavior.

The primary input required to initiate this step is:
* Your website's root URL (e.g., https://www.yourwebsite.com). This was provided as part of the initial setup.

Upon successful completion of the crawling phase, the raw data for the next auditing steps is generated: the inventory of unique URLs, their raw HTML content, and their initial HTTP status codes.
This raw data is securely stored in a temporary staging area, ready to be processed by the subsequent SEO auditing logic.
The output from this "puppeteer → crawl" step is the direct input for "Step 2: SEO Audit & Analysis". The collected raw HTML and associated data for each URL will now be systematically analyzed against the 12-point SEO checklist, identifying specific areas for improvement.
summary: Provides a high-level overview of the audit's delta, indicating the overall trend of your SEO performance.

changesByPage: This is the most granular and actionable section. Each entry corresponds to a specific URL and details all SEO changes detected on that page:
* newIssues: A directly actionable list of new problems that need immediate attention. These are prime candidates for automated fix generation by Gemini.
* resolvedIssues: Confirmation that previous efforts or site updates have successfully addressed identified SEO problems. This provides valuable feedback on implemented changes.
* persistentIssues: Highlights long-standing problems that have not yet been resolved. These often require more in-depth investigation or strategic planning.
* improvements: Quantifiable positive shifts in metrics.
* regressions: Quantifiable negative shifts in metrics, indicating potential new problems or areas where previous optimizations have deteriorated.
siteWideChanges: Captures changes related to global site configurations like robots.txt and sitemap.xml, which affect the entire domain.

The generated SiteAuditDiffReport is a critical artifact that directly informs the subsequent steps in the "Site SEO Auditor" workflow:
* newIssues and persistentIssues identified in this diff report, particularly those related to structured content (meta tags, H1s, alt text), are automatically fed into Gemini (our AI assistant). Gemini then analyzes these specific issues and generates precise, actionable code fixes or content recommendations.
* Storing every diff report in hive_db allows for long-term trend analysis, enabling you to track your SEO progress over months and years.

This completes the hive_db → diff step, providing a robust and detailed comparison of your site's SEO performance.
Gemini AI Fix Generation (gemini → batch_generate)

This step leverages Google's Gemini AI model to automatically generate precise, actionable fixes for all identified SEO issues. Following the comprehensive audit performed by our headless crawler, any detected "broken elements" or non-compliant SEO attributes are systematically fed into Gemini. The AI then processes these issues in batches, providing exact recommendations and code snippets to resolve them.
The gemini → batch_generate step is the intelligence core of our Site SEO Auditor. Its primary purpose is to transform raw audit findings into concrete, implementable solutions. Instead of simply reporting problems, this step empowers you with immediate, AI-generated remedies, significantly accelerating the SEO optimization process.
Key Objectives:
The input for Gemini is meticulously structured to provide maximum context for accurate fix generation. Each identified SEO issue from the audit (e.g., missing H1, duplicate meta description, image without alt text, Core Web Vitals degradation) is packaged with relevant page data.
Typical Input Data Points for Each Issue:
* issue_type: A standardized issue identifier (e.g., MISSING_H1, DUPLICATE_META_DESCRIPTION, NO_ALT_TEXT, LCP_THRESHOLD_EXCEEDED).

Example Input for a Missing H1 Issue:
{
"url": "https://www.yourdomain.com/blog/article-title-example",
"issue_type": "MISSING_H1",
"issue_description": "Page is missing a primary H1 heading.",
"context": {
"page_title": "Understanding SEO Best Practices for 2024",
"main_content_snippet": "This article delves into the latest SEO strategies...",
"existing_headings": ["<h2>Introduction</h2>", "<h3>What's New?</h3>"]
}
}
Gemini's advanced natural language understanding and code generation capabilities are central to this step. It acts as an expert SEO consultant, analyzing each issue within its context and proposing the most effective solution.
Gemini's Processing Logic:
* Meta Tags: If a meta description is missing or duplicated, Gemini generates a unique, compelling description based on the page's main content and title, adhering to character limits.
* H1 Tags: If an H1 is missing, it proposes an appropriate H1 text derived from the page title or main content, ensuring it's semantically relevant and unique.
* Image Alt Text: For images without alt text, Gemini analyzes the image's context (e.g., surrounding text, image filename) to generate descriptive and keyword-rich alt attributes.
* Internal Linking: For low internal link density, it suggests relevant anchor texts and target pages within your site, based on content similarity.
* Canonical Tags: If canonical issues are detected, it suggests the correct canonical URL.
* Open Graph Tags: For missing or incorrect OG tags, it generates appropriate og:title, og:description, og:image, etc., based on page content.
* Structured Data: For pages that could benefit from structured data (e.g., articles, products, FAQs), Gemini generates the appropriate JSON-LD schema markup.
* Core Web Vitals: For LCP/CLS/FID issues, Gemini analyzes the underlying cause (e.g., large images, render-blocking resources, layout shifts) and suggests specific optimizations (e.g., image compression, lazy loading, CSS/JS minification, font preloading).
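Each issue object is turned into a model prompt before being sent to Gemini. The wording below is a hypothetical sketch of that assembly step, not the production prompt.

```javascript
// Assemble a fix-generation prompt from one audit issue object, using the
// same fields as the example input shown earlier (url, issue_type,
// issue_description, context). Prompt wording is an illustrative assumption.
function buildFixPrompt(issue) {
  const lines = [
    'You are an expert SEO consultant. Propose an exact, implementable fix.',
    `URL: ${issue.url}`,
    `Issue type: ${issue.issue_type}`,
    `Description: ${issue.issue_description}`,
  ];
  if (issue.context) {
    // Serialize page context (title, content snippet, existing headings).
    lines.push('Context: ' + JSON.stringify(issue.context));
  }
  lines.push('Return only the corrected HTML snippet or recommendation.');
  return lines.join('\n');
}
```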
The output from Gemini is designed to be directly implementable. It provides the "exact fix" required, minimizing the effort for your development or content teams.
Examples of Generated Fixes:
* Generated Fix (HTML Snippet):
<!-- Proposed H1 to be inserted at the top of the main content area -->
<h1>Understanding SEO Best Practices for 2024</h1>
* Generated Fix (Meta Tag):
<!-- New, unique meta description for https://www.yourdomain.com/blog/article-title-example -->
<meta name="description" content="Explore the latest SEO strategies for 2024, covering core web vitals, AI content optimization, and effective link-building tactics to boost your search rankings." />
* Generated Fix (HTML Attribute Update):
<!-- Update for an image at https://www.yourdomain.com/images/seo-trends.webp -->
<img src="/images/seo-trends.webp" alt="Graph showing increasing SEO trends and strategies for 2024" />
* Generated Fix (Open Graph og:image Tag):
<!-- New Open Graph image tag for social sharing -->
<meta property="og:image" content="https://www.yourdomain.com/images/social-share-image.jpg" />
<meta property="og:image:alt" content="Visual summary of SEO best practices" />
(Note: Gemini might also suggest a suitable image URL if a default is configured or inferrable.)
* Generated Fix (Textual Recommendation + HTML):
**Recommendation:** Consider adding an internal link from this page (https://www.yourdomain.com/blog/article-title-example) to related content on 'Keyword Research'.
**Proposed Insertion Point:** Within the "What's New?" section.
**Proposed Link:**
<p>Learn more about effective <a href="/seo-guides/keyword-research">keyword research strategies</a> to enhance your content visibility.</p>
* Generated Fix (JSON-LD Snippet):
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "Article",
"headline": "Understanding SEO Best Practices for 2024",
"image": [
"https://www.yourdomain.com/images/seo-trends.webp"
],
"datePublished": "2024-03-15T08:00:00+08:00",
"dateModified": "2024-03-15T09:20:00+08:00",
"author": {
"@type": "Person",
"name": "PantheraHive SEO Team"
},
"publisher": {
"@type": "Organization",
"name": "YourDomain.com",
"logo": {
"@type": "ImageObject",
"url": "https://www.yourdomain.com/logo.png"
}
},
"description": "Explore the latest SEO strategies for 2024, covering core web vitals, AI content optimization, and effective link-building tactics..."
}
</script>
* Generated Fix (Technical Recommendation):
**Recommendation:** The primary cause of high LCP on https://www.yourdomain.com/product/xyz appears to be a large hero image (image-hero-xyz.jpg).
**Actionable Fixes:**
1. **Optimize Image:** Compress `image-hero-xyz.jpg` using modern formats (WebP, AVIF).
2. **Lazy Load (if below fold):** Implement `loading="lazy"` if the image is not immediately visible on page load.
3. **Preload (if critical):** Add `<link rel="preload" href="/images/image-hero-xyz.jpg" as="image">` to the `<head>` section to prioritize loading.
4. **Server-Side Resizing:** Ensure the image is served at the correct dimensions for the viewport.
To ensure efficiency and handle large websites, the identified issues are grouped into batches before being sent to Gemini.
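The grouping itself is straightforward to sketch; the batch size below is an assumption, since the real limit depends on the model's token budget per request.

```javascript
// Split the flat list of audit issues into fixed-size batches so each
// Gemini request stays within a manageable payload.
function batchIssues(issues, batchSize = 20) {
  const batches = [];
  for (let i = 0; i < issues.length; i += batchSize) {
    batches.push(issues.slice(i, i + batchSize));
  }
  return batches;
}
```

For example, 45 issues with the default size produce three batches of 20, 20, and 5.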
This automated, AI-powered fix generation step offers substantial advantages:
The generated fixes are a critical component for the subsequent steps:
* The generated fixes are stored alongside the audit findings in the SiteAuditReport. This includes the "before" state (the issue) and the "after" state (the proposed fix).

By automating the fix generation process with Gemini AI, the Site SEO Auditor provides not just insights, but direct, actionable solutions, transforming your SEO strategy from reactive problem identification to proactive, intelligent optimization.
hive_db → upsert - Site Audit Report Persistence

This document details the successful execution and implications of Step 4, where the comprehensive SEO audit results are securely stored within your dedicated hive_db instance. This step transforms raw audit data into actionable, persistent records, forming the foundation for historical tracking and performance analysis.
The hive_db → upsert step is responsible for ingesting the fully processed SEO audit data, including any Gemini-generated fixes, and persisting it as a SiteAuditReport document within your MongoDB database. This process ensures data integrity, proper structuring, and the intelligent calculation of a "before/after" differential against your site's previous audit.
The primary purpose of this step is to persist each audit as a structured SiteAuditReport, link it to the previous audit for historical tracking, and compute the before/after differential.
This step receives a rich dataset compiled from the preceding workflow stages, including the crawled page data, the 12-point checklist results, and the Gemini-generated fixes.
Upon ingestion, the system performs the following critical processing:
* The incoming data is validated against the SiteAuditReport schema to ensure consistency and data integrity.
* The system queries hive_db to fetch the most recent successful SiteAuditReport for your website. This previous report serves as the baseline for comparison.
* The current audit is compared against that baseline, classifying each change as one of the following:
* Improvements: Pages or metrics that have moved from a 'fail' to a 'pass' state, or show significant positive change.
* Degradations: Pages or metrics that have moved from a 'pass' to a 'fail' state, or show significant negative change.
* New Issues: SEO problems identified in the current audit that were not present or detected in the previous one.
* Resolved Issues: SEO problems from the previous audit that are no longer present in the current one.
This differential analysis is granular, providing insights at both the overall site level and for individual pages and specific SEO metrics.
SiteAuditReport Document Construction: A comprehensive JSON document is constructed, encapsulating:
* All raw audit results per page.
* Gemini's suggested fixes, linked to specific issues.
* A link to the _id of the previousAuditReport.
* The detailed diffFromPrevious object, containing the comparison results.
* Overall summary statistics for the current audit.
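Assembling that document can be sketched as a pure builder function. Field names follow the conceptual schema in this document; the function name and summary contents are illustrative assumptions.

```javascript
// Construct the SiteAuditReport document described above, ready for storage
// in hive_db. A first-ever audit has no previous report, so the linking
// fields are null and the report becomes the baseline for future diffs.
function buildAuditReport({ siteId, pageResults, geminiFixes, previousReport, diff }) {
  return {
    siteId,
    auditDate: new Date().toISOString(),
    results: pageResults,                           // raw audit results per page
    geminiFixes,                                    // AI fixes linked to issues
    previousAuditId: previousReport ? previousReport._id : null,
    diffFromPrevious: previousReport ? diff : null, // baseline run has no diff
    summary: {
      pagesAudited: pageResults.length,
    },
  };
}
```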
The core of this step is the upsert operation within MongoDB:
* Each finalized SiteAuditReport document is stored in a dedicated collection within hive_db.
* An upsert operation is used. Each audit generates a new report document to maintain historical snapshots; upsert here also implies the capability to update an existing report if a specific _id and audit date combination were to be re-run or refined, though the primary mode is to create a new, distinct report per run. More importantly, the upsert concept ensures that linking and comparison with the previous report is robust.
* The siteId and auditDate fields are indexed to ensure rapid retrieval of historical reports and efficient diff calculation.

Upon successful completion of this step, the following outcomes are delivered:
* Persisted SiteAuditReport: A new, complete SiteAuditReport document is permanently stored in hive_db, accessible for reporting and analysis.
* Each report is assigned a unique _id, enabling precise referencing.
* Downstream steps receive the _id of the newly created report.

This hive_db → upsert step directly translates into significant benefits for you:
This step ensures that your SEO audit data is not just collected, but intelligently organized, analyzed, and stored, providing a powerful asset for continuous website optimization.
hive_db → conditional_update

This final and critical step in the "Site SEO Auditor" workflow is responsible for securely storing your site's comprehensive SEO audit report in our MongoDB database (hive_db) and enabling robust historical tracking. The conditional_update logic ensures that each audit is intelligently processed, providing a clear "before and after" comparison for continuous improvement.
The primary goal of the hive_db → conditional_update step is to:
* Maintain a complete history of SiteAuditReport documents in MongoDB, facilitating easy retrieval and analysis.

This step orchestrates the finalization and storage of your audit data with precision:
* Receives the complete, processed audit data from the previous steps, including:
* Detailed findings for Meta Title/Description, H1, Image Alt, Internal Link Density, Canonical Tags, Open Graph Tags, Structured Data, and Mobile Viewport.
* Core Web Vitals metrics (LCP, CLS, FID).
* Specific, actionable fixes generated by Gemini for all identified broken elements.
* Performs final validation to ensure data consistency and readiness for storage.
* First-Time Audit: If this is the initial audit for your site, a brand-new SiteAuditReport document is created in MongoDB. This report serves as the baseline for all future comparisons.
* Subsequent Audits: For all audits conducted after the first, the system intelligently:
1. Retrieves Previous Report: Fetches the most recent SiteAuditReport for your specific site from the database.
2. Generates "Before/After Diff": Compares the current audit's findings with the retrieved previous report. This generates a granular diff that pinpoints:
* New Issues: Problems identified in the current audit that were not present previously.
* Resolved Issues: Issues from the previous audit that are no longer detected.
* Changes: Any modifications or shifts in metrics (e.g., Core Web Vitals scores, link counts).
3. Stores New Report: A new SiteAuditReport document is created, incorporating all current audit findings, Gemini's fixes, and the generated "before/after diff," alongside a clear reference to the previousAuditId.
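The first-time versus subsequent branching described above can be sketched over simplified finding lists. This is a sketch only: the `findings`/`id` fields are illustrative stand-ins for the per-page, per-metric records in the real schema.

```javascript
// Conditional-update branching: with no previous report (first-ever audit),
// the current findings become the baseline; otherwise a before/after diff
// is attached along with a reference to the previous audit.
function prepareReport(currentFindings, previousReport) {
  if (!previousReport) {
    // First-Time Audit: everything observed now is the baseline record.
    return { baseline: true, newIssues: currentFindings, resolvedIssues: [] };
  }
  const prevIds = new Set(previousReport.findings.map((f) => f.id));
  const currIds = new Set(currentFindings.map((f) => f.id));
  return {
    baseline: false,
    previousAuditId: previousReport._id,
    newIssues: currentFindings.filter((f) => !prevIds.has(f.id)),          // not seen before
    resolvedIssues: previousReport.findings.filter((f) => !currIds.has(f.id)), // gone now
  };
}
```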
SiteAuditReport): * Each audit is stored as a comprehensive SiteAuditReport document in MongoDB. This document includes, but is not limited to:
* auditId: Unique identifier for the audit run.
* siteUrl: The URL of the audited site.
* timestamp: Date and time of the audit.
* status: (e.g., "completed", "failed").
* pagesAudited: Count of pages visited.
* overallScore: A high-level SEO health score.
* results: Detailed, page-by-page breakdown of the 12-point checklist findings.
* coreWebVitals: Specific LCP, CLS, FID scores and recommendations.
* geminiFixes: A list of all AI-generated fixes, categorized by issue and page.
* previousAuditId: Reference to the auditId of the immediately preceding audit (if applicable).
* diffSummary: A structured representation of the "before/after" changes, highlighting improvements and new regressions.
This final step provides you with invaluable insights and capabilities:
Upon completion of an audit, you will be able to access the detailed SiteAuditReport through your designated PantheraHive dashboard or via API integration. The report clearly presents the current findings, Gemini's fixes, and the before/after diff against the previous audit.
This step ensures that every audit contributes to a living, evolving record of your site's SEO performance, empowering you with the data needed to drive continuous optimization.