"+slugTitle(pn)+"
Built with PantheraHive BOS
This document details the execution and output of the initial crawling phase of your "Site SEO Auditor" workflow. This crucial first step systematically visits every page on your specified website to gather comprehensive data, which is then used for the in-depth SEO audit.
The primary objective of this step is to act as a headless crawler, simulating a real user's browser visit to your website. Using Puppeteer, a Node.js library that provides a high-level API to control headless Chrome or Chromium, we navigate through your site, identify all discoverable pages, and collect essential raw data from each one. This collected data forms the foundation for the subsequent 12-point SEO checklist audit.
Our crawling mechanism leverages Puppeteer with the following key configurations to ensure a robust and accurate representation of how search engines and users interact with your site:
* **Wait-until strategy** (`networkidle0` or `networkidle2`): Navigation is not considered complete until network activity settles. This ensures that all critical resources (HTML, CSS, JavaScript, images) have finished loading and all asynchronous operations (such as API calls rendering dynamic content) have completed before the page's content is considered "stable" for data extraction.

For every successfully crawled page, the following critical data points are extracted and stored:
* **Links**: All `<a>` tags found on the page, including their href attributes. This fuels the discovery of new pages within your site and helps assess internal linking structure.
* **Largest Contentful Paint (LCP)**: Timings related to the rendering of the largest image or text block visible within the viewport.
* Cumulative Layout Shift (CLS): Records of unexpected layout shifts that occur during page load.
* First Input Delay (FID): While direct FID measurement requires user interaction, Puppeteer helps capture the main thread blocking time, which is a strong proxy.
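The traversal logic behind this discovery process can be sketched independently of the browser layer. In the sketch below, `visitPage` is an injected stand-in (an assumption, not the product's actual API) for the real Puppeteer navigation-and-extraction step, so the queueing and de-duplication logic is visible on its own:

```javascript
// Minimal breadth-first crawl sketch. `visitPage(url)` stands in for the real
// Puppeteer visit (page.goto + DOM extraction) and must resolve to { links: [...] }.
async function crawlSite(startUrl, visitPage, maxPages = 500) {
  const origin = new URL(startUrl).origin;
  const seen = new Set([startUrl]);   // URLs already queued or visited
  const queue = [startUrl];
  const pages = [];
  while (queue.length > 0 && pages.length < maxPages) {
    const url = queue.shift();
    const result = await visitPage(url); // in production: headless browser visit
    pages.push({ url, ...result });
    for (const href of result.links || []) {
      const abs = new URL(href, url);    // resolve relative links against the page
      // Follow only same-origin links that have not been queued before.
      if (abs.origin === origin && !seen.has(abs.href)) {
        seen.add(abs.href);
        queue.push(abs.href);
      }
    }
  }
  return pages;
}
```

In the real workflow the visit function wraps the Puppeteer navigation described above (with its `networkidle` wait strategy); injecting it keeps the traversal testable without a browser.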
Upon completion of the crawling phase, the following structured data is generated and prepared for the next auditing step:
* **Crawled URL Inventory**: A de-duplicated list of every URL discovered and successfully visited during the crawl.
* **Raw Page Data Store**: For each successfully crawled URL, a rich data object containing all extracted information, including the full HTML content, discovered links, and raw performance metrics. This data is stored in a temporary, highly accessible format, ready for immediate processing.
* **Example (abbreviated for clarity)**:
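The exact stored shape is internal to the platform; the field names below are illustrative assumptions chosen to mirror the data points described above:

```javascript
// Illustrative (not authoritative) shape of one crawled page's raw data record.
const rawPageData = {
  url: "https://example.com/products/widget",
  statusCode: 200,
  html: "<!doctype html><html>...</html>", // full rendered HTML (truncated here)
  links: ["/products", "/cart", "https://example.com/about"],
  performance: {
    lcpMs: 1840, // Largest Contentful Paint, in milliseconds
    cls: 0.04,   // Cumulative Layout Shift score
    mainThreadBlockingMs: 120, // proxy metric in place of real-user FID
  },
  crawledAt: new Date().toISOString(),
};
```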
With the site crawl successfully completed and all necessary raw data collected, the workflow will now proceed to Step 2: SEO Audit and Analysis. In this next phase, the collected HTML content and performance data will be meticulously analyzed against the 12-point SEO checklist, and any identified issues will be flagged for remediation.
**hive_db → diff: Historical Integration and Difference Report**

This crucial step integrates the newly generated SEO audit report with your historical data in PantheraHive's secure MongoDB database (hive_db) and then generates a before-and-after difference report. This report provides invaluable insight into the changes and trends in your site's SEO performance over time.
The primary goals of the hive_db → diff step are to:

* Persist the newly generated audit report in your dedicated SiteAuditReport collection in MongoDB.
* Retrieve the most recent prior report for the same site as a baseline.
* Produce a precise before-and-after comparison of the two reports.

This step involves a sequence of intelligent operations to ensure data integrity and deliver a precise comparative analysis:
* The system queries your dedicated SiteAuditReport collection in MongoDB.
* It identifies and retrieves the most recent successfully completed audit report for your specific site. This serves as the "before" snapshot for comparison.
* The newly generated audit report (the "after" snapshot) is then securely stored as a new document within your SiteAuditReport collection in MongoDB.
* Each report is timestamped and includes a unique identifier, ensuring a clear audit trail.
* A sophisticated diffing algorithm is initiated.
* It systematically compares the newly stored report with the retrieved previous report, page by page and metric by metric, across all 12 SEO checklist points.
* The comparison identifies:
* Improvements: Issues resolved or metrics that have improved (e.g., higher image alt coverage, better LCP score).
* Regressions: Metrics that have worsened or new issues that have appeared (e.g., new pages missing H1, lower internal link density).
* No Change: Elements that remain consistent.
* New Pages/Removed Pages: Handles changes in site structure or content.
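A minimal sketch of this classification, assuming each report maps a page URL to a per-page issue count (a simplification of the full metric set):

```javascript
// Classify page-level changes between two audits. `prev` and `curr` map
// pageUrl -> { issues: <count> } — a simplified stand-in for the full metric set.
function diffAudits(prev, curr) {
  const diff = { improvements: [], regressions: [], unchanged: [], newPages: [], removedPages: [] };
  for (const url of Object.keys(curr)) {
    if (!(url in prev)) { diff.newPages.push(url); continue; }
    const delta = curr[url].issues - prev[url].issues;
    if (delta < 0) diff.improvements.push(url);      // issues resolved
    else if (delta > 0) diff.regressions.push(url);  // new or worsened issues
    else diff.unchanged.push(url);                   // no change
  }
  for (const url of Object.keys(prev)) {
    if (!(url in curr)) diff.removedPages.push(url); // page disappeared from the site
  }
  return diff;
}
```

The production comparison runs per metric across all 12 checklist points; this sketch shows only the bucketing strategy.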
The generated diff report is designed for clarity, actionability, and comprehensive understanding of your site's SEO evolution:
* Meta Titles/Descriptions: Identification of newly duplicate or newly unique titles/descriptions.
* H1 Presence: Pages where H1s were added or removed.
* Image Alt Coverage: Pages with improved or worsened alt text percentages.
* Internal Link Density: Pages with significant changes in inbound/outbound internal links.
* Canonical Tags: Detection of new canonicalization issues or fixes.
* Open Graph Tags: Changes in OG tag implementation across pages.
* Core Web Vitals (LCP/CLS/FID): Quantitative changes in performance metrics, highlighting improvements or regressions.
* Structured Data: Pages where structured data was added, removed, or changed.
* Mobile Viewport: Pages that gained or lost proper mobile viewport configuration.
/product-page-1").This hive_db → diff step empowers you with:
Upon completion of this step, the following will be available:
* **Updated SiteAuditReport in MongoDB**: Your database now contains the latest, most comprehensive audit data for your site.

The findings from this diff report, particularly any newly identified broken elements or regressions, are fed into the next step (Gemini → fix) for automated remediation suggestions.
This step marks a critical phase in your Site SEO Auditor workflow, where detected SEO issues are transformed into actionable solutions. Following the comprehensive crawl and audit performed by our headless crawler (Puppeteer), all identified "broken elements" are now systematically fed into Google's advanced Gemini AI model for intelligent, context-aware fix generation.
Context: After the crawler meticulously audited every page against the 12-point SEO checklist and identified specific deficiencies (e.g., missing meta tags, broken links, non-optimized images), these issues are compiled. This current step leverages the power of Gemini AI to analyze each identified problem and generate precise, developer-ready fixes.
Goal: To automatically produce "exact fixes" for all detected SEO violations, providing your development team with clear, implementable solutions to enhance your site's SEO performance.
Gemini receives a structured input for each identified SEO issue. This input is meticulously crafted to provide the AI with all necessary context, ensuring highly relevant and accurate fix suggestions.
Each input package includes:
* **Issue type**: A standardized issue code (e.g., MISSING_H1, DUPLICATE_META_DESCRIPTION, MISSING_IMAGE_ALT, INCORRECT_CANONICAL).
* **Page context**: The affected URL and the relevant HTML fragment, so the fix can be tailored to the actual content.

Examples of "broken elements" sent to Gemini:
* **Missing/multiple H1**: Pages lacking an `<h1>` tag, or pages with multiple `<h1>` tags, including the relevant section of the page.
* **Missing image alt text**: Images (`<img>`) missing the alt attribute, or with generic/empty alt text, alongside the image's context and surrounding text.
* **Missing Open Graph tags**: Pages lacking OG tags (og:title, og:description, og:image), including the page's primary content.
* **Missing mobile viewport**: Pages lacking a `<meta name="viewport">` tag, or with incorrect configurations.

Upon receiving this detailed input, Gemini analyzes each problem in context and generates a precise fix.
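As a rough illustration of such an input package (field names and prompt wording are assumptions, not the platform's actual payload format):

```javascript
// Illustrative input package for one issue sent to the fix-generation model.
// All field names here are assumptions; the real payload format is internal.
function buildFixRequest(issue) {
  return {
    issueType: issue.type,          // e.g. "MISSING_H1"
    pageUrl: issue.pageUrl,         // the affected page
    htmlContext: issue.htmlSnippet, // the relevant fragment of the page
    instruction:
      "Generate an exact, drop-in HTML fix for the issue type " +
      issue.type + " on " + issue.pageUrl + ".",
  };
}
```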
The output from Gemini is a set of highly specific, actionable recommendations, presented in a format that your development team can directly use. Each fix is tailored to the identified problem.
Examples of "Exact Fixes" generated by Gemini:
**Missing H1:**

```html
<!-- Suggested H1 tag based on page content -->
<h1 class="page-title-seo">Your Primary Page Heading Here</h1>
```

**Duplicate meta description:**

```html
<!-- Suggested unique meta description based on page content -->
<meta name="description" content="[Gemini-generated unique, keyword-rich description for this specific page, max 160 chars.]">
```

**Missing image alt text:**

```html
<!-- Original: <img src="/images/product-xyz.jpg"> -->
<!-- Suggested fix: -->
<img src="/images/product-xyz.jpg" alt="[Gemini-generated descriptive alt text for product XYZ]">
```

**Incorrect canonical tag:**

```html
<!-- Suggested correct canonical tag -->
<link rel="canonical" href="https://yourdomain.com/correct-version-of-this-page/">
```

**Missing Open Graph tags:**

```html
<!-- Suggested Open Graph tags based on page content -->
<meta property="og:title" content="[Page Title for Social Media]">
<meta property="og:description" content="[Page Description for Social Media]">
<meta property="og:image" content="https://yourdomain.com/path/to/social-share-image.jpg">
<meta property="og:url" content="https://yourdomain.com/this-page-url/">
```

**Missing structured data:**

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "[Page Headline]",
  "image": [
    "https://yourdomain.com/image1.jpg",
    "https://yourdomain.com/image2.jpg"
  ],
  "datePublished": "[YYYY-MM-DD]",
  "dateModified": "[YYYY-MM-DD]",
  "author": {
    "@type": "Person",
    "name": "[Author Name]"
  },
  "publisher": {
    "@type": "Organization",
    "name": "[Your Company Name]",
    "logo": {
      "@type": "ImageObject",
      "url": "https://yourdomain.com/logo.png"
    }
  },
  "description": "[Short description of the article]"
}
</script>
```
The "batch_generate" aspect of this step is crucial for scalability and efficiency. Instead of processing issues one by one, the system groups similar issues or issues from the same page and sends them to Gemini in optimized batches. This ensures:
Once Gemini has generated all the proposed fixes, they are compiled and prepared for the next stage of the workflow:
* **Stored in the SiteAuditReport document**: Each proposed fix is recorded in its site's SiteAuditReport document, capturing the "before" state (the detected issue) alongside the "after" state (the proposed fix).

This step significantly reduces the manual effort required to diagnose and formulate solutions for complex SEO issues, allowing your team to focus on strategic implementation rather than problem identification and basic fix generation.
**hive_db → conditional_update for Site SEO Auditor**

This document details the final step of the "Site SEO Auditor" workflow, focusing on the secure and intelligent storage of your audit results within the PantheraHive database. This step ensures that every audit contributes to a historical record, enabling comprehensive tracking of your site's SEO performance over time.
The "Site SEO Auditor" is a robust, automated system designed to provide deep insights into your website's SEO health. Utilizing a headless crawler (Puppeteer), it systematically visits every page, auditing against a comprehensive 12-point SEO checklist. Key audit points include:
Crucially, any identified broken elements are intelligently processed by Gemini, which generates precise, actionable fixes. All these findings are then meticulously stored in your dedicated MongoDB instance as a SiteAuditReport, complete with a "before/after" diff, facilitating clear progress tracking. The system runs automatically every Sunday at 2 AM or can be triggered on demand.
**hive_db → conditional_update: Database Persistence**

This final step persists the comprehensive audit results into your PantheraHive MongoDB database. The conditional_update operation maintains your audit history efficiently and intelligently, differentiating between initial audits and subsequent updates.
The primary purpose of this step is to:

* Store the latest audit results durably in MongoDB.
* Preserve the immediately preceding report for before/after comparison.
* Maintain a continuous historical record of your site's SEO health.

**SiteAuditReport Structure**

The audit results are stored as a SiteAuditReport document within your MongoDB collection. This document is designed for comprehensive data capture and easy historical comparison.
Key Fields of the SiteAuditReport Document:
* _id: MongoDB's unique document identifier.
* siteUrl (String, Indexed, Unique): The root URL of the audited website (e.g., https://www.example.com). This serves as the primary identifier for each site's audit history.
* latestAuditTimestamp (Date): The timestamp when the most recent audit was completed.
* overallStatus (String): A high-level status of the audit (e.g., "Success", "Partial Success", "Failed").
* summaryMetrics (Object): Aggregated metrics across the entire site.
* totalPagesAudited (Number)
* issuesFound (Number)
* criticalIssues (Number)
* warnings (Number)
* performanceScore (Number) - e.g., average LCP.
* seoScore (Number) - a calculated score based on all checks.
* currentAuditReport (Object): Contains the detailed results of the latest audit.
* auditTimestamp (Date)
* pageReports (Array of Objects): Detailed breakdown for each audited URL.
* pageUrl (String)
* statusCode (Number)
* isIndexed (Boolean)
* metaTitle (String)
* metaTitleUnique (Boolean)
* metaDescription (String)
* metaDescriptionUnique (Boolean)
* h1Present (Boolean)
* h1Content (String)
* imageAltCoverage (Number, percentage)
* missingAltImages (Array of Strings)
* internalLinksCount (Number)
* externalLinksCount (Number)
* canonicalTagPresent (Boolean)
* canonicalUrl (String)
* openGraphTagsPresent (Boolean)
* ogTitle, ogDescription, ogImage (String)
* coreWebVitals (Object):
* LCP (Number, ms)
* CLS (Number)
* FID (Number, ms)
* structuredDataPresent (Boolean)
* structuredDataTypes (Array of Strings)
* mobileViewportMeta (Boolean)
* issues (Array of Objects): Specific issues found on this page.
* type (String, e.g., "Missing H1", "Duplicate Meta Title")
* severity (String, e.g., "Critical", "Warning")
* description (String)
* element (String, e.g., "meta[name='description']")
* geminiFixes (Array of Objects): Specific fixes generated by Gemini for broken elements.
* pageUrl (String)
* issueType (String)
* originalDescription (String)
* geminiGeneratedFix (String, code snippet or detailed instruction)
* confidenceScore (Number, 0-1)
* previousAuditReport (Object, Optional): Contains the detailed results of the immediately preceding audit. This field is crucial for generating the "before" state for the diff.
* diffSummary (Object): A high-level summary of changes between previousAuditReport and currentAuditReport.
* newIssuesIntroduced (Number)
* issuesResolved (Number)
* overallScoreChange (Number)
* keyChanges (Array of Strings): Bullet points summarizing significant changes (e.g., "LCP improved by 200ms", "5 duplicate meta titles resolved").
* pageLevelChanges (Array of Objects): Summary of changes per page.
* pageUrl (String)
* changes (Array of Strings)
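Pulling the fields above together, an abbreviated first-audit document might look like this (all values illustrative):

```javascript
// Abbreviated SiteAuditReport skeleton with illustrative values.
const siteAuditReport = {
  siteUrl: "https://www.example.com",
  latestAuditTimestamp: new Date("2024-06-02T02:00:00Z"),
  overallStatus: "Success",
  summaryMetrics: {
    totalPagesAudited: 42,
    issuesFound: 7,
    criticalIssues: 1,
    warnings: 6,
    performanceScore: 1900, // e.g. average LCP in ms
    seoScore: 86,
  },
  currentAuditReport: {
    auditTimestamp: new Date("2024-06-02T02:00:00Z"),
    pageReports: [], // per-page detail, elided here
    geminiFixes: [], // generated fixes, elided here
  },
  previousAuditReport: null, // first audit: no prior report
  diffSummary: {
    newIssuesIntroduced: 0,
    issuesResolved: 0,
    overallScoreChange: 0,
    keyChanges: ["Initial Audit"],
    pageLevelChanges: [],
  },
};
```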
The hive_db → conditional_update step intelligently handles the persistence of your audit data based on whether a previous audit for the siteUrl exists.
* The system first queries for an existing SiteAuditReport document for the given siteUrl.
* If a previousAuditReport exists:
* The existing document is updated.
* The content of the current currentAuditReport field is moved into the previousAuditReport field.
* The newly generated audit results are then saved into the currentAuditReport field.
* The diffSummary is calculated by comparing the new currentAuditReport with the new previousAuditReport.
* The latestAuditTimestamp is updated.
* If no previousAuditReport exists (First Audit):
* A new SiteAuditReport document is created.
* The currentAuditReport field is populated with the newly generated audit results.
* The previousAuditReport field is left empty (or set to null).
* The diffSummary will indicate "Initial Audit".
* The latestAuditTimestamp is set.
This ensures that for every subsequent audit, you always have a complete record of the immediate prior state, enabling robust "before/after" comparisons directly within the database.
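The branch logic above can be sketched as a pure function over the existing document, with the diff computation injected; in production this would be a single MongoDB upsert, but the function form makes the two branches explicit:

```javascript
// Apply the conditional update: rotate current -> previous, store the new
// report, and compute a diff summary. `computeDiff` is injected (assumption).
function conditionalUpdate(existingDoc, newReport, computeDiff) {
  if (existingDoc === null) {
    // First audit: no previous report; the diff marks the initial state.
    return {
      siteUrl: newReport.siteUrl,
      latestAuditTimestamp: newReport.auditTimestamp,
      currentAuditReport: newReport,
      previousAuditReport: null,
      diffSummary: { keyChanges: ["Initial Audit"] },
    };
  }
  const previous = existingDoc.currentAuditReport; // rotate current into previous
  return {
    ...existingDoc,
    latestAuditTimestamp: newReport.auditTimestamp,
    previousAuditReport: previous,
    currentAuditReport: newReport,
    diffSummary: computeDiff(previous, newReport),
  };
}
```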
Upon successful completion of this step, the following deliverables are available:
* **SiteAuditReport document**: A comprehensive SiteAuditReport document is stored in your dedicated MongoDB instance, accessible via the PantheraHive platform.
* **Historical record**: The last two audits are preserved side by side in the currentAuditReport and previousAuditReport fields.
* **Instant comparison**: The diffSummary provides an at-a-glance overview of changes between the latest two audits, highlighting improvements or new issues.

The data stored in this step is the foundation for continuous SEO improvement and monitoring:
* **Review findings**: Access SiteAuditReport documents through your PantheraHive dashboard to review detailed findings for specific pages and overall site health.
* **Track progress**: Use the diffSummary and historical previousAuditReport data to monitor the impact of your SEO efforts over time. Identify trends, validate fixes, and spot new regressions quickly.
* **Apply fixes**: Implement the suggestions captured in the geminiFixes array of the currentAuditReport.

This robust storage mechanism ensures that your "Site SEO Auditor" provides not just a snapshot, but a living, evolving record of your website's search engine optimization journey.