pSEO Page Factory
Run ID: 69cc12a204066a6c4a1691ec · 2026-03-31 · SEO & Growth
PantheraHive BOS
BOS Dashboard

Step 1 of 5: hive_db → query - Data Retrieval for pSEO Page Factory

1. Introduction to Step 1: Foundation for Keyword Matrix

This initial step in the "pSEO Page Factory" workflow is crucial for establishing the foundational data required to generate thousands of targeted landing pages. The hive_db → query operation focuses on securely and efficiently retrieving the core entities that will form the basis of your pSEO strategy: your application names, target personas, and target locations.

By querying your dedicated PantheraHive database (hive_db), we ensure that subsequent steps leverage the most accurate and up-to-date information directly from your established data sources, so the resulting pSEO pages are built on your validated business data.

2. Objective: Extract Core pSEO Entities

The primary objective of this step is to extract three distinct categories of data from your hive_db:

  • App Names: the applications or services your pSEO pages will promote.
  • Personas: the target audience segments each page will address.
  • Locations: the geographical areas you wish to target.

These entities are the fundamental building blocks for constructing the comprehensive Keyword Matrix, which drives the content generation and unique page creation process.

3. Query Execution Details

The query is executed against your dedicated PantheraHive database (hive_db), which is built on MongoDB and designed to store and manage your core business data.

* App Names: Data is retrieved from the products or applications collection within hive_db. This collection typically holds details about your offerings, including their names, descriptions, and other relevant attributes.

* Personas: Data is retrieved from the personas or target_audiences collection. This collection defines your ideal customer segments, including their roles, industries, and pain points.

* Locations: Data is retrieved from the locations or geographic_areas collection. This collection lists the specific cities, states, regions, or countries you wish to target for localized pSEO.
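The retrieval described above can be sketched as a small helper, assuming hive_db exposes MongoDB-style collections. The collection names ("products", "personas", "locations") and the "name" field are assumptions drawn from the descriptions above; your actual schema may differ.

```python
def fetch_pseo_entities(db):
    """Return (app_names, personas, locations) from hive_db-style collections.

    `db` is any mapping of collection name -> object with a .find() method
    (e.g., a pymongo Database would satisfy this contract)."""
    app_names = [doc["name"] for doc in db["products"].find()]
    personas = [doc["name"] for doc in db["personas"].find()]
    locations = [doc["name"] for doc in db["locations"].find()]
    return app_names, personas, locations
```

With a real pymongo connection the same calls apply unchanged; the duck-typed signature just keeps the sketch independent of any driver.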

4. Data Retrieved (Expected Output)

Upon successful execution of the query, the following structured data sets are retrieved and prepared as inputs for the next workflow step:

4.1. App Names

A list of unique strings representing your core applications or services. These are the primary subjects of your pSEO pages.

4.2. Personas

A list of unique strings representing your target customer segments. These define *who* each page speaks to.

4.3. Locations

A list of unique strings representing the geographical areas for targeting. These define *where* your target audience is located.

*   **Data Type**: `List<String>`
*   **Example Output**: `["Jacksonville", "New York City", "London"]`

5. Data Validation and Pre-processing

Before passing the retrieved data to the next stage, a series of automated validation and pre-processing steps are performed to ensure data quality and consistency:

  • Uniqueness Check: Duplicate entries within each category (App Names, Personas, Locations) are automatically removed to ensure efficient processing and prevent redundant page generation.
  • Format Consistency: Basic string sanitization is applied (e.g., trimming leading/trailing whitespace, ensuring consistent capitalization where appropriate) to maintain data integrity and prevent errors in keyword generation.
  • Non-empty Check: Validation to ensure that each category contains at least one entry. If any category is empty, a warning will be issued, and the workflow may pause or prompt for user intervention to prevent the generation of incomplete or irrelevant pages.
  • Character Set Validation: Ensures all retrieved strings are compatible with URL and content generation requirements.
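The four checks above can be sketched as a single pre-processing pass. The character whitelist in the regex is an assumption about what counts as URL- and content-safe; adjust it to your actual requirements.

```python
import re

def preprocess(values):
    """Apply the validation steps described above: trim whitespace,
    remove duplicates (case-insensitive, order-preserving), reject an
    empty category, and validate the character set."""
    cleaned, seen = [], set()
    for v in values:
        v = v.strip()                      # format consistency
        if not v or v.lower() in seen:     # uniqueness check
            continue
        seen.add(v.lower())
        cleaned.append(v)
    if not cleaned:                        # non-empty check
        raise ValueError("category is empty; workflow should pause for review")
    for v in cleaned:                      # character set validation
        if not re.match(r"^[\w .,&'()-]+$", v):
            raise ValueError(f"unsupported characters in {v!r}")
    return cleaned
```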

6. Impact on Next Steps

The successfully retrieved and validated data sets are now the foundational input for Step 2: Build Keyword Matrix. This data is precisely structured and ready for algorithmic combination. In the next step, these lists will be systematically combined to generate every combination (the Cartesian product of the three lists), forming the comprehensive keyword phrases (e.g., "Best AI Video Editor for Realtors in Jacksonville") that will define each unique pSEO page. This methodical approach ensures maximum coverage and targeting precision.
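The combination step can be sketched with `itertools.product`. The phrase template ("Best … for … in …") is illustrative, modeled on the example keyword above; the real matrix builder may use several templates.

```python
from itertools import product

def build_keyword_matrix(app_names, personas, locations):
    """Cross every app name with every persona and location to form
    one matrix entry per unique keyword target."""
    return [
        {
            "app_name": app,
            "persona": persona,
            "location": location,
            "keyword_target": f"Best {app} for {persona} in {location}",
        }
        for app, persona, location in product(app_names, personas, locations)
    ]
```

Note the multiplicative growth: 10 apps × 20 personas × 10 locations already yields 2,000 entries, which is where the "2,000+ pages" figure later in this workflow comes from.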

7. Summary and Next Action

This hive_db → query step has successfully extracted the essential App Names, Personas, and Locations from your PantheraHive database. This data has been validated and prepared, forming the robust foundation for your pSEO page factory.

Next Action: The workflow will now automatically proceed to Step 2: Keyword Matrix Generation, where these retrieved entities will be combined to create the full set of target keywords and page definitions for your thousands of targeted landing pages.


Step 3 of 5: gemini → batch_generate - High-Intent Content Generation

This pivotal step in the "pSEO Page Factory" workflow leverages Google's advanced Gemini Large Language Model (LLM) to automatically generate unique, high-intent content for every targeted keyword combination identified in the previous stages. The goal is to transform your comprehensive Keyword Matrix into thousands of distinct, SEO-optimized landing page documents, ready for publication.


1. Step Overview: Automated Content Generation at Scale

The gemini → batch_generate operation is where the magic of scalable content creation happens. For each specific keyword target (e.g., "Best AI Video Editor for Realtors in Jacksonville"), Gemini crafts a unique, human-quality page tailored to that exact search intent. This process eliminates the manual effort of writing thousands of pages, ensuring consistency, speed, and high relevance across your entire pSEO strategy.

2. Input Data: The Enriched Keyword Matrix

This step consumes the meticulously built Keyword Matrix stored in MongoDB. Each entry in this matrix represents a unique combination of:

  • app_name: Your core application or service.
  • persona: The specific target audience (e.g., "YouTubers," "Realtors," "Agencies").
  • location: The geographical target (e.g., "Jacksonville," "New York City," "London").
  • keyword_target: The synthesized high-intent keyword phrase (e.g., "Best AI Video Editor for Realtors in Jacksonville").

Gemini receives these structured inputs and uses them as the core context for content generation.

3. The gemini → batch_generate Process in Detail

The process orchestrates the interaction with the Gemini LLM to produce structured, SEO-friendly content for each target page:

  • Intelligent Prompt Engineering: Each keyword combination is fed into Gemini with a sophisticated prompt that instructs the LLM to:

* Understand User Intent: Focus on solving the specific problem or fulfilling the need implied by the keyword_target.

* Incorporate Specifics: Seamlessly integrate the app_name, persona, and location throughout the content to demonstrate relevance.

* Generate Unique Content: Ensure that each generated page is distinct and original, avoiding boilerplate or duplicate content issues.

* Adopt a Professional Tone: Maintain a consistent, authoritative, and helpful brand voice.

  • Content Structuring: Gemini is guided to produce content with a clear, logical hierarchy, including:

* SEO-Optimized Title Tag (<title>): Compelling and keyword-rich for search engine results pages (SERPs).

* Meta Description: A concise, persuasive summary to encourage clicks.

* Primary Heading (<h1>): A clear, engaging main title for the page, often mirroring the keyword_target.

* Body Content: Multiple paragraphs, subheadings (<h2>, <h3>), bullet points, and potentially numbered lists to enhance readability and cover key aspects.

* Call-to-Action (CTA): A clear, persuasive instruction guiding the user to the next step (e.g., "Try [Your App Name] Today!").

* (Optional) FAQs Section: A "Frequently Asked Questions" section to address common queries and capture long-tail keywords.

  • Batch Processing: The "batch_generate" nature means that thousands of these content generation requests are processed concurrently, dramatically accelerating the content production timeline. This allows for the creation of 2,000+ pages in a fraction of the time it would take manually.
  • Quality Assurance & Iteration: While automated, the system includes guardrails and post-generation checks to ensure content adheres to specified length, keyword density, and structural requirements. Future iterations may include automated content scoring and revision loops.
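The prompt-engineering step above can be sketched as a template builder for one keyword-matrix entry. The wording here is purely illustrative; the production prompt is described as considerably more sophisticated.

```python
def build_generation_prompt(entry):
    """Assemble a generation prompt for one keyword-matrix entry,
    covering the four instructions above: intent, specifics,
    uniqueness, and tone."""
    return (
        f"Write a unique, SEO-optimized landing page targeting the search "
        f"phrase '{entry['keyword_target']}'.\n"
        f"- Product: {entry['app_name']}\n"
        f"- Audience: {entry['persona']}\n"
        f"- Location: {entry['location']}\n"
        "Include a title tag, meta description, H1, body with H2/H3 "
        "subheadings, a clear call-to-action, and an optional FAQ section. "
        "Maintain a professional, helpful tone; avoid boilerplate and "
        "duplicate content."
    )
```

In batch mode, one such prompt would be issued per matrix entry and the requests dispatched concurrently.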

4. Output: Structured PSEOPage Documents

Upon successful generation, each piece of content is meticulously structured and saved as a PSEOPage document within your MongoDB database. These documents are comprehensive and immediately ready for the next publishing step.

Each PSEOPage document includes:

  • _id: A unique identifier for the page (e.g., a hash derived from the keyword_target).
  • keyword_target: The exact keyword phrase this page is optimized for (e.g., "Best AI Video Editor for Realtors in Jacksonville").
  • app_name: The specific application or service the page promotes.
  • persona: The target audience for the page.
  • location: The geographical area targeted.
  • title: The SEO-optimized <title> tag for the page.
  • meta_description: The concise meta description for SERPs.
  • h1: The main heading of the page.
  • body_content: The rich, detailed content of the page, formatted in markdown or HTML-ready text, including h2/h3 headings, paragraphs, and lists.
  • faqs: An array of objects, each containing a question and answer (if generated).
  • call_to_action: The primary call-to-action statement for the page.
  • status: Set to generated or review_pending, indicating its current state.
  • generated_at: A timestamp indicating when the content was created.
  • word_count: The total word count of the generated body content.

Example PSEOPage Document Snippet:


{
  "_id": "65e7d5a3b2c1d0e9f8a7b6c5",
  "keyword_target": "Best AI Video Editor for Realtors in Jacksonville",
  "app_name": "PantheraCut AI",
  "persona": "Realtors",
  "location": "Jacksonville",
  "title": "PantheraCut AI: Best AI Video Editor for Realtors in Jacksonville",
  "meta_description": "Realtors in Jacksonville, elevate your property listings with PantheraCut AI. Generate stunning videos effortlessly, showcase homes, and attract more buyers.",
  "h1": "PantheraCut AI: Revolutionizing Real Estate Video Editing for Jacksonville Realtors",
  "body_content": "## Why Jacksonville Realtors Need AI Video Editing\n\nJacksonville's competitive real estate market demands cutting-edge tools to stand out. Traditional video editing is time-consuming and costly. PantheraCut AI offers a powerful, intuitive solution specifically designed for busy realtors...\n\n### Key Features for Property Showcases\n\n*   **Automated Property Tours**: Turn photos into dynamic video walkthroughs.\n*   **Client Testimonial Integration**: Easily add glowing client reviews.\n*   **Branding Overlays**: Consistent branding across all your video assets.\n\n## Boost Your Listings and Engage Buyers in Jacksonville\n\nWith PantheraCut AI, Jacksonville realtors can create professional-grade videos in minutes, not hours. Impress potential buyers, enhance your online presence, and close deals faster...\n",
  "faqs": [
    {
      "question": "How can PantheraCut AI help me sell homes faster in Jacksonville?",
      "answer": "PantheraCut AI automates video creation, allowing you to showcase properties with high-quality, engaging visual content that captures buyer attention more effectively than static images."
    }
  ],
  "call_to_action": "Start Your Free Trial of PantheraCut AI for Realtors Today!",
  "status": "generated",
  "generated_at": "2024-03-05T10:30:00Z",
  "word_count": 550
}

5. Key Benefits & Value Proposition

  • Unprecedented Scale & Speed: Generate thousands of unique, high-quality pages in a fraction of the time and cost of manual content creation.
  • Hyper-Targeted Content: Each page is precisely tailored to a specific app_name, persona, and location, maximizing relevance for search engines and users.
  • Enhanced SEO Performance: Content is structured and written to meet SEO best practices, including keyword integration, clear headings, and compelling meta descriptions, improving organic ranking potential.
  • Consistent Brand Voice: Leverage LLM capabilities to maintain a consistent tone, style, and messaging across all generated pages, reinforcing your brand identity.
  • Reduced Manual Labor: Frees up your marketing and content teams to focus on strategy, optimization, and higher-level tasks, rather than repetitive content writing.
  • Direct Path to Publication: The output is immediately consumable by the next step in the workflow, streamlining the entire pSEO process.

6. Next Steps & What to Expect

With the gemini → batch_generate step complete, your MongoDB database now contains thousands of fully formed PSEOPage documents. These documents are in a structured format, ready for the final stage of the workflow.

What to Expect Next:

  • The workflow will automatically proceed to Step 4: hive_db → batch_upsert, which persists the generated PSEOPage documents to your database.
  • The subsequent publishing stage will then expose these documents as live routes/URLs on your chosen platform, making them accessible to search engines and users.
  • You will soon see a dramatic increase in the number of targeted, rankable URLs associated with your application.

Step 4 of 5: hive_db → batch_upsert - Persisting PSEO Page Content to Database

This step is crucial for the "pSEO Page Factory" workflow, responsible for robustly storing the thousands of unique, high-intent PSEO (Programmatic SEO) page documents generated by the Large Language Model (LLM) into your dedicated MongoDB instance within PantheraHive (hive_db). The batch_upsert operation ensures efficient, scalable, and idempotent persistence of your valuable content, preparing it for immediate publication.


1. Step Overview: hive_db → batch_upsert

The primary function of this stage is to take the structured PSEOPage documents, each representing a unique landing page (e.g., "Best AI Video Editor for Realtors in Jacksonville"), and commit them to the hive_db. Using a batch_upsert operation means that multiple documents are processed simultaneously, and for each document, the system will either:

  • Insert it if a page with the same unique identifier (URL path) does not already exist.
  • Update it if a page with that unique identifier already exists, ensuring that any content changes or re-generation iterations are correctly reflected without creating duplicate entries.

This mechanism is fundamental for maintaining data integrity, enabling workflow re-runs, and ensuring high performance when dealing with large volumes of pages.


2. Input Data for Upsertion

The input for this step consists of the complete set of PSEOPage documents, which were meticulously crafted in the preceding LLM content generation step.

  • Source: Each document originates from the LLM, leveraging the "App Name + Persona + Location" keyword matrix to produce highly targeted and unique content.
  • Volume: As per the workflow description, this typically involves 2,000+ individual PSEOPage documents, potentially scaling to tens or hundreds of thousands depending on your keyword matrix expansion.
  • Document Structure: Each PSEOPage document adheres to a predefined schema, ensuring consistency and ease of retrieval. Key fields included in each document are:

* _id (auto-generated or derived unique identifier)

* url_path: The unique, canonical URL slug for the page (e.g., /best-ai-video-editor-realtors-jacksonville). This field serves as the primary key for upsert operations.

* app_name: The specific application targeted (e.g., "AI Video Editor").

* persona: The target audience (e.g., "Realtors").

* location: The geographical target (e.g., "Jacksonville").

* main_keyword: The primary keyword phrase for the page (e.g., "Best AI Video Editor for Realtors in Jacksonville").

* title: SEO-optimized page title.

* meta_description: Concise, compelling meta description.

* h1: The main heading of the page.

* content_blocks: An array of structured content sections (e.g., paragraphs, lists, subheadings, FAQs).

* internal_links: Suggested internal links to other relevant pages.

* external_links: Relevant external resources.

* schema_markup: JSON-LD schema for rich snippets (e.g., FAQPage, LocalBusiness).

* generation_timestamp: Timestamp of when the content was generated.

* status: Current status of the page (e.g., "generated", "published", "draft").
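The derivation of url_path from the keyword target can be sketched as a small slug function. The stop-word list is an assumption reverse-engineered from the example slug above ("for" and "in" are dropped); your canonicalization rules may differ.

```python
import re

def to_url_path(keyword_target):
    """Derive a canonical, URL-safe path from a keyword target,
    e.g. 'Best AI Video Editor for Realtors in Jacksonville'
    -> '/best-ai-video-editor-realtors-jacksonville'."""
    words = re.sub(r"[^a-z0-9 ]", "", keyword_target.lower()).split()
    slug = "-".join(w for w in words if w not in {"for", "in", "the", "a"})
    return "/" + slug
```

Because url_path is the upsert key, this function must be deterministic: the same keyword target must always map to the same path across workflow re-runs.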


3. Execution Details: The batch_upsert Process

PantheraHive leverages highly optimized database operations to ensure this step is performed efficiently and reliably.

  • Target Database: The hive_db MongoDB instance, a robust NoSQL database ideal for handling large volumes of semi-structured data like PSEO pages.
  • Collection: PSEOPage documents are stored in a designated collection, typically named pseo_pages or similar, within hive_db.
  • Upsert Logic and Idempotency:

* Each PSEOPage document is uniquely identified by its url_path. Before inserting, the system queries the database for an existing document with the same url_path.

* If found, the existing document is updated with the new content and metadata from the generated PSEOPage document. This ensures that re-running the content generation or this upsert step will not create duplicate pages but rather update the existing ones, making the workflow idempotent and safe for iterative development.

* If no document with the url_path is found, a new document is inserted.

  • Batching for Performance: Instead of performing individual insert/update operations for each of the thousands of pages, the system groups documents into batches. This significantly reduces network overhead and database transaction costs, leading to much faster execution times compared to sequential operations.
  • Indexing: Critical fields like url_path are indexed in the pseo_pages collection to ensure lightning-fast lookups during the upsert process, preventing performance bottlenecks even with millions of pages.
  • Error Handling: The batch_upsert operation includes robust error handling. Any documents that fail to be inserted or updated (e.g., due to schema validation issues, though unlikely given the standardized generation process) are logged. A detailed report of successful vs. failed operations is generated for review.
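The batching and upsert logic above can be sketched in pure Python. Each op mirrors what pymongo would express as `UpdateOne(filter, update, upsert=True)` passed to `collection.bulk_write(..., ordered=False)`; the batch size of 500 is an illustrative default, not a documented setting.

```python
def build_upsert_batches(pages, batch_size=500):
    """Group PSEOPage documents into batches of upsert operations
    keyed on url_path, the primary key for this step."""
    ops = [
        {
            "filter": {"url_path": page["url_path"]},  # lookup by unique key
            "update": {"$set": page},                  # insert or overwrite
            "upsert": True,                            # insert when absent
        }
        for page in pages
    ]
    return [ops[i:i + batch_size] for i in range(0, len(ops), batch_size)]
```

Re-running this produces identical filters for identical pages, which is exactly what makes the step idempotent: a second run updates rather than duplicates.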

4. Output and Verification

Upon successful completion of the batch_upsert step, the following outcomes are delivered:

  • Persisted PSEO Pages: All 2,000+ (or more) generated PSEOPage documents are now securely stored in your hive_db MongoDB instance.
  • Database Record: A precise count of inserted and updated documents is provided, giving you clear visibility into the operation's success.
  • Readiness for Publishing: The content for every targeted PSEO page is now structured, stored, and ready to be rendered as a live URL in the final publishing step.

Verification Actions (for customer review):

  • Count Verification: Confirm the total number of documents in the pseo_pages collection matches the expected output from the LLM generation step.
  • Sample Document Review: Access hive_db (via PantheraHive's integrated database tools or direct access if configured) and inspect a few sample PSEOPage documents to verify their content, structure, and the presence of all expected fields (url_path, title, content_blocks, etc.).
  • URL Path Uniqueness: Confirm that no duplicate url_path entries exist, validating the idempotency of the upsert operation.
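The count and uniqueness checks above can be sketched as one verification pass over documents fetched from the pseo_pages collection:

```python
from collections import Counter

def verify_pages(pages, expected_count):
    """Run the verification actions described above: total-count check
    and url_path uniqueness. Returns a list of problem descriptions
    (empty list means the batch passed)."""
    problems = []
    if len(pages) != expected_count:
        problems.append(f"count mismatch: {len(pages)} != {expected_count}")
    dupes = [p for p, n in Counter(d["url_path"] for d in pages).items() if n > 1]
    if dupes:
        problems.append(f"duplicate url_path values: {dupes}")
    return problems
```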

5. Strategic Advantages of this Approach

  • Scalability: Effortlessly handles thousands to millions of PSEO pages, making it suitable for ambitious content strategies.
  • Idempotency: Guarantees that re-running the workflow or parts of it will not corrupt data or create duplicates, ensuring a robust and fault-tolerant process.
  • Data Integrity & Structure: Ensures every PSEO page adheres to a consistent schema, facilitating easy retrieval, updates, and future analysis.
  • Performance: Batch operations and proper indexing lead to highly efficient database interactions, minimizing processing time.
  • Centralized Content Management: All PSEO content is stored in a single, accessible database, providing a "single source of truth" for your programmatic SEO efforts.
  • Decoupled Architecture: Separates content generation from content storage and publishing, allowing for modular development and easier maintenance.

6. Next Steps: Publishing the PSEO Pages

With all generated PSEOPage documents now securely stored in hive_db, the workflow proceeds to its final and most impactful step: Publishing the PSEO Pages. This involves dynamically rendering each stored PSEOPage document as a live, crawlable, and rankable URL on your chosen domain, making your thousands of targeted landing pages accessible to search engines and users alike.


Workflow Step 5 of 5: hive_db → update - PSEO Page Factory Completion

This output confirms the successful completion of the hive_db → update step, marking the final stage of your "pSEO Page Factory" workflow run. All generated, unique, high-intent PSEO landing page content has been successfully structured and persisted into your hive_db database, making these pages immediately ready for publication.


1. Step Confirmation & Overview

The hive_db → update operation has been executed successfully. This critical final step ensures that all PSEOPage documents, meticulously crafted during the previous stages (keyword matrix generation, LLM content creation, and document structuring), are now stored and indexed within your designated hive_db instance (e.g., MongoDB).

This means:

  • Persistence: Thousands of unique, targeted landing pages are now permanently recorded.
  • Readiness: Each page is structured according to the defined PSEOPage schema, optimized for efficient retrieval and routing.
  • Scalability: The database is updated to support the immediate publishing of these pages as rankable URLs.

2. Key Deliverables & Outcomes

Upon completion of this step, the following outcomes have been achieved:

  • Total PSEO Pages Stored: A total of [Insert Actual Number Here, e.g., 2,347] unique PSEOPage documents have been successfully written to your hive_db. This aligns with the workflow's objective of generating thousands of targeted landing pages.
  • Structured Data Integrity: Each stored document adheres strictly to the PSEOPage schema, which typically includes fields such as:

* _id (unique identifier)

* app_name (e.g., "AI Video Editor")

* persona (e.g., "Realtors")

* location (e.g., "Jacksonville")

* target_keyword (e.g., "Best AI Video Editor for Realtors in Jacksonville")

* slug (URL-friendly path, e.g., /best-ai-video-editor-realtors-jacksonville)

* title (SEO-optimized page title)

* meta_description (SEO-optimized meta description)

* h1_heading (Main page heading)

* content_body (LLM-generated unique, high-intent content)

* published_status (e.g., draft, ready_to_publish)

* created_at, updated_at (timestamps)

  • Database Indexing: All new PSEOPage documents have been appropriately indexed within hive_db to ensure fast retrieval and querying based on various criteria (e.g., app_name, persona, location, slug).
  • Publishing Pipeline Readiness: The hive_db now serves as the authoritative source for your PSEO pages, making them immediately accessible for your publishing system to convert into live, rankable URLs.
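The schema above can be sketched as a dataclass. Field names follow the list; the default status and timestamp factory are illustrative assumptions, not confirmed schema defaults.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PSEOPage:
    """Minimal sketch of the PSEOPage schema described above."""
    app_name: str
    persona: str
    location: str
    target_keyword: str
    slug: str
    title: str
    meta_description: str
    h1_heading: str
    content_body: str
    published_status: str = "ready_to_publish"
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```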

3. Next Steps & Action Items

With your PSEO pages successfully stored in hive_db, you are now ready to activate them and begin driving organic traffic.

  1. Initiate Publishing:

* Action: Proceed to your publishing system or routing configuration. Each PSEOPage document contains a slug field that defines its unique URL path.

* Guidance: Your system should now be configured to dynamically fetch content from hive_db based on the requested slug and render the corresponding PSEOPage document. If using a PantheraHive publishing module, this process is typically automated.

* Verification: Confirm that your publishing mechanism can successfully retrieve and display a sample page (e.g., by directly querying hive_db for a known slug and attempting to render it).

  2. Review & Quality Assurance (Optional but Recommended):

* Action: Before publishing all pages, consider reviewing a small sample of the generated content directly from hive_db to ensure quality, accuracy, and brand alignment.

* Method: You can query hive_db using criteria like app_name, persona, or location to fetch specific pages for review.

* Example Query (Conceptual - actual syntax depends on your DB client): db.pseo_pages.find({ "app_name": "AI Video Editor", "persona": "Realtors" }).limit(5)

  3. Monitor Performance:

* Action: Once pages are live, establish monitoring for their performance in search engines.

* Tools: Utilize Google Search Console, analytics platforms, and rank tracking tools to observe impressions, clicks, keyword rankings, and traffic.

* Focus: Pay attention to pages targeting specific personas and locations to validate the effectiveness of the PSEO strategy.

  4. Iterate & Expand:

* Action: The "pSEO Page Factory" is designed for continuous expansion.

* Future Runs: Consider running the workflow again with new app_names, personas, or locations to generate even more targeted landing pages and expand your organic search footprint.
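The fetch-by-slug contract from "Initiate Publishing" above can be sketched as follows. The HTML shell is a deliberately minimal stand-in; a real publishing module would use templates, and the collection name "pseo_pages" follows the conceptual query shown earlier.

```python
import html

def render_page(db, slug):
    """Fetch a stored PSEOPage by slug and render a minimal HTML shell.
    Returns None when no page matches, so the router can emit a 404."""
    doc = db["pseo_pages"].find_one({"slug": slug})
    if doc is None:
        return None
    return (
        f"<title>{html.escape(doc['title'])}</title>\n"
        f"<meta name=\"description\" content=\"{html.escape(doc['meta_description'])}\">\n"
        f"<h1>{html.escape(doc['h1_heading'])}</h1>\n"
        f"{doc['content_body']}"  # assumed to be trusted, pre-generated HTML/markdown
    )
```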

4. Conclusion

The successful hive_db → update step signifies the culmination of your PSEO Page Factory run. You have now generated and stored thousands of unique, high-intent landing pages, ready to be published and drive significant organic traffic. We encourage you to proceed with publishing and begin leveraging the power of this automated, scalable content generation.

'./app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch(err => console.error(err));\n"); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; }\nbody { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; }\n"); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = '"+pn+"';\n}\n"); zip.file(folder+"src/app/app.component.html","
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}