Keyword Cannibalization Fixer
Run ID: 69c955d8a17964d77e86e11f2026-03-29SEO
PantheraHive BOS

Identify and resolve keyword cannibalization issues where multiple pages compete for the same terms.

Keyword Cannibalization Identification & Initial Analysis

This document outlines the initial findings and analysis regarding potential keyword cannibalization issues on your website. Keyword cannibalization occurs when multiple pages on your site compete for the same keyword in organic search results. This dilutes your authority, confuses search engines, and can ultimately lead to lower rankings, reduced organic traffic, and inefficient crawl budget utilization.

Our initial analysis aims to identify these competing pages and keywords, laying the groundwork for a strategic resolution in Step 2.


Methodology for Identifying Cannibalization

To effectively identify keyword cannibalization, a multi-faceted approach is typically employed, combining data from various sources:

  • Google Search Console (GSC) Data Analysis:

* Reviewing "Performance" reports to identify queries where multiple URLs from your site are appearing in the SERP (Search Engine Results Page) for the same keyword over time.

* Looking for instances where a primary target page is frequently outranked by a less relevant, secondary page.

* Analyzing click-through rates (CTR) and average positions for competing URLs for specific queries.

  • SERP Analysis:

* Manually searching for your target keywords to observe which pages from your domain rank. If different pages rotate in and out of the top positions for the same query, or if two of your pages consistently rank near each other, it's a strong indicator of cannibalization.

  • Site Content Audit:

* Reviewing your website's content inventory for pages with highly similar topics, target keywords, or user intent.

* Identifying pages that might be inadvertently optimized for the same primary or secondary keywords.

  • Internal Link Structure Examination:

* Analyzing your internal linking to see if multiple pages are receiving similar anchor text from other pages, potentially signaling to search engines that they are equally relevant for a given term.

* Identifying instances where link equity might be diluted across several similar pages instead of being consolidated on a single authoritative page.

  • URL Structure & Information Architecture Review:

* Assessing if your site's structure inherently creates overlap (e.g., very similar subcategories, blog posts that mirror product descriptions).
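The GSC portion of this methodology can be automated once performance data is exported as one row per query–page pair (the Search Console API's query+page dimension breakdown produces this shape). The sketch below is a minimal illustration, not part of any GSC tooling; the function and field names are our own assumptions.

```python
from collections import defaultdict

def find_cannibalized_queries(rows, min_urls=2):
    """Group performance rows by query and flag queries where two or
    more distinct URLs from the site received impressions.

    `rows` is an iterable of dicts with at least 'query' and 'page' keys,
    e.g. loaded with csv.DictReader from an exported report.
    """
    pages_by_query = defaultdict(set)
    for row in rows:
        pages_by_query[row["query"]].add(row["page"])
    # Keep only queries served by multiple URLs -- the cannibalization candidates.
    return {q: sorted(p) for q, p in pages_by_query.items() if len(p) >= min_urls}
```

Candidates surfaced this way still need manual review: two URLs appearing for one query is only a problem when both pages target the same intent.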


Potential Keyword Cannibalization Instances (Hypothetical Examples)

Based on common website structures and typical cannibalization patterns, we have identified several potential areas where keyword cannibalization might be occurring. Please note that these are illustrative examples, and a deeper dive with your specific website data will confirm and detail actual instances.

Scenario 1: Product Category vs. Specific Product vs. Blog Post

  • Keyword Cluster: "Best [Product Type]", "[Product Type] Reviews", "Buy [Product Type]"
  • Competing URLs (Hypothetical):

* /category/product-type-a/ (Category Page)

* /product/product-type-a-model-xyz/ (Specific Product Page)

* /blog/guide-to-choosing-product-type-a/ (Informational Blog Post)

  • Observed Behavior: The category page, a specific product page, and a blog post all fluctuate in rankings for terms like "best product type a" or "product type a reviews." Sometimes the blog post ranks higher than the category page, even though the intent for "buy product type a" is transactional and better served by the category or product page.
  • Potential Issue: Search engines are unsure which page is most authoritative or relevant for commercial intent queries related to "Product Type A." The blog post might be capturing informational intent, but if it ranks for commercial queries, it can detract from conversion-focused pages.

Scenario 2: Broad Service Page vs. Niche Service Page

  • Keyword Cluster: "Digital Marketing Services", "SEO Services", "Content Marketing Agency"
  • Competing URLs (Hypothetical):

* /services/digital-marketing/ (Broad Service Page)

* /services/seo/ (Niche Service Page)

* /services/content-marketing/ (Niche Service Page)

  • Observed Behavior: The main "Digital Marketing Services" page sometimes ranks for "SEO services" or "Content Marketing agency," even though dedicated, more detailed pages exist for these specific services. Conversely, the niche pages might rank for broader terms where the main service page is more appropriate.
  • Potential Issue: The broad service page might be overly optimized for specific keywords that are already covered in detail by niche service pages, leading to internal competition and diluted relevance for both the broad and specific offerings.

Scenario 3: Blog Posts with Overlapping Topics

  • Keyword Cluster: "Benefits of Cloud Computing", "Why Use Cloud Services", "Cloud Computing Advantages"
  • Competing URLs (Hypothetical):

* /blog/5-benefits-of-cloud-computing-for-businesses/

* /blog/understanding-the-advantages-of-cloud-infrastructure/

* /blog/is-cloud-computing-right-for-you-key-benefits/

  • Observed Behavior: All three blog posts rank interchangeably for very similar informational queries, none consistently dominating.
  • Potential Issue: These articles cover almost identical ground, targeting the same user intent with very similar keywords. This signals to search engines that your site has multiple, equally relevant (or equally irrelevant) pieces of content, preventing any single article from achieving maximum authority and ranking potential.
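The "interchangeable ranking" pattern described above can be checked mechanically when daily rank-tracking data is available. A minimal sketch, assuming rows of date/query/page/position from any rank tracker (the field names are assumptions):

```python
from collections import defaultdict

def detect_url_rotation(rows):
    """For each query, report the distinct URLs that held the best (lowest)
    position on at least one date. More than one winner across dates
    suggests the pages are rotating in and out of the top spot.

    `rows`: iterable of dicts with 'date', 'query', 'page', 'position' keys.
    """
    best = {}  # (query, date) -> (position, page) for the best-ranked page
    for r in rows:
        key = (r["query"], r["date"])
        if key not in best or r["position"] < best[key][0]:
            best[key] = (r["position"], r["page"])
    winners = defaultdict(set)
    for (query, _date), (_pos, page) in best.items():
        winners[query].add(page)
    # Only queries whose top URL changed across dates are reported.
    return {q: sorted(p) for q, p in winners.items() if len(p) > 1}
```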

Scenario 4: Location-Specific Pages (If Applicable)

  • Keyword Cluster: "Plumber [City Name]", "Emergency Plumbing [City Name]"
  • Competing URLs (Hypothetical):

* /locations/city-a/ (Main City Page)

* /services/emergency-plumbing-city-a/ (Service-specific page within a city)

* /blog/best-plumbers-in-city-a-guide/ (Informational blog post)

  • Observed Behavior: The main city page, a specific service page within that city, and even a blog post all compete for local search terms.
  • Potential Issue: If not carefully structured and optimized, location pages can easily cannibalize each other or more specific service pages within those locations, confusing local search rankings.

Immediate Impact Assessment

If left unaddressed, keyword cannibalization can lead to several negative outcomes:

  • Diluted Page Authority: Instead of one strong page ranking, your authority is split across multiple weaker pages.
  • Lower Rankings: Search engines may struggle to determine the most relevant page, resulting in none of your pages achieving optimal rankings.
  • Reduced Organic Traffic: Lower rankings inevitably lead to fewer impressions and clicks.
  • Confused User Experience: Users might land on a less relevant page, increasing bounce rates and reducing engagement.
  • Wasted Crawl Budget: Search engine crawlers spend time indexing and evaluating multiple similar pages instead of focusing on unique, valuable content.
  • Inaccurate Analytics: It becomes harder to track the performance of your primary target pages.

Next Steps: Preparing for Resolution (Step 2)

This initial analysis has identified potential areas of concern. The next crucial step is to validate these hypotheses with your specific website data and formulate a resolution strategy.

For the Customer:

To facilitate Step 2, please be prepared to provide access to or data from:

  • Google Search Console: Specifically, performance reports for relevant keywords and URLs.
  • Google Analytics (or equivalent): To understand user behavior on competing pages.
  • Your website's sitemap and content inventory.

For Step 2: "Keyword Cannibalization Fixer - Resolution Strategy"

The subsequent step will involve:

  1. Data Validation: Confirming the identified cannibalization instances using your real-time data.
  2. Impact Quantification: Assessing the actual impact on rankings and traffic for each confirmed instance.
  3. Strategy Formulation: Developing a tailored resolution plan for each identified cannibalization issue, which may include:

* Content Merging/Consolidation: Combining weaker pages into one strong, comprehensive resource.

* Content Deletion & Redirects: Removing irrelevant pages and 301 redirecting them to the authoritative page.

* Content Restructuring/Re-optimization: Differentiating content and optimizing pages for distinct keyword sets and user intents.

* Internal Linking Adjustments: Directing link equity to the primary target page.

* Canonical Tags: Indicating the preferred version of a page to search engines (used carefully).

* Noindex Tags: Preventing certain pages from being indexed if they serve a specific internal purpose but shouldn't rank.

We look forward to collaborating with you on Step 2 to implement effective solutions and enhance your website's organic performance.

Gemini Output

Workflow: Keyword Cannibalization Fixer

Step 2 of 2: Gemini - Generate Resolution Strategy

This deliverable provides a comprehensive, detailed, and actionable strategy for identifying and resolving keyword cannibalization issues on your website. Implementing these strategies will help consolidate your site's authority, improve search engine rankings, and enhance overall organic performance.


Understanding Keyword Cannibalization

What is it?

Keyword cannibalization occurs when multiple pages on the same website compete for the exact same or very similar keywords in search engine results. Instead of strengthening your site's authority for a given term, it dilutes it, confusing search engines about which page is most relevant.

Why is it a problem?

  • Diluted Authority: Your pages compete against each other, splitting potential ranking power and backlinks.
  • Lower Rankings: Search engines may struggle to determine the most authoritative page, often ranking neither page optimally.
  • Reduced Organic Traffic: Even if pages rank, the overall traffic potential for the keyword is often diminished.
  • Wasted Crawl Budget: Search engines spend valuable time crawling and indexing multiple similar pages instead of discovering new, unique content.
  • Confused User Experience: Users might land on a less relevant page, leading to higher bounce rates.

Generated Output: Detailed Strategy for Identifying and Resolving Keyword Cannibalization

This strategy is broken down into three phases: Identification, Resolution, and Monitoring.

Phase 1: Identification of Cannibalization Issues

The first step is to accurately pinpoint where keyword cannibalization is occurring on your site.

  1. Utilize Google Search Console (GSC):

* Performance Report: Navigate to the "Performance" report in GSC.

* Filter by Query: Select "Queries" and filter by specific keywords you suspect might be cannibalized (e.g., your core product/service keywords, high-value terms).

* Check "Pages" Tab: For each query, click on it and then select the "Pages" tab. If you see multiple URLs listed for the same keyword, especially if their rankings fluctuate significantly, this is a strong indicator of cannibalization.

* Look for Ranking Fluctuations: Identify instances where different URLs on your site appear in the SERPs for the same query at different times or positions.

  2. Leverage Professional SEO Tools (e.g., SEMrush, Ahrefs, Moz, Sistrix):

* Keyword Ranking Reports: Export your site's organic keyword rankings. Sort this data by keyword. Look for instances where multiple URLs from your domain rank for the same keyword.

* Site Audit Features: Many tools have built-in site audit features that can flag potential cannibalization issues or duplicate content.

* Organic Positions Report: Input your domain into the tool, navigate to "Organic Positions" or "Keywords," and filter to identify multiple URLs ranking for identical or near-identical terms.

  3. Perform site:yourdomain.com "your keyword" Searches:

* Conduct specific searches on Google. For example, site:yourwebsite.com "best CRM software".

* Review the results to see which pages Google presents. If multiple pages with similar titles and descriptions appear, it's a sign of cannibalization.

  4. Conduct a Manual Content Audit:

* Review your content inventory, sitemap, or category pages.

* Identify pages with highly similar topics, content focus, and intended user intent. Pay close attention to blog posts, service pages, and product pages that might overlap.

  5. Create a "Cannibalization Audit Spreadsheet":

* This spreadsheet will be your central document for tracking and managing cannibalization issues.

* Recommended Columns:

* Target Keyword: The keyword being cannibalized.

* Primary Ranking URL: The page you want to rank for the keyword (if one exists).

* Secondary/Cannibalizing URL(s): The page(s) currently competing with the primary URL.

* Current Ranking (Primary): Current position in SERPs for the primary URL.

* Current Ranking (Secondary): Current position in SERPs for the secondary URL(s).

* Page Title (Primary): Title tag of the primary URL.

* Page Title (Secondary): Title tag(s) of the secondary URL(s).

* Content Overlap %: (Estimate or use a tool) How much content is duplicated/similar.

* User Intent (Primary): What user need does the primary page fulfill?

* User Intent (Secondary): What user need does the secondary page fulfill?

* Proposed Action: (e.g., Consolidate, Differentiate, Canonicalize, Noindex).

* Status: (e.g., To Do, In Progress, Completed, Monitoring).

* Notes: Any additional context or observations.
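The recommended columns can be scaffolded into a CSV file programmatically, so every audit starts from the same template. A minimal sketch (the column names mirror the list above; everything else is illustrative):

```python
import csv
import io

# Column headers taken from the recommended spreadsheet layout above.
AUDIT_COLUMNS = [
    "Target Keyword", "Primary Ranking URL", "Secondary/Cannibalizing URL(s)",
    "Current Ranking (Primary)", "Current Ranking (Secondary)",
    "Page Title (Primary)", "Page Title (Secondary)", "Content Overlap %",
    "User Intent (Primary)", "User Intent (Secondary)",
    "Proposed Action", "Status", "Notes",
]

def build_audit_csv(issues):
    """Serialize a list of issue dicts into CSV text using the audit columns.

    Missing fields are left blank so partially-researched issues can be
    logged immediately and filled in later.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=AUDIT_COLUMNS, restval="")
    writer.writeheader()
    for issue in issues:
        writer.writerow(issue)
    return buf.getvalue()
```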

Phase 2: Resolution Strategies

Once you've identified the cannibalization issues, choose the most appropriate resolution strategy based on the nature of the competing pages.

  1. Strategy 1: Consolidate & Merge Content (with 301 Redirect)

* When to Use: If multiple pages cover identical topics, target the exact same user intent, and one page is clearly superior in quality, depth, or existing backlinks. This is often the most effective solution.

* How to Implement:

1. Identify the Strongest Page: Choose the page with the highest authority, best content, most relevant backlinks, and strongest historical performance. This will be your target page.

2. Merge Valuable Content: Extract any unique, valuable, or well-performing sections from the weaker, cannibalizing pages and integrate them into the chosen strong page. Ensure the consolidated page offers the most comprehensive and authoritative resource on the topic.

3. Implement 301 Redirects: Set up permanent (301) redirects from all weaker, merged URLs to the chosen strong, consolidated URL. This passes link equity and tells search engines the content has moved permanently.

4. Update Internal Links: Audit your website and update any internal links that were pointing to the old, weaker pages. Redirect them to the new, consolidated page to ensure proper link flow and user experience.
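Redirect rules of this kind normally live in server configuration (for example an Apache `Redirect 301 /old /new` directive or an nginx `return 301`), but the mapping itself can be expressed and sanity-checked as data first. A minimal sketch using the hypothetical cloud-computing URLs from Scenario 3:

```python
# Hypothetical redirect map: retired (merged) URLs -> the consolidated page.
REDIRECTS = {
    "/blog/understanding-the-advantages-of-cloud-infrastructure/":
        "/blog/5-benefits-of-cloud-computing-for-businesses/",
    "/blog/is-cloud-computing-right-for-you-key-benefits/":
        "/blog/5-benefits-of-cloud-computing-for-businesses/",
}

def resolve(path):
    """Return (status_code, location) for an incoming request path.

    Merged URLs answer 301 (permanent) with the strong page as the target,
    which passes link equity; everything else is served normally.
    """
    if path in REDIRECTS:
        return 301, REDIRECTS[path]
    return 200, path
```

Keeping the map as data also makes it easy to diff against the cannibalization audit spreadsheet before deploying.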

  2. Strategy 2: Differentiate Content & Intent

* When to Use: If pages cover similar but distinct aspects of a topic, or target different stages of the buyer's journey or user intent. The goal here is to make each page uniquely valuable.

* How to Implement:

1. Refine Keyword Targeting: Assign a unique primary keyword and a distinct set of secondary keywords to each page. Ensure these keywords reflect the specific angle or intent of that page.

2. Adjust Content Scope:

* Broaden one page: Make one page a comprehensive, general overview (e.g., "What is [Topic]?").

* Narrow another: Make the competing page more specific, targeting a niche aspect, a specific problem, a comparison, or a "how-to" guide (e.g., "How to Implement [Topic]" or "[Topic] for Small Businesses").

3. Optimize On-Page Elements: Ensure titles, meta descriptions, H1s, and the body content for each page clearly communicate its unique target keyword and user intent. Remove any ambiguity.

4. Add Unique Value: Expand on unique sections, add specific examples, case studies, or data points that are exclusive to each differentiated page.

  3. Strategy 3: Optimize Internal Linking

* When to Use: To clearly signal to search engines which page is the most important and authoritative for a specific keyword.

* How to Implement:

1. Audit Internal Links: Review all internal links on your site that use anchor text related to the cannibalized keyword.

2. Direct Link Equity: Ensure that all relevant internal links with the target keyword in their anchor text point to your preferred ranking page.

3. Adjust Competing Links: For pages that are competing, either remove internal links pointing to them with the target keyword, or change their anchor text to reflect their differentiated (non-competing) intent.

4. Strengthen Preferred Page: From high-authority pages on your site, add or strengthen internal links to your primary, preferred page using relevant anchor text.
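The anchor-text audit in step 1 can be partially automated with Python's standard-library HTML parser. A minimal sketch (the function names and the simple keyword-in-anchor matching rule are our own assumptions, not an established tool):

```python
from html.parser import HTMLParser

class AnchorCollector(HTMLParser):
    """Collect (href, anchor_text) pairs from an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None   # href of the <a> currently being read, if any
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def misdirected_links(html, keyword, preferred_url):
    """Flag links whose anchor text contains the target keyword but which
    point somewhere other than the preferred ranking page."""
    parser = AnchorCollector()
    parser.feed(html)
    return [(href, text) for href, text in parser.links
            if keyword.lower() in text.lower() and href != preferred_url]
```

Run over each crawled page, the flagged links become the worklist for step 3 (remove the link or rewrite its anchor text).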

  4. Strategy 4: Implement Canonical Tags (Use with Caution)

* When to Use: When you must have similar or identical content on different URLs (e.g., product variations, print versions, syndicated content, or pages with minor differences due to tracking parameters) but want to tell search engines which is the "master" version. This is often a technical solution for specific scenarios.

* How to Implement: On the cannibalizing page(s), add a rel="canonical" tag in the <head> section, pointing to the URL of the preferred, master page.

* Example: <link rel="canonical" href="https://www.yourwebsite.com/preferred-page/" />

* Caution: Canonical tags are a hint, not a directive. Search engines may choose to ignore them if other strong signals (like internal links or backlinks) contradict the canonical tag.
