Search Functionality Builder
Run ID: 69bcaaad77c0421c0bf4a709 · 2026-03-29 · Development
PantheraHive BOS

Workflow Execution: Search Functionality Builder (Step 1/2: Generate)

Overview

This workflow step has successfully generated a comprehensive plan and initial architectural recommendations for implementing Full-Text Search (FTS) functionality. Given the user inputs of "Full-Text" search type and "Database" as the data source, the focus is on leveraging existing database capabilities while outlining a path for future scalability with dedicated search engines.

User Input Summary

  • Search Type: Full-Text
  • Data Source: Database

Generated Architecture & Component Recommendations

Based on the requirement to implement Full-Text Search directly from a database, we recommend a phased approach. The initial phase will leverage native database FTS capabilities for quicker implementation, with a clear path to integrate a dedicated search engine for future scalability and advanced features.

Approach 1: Native Database Full-Text Search (Recommended for Initial Implementation)

This approach utilizes the built-in full-text search features of your chosen relational database, providing a solid foundation with minimal new infrastructure.

1. Your Existing Database: The primary data store (e.g., PostgreSQL, MySQL, SQL Server).

2. Application Backend: Your application's server-side logic responsible for constructing and executing search queries, and processing results.

3. Database FTS Features: Specific index types, data types, and functions provided by the database for full-text search.

* Database Selection & Capabilities:

* PostgreSQL: Highly recommended for its advanced tsvector and tsquery types, GIN/GiST indexes, robust ranking functions (ts_rank), and support for multiple language configurations. This offers the most powerful native FTS.

* MySQL: Provides FULLTEXT indexes for InnoDB and MyISAM tables. Simpler to configure, but offers fewer advanced features (e.g., less sophisticated ranking and no built-in stemming).

* SQL Server: Offers FULLTEXT catalogs and indexes, providing good capabilities including linguistic analysis, but typically requires more setup and configuration.

* Schema Modifications for Searchability:

* Identify Searchable Fields: Determine which text-based columns (e.g., product_name, description, article_body, tags) across your tables need to be included in the search index.

* Dedicated Search Vector Column (PostgreSQL/SQL Server):

* PostgreSQL: Create a new tsvector column (e.g., search_content) in relevant tables. This column will store the pre-processed, tokenized, and stemmed representation of your searchable text. It can be a GENERATED ALWAYS AS column for automatic updates.

* SQL Server: Full-text indexes are typically created directly on existing text columns, but you can concatenate them via a view or application logic.

* Example (PostgreSQL products table):

        ALTER TABLE products
        ADD COLUMN search_content tsvector
            GENERATED ALWAYS AS (
                to_tsvector('english',
                    coalesce(product_name, '') || ' ' || coalesce(description, ''))
            ) STORED;
* Querying & Relevance:

* Database-Specific Operators: Utilize the native FTS operators for matching (e.g., @@ in PostgreSQL, MATCH ... AGAINST in MySQL, CONTAINS/FREETEXT in SQL Server).

* Ranking: Implement relevance ranking using database functions to order search results.

* PostgreSQL: Use ts_rank or ts_rank_cd to assign scores based on term frequency and proximity.

* MySQL: MATCH ... AGAINST includes a relevance score.

* SQL Server: CONTAINS/FREETEXT have built-in ranking mechanisms.

* Example (PostgreSQL query):

        SELECT id, product_name,
               ts_rank(search_content, plainto_tsquery('english', 'blue widget')) AS rank
        FROM products
        WHERE search_content @@ plainto_tsquery('english', 'blue widget')
        ORDER BY rank DESC;
  • Data Synchronization:

* Automatic Updates: If using generated columns (PostgreSQL) or triggers, the search_content column will automatically update when source columns change. This is the preferred method.

* Initial Backfill: For existing data, a one-time script will be needed to populate the search_content column for all records.
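
For comparison, the MySQL flavour of Approach 1 is more compact but less tunable. A minimal sketch, assuming name and description columns on a products table:

    -- MySQL (InnoDB): add a FULLTEXT index and query it with relevance scoring
    ALTER TABLE products ADD FULLTEXT INDEX ft_products_text (name, description);

    SELECT id, name,
           MATCH(name, description) AGAINST('blue widget' IN NATURAL LANGUAGE MODE) AS score
    FROM products
    WHERE MATCH(name, description) AGAINST('blue widget' IN NATURAL LANGUAGE MODE)
    ORDER BY score DESC;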

Approach 2: Dedicated Search Engine (Recommended for Future Scalability & Advanced Features)

While native database FTS is excellent for initial needs, dedicated search engines offer superior capabilities for large-scale, complex, and high-performance search requirements. This approach should be considered for future phases.

  • Core Components:

1. Database: Remains the source of truth for your data.

2. Dedicated Search Engine: An external system optimized for search (e.g., Elasticsearch, Apache Solr, OpenSearch).

3. Application Backend: Handles user requests, translates them into search engine queries, and processes results.

4. Indexing Service/Worker: A separate service responsible for extracting data from the database, transforming it, and pushing it to the search engine.

  • Key Considerations for Future Phases:

* Data Synchronization Strategies:

* Real-time (CDC): Implement Change Data Capture (CDC) using tools like Debezium or database-level triggers to stream changes to a message queue (e.g., Kafka, RabbitMQ). An indexing service consumes these messages to update the search engine in near real-time.

* Batch/Scheduled Indexing: For less frequently updated data, a periodic job can re-index relevant data from the database to the search engine.

* Search Engine Schema/Mapping: Define a schema within the search engine that optimizes for search (e.g., text fields for full-text, keyword for exact matching, date for range queries, nested for complex objects).

* Advanced Features: Dedicated engines excel at:

* Faceting & Filtering: Dynamic aggregation of results based on categories, price ranges, etc.

* Autocomplete & Query Suggestion: Providing real-time suggestions as users type.

* Typo Tolerance: Handling misspellings with fuzzy matching.

* Synonyms & Stop Words: Custom dictionaries to improve relevance and expand search results.

* Geospatial Search: For location-based queries.

* Complex Ranking & Relevance Tuning: Highly customizable scoring algorithms.

* Scalability: Distributed architecture for handling vast amounts of data and high query loads.
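
The CDC-based sync described above ultimately reduces to a transform step inside the indexing worker. A minimal sketch, assuming a Debezium-style event shape and illustrative field names (product_name, description) that would need to match your actual tables and search-engine mapping:

```python
from typing import Optional

def cdc_event_to_search_doc(event: dict) -> Optional[dict]:
    """Map one Debezium-style CDC change event to a search-index document.

    Returns None for deletes, where the caller should instead issue a
    delete-by-id against the search engine.
    """
    op = event.get("op")  # "c" = create, "u" = update, "d" = delete
    if op == "d":
        return None
    row = event["after"]  # post-change row image
    return {
        "id": row["id"],
        "name": row.get("product_name", ""),
        "description": row.get("description", ""),
        # one concatenated full-text field, mirroring the search-engine mapping
        "search_text": " ".join(
            filter(None, [row.get("product_name"), row.get("description")])
        ),
    }
```

The indexing service would consume events from the message queue, run each through a transform like this, and bulk-index the resulting documents.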

Actionable Plan for Development (Phase 1: Native Database FTS)

This plan outlines the immediate steps to implement Full-Text Search using your database's native capabilities.

  1. Database Schema Analysis & Modification:

* Task: Identify all tables and specific text columns requiring FTS.

* Action: Based on your chosen database, add the search_content column (e.g., tsvector for PostgreSQL, or identify direct text columns for MySQL/SQL Server).

* Deliverable: SQL migration scripts to add/modify columns. Update ORM/data access layer configuration.

  2. Full-Text Index Creation:

* Task: Create the appropriate full-text index on the search_content column or designated text columns.

* Action: Write SQL scripts to create GIN (PostgreSQL), FULLTEXT (MySQL), or FULLTEXT (SQL Server) indexes.

* Deliverable: SQL migration scripts for index creation.

  3. Backend API Development for Search:

* Task: Create a dedicated API endpoint for search queries.

* Action:

* Define a RESTful endpoint (e.g., GET /api/search?q={query_string}&page={page}&limit={limit}&sort={sort_order}).

* Implement logic to construct database-specific FTS queries based on query_string.

* Incorporate pagination and relevance-based sorting (ORDER BY rank DESC).

* Add basic filtering capabilities (e.g., by category, date) alongside FTS.

* Deliverable: API endpoint documentation, backend code for search logic, unit tests.

  4. Data Synchronization & Backfill:

* Task: Ensure search_content is automatically updated and populate existing data.

* Action:

* Verify that GENERATED ALWAYS AS columns or database triggers are correctly updating the search_content on INSERT/UPDATE.

* Develop a one-time script to backfill the search_content column for all existing records in your database.

* Deliverable: Automated update mechanism, backfill script, and verification plan.

  5. Initial Testing & Performance Benchmarking:

* Task: Validate functionality and performance.

* Action:

* Conduct comprehensive unit and integration tests for the search API.

* Perform basic performance tests with varying query complexities and data volumes to identify potential bottlenecks.

* Deliverable: Test reports, initial performance metrics.
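
The query-construction logic in step 3 can be sketched as a small helper that returns SQL plus bound parameters. This is a hypothetical sketch (table and column names assumed from the schema discussion above); placeholders are psycopg-style (%s), and user input is always bound as a parameter, never interpolated into the SQL string:

```python
def build_search_query(q: str, page: int = 1, limit: int = 10):
    """Return (sql, params) for a paginated, ranked full-text search."""
    offset = (max(page, 1) - 1) * limit
    sql = (
        "SELECT id, product_name, "
        "ts_rank(search_content, plainto_tsquery('english', %s)) AS rank "
        "FROM products "
        "WHERE search_content @@ plainto_tsquery('english', %s) "
        "ORDER BY rank DESC "
        "LIMIT %s OFFSET %s"
    )
    return sql, (q, q, limit, offset)
```

The endpoint handler passes the returned pair straight to the database client's execute call, keeping pagination and ranking inside a single round trip.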

Structured Data Summary

| Category | Key Recommendation / Decision | Details |
| :--- | :--- | :--- |
| Overall Strategy | Start with Native Database Full-Text Search | Leverage existing database capabilities for quicker initial implementation and lower operational overhead. Plan for a dedicated engine later if requirements scale. |
| Database Choice | PostgreSQL (Highly Recommended) | Superior FTS features (tsvector, tsquery, GIN indexes, ranking, language support). MySQL/SQL Server are viable alternatives with fewer advanced features. |
| Schema Design | Dedicated search_content / Full-Text Indexable Columns | Create a computed/generated column (e.g., tsvector in PostgreSQL) or identify direct text columns for indexing. Concatenate relevant text fields for comprehensive search. |
| Indexing | GIN Index (PostgreSQL) / FULLTEXT Index (MySQL/SQL Server) | Essential for query performance. Ensure the index is created on the search_content or designated FTS columns. |
| Querying | Database-Specific FTS Operators & Ranking | Use native operators (@@, MATCH AGAINST, CONTAINS) and ranking functions (ts_rank) for relevance. |
| Data Sync (Initial) | Database Triggers / Generated Columns | Automate updates to the search_content column upon source data changes. Develop a backfill script for existing data. |
| API Endpoint | /api/search?q={query_string}&page={page}&limit={limit} | Standard RESTful endpoint for search, handling query parameters, pagination, and result formatting. |
| Future Consideration | Dedicated Search Engine (Elasticsearch/Solr/OpenSearch) | Plan for migration to a dedicated engine when advanced features (faceting, geospatial, complex ranking, large scale) or extreme performance become critical. Implement a robust data sync strategy. |

Step 2: projectmanager

Workflow Execution: Search Functionality Builder - Step 2: Document

Category: Development

Workflow Name: Search Functionality Builder

Step: 2 of 2 - Document

This document provides comprehensive, actionable guidance for implementing Full-Text Search (FTS) directly within your database, based on the plan derived from your inputs (search_type: Full-Text, data_source: Database). We will focus primarily on PostgreSQL as a robust and feature-rich example, while the core concepts are broadly applicable to other relational databases with native FTS capabilities.


1. Introduction to Database-Native Full-Text Search

Implementing FTS directly within your database leverages the power of your existing data infrastructure to provide fast and relevant search results. This approach simplifies your architecture by avoiding external search engines (like Elasticsearch or Solr) for simpler use cases, reducing operational overhead and maintaining data consistency within a single system.

Key Advantages:

  • Simplified Architecture: No need to manage and synchronize data with an external search index.
  • Data Consistency: Search results are always in sync with your primary data.
  • Lower Latency: Often faster for smaller to medium datasets as data doesn't need to leave the database.
  • Cost-Effective: Utilizes existing database resources, potentially avoiding additional infrastructure costs.

Core Concepts:

  • Tokenization: Breaking text into individual words or "tokens."
  • Stemming: Reducing words to their root form (e.g., "running" -> "run").
  • Stop Words: Removing common, uninformative words (e.g., "the", "a", "is").
  • Dictionaries: Sets of rules for tokenization, stemming, and stop words, often language-specific.
  • tsvector: A special data type in PostgreSQL (and similar in other databases) that stores the processed, indexed form of your text.
  • tsquery: A special data type that stores the processed form of your search query.
  • Ranking: Algorithms to score the relevance of search results.
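
These concepts are easy to see directly in PostgreSQL by inspecting the output of to_tsvector and plainto_tsquery:

    SELECT to_tsvector('english', 'The quick brown foxes are running');
    -- 'brown':3 'fox':4 'quick':2 'run':6   (stop words removed, words stemmed)

    SELECT plainto_tsquery('english', 'running foxes');
    -- 'run' & 'fox'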

2. Recommended Database Technology: PostgreSQL Full-Text Search

PostgreSQL offers a powerful and mature native Full-Text Search engine. It provides excellent performance, flexibility, and integrates seamlessly with your existing data.

Why PostgreSQL FTS?

  • Robust Features: Supports stemming, stop words, dictionaries, language configurations, phrase search, and ranking.
  • Performance: Utilizes GiST and GIN indexes for extremely fast query times on large datasets.
  • Integration: FTS capabilities are built directly into the SQL language.
  • Scalability: Can handle significant data volumes and query loads.

3. Database Schema Modifications

To implement FTS, you will need to add a dedicated column to your table(s) to store the tsvector representation of your searchable content.

Actionable Steps:

  1. Identify Searchable Tables & Columns: Determine which tables and columns contain the text you want to make searchable (e.g., products.name, products.description, articles.title, articles.body).
  2. Add tsvector Column: Add a new column to store the pre-processed text.

Example SQL:


    ALTER TABLE products
    ADD COLUMN search_vector tsvector;

  3. Create a GIN Index: A GIN (Generalized Inverted Index) index is crucial for fast tsvector queries.

Example SQL:


    CREATE INDEX idx_products_search_vector
    ON products USING GIN (search_vector);

* Recommendation: GIN indexes are generally preferred for FTS due to their speed for query lookups, though they are slower to build and update than GiST indexes. For very frequent updates, consider GiST, but GIN is usually the better choice for FTS.


4. Indexing Strategy: Populating and Maintaining the tsvector Column

The search_vector column needs to be populated with the processed text from your source columns. This should be done initially and then kept updated automatically.

Actionable Steps:

  1. Initial Population: Populate the search_vector for existing data.

Example SQL:


    -- For English language, combining name and description
    UPDATE products
    SET search_vector = to_tsvector('english', coalesce(name, '') || ' ' || coalesce(description, ''));

* Recommendation: Choose the appropriate language configuration (e.g., 'english', 'spanish', 'simple'). You can also combine multiple columns, concatenating them with spaces.

* Weighting: For better relevance, you can assign weights to different fields.


        UPDATE products
        SET search_vector = setweight(to_tsvector('english', coalesce(name, '')), 'A') ||
                            setweight(to_tsvector('english', coalesce(description, '')), 'B');

(Where 'A' is highest weight, 'D' is lowest).

  2. Automated Updates with a Trigger: Use a database trigger to automatically update the search_vector column whenever the source columns change.

Example SQL (for products table, name and description columns):


    -- Create a function to update the tsvector
    CREATE OR REPLACE FUNCTION update_product_search_vector() RETURNS TRIGGER AS $$
    BEGIN
        NEW.search_vector = setweight(to_tsvector('english', coalesce(NEW.name, '')), 'A') ||
                            setweight(to_tsvector('english', coalesce(NEW.description, '')), 'B');
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    -- Create the trigger
    CREATE TRIGGER trg_products_search_vector
    BEFORE INSERT OR UPDATE OF name, description ON products
    FOR EACH ROW EXECUTE FUNCTION update_product_search_vector();

* Recommendation: This approach ensures that your search index is always up-to-date with your data, preventing stale search results.


5. Querying Strategy: Performing Searches

Once your search_vector is populated and indexed, you can perform queries using the @@ operator.

Actionable Steps:

  1. Basic Search:

Example SQL:


    SELECT id, name, description
    FROM products
    WHERE search_vector @@ to_tsquery('english', 'blue & widget');

* to_tsquery: Expects a pre-formatted query string with boolean operators (& for AND, | for OR, ! for NOT, <-> for phrase search).

* plainto_tsquery: A more user-friendly function that converts plain text into a tsquery, automatically inserting & between words. This is generally preferred for user-entered search terms.

Example SQL (User-friendly search):


    SELECT id, name, description
    FROM products
    WHERE search_vector @@ plainto_tsquery('english', 'blue widget'); -- Automatically becomes 'blue & widget'

  2. Phrase Search: To search for an exact phrase or for words in a specific order.

Example SQL:


    SELECT id, name, description
    FROM products
    WHERE search_vector @@ to_tsquery('english', 'electric <-> car'); -- "electric car"

  3. Prefix Matching: To allow searching for partial words (e.g., "comp" for "computer").

Example SQL:


    SELECT id, name, description
    FROM products
    WHERE search_vector @@ to_tsquery('english', 'comp:*'); -- Matches "computer", "company", etc.

* Recommendation: For user input, you might combine plainto_tsquery with manual prefixing for the last word if you want "starts-with" behavior.
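
That "starts-with" behaviour can be implemented with a small, hypothetical helper that prefix-matches only the final (still-being-typed) word. Note the result must be passed to to_tsquery rather than plainto_tsquery, since plainto_tsquery strips the :* syntax:

```python
def to_prefix_tsquery(user_input: str) -> str:
    """Build a to_tsquery() string with the last word prefix-matched."""
    words = user_input.split()
    if not words:
        return ""
    words[-1] += ":*"  # prefix-match the last word only
    return " & ".join(words)
```

For example, "blue widg" becomes "blue & widg:*", which to_tsquery('english', ...) then matches against "blue widget", "blue widgets", and so on.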


6. Ranking and Ordering Results

To provide the most relevant results first, you need to rank them. PostgreSQL offers ts_rank and ts_rank_cd functions.

Actionable Steps:

  1. Basic Ranking:

Example SQL:


    SELECT
        id,
        name,
        description,
        ts_rank(search_vector, plainto_tsquery('english', 'powerful engine')) AS rank_score
    FROM products
    WHERE search_vector @@ plainto_tsquery('english', 'powerful engine')
    ORDER BY rank_score DESC;

  2. Weighted Ranking: If you used setweight during indexing, ts_rank can use these weights to prioritize matches in certain fields.

Example SQL:


    SELECT
        id,
        name,
        description,
        ts_rank(ARRAY[0.1, 0.2, 0.4, 1.0], search_vector, plainto_tsquery('english', 'powerful engine')) AS rank_score
    FROM products
    WHERE search_vector @@ plainto_tsquery('english', 'powerful engine')
    ORDER BY rank_score DESC;

* ARRAY[D, C, B, A]: These are the weights for the different categories. 1.0 for 'A' (highest), 0.4 for 'B', etc. Adjust these values to fine-tune relevance.

* ts_rank_cd: A variant of ts_rank that uses a different algorithm, often providing more intuitive results for short queries. Experiment to see which works best for your data.


7. Advanced Features & Considerations

  • Highlighting/Snippet Generation (ts_headline): Generate snippets of the matching text with highlighted keywords.

Example SQL:


    SELECT
        id,
        name,
        ts_headline('english', description, plainto_tsquery('english', 'powerful engine')) AS snippet
    FROM products
    WHERE search_vector @@ plainto_tsquery('english', 'powerful engine');
  • Custom Dictionaries and Stop Words: For highly specialized domains or languages not fully supported, you can create custom dictionaries and stop word lists.

* Recommendation: This is an advanced topic. Start with default configurations and only customize if specific linguistic requirements are not met.

  • Synonyms: Implement synonym support using custom dictionaries or rule sets to broaden search results (e.g., "car" also matches "automobile").
  • Performance Tuning:

* Regularly VACUUM ANALYZE your tables, especially those with GIN indexes.

* Use EXPLAIN ANALYZE to understand query plans and identify bottlenecks.

* Ensure your shared_buffers and work_mem PostgreSQL configuration parameters are adequate.

  • Scalability: For extremely large datasets (billions of rows) or very high query loads, consider partitioning your table or eventually integrating with an external search engine (e.g., Elasticsearch) if database-native FTS becomes a bottleneck. However, PostgreSQL FTS can handle surprisingly large datasets efficiently.
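
As a concrete starting point for the EXPLAIN ANALYZE suggestion above, a sketch using the products schema from earlier sections:

    EXPLAIN ANALYZE
    SELECT id
    FROM products
    WHERE search_vector @@ plainto_tsquery('english', 'blue widget');
    -- A healthy plan shows a Bitmap Index Scan on idx_products_search_vector;
    -- a Seq Scan suggests the GIN index is missing or not being used.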

8. Application Integration

Integrating FTS into your application involves crafting the SQL queries and handling the results.

  • ORM/Database Client:

* Most ORMs (e.g., SQLAlchemy for Python, ActiveRecord for Ruby, Prisma for Node.js) allow you to execute raw SQL or provide specific FTS functions/extensions.

* If using raw SQL, ensure proper parameter binding to prevent SQL injection.

  • API Design:

* Expose a search endpoint (e.g., /api/search?q=query_term&page=1&limit=10).

* Handle user input sanitization.

* Consider pagination for large result sets.

* Return relevant data, including potentially the ts_headline snippets.

Example Python (SQLAlchemy) Snippet:


from sqlalchemy import text, func
from my_app.database import session  # assuming you have a session object
from my_app.models import Product    # assuming an ORM-mapped Product model with a search_vector column

def search_products(query_term: str, language: str = 'english'):
    tsquery = func.plainto_tsquery(language, query_term)
    
    # Using text() for raw SQL functions not directly mapped by ORM
    results = session.query(
        Product.id,
        Product.name,
        Product.description,
        func.ts_rank(Product.search_vector, tsquery).label('rank_score'),
        func.ts_headline(language, Product.description, tsquery).label('snippet')
    ).filter(
        Product.search_vector.op('@@')(tsquery)
    ).order_by(
        text('rank_score DESC')
    ).all()
    
    return results

9. Maintenance and Monitoring

  • Re-indexing: While triggers keep the index up-to-date, in rare cases (e.g., major data migration, changes to tsvector generation logic), you might need to rebuild the tsvector column and its GIN index. This can be done offline or with concurrent index builds (CREATE INDEX CONCURRENTLY).
  • Monitoring: Monitor database performance metrics, especially I/O on your FTS indexes. Look for slow queries that might indicate a need for index optimization or query refinement.
  • Backup and Recovery: Ensure your FTS indexes are included in your regular database backup strategy.
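
If a rebuild is needed, PostgreSQL 12+ can rebuild the index in place without blocking reads or writes (a sketch, using the index name from Section 3):

    -- Rebuild the FTS index online (PostgreSQL 12+)
    REINDEX INDEX CONCURRENTLY idx_products_search_vector;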

10. Alternative Database FTS Solutions (Brief Overview)

While PostgreSQL is highly recommended, other databases also offer native FTS:

  • MySQL: Supports FTS for InnoDB and MyISAM tables. Less feature-rich than PostgreSQL FTS, but suitable for simpler needs.
  • SQL Server: Offers Full-Text Search with powerful capabilities including linguistic analyzers and proximity search. Requires a dedicated Full-Text Search component installation.
  • Oracle Text: A very powerful and mature FTS solution for Oracle databases, offering extensive features for complex search requirements.

11. Next Steps / Action Plan

  1. Choose Language Configuration: Decide on the primary language(s) for your search.
  2. Identify Target Tables & Columns: Pinpoint exactly which data needs to be searchable.
  3. Implement Schema Changes: Add search_vector column and GIN index.
  4. Develop Trigger Function: Create the PL/pgSQL function and trigger to keep the search_vector updated.
  5. Perform Initial Indexing: Run the UPDATE statement to populate the search_vector for existing data.
  6. Develop Search Queries: Integrate plainto_tsquery, @@, ts_rank, and ts_headline into your application's data access layer.
  7. Test Thoroughly: Test with various queries, edge cases, and large datasets to ensure correctness and performance.
  8. Monitor Performance: Set up monitoring for your database to track FTS query performance and resource usage.
  9. Refine Ranking: Adjust ts_rank weights as needed based on user feedback and relevance testing.

By following these steps, you will successfully implement robust and efficient Full-Text Search functionality directly within your database.

'./app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch(err => console.error(err));\n"); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; }\nbody { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; }\n"); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = '"+pn+"';\n}\n"); zip.file(folder+"src/app/app.component.html","
<header class=\"app-header\">\n <h1>"+slugTitle(pn)+"</h1>\n <p>Built with PantheraHive BOS</p>\n</header>\n<router-outlet></router-outlet>
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("<!doctype")>=0||code.trim().toLowerCase().indexOf("<html")>=0; var indexHtml=isFullDoc?code:"<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n<title>"+title+"</title>\n<link rel=\"stylesheet\" href=\"style.css\">\n</head>\n<body>\n"+code+"\n<script src=\"script.js\"><\/script>\n</body>\n</html>\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}