AI Phone Receptionist
Run ID: 69c955e1a17964d77e86e19d • 2026-03-29 • Automation
PantheraHive BOS

Step 2 of 3: Gemini Model Generation for AI Phone Receptionist

This deliverable outlines the core AI model (Gemini) prompts, architectural considerations, and integration points for your AI Phone Receptionist system. This step leverages Google's Gemini Pro to generate the conversational logic, intent recognition, and response generation capabilities essential for the receptionist's functions.


1. System Overview & Gemini's Role

The AI Phone Receptionist system will leverage Gemini Pro as its central intelligence unit. Gemini will be responsible for:

  • Conversational logic: managing multi-turn dialogue and preserving context across the call.
  • Intent recognition: classifying each caller utterance (routing request, message, appointment, general inquiry).
  • Information extraction: pulling names, phone numbers, dates, and other key entities from free-form speech.
  • Response generation: producing natural, persona-consistent replies to be voiced back to the caller.

2. Core Gemini Model Prompts & Configuration

Below are the detailed prompts and configuration parameters designed for the Gemini model. These prompts define the AI's persona, its rules of engagement, and its task-specific behaviors.

2.1. Primary System Persona & Instruction Prompt

This is the foundational prompt that establishes the AI's identity and overarching operational guidelines.

Prompt:

Prompt text:
You are "Aura," the professional and friendly AI receptionist for Meridian Solutions.
Your primary goal is to provide exceptional first-line support, efficiently manage calls, and ensure a seamless experience for every caller.

**Core Directives:**
1.  **Persona**: Be polite, professional, clear, concise, empathetic, and always helpful. Maintain a calm and composed tone.
2.  **Greeting**: Always start with a welcoming and professional greeting.
3.  **Intent Identification**: Quickly and accurately determine the caller's purpose.
4.  **Information Gathering**: Extract necessary details for routing, messages, or appointments.
5.  **Task Execution**: Guide callers through specific processes (e.g., taking a message, booking an appointment).
6.  **Routing/Escalation**: Know when and how to route calls or escalate to a human agent.
7.  **Clarity & Confirmation**: Always confirm understanding and key details with the caller.
8.  **Error Handling**: Politely request clarification if input is unclear or incomplete.
9.  **Language**: Speak clearly and use natural language. Respond only in [Primary Language, e.g., English].
10. **Context**: Maintain conversation context throughout the call. Refer to previous statements if necessary.

**Constraints:**
*   Do not invent information. If you don't know something, state that you need to connect them to someone who can help.
*   Do not engage in casual conversation beyond professional pleasantries.
*   Prioritize efficiency while maintaining politeness.

**Initial Greeting Example:**
"Thank you for calling Meridian Solutions, this is Aura. How may I help you today?"

AI Phone Receptionist System: Initial Design & Capability Generation

This document outlines the initial design and core capabilities for your AI Phone Receptionist system. Leveraging advanced AI, including the Gemini model for sophisticated natural language understanding and conversational flow, this system aims to revolutionize your call management, enhance customer experience, and streamline operational efficiency.

This is Step 1 of 3: Generate - focusing on a comprehensive conceptualization and detailed feature breakdown.


1. System Vision & Objectives

Vision: To create an intelligent, always-on AI phone receptionist that acts as the first point of contact for all incoming calls, providing a seamless, professional, and efficient experience for callers while significantly reducing manual workload for staff.

Key Objectives:

  • 24/7 Availability: Ensure continuous service for callers, regardless of business hours.
  • Improved Caller Experience: Provide quick, accurate, and personalized responses.
  • Efficient Call Routing: Direct callers to the correct department or individual promptly.
  • Automated Information Capture: Reliably take messages and log inquiries.
  • Streamlined Appointment Management: Facilitate self-service appointment booking and modifications.
  • Reduced Operational Costs: Minimize the need for dedicated human receptionists for routine tasks.
  • Scalability: Design a system capable of handling varying call volumes.

2. Core Capabilities & Detailed Functionality

The AI Phone Receptionist system will be built around four primary capabilities, each detailed below:

2.1. Intelligent Call Answering & Greeting

  • Personalized Greetings:

* Dynamic greetings based on time of day (e.g., "Good morning," "Good afternoon").

* Customizable business name and branding integration.

* Option for holiday-specific greetings or out-of-office announcements.

  • Initial Inquiry Qualification:

* Politely ascertain the caller's purpose (e.g., "How can I help you today?").

* Leverage Gemini's natural language understanding (NLU) to interpret intent from free-form speech.

* Handle multiple intents within a single utterance (e.g., "I'd like to book an appointment for a haircut and ask about your pricing").

  • Hold Music & Messaging:

* If immediate routing isn't possible or a human agent is busy, offer customizable hold music and pre-recorded messages (e.g., "Please hold while I connect you," "Your call is important to us").

* Option to offer a callback instead of holding.

2.2. Dynamic Inquiry Routing

  • Intent-Based Routing:

* Departmental Routing: Based on the caller's stated need, route to specific departments (e.g., Sales, Support, Billing, HR).

* Service-Specific Routing: Direct callers to agents specialized in particular services or products.

* Employee-Specific Routing: If a caller asks for a specific employee by name, attempt to connect them.

  • Conditional Routing Logic:

* Business Hours Check: If a department is closed, route to voicemail, offer a message-taking option, or provide alternative contact methods.

* Agent Availability Check: Integrate with internal systems to check agent status (available, busy, offline) before routing.

* Priority Routing: Define rules for high-priority calls (e.g., existing VIP clients) to bypass queues.

  • Transfer & Escalation:

* Seamlessly transfer calls to human agents with a warm handover (passing context).

* Option for the AI to stay on the line initially for support or to provide a summary to the human agent.

* Escalate complex or unhandled queries to a human supervisor or dedicated support line.
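Putting the routing rules above together, a decision function might look like the following sketch. Department names, business hours, extensions, and the VIP list are placeholder data, not real configuration.

```python
from datetime import time

# Placeholder routing configuration (illustrative only).
DEPARTMENTS = {
    "sales":   {"open": time(9), "close": time(18), "extension": "101"},
    "support": {"open": time(8), "close": time(20), "extension": "102"},
}
VIP_CALLERS = {"+15551234567"}  # existing VIP clients bypass queues

def route_call(intent_department, caller_number, now):
    """Apply intent-based, priority, and business-hours routing in order."""
    dept = DEPARTMENTS.get(intent_department)
    if dept is None:
        return {"action": "escalate_to_human"}          # unhandled query
    if caller_number in VIP_CALLERS:
        return {"action": "priority_transfer", "extension": dept["extension"]}
    if not (dept["open"] <= now < dept["close"]):
        return {"action": "offer_voicemail"}            # business-hours check
    return {"action": "transfer", "extension": dept["extension"]}
```

An agent-availability check would slot in between the hours check and the final transfer, backed by a presence API.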

2.3. Automated Message Taking

  • Structured Message Capture:

* Prompt callers for essential information: Name, Caller ID, Phone Number, Email (if applicable), and detailed Message.

* Utilize Gemini's speech-to-text for accurate transcription, and NLU to extract key entities.

  • Message Delivery Options:

* Email Notification: Send transcribed messages to designated email addresses (e.g., department inbox, specific employee).

* SMS Notification: Option to send an SMS notification to relevant personnel.

* CRM Integration: Log messages directly into your existing Customer Relationship Management (CRM) system as new leads, tasks, or notes (e.g., Salesforce, HubSpot).

* Internal Communication Platform: Post messages to platforms like Slack or Microsoft Teams.

  • Confirmation & Follow-up:

* Confirm message receipt with the caller.

* Provide an estimated response time if possible.
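A minimal sketch of the structured message record and caller confirmation described above; the field names and confirmation wording are illustrative, not a fixed schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class CallerMessage:
    """Structured capture of the essentials: name, number, message, optional email."""
    name: str
    phone: str
    message: str
    email: str = ""  # optional, per "if applicable"
    received_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def confirm_receipt(msg: CallerMessage) -> str:
    """What the AI reads back to the caller to confirm receipt."""
    return (f"Thank you, {msg.name}. I've recorded your message and it will be "
            f"forwarded right away. Is there anything else I can help with?")
```

`asdict(msg)` yields a plain dictionary ready to serialize into an email body, SMS, CRM note, or Slack/Teams post.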

2.4. Self-Service Appointment Booking & Management

  • Real-time Calendar Integration:

* Connect with popular calendar systems (e.g., Google Calendar, Outlook Calendar, specialized booking software like Calendly, Acuity Scheduling).

* Access real-time availability of staff, rooms, or resources.

  • Intelligent Slot Suggestion:

* Based on caller's preferences (date, time, service type, preferred staff member), suggest available slots.

* Handle complex scheduling rules (e.g., buffer times, specific service durations, staff holidays).

  • Booking Confirmation & Reminders:

* Confirm appointment details with the caller verbally.

* Send automated email or SMS confirmations immediately after booking.

* Schedule automated reminders closer to the appointment time.

  • Modification & Cancellation:

* Allow callers to inquire about, reschedule, or cancel existing appointments by providing identifying information (e.g., name, confirmation number).

* Update the calendar system in real-time.
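The slot-suggestion step can be sketched as a scan over busy intervals returned by the calendar API. A real system would layer on buffer times, service-specific durations, and staff rules; this shows only the core free-slot scan, with all inputs assumed.

```python
from datetime import datetime, timedelta

def suggest_slots(day_start, day_end, busy, duration):
    """Return start times of free slots; `busy` is a sorted list of (start, end)."""
    slots, cursor = [], day_start
    for b_start, b_end in busy:
        # Emit fixed-length slots until the next busy interval begins.
        while cursor + duration <= b_start:
            slots.append(cursor)
            cursor += duration
        cursor = max(cursor, b_end)  # skip past the busy interval
    while cursor + duration <= day_end:
        slots.append(cursor)
        cursor += duration
    return slots
```

For example, a 9:00-12:00 day with a 10:00-10:30 booking yields 30-minute slots at 9:00, 9:30, 10:30, 11:00, and 11:30.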


3. Proposed Technology Stack & Architecture (High-Level)

The system will leverage a robust, scalable cloud-based architecture, with Google Cloud Platform (GCP) services at its core, specifically utilizing Gemini for advanced AI capabilities.

  • Voice Interface (Telephony Integration):

* Twilio Voice API / Google Cloud Telephony: For inbound call reception, call control, and outbound dialing (e.g., for callbacks).

* Google Cloud Text-to-Speech (TTS): To convert AI-generated responses into natural-sounding speech.

* Google Cloud Speech-to-Text (STT): To accurately transcribe caller's speech into text for NLU processing.

  • Conversational AI Engine:

* Google Dialogflow CX / Vertex AI Conversation (powered by Gemini): To manage conversational flows, identify user intent, extract entities, and maintain context across turns. This will be the brain for interpreting caller requests and generating appropriate responses.

* Gemini Model Integration: Directly leveraged within Dialogflow/Vertex AI for enhanced NLU, more natural conversational turns, complex reasoning, and potentially summarization of conversations before transfer.

  • Backend & Orchestration:

* Google Cloud Functions / Cloud Run: Serverless compute for handling business logic, API integrations, and event processing.

* Google Cloud Pub/Sub: For asynchronous communication between services (e.g., message delivery, event triggers).

* Google Cloud Firestore / Cloud SQL: For storing configuration, caller history, and system logs.

  • Integration Layer:

* RESTful APIs: For connecting with external systems like CRM, Calendar APIs (Google Calendar, Outlook 365), and potentially custom internal databases.

* OAuth 2.0: Secure authentication for third-party integrations.

Architectural Flow:

  1. Caller Dials In: Twilio/Google Telephony receives the call.
  2. Speech-to-Text: Caller's voice is converted to text.
  3. Intent Recognition (Gemini/Dialogflow): Text is processed by the AI engine to understand the caller's intent and extract key information.
  4. Business Logic Execution (Cloud Functions): Based on intent, appropriate actions are triggered:

* Querying a calendar system for availability.

* Accessing CRM for customer information.

* Logging a message.

* Checking department availability.

  5. Response Generation (Gemini/Dialogflow): An appropriate text response is generated.
  6. Text-to-Speech: The text response is converted back into natural-sounding speech.
  7. Voice Output: The AI speaks to the caller.
  8. Loop: This cycle continues until the call is concluded, transferred, or a message is taken.
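Condensed into code, the architectural flow above is a single control loop. Here `stt`, `understand`, `execute`, `respond`, `tts`, and `speak` are hypothetical stand-ins for the GCP services named in section 3, not real SDK calls.

```python
def handle_call(stt, understand, execute, respond, tts, speak):
    """One inbound call: loop STT -> NLU -> logic -> response -> TTS until done."""
    context = []
    while True:
        text = stt()                               # speech-to-text
        intent = understand(text, context)         # intent recognition
        result = execute(intent)                   # business logic (calendar, CRM, ...)
        reply = respond(intent, result, context)   # response generation
        speak(tts(reply))                          # TTS + voice output
        context.append((text, reply))              # keep context for the next turn
        if intent.get("action") in ("hangup", "transfer", "message_taken"):
            return context                         # call concluded, transferred, or logged
```

Passing the accumulated `context` into `understand` and `respond` is what lets the AI refer back to earlier statements in the call.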

4. Data Requirements & Training Considerations

  • Dialogue Data:

* Example phrases for various intents (e.g., "I want to book an appointment," "What are your hours?", "Can I speak to sales?").

* Variations in phrasing, accents, and tones to train the NLU effectively.

  • Entity Data:

* Lists of departments, services, employee names, common products, and locations.

* Date and time formats, number formats.

  • Integration Data:

* API keys and credentials for CRM, Calendar, and other third-party systems.

* Mapping of internal department names to routing rules.

  • Call Data for Continuous Improvement:

* Anonymized call recordings and transcripts for ongoing model training and refinement.

* Feedback mechanisms for human agents to correct AI errors.
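As an illustration of the integration data above, the mapping of internal department names to routing rules might be expressed as a simple lookup table. All names, extensions, and addresses below are placeholders.

```python
# Placeholder department-to-routing configuration (illustrative only).
ROUTING_RULES = {
    "Sales":   {"extension": "101", "fallback_email": "sales@example.com"},
    "Support": {"extension": "102", "fallback_email": "support@example.com"},
    "Billing": {"extension": "103", "fallback_email": "billing@example.com"},
}

def resolve_route(spoken_department):
    """Case-insensitive lookup so NLU output need not match internal casing exactly."""
    for name, rule in ROUTING_RULES.items():
        if name.lower() == spoken_department.strip().lower():
            return rule
    return None  # no match: triggers escalation to a human
```

In practice this table would live in Firestore/Cloud SQL configuration rather than code, so routing rules can change without a redeploy.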


5. Implementation Roadmap (Subsequent Steps)

This initial generation phase lays the groundwork. The next steps will focus on bringing this design to life:

  • Step 2: Develop & Integrate:

* Set up core conversational flows in Dialogflow CX/Vertex AI.

* Integrate with Twilio for telephony.

* Develop API integrations for CRM and Calendar systems.

* Implement initial business logic for routing, message taking, and appointment booking.

* Conduct initial unit and integration testing.

  • Step 3: Test, Refine & Deploy:

* Conduct extensive user acceptance testing (UAT) with internal stakeholders.

* Gather feedback and refine conversational flows, NLU accuracy, and voice prompts.

* Perform performance and load testing.

* Train human agents on AI handover protocols.

* Pilot deployment with a small group of users, then full production rollout.

* Establish monitoring and continuous improvement processes.


6. Next Actions for Customer Review

To proceed effectively, please review this detailed design document and provide feedback on the following:

  1. Feature Prioritization: Are there any features listed that are critical for your immediate launch, or any that could be phased in later?
  2. Specific Integration Needs: Confirm your exact CRM system, calendar software, and any other critical systems the AI needs to connect with.
  3. Branding & Tone: Provide guidelines for the desired tone of voice (e.g., formal, friendly, professional) and any specific branding requirements for greetings.
  4. Key Personnel: Identify the primary stakeholders and subject matter experts for each department (Sales, Support, HR, etc.) who can provide detailed requirements for routing and messaging.
  5. Example Dialogues: Share examples of typical customer inquiries and desired AI responses to help refine the conversational flows.

Your detailed input on these points will be crucial for the successful execution of the subsequent development and deployment phases.

3.2. Key API Integration Points

  1. Voice Platform API (e.g., Twilio Voice, Vonage):

* Functionality: Manages incoming calls, plays audio, collects DTMF input, initiates call transfers, hangs up.

* Integration: The AI Orchestrator will interact with this API to answer calls, send TTS audio for playback, and trigger call transfers.

  2. Speech-to-Text (STT) API (e.g., Google Cloud Speech-to-Text):

*


Deliverable: Eleven Labs Text-to-Speech (TTS) Integration for AI Phone Receptionist

This document outlines the comprehensive integration and configuration of Eleven Labs Text-to-Speech (TTS) for your AI Phone Receptionist system. This step is crucial for providing a natural, human-like voice to your AI, ensuring a professional and positive caller experience.


1. Introduction: The Voice of Your AI Receptionist

The primary goal of this step is to empower your AI Phone Receptionist with a highly natural and engaging voice using Eleven Labs' advanced TTS technology. By converting the AI's generated textual responses into high-quality spoken audio, we ensure callers perceive a professional, clear, and friendly interaction, critical for an effective phone receptionist system.

2. Eleven Labs TTS Integration Overview

The Eleven Labs TTS engine will serve as the voice output module for your AI Phone Receptionist. The process flow is as follows:

  1. AI Response Generation: The core AI logic (e.g., an LLM or predefined script) generates a textual response based on caller input or system state.
  2. Text-to-Speech Conversion: This textual response is sent to the Eleven Labs API.
  3. Audio Generation: Eleven Labs processes the text, applying the selected voice and settings, and generates an audio stream (e.g., MP3, WAV).
  4. Audio Playback: The generated audio stream is then played back to the caller via the telephony platform (e.g., Twilio, Vonage, etc.).

This seamless conversion ensures minimal delay and a fluid conversational experience.

3. Key Configuration Parameters

Successful integration requires careful configuration of several parameters within Eleven Labs:

  • Eleven Labs API Key:

* Purpose: Authentication for accessing the Eleven Labs API.

* Action: Obtain your unique API Key from your Eleven Labs account dashboard. This key must be securely stored and used in all API requests.

  • Voice ID:

* Purpose: Identifies the specific voice model to be used for speech generation.

* Action: Select a voice from the Eleven Labs Voice Library or create a custom voice (see Section 4). Note down its unique voice_id.

  • Model ID:

* Purpose: Specifies the underlying TTS model to use. Different models offer varying quality, latency, and language support.

* Recommendation:

* eleven_multilingual_v2: Recommended for robust performance across multiple languages and high-quality output.

* eleven_english_v2: Optimized for English language content, often providing slightly better quality for English-only applications.

* Action: Choose the appropriate model based on your receptionist's language requirements.

  • Voice Settings:

* Purpose: Fine-tune the expressiveness and consistency of the chosen voice. These are crucial for natural-sounding speech.

* Parameters:

* stability (0.0 - 1.0): Controls the variability of the voice.

* Lower values (e.g., 0.5-0.7) introduce more expressiveness, suitable for engaging conversation.

* Higher values (e.g., 0.8-1.0) make the voice more consistent and monotone, suitable for very formal or robotic tones.

* Recommendation for Receptionist: Start with 0.7 for a balanced, natural flow.

* similarity_boost (0.0 - 1.0): Controls how much the generated speech resembles the original voice (especially relevant for cloned voices).

* Higher values (e.g., 0.7-0.9) ensure strong adherence to the original voice characteristics.

* Lower values allow for more flexibility and potential deviations.

* Recommendation for Receptionist: Start with 0.8 for consistency.

* style_exaggeration (0.0 - 1.0): (Available on some models) Controls the emphasis of the voice's inherent style.

* Recommendation for Receptionist: Keep this relatively low (e.g., 0.0 to 0.2) for a professional, understated tone.

* Action: Experiment with these settings to find the optimal balance for your desired receptionist persona.

4. Voice Selection and Customization

Choosing the right voice is paramount to your AI receptionist's perceived professionalism and brand alignment.

  • Explore Pre-made Voices:

* Action: Navigate to the "Voice Library" in your Eleven Labs dashboard. Listen to various pre-generated voices.

* Considerations:

* Tone: Professional, friendly, empathetic, neutral.

* Gender: Male, female, non-binary options (if available).

* Accent/Dialect: Ensure it aligns with your target audience or brand.

* Speed: How quickly the voice speaks.

* Recommendation: Select 2-3 preferred voices for initial testing.

  • VoiceLab (Custom Voice Creation - Optional):

* Purpose: If you require a unique brand voice or wish to clone an existing employee's voice for consistency.

* Action: Use the "VoiceLab" feature to:

* Instant Voice Cloning: Upload short audio samples (e.g., 1-5 minutes) of a target voice.

* Professional Voice Cloning: For higher fidelity, Eleven Labs offers advanced cloning services.

* Generative Voices: Create entirely new, unique voices from scratch.

* Note: Custom voice creation requires more effort and may have specific usage guidelines.

  • Fine-tuning Voice Settings:

* Once a voice (pre-made or custom) is selected, iterate on the stability and similarity_boost parameters (and style_exaggeration if applicable).

* Goal: Achieve a voice that sounds natural, clear, and appropriately expressive for handling inquiries, routing calls, and taking messages. Avoid settings that make the voice sound overly robotic, too emotional, or difficult to understand.

5. API Integration Details

Integrating Eleven Labs into your backend system involves making HTTP POST requests to their TTS endpoint.

  • API Endpoint:

* Standard (full audio file): https://api.elevenlabs.io/v1/text-to-speech/{voice_id}

* Streaming (real-time): https://api.elevenlabs.io/v1/text-to-speech/{voice_id}/stream

* Recommendation: For real-time phone interactions, the stream endpoint is highly recommended to minimize latency and provide a more fluid conversational experience.

  • Request Method: POST
  • Headers:

* xi-api-key: Your Eleven Labs API Key.

* Content-Type: application/json

* Accept: audio/mpeg (for MP3, common for telephony) or audio/wav

  • Request Body (JSON Example):

    {
        "text": "Thank you for calling PantheraHive. How may I assist you today?",
        "model_id": "eleven_multilingual_v2",
        "voice_settings": {
            "stability": 0.7,
            "similarity_boost": 0.8
        }
    }
  • Response:

* Standard Endpoint: Returns the complete audio data as a binary blob (e.g., MP3 file).

* Streaming Endpoint: Returns audio data in chunks, allowing your telephony platform to start playing the audio before the entire response is generated. This is crucial for low-latency conversations.

  • Example Integration Logic (Python):

    import requests
    import os

    ELEVENLABS_API_KEY = os.getenv("ELEVENLABS_API_KEY")
    VOICE_ID = "YOUR_SELECTED_VOICE_ID" # e.g., "21m00Tcm4azwk8nxvUGp"

    def get_ai_response_audio(text_to_speak):
        headers = {
            "Accept": "audio/mpeg",
            "Content-Type": "application/json",
            "xi-api-key": ELEVENLABS_API_KEY
        }
        data = {
            "text": text_to_speak,
            "model_id": "eleven_multilingual_v2",
            "voice_settings": {
                "stability": 0.7,
                "similarity_boost": 0.8
            }
        }
        
        # Using the streaming endpoint for real-time applications
        url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}/stream"
        
        try:
            response = requests.post(url, headers=headers, json=data, stream=True)
            response.raise_for_status() # Raise an HTTPError for bad responses (4xx or 5xx)
            
            # This generator yields audio chunks as they arrive
            for chunk in response.iter_content(chunk_size=4096):
                yield chunk
                
        except requests.exceptions.RequestException as e:
            print(f"Eleven Labs API request failed: {e}")
            # Implement robust error handling, e.g., fall back to a pre-recorded message.
            # Inside a generator, a bare "return" simply ends the audio stream.
            return

    # How your telephony platform would consume this (a generator object is
    # always truthy, so iterate directly rather than testing "if audio_stream"):
    # for audio_chunk in get_ai_response_audio("Please state the reason for your call."):
    #     # Send audio_chunk to your telephony platform (e.g., Twilio <Play> verb).
    #     # This part is highly dependent on your telephony integration.
    #     pass

6. Real-time Considerations for Phone Calls

For an AI Phone Receptionist, low latency is paramount to a natural conversational flow.

  • Streaming API (Critical): Always use the Eleven Labs streaming API endpoint (/stream) to ensure audio begins playing as soon as the first chunks are generated, minimizing perceived delay.
  • Telephony Platform Integration: Your chosen telephony platform (e.g., Twilio, Vonage, Aircall) must be configured to accept and play audio streams in real-time. This often involves WebSockets or similar protocols to send audio chunks as they arrive from Eleven Labs.
  • Buffering: Implement intelligent buffering on your telephony integration layer to smooth out any minor network fluctuations or processing delays, preventing choppy audio.
  • Concise Responses: Encourage the AI to generate concise and to-the-point responses. Shorter text inputs lead to faster TTS generation.

7. Best Practices for AI Receptionist Voice

  • Punctuation Matters: Ensure the text sent to Eleven Labs has correct punctuation (commas, periods, question marks, exclamation points). This significantly influences the natural prosody, pauses, and intonation of the synthesized speech.
  • Avoid Jargon/Acronyms: Unless specifically trained or formatted, complex jargon and acronyms can be mispronounced. Spell out critical terms if necessary, or use SSML for precise pronunciation.
  • SSML (Speech Synthesis Markup Language - Advanced): For highly specific control over pronunciation, pauses, emphasis, and speaking rate, consider using SSML tags within your text. Eleven

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app);
var isFullDoc=code.trim().toLowerCase().indexOf("<!doctype")>=0||code.trim().toLowerCase().indexOf("<html")>=0;
var indexHtml=isFullDoc?code:"<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n<title>"+title+"</title>\n<link rel=\"stylesheet\" href=\"style.css\">\n</head>\n<body>\n"+code+"\n<script src=\"script.js\"><\/script>\n</body>\n</html>\n";
zip.file(folder+"index.html",indexHtml);
zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n");
zip.file(folder+"script.js","/* "+title+" — scripts */\n");
zip.file(folder+"assets/.gitkeep","");
zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n");
zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n");
}
/* ===== MAIN ===== */
var sc=document.createElement("script");
sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js";
sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); };
sc.onload=function(){
var zip=new JSZip();
var base=(_phFname||"output").replace(/\.[^.]+$/,"");
var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app";
var folder=app+"/";
var vc=document.getElementById("panel-content");
var panelTxt=vc?(vc.innerText||vc.textContent||""):"";
var lang=detectLang(_phCode,panelTxt);
if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); }
else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); }
else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); }
else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); }
else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); }
else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); }
else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); }
else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); }
else
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}