AI Phone Receptionist

AI Phone Receptionist System: Core Design & AI Logic Generation

Workflow Step: 2 of 3 (gemini → generate)

Description: Build an AI receptionist system that answers calls, routes inquiries, takes messages, and books appointments.

This document outlines the comprehensive design, core AI logic, and architectural blueprint for your AI Phone Receptionist system, generated using advanced AI capabilities. This phase provides the foundational intelligence and operational framework, detailing how the system will interact with callers, process information, and integrate with essential services.


1. Introduction & Objectives of this Generation Phase

This "generate" step leverages advanced AI to conceptualize and detail the core intelligence of your AI Phone Receptionist. The primary objective is to produce a robust, intelligent, and adaptable blueprint that covers:

This output serves as the detailed specification for the subsequent implementation and refinement phases.


2. Core AI Logic & Conversational Flows (Powered by Gemini)

The heart of your AI receptionist lies in its ability to understand, process, and respond to natural language. Gemini will serve as the central intelligence for Natural Language Understanding (NLU), Natural Language Generation (NLG), and overall conversational orchestration.

2.1. Initial Call Answering & Intent Identification

* Greeting Generation: Craft dynamic, context-aware greetings.

* Intent Recognition: Analyze the caller's initial statement to classify their need (e.g., "sales," "support," "billing," "appointment," "general inquiry").

* Clarification: If intent is unclear, prompt the caller for more information.

    System Prompt (Initial Greeting):
    "You are a professional and friendly AI receptionist for [Your Company Name].
    Your goal is to greet callers, identify their needs, and direct them appropriately.
    Start by greeting the caller and asking how you can help them today.
    Example: 'Thank you for calling [Your Company Name]. How may I help you today?'"

    System Prompt (Intent Identification):
    "The caller has just stated their request. Analyze their statement to determine their primary intent from the following categories:
    - Sales Inquiry
    - Technical Support
    - Billing/Accounts
    - Appointment Booking
    - General Information/Question
    - Leave a Message
    - Speak to a Human
    If you can confidently identify the intent, state it and propose the next action. If unsure, ask a clarifying question.
    Example Input: 'I'd like to talk to someone about setting up a new account.'
    Expected Output: 'Intent: Sales Inquiry. Action: Route to Sales Department.'
    Example Input: 'My internet isn't working.'
    Expected Output: 'Intent: Technical Support. Action: Route to Technical Support Department.'"
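Illustrative only: a minimal sketch of how the intent-identification prompt above might be invoked from the backend using the google-generativeai Python SDK. The model name and the condensed prompt text are assumptions for demonstration, not part of the specification.

import os
import google.generativeai as genai

genai.configure(api_key=os.getenv("GEMINI_API_KEY"))

INTENT_PROMPT = (
    "The caller has just stated their request. Determine their primary intent from: "
    "Sales Inquiry, Technical Support, Billing/Accounts, Appointment Booking, "
    "General Information/Question, Leave a Message, Speak to a Human. "
    "If confident, state the intent and propose the next action; otherwise ask a clarifying question."
)

def classify_intent(caller_utterance: str) -> str:
    # Model name is an assumption; any Gemini text model could be substituted.
    model = genai.GenerativeModel("gemini-1.5-flash", system_instruction=INTENT_PROMPT)
    response = model.generate_content(caller_utterance)
    return response.text  # e.g. "Intent: Sales Inquiry. Action: Route to Sales Department."

For instance, classify_intent("My internet isn't working.") would be expected to return something like "Intent: Technical Support. Action: Route to Technical Support Department."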
    

Step 1 of 3: AI Phone Receptionist - Gemini Generation

This document outlines the foundational design and core intelligence parameters for your AI Phone Receptionist system, derived from the initial "gemini -> generate" phase. This output serves as a detailed blueprint for the AI's persona, capabilities, and the high-level architecture required to build a sophisticated, efficient, and customer-centric automated receptionist.


1. System Overview & Objectives

Project Goal: To develop an intelligent AI Phone Receptionist system capable of autonomously handling incoming calls, understanding caller intent, providing information, routing inquiries, taking messages, and managing appointment bookings.

Core Objectives:

  • Enhance Customer Experience: Provide instant, consistent, and professional service 24/7.
  • Improve Operational Efficiency: Reduce reliance on human staff for routine inquiries and tasks.
  • Streamline Call Management: Accurately route calls and ensure no inquiry goes unanswered.
  • Automate Key Functions: Efficiently manage messages and appointment scheduling.
  • Scalability: Design a system that can grow with your business needs.

2. AI Core Persona & Behavioral Guidelines

This section defines the "brain" and personality of your AI receptionist, leveraging Gemini's generative capabilities to establish its core intelligence and interaction model.

AI Identity & Role:

  • Name (Proposed): "PantheraConnect" (or client-preferred name)
  • Role: Professional, efficient, and friendly virtual receptionist. The primary point of contact for all incoming calls.

Key Personality Traits:

  • Professional: Maintains a formal yet approachable tone.
  • Empathetic: Acknowledges caller's needs and expresses understanding.
  • Efficient: Gets straight to the point while ensuring clarity and completeness.
  • Clear & Concise: Uses simple language, avoids jargon, and provides direct answers.
  • Patient: Allows callers time to speak and clarifies when needed.
  • Positive & Helpful: Always aims to provide a solution or clear next step.

Core AI Instructions & Objectives (Internal Directives):

  1. Warm Greeting: Always start with a professional, welcoming greeting that identifies the business.
  2. Intent Identification: Promptly and accurately determine the caller's primary reason for calling (e.g., "How may I help you today?").
  3. Information Provision: Access and relay relevant information from a designated knowledge base.
  4. Inquiry Routing: Based on identified intent, route calls to the correct department, individual, or automated process.
  5. Message Taking: If a direct connection is not possible or desired, offer to take a detailed message.
  6. Appointment Management: Facilitate booking, rescheduling, or cancellation of appointments through integrated calendar systems.
  7. Confirmation & Closure: Confirm actions taken (e.g., "Your appointment is booked") and conclude the call politely.
  8. Brand Representation: Uphold the company's brand image and values in every interaction.

Communication Style & Language:

  • Tone: Balanced between professional formality and helpful friendliness.
  • Vocabulary: Standard business English (or specified primary language), avoiding slang or overly casual expressions.
  • Clarity: Prioritize clear articulation and confirmation of understanding.
  • Structure: Use conversational flow, asking open-ended questions initially, then narrowing down for specifics.
  • Language: English (primary). Future-proofing for multi-lingual support will be considered in subsequent steps.

Error Handling & Fallback Mechanisms:

  • Misunderstanding: If the AI cannot understand the caller, it will politely ask for clarification ("I apologize, I didn't quite catch that. Could you please rephrase your request?").
  • Unfulfillable Request: If a request cannot be met by the AI, it will explain why and offer alternatives (e.g., "I'm unable to book that specific time. Would you like to check other availability, or would you prefer to speak with a human agent?").
  • Human Handoff: For complex, sensitive, or persistent requests that the AI cannot resolve, it will offer a seamless transfer to a live agent, providing context if possible.
  • System Issues: In the event of a system error, the AI will apologize and attempt to transfer the call to a live agent or offer to take a message for immediate follow-up.
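A minimal sketch, assuming a simple retry counter, of how the fallback rules above could be enforced in the dialog loop; CallState and the two-attempt threshold are illustrative placeholders, not part of the design.

from dataclasses import dataclass, field

MAX_CLARIFICATION_ATTEMPTS = 2  # assumption: escalate after two failed clarifications

@dataclass
class CallState:
    misunderstandings: int = 0
    transcript: list = field(default_factory=list)

def next_response(intent: str | None, state: CallState) -> str:
    # No recognized intent: ask for clarification, then escalate to a human agent.
    if intent is None:
        state.misunderstandings += 1
        if state.misunderstandings > MAX_CLARIFICATION_ATTEMPTS:
            return "Let me connect you with a member of our team who can help."
        return ("I apologize, I didn't quite catch that. "
                "Could you please rephrase your request?")
    state.misunderstandings = 0  # understanding re-established; reset the counter
    return f"Certainly, I can help with your {intent} request."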

3. Key System Capabilities

The AI Phone Receptionist system will encompass the following detailed functionalities:

  • 3.1. Call Answering & Greeting:

* Instant Pick-up: Virtually instantaneous answering of incoming calls.

* Customizable Greetings: Ability to configure dynamic greetings based on time of day, special announcements, or caller segment.

* Company Branding: Integrate company name and welcome message.

  • 3.2. Natural Language Understanding (NLU) & Intent Recognition:

* Advanced NLP: Utilize Gemini's robust NLP capabilities to accurately understand caller intent, even with varied phrasing and accents.

* Dynamic Intent Mapping: Identify common intents such as sales, support, billing, appointments, general inquiry, and specific product/service questions.

* Contextual Awareness: Maintain conversation context across multiple turns to handle follow-up questions effectively.

  • 3.3. Call Routing & Transfer:

* Intelligent Routing: Direct callers to the most appropriate department or individual based on identified intent.

* Conditional Routing: Route calls based on business hours, agent availability, or priority.

* Warm Handoff: Seamlessly transfer calls to human agents with relevant call context (e.g., caller's stated intent, previous questions).

* Voicemail Integration: Option to transfer to a specific voicemail box if no agent is available or preferred.

  • 3.4. Message Taking & Delivery:

* Accurate Transcription: Capture caller's name, contact information, and detailed message with high accuracy.

* Automated Delivery: Deliver messages via email, SMS, or direct integration into a CRM/helpdesk system.

* Confirmation: Confirm message receipt and delivery method to the caller.

  • 3.5. Appointment Management:

* Calendar Integration: Connect with popular calendar systems (e.g., Google Calendar, Outlook Calendar, specific booking platforms).

* Availability Checking: Access real-time availability for multiple staff members or resources.

* Booking & Rescheduling: Allow callers to book new appointments, reschedule existing ones, or cancel.

* Confirmation & Reminders: Send automated confirmation messages (SMS/email) and pre-appointment reminders.

  • 3.6. Information Retrieval & Provision:

* Knowledge Base Integration: Access a centralized knowledge base for FAQs, business hours, directions, service descriptions, pricing, etc.

* Dynamic Information: Ability to provide up-to-date information that can be easily updated by administrators.

  • 3.7. Analytics & Reporting:

* Call Metrics: Track call volume, duration, peak times, and resolution rates.

* Intent Distribution: Analyze common caller intents to identify trends and areas for improvement.

* Call Transcripts: Store full call transcripts for quality assurance, training, and dispute resolution.

* Performance Monitoring: Dashboards to monitor AI performance, human handoff rates, and customer satisfaction.
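As an illustration of the automated delivery and confirmation steps in sections 3.4 and 3.5, the sketch below sends an SMS confirmation through Twilio's REST client. The environment-variable names and message wording are assumptions; any of the messaging services listed above could take Twilio's place.

import os
from twilio.rest import Client  # assumes Twilio is the chosen SMS provider

def send_confirmation_sms(to_number: str, appointment_time: str) -> None:
    # Authenticate with the account credentials stored in environment variables.
    client = Client(os.getenv("TWILIO_ACCOUNT_SID"), os.getenv("TWILIO_AUTH_TOKEN"))
    client.messages.create(
        to=to_number,
        from_=os.getenv("TWILIO_FROM_NUMBER"),  # your provisioned business number
        body=f"Your appointment is confirmed for {appointment_time}. Reply to this message to reschedule.",
    )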


4. High-Level Architecture & Components

The AI Phone Receptionist system will be built upon a modular and scalable architecture, leveraging cloud-native services and robust APIs.

  • 4.1. Voice Interface & Telephony Platform (CPaaS):

* Function: Handles incoming and outgoing call connections, IVR, and call routing.

* Example Technologies: Twilio, Vonage, SignalWire.

  • 4.2. Speech-to-Text (STT) Engine:

* Function: Converts spoken language from callers into text for NLU processing.

* Example Technologies: Google Cloud Speech-to-Text, AWS Transcribe, Azure Speech.

  • 4.3. Natural Language Understanding (NLU) & Dialog Management (Core AI):

* Function: The "brain" of the system. Processes text from STT, identifies caller intent, extracts entities, manages conversational flow, and determines appropriate responses or actions. This is where Gemini's generative AI capabilities are primarily leveraged for sophisticated understanding and dynamic response generation.

* Example Technologies: Gemini API, Dialogflow CX (with Gemini integration), custom NLU models.

  • 4.4. Text-to-Speech (TTS) Engine:

* Function: Converts the AI's generated text responses back into natural-sounding speech for the caller.

* Example Technologies: Google Cloud Text-to-Speech, AWS Polly, Azure Text-to-Speech.

  • 4.5. Integration Layer (APIs):

* Function: Connects the core AI logic to external systems for data retrieval and action execution.

* Key Integrations:

* CRM/Helpdesk: Salesforce, HubSpot, Zendesk for customer data and ticket creation.

* Calendar Systems: Google Calendar, Outlook Calendar, Calendly for appointment management.

* Knowledge Base: Internal documentation, FAQ systems (e.g., Confluence, custom database).

* Messaging Services: Email (SendGrid, Mailgun), SMS (Twilio, Vonage) for confirmations and message delivery.

  • 4.6. Orchestration & Application Logic (Backend):

* Function: Custom application code (e.g., Python, Node.js) that orchestrates the entire call flow, manages state, calls various APIs, and implements business-specific rules.

* Example Technologies: Serverless functions (AWS Lambda, Google Cloud Functions), containers (Docker, Kubernetes).

  • 4.7. Database & Knowledge Base:

* Function: Stores business rules, caller data (securely and compliantly), FAQs, service information, and configuration settings.

* Example Technologies: PostgreSQL, MongoDB, dedicated content management system.
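To show how these components might fit together, the sketch below walks one conversational turn through the pipeline. The stt, nlu, tts, and telephony objects stand in for whichever of the example technologies above is selected; their method names are assumptions for illustration.

def handle_caller_turn(audio_chunk: bytes, session: dict, stt, nlu, tts, telephony) -> None:
    # 4.2 Speech-to-Text: caller audio -> text
    caller_text = stt.transcribe(audio_chunk)

    # 4.3 NLU / dialog management (Gemini): text plus context -> reply and optional action
    reply_text, action = nlu.respond(caller_text, context=session["history"])
    session["history"].append((caller_text, reply_text))

    # 4.5 Integration layer: execute side effects such as booking or CRM lookups
    if action is not None:
        action.execute()

    # 4.4 Text-to-Speech, then play the response back over the telephony platform (4.1)
    reply_audio = tts.synthesize(reply_text)
    telephony.play(session["call_id"], reply_audio)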


5. Illustrative Call Flow Example

To demonstrate the system's capabilities, consider a typical customer interaction:

  1. Caller Dials In: A customer calls your business's main number.
  2. AI Greets: "Thank you for calling [Your Business Name]. My name is PantheraConnect, how may I assist you today?"
  3. Caller States Intent: "Hi, I'd like to book an appointment for a consultation next week."
  4. AI Confirms Intent & Gathers Details: "Certainly, I can help with that. For which service are you looking to book a consultation? And do you have a preferred day or time next week?"
  5. Caller Provides Details: "Yes, it's for a marketing strategy consultation, and I'm free on Tuesday morning."
  6. AI Checks Availability: The AI queries the integrated calendar system for open Tuesday-morning consultation slots and offers the available times to the caller.
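A hedged sketch of the availability check in step 6, using the Google Calendar free/busy API (one of the calendar systems named in section 3.5). The calendar ID, the 9:00–12:00 window, and the helper name are illustrative assumptions.

from datetime import datetime, timedelta, timezone
from googleapiclient.discovery import build  # assumes google-api-python-client and valid credentials

def tuesday_morning_busy(credentials, calendar_id: str = "primary"):
    # Build a Calendar API client and ask for busy intervals next Tuesday morning.
    service = build("calendar", "v3", credentials=credentials)
    now = datetime.now(timezone.utc)
    days_ahead = (1 - now.weekday()) % 7 or 7  # weekday 1 = Tuesday; always look forward
    start = (now + timedelta(days=days_ahead)).replace(hour=9, minute=0, second=0, microsecond=0)
    body = {
        "timeMin": start.isoformat(),
        "timeMax": (start + timedelta(hours=3)).isoformat(),
        "items": [{"id": calendar_id}],
    }
    busy = service.freebusy().query(body=body).execute()
    # Free slots are the gaps between these intervals; offer them to the caller.
    return busy["calendars"][calendar_id]["busy"]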

6. Key Functionality & Design Considerations

  • Customizable AI Persona: The AI's voice, tone, and conversational style will be tailored to match your brand identity.
  • Scalability: Designed to handle fluctuating call volumes without performance degradation.
  • Security & Privacy: Adherence to data protection regulations (e.g., GDPR, HIPAA if applicable) for caller data and message handling.
  • Multi-language Support (Optional Add-on): Capability to extend to multiple languages based on business needs.
  • Real-time Handover: Seamless transfer to a human agent with full context of the ongoing conversation.
  • Post-Call Summaries: Automated generation of call summaries or action items for internal records.
  • Learning & Improvement Loop: Mechanisms for collecting feedback and data to continuously train and improve Gemini's performance.

7. Actionable Outputs & Deliverables from this Phase

Upon completion of this "generate" step, you will receive the following detailed deliverables:

  1. AI Persona & Core Prompting Strategy Document:

* Detailed instructions and initial prompts for Gemini covering greetings, intent identification, routing, message taking, and appointment booking.

* Guidelines for AI tone, style, and brand voice.

2.

ElevenLabs Output

This document details the implementation of Text-to-Speech (TTS) capabilities for your AI Phone Receptionist using ElevenLabs. This step is crucial for enabling your receptionist to communicate verbally with callers in a natural, human-like voice, enhancing the overall user experience.


ElevenLabs Text-to-Speech (TTS) Integration for AI Receptionist

This deliverable outlines the process of integrating ElevenLabs' advanced Text-to-Speech technology into your AI Phone Receptionist system. By converting the receptionist's generated responses from text to high-quality, natural-sounding audio, we ensure a seamless and professional caller interaction.

1. Introduction to ElevenLabs TTS

ElevenLabs provides cutting-edge AI voice synthesis that delivers highly realistic and emotionally expressive speech. For an AI Phone Receptionist, this means:

  • Natural Conversation Flow: Responses sound less robotic and more human, making interactions pleasant and easy to understand.
  • Brand Consistency: Ability to select or even create custom voices that align with your brand's persona.
  • Multilingual Support: Potential to serve callers in multiple languages, expanding accessibility.
  • Reduced Friction: Clear and articulate speech minimizes misunderstandings and frustration.

2. Key Features Utilized for AI Receptionist

  • High-Quality Voice Synthesis: Generating lifelike speech from arbitrary text input.
  • Voice Library: Access to a diverse range of pre-trained voices with various accents, genders, and speaking styles.
  • Low Latency Streaming: Essential for real-time conversational AI, ensuring minimal delay between text generation and audio playback.
  • SSML Support (Speech Synthesis Markup Language): Allows for fine-grained control over speech characteristics such as pauses, emphasis, pronunciation, and speaking rate, critical for nuanced responses.
  • Multilingual Capabilities (Eleven Multilingual v2): Enables the receptionist to speak and understand multiple languages, if configured.

3. Implementation Steps: Integrating ElevenLabs TTS

This section provides a step-by-step guide to integrate ElevenLabs into your AI Receptionist backend.

3.1. ElevenLabs API Key Setup

  1. Account Creation: If you haven't already, create an account on the ElevenLabs website (https://elevenlabs.io/).
  2. API Key Retrieval: Navigate to your profile settings or API section within the ElevenLabs dashboard to obtain your unique API key. This key will authenticate your requests to the ElevenLabs API.

* Action: Securely store your API key (e.g., in environment variables) and avoid hardcoding it directly into your application.

3.2. Voice Selection and Customization

Choosing the right voice is paramount for setting the tone and persona of your AI receptionist.

  1. Explore Voice Library:

* Visit the ElevenLabs "Voice Library" in your dashboard.

* Listen to various pre-made voices, filtering by gender, age, and accent.

* Recommendation: Select a voice that is clear, professional, and friendly. Consider testing a few options with actual call scripts to gauge caller perception.

  2. Voice ID Retrieval: Once a voice is selected, note its unique voice_id. This ID will be used in your API calls.
  3. Optional: Voice Cloning (Professional Tier):

* If brand consistency requires a specific, unique voice (e.g., matching a human receptionist or brand voice actor), consider using ElevenLabs' Voice Cloning feature. This involves uploading audio samples of the desired voice.

* Action: For initial deployment, we recommend starting with a high-quality pre-made voice to expedite setup. Custom voice cloning can be explored in future iterations.

3.3. Text Preparation for Synthesis

The text generated by your AI (e.g., from an LLM) needs to be prepared for optimal TTS output.

  1. Clean Text: Ensure the text is free of special characters, emojis, or formatting that might confuse the TTS engine.
  2. SSML (Speech Synthesis Markup Language): For more advanced control, integrate SSML tags into your text.

* Pauses: Use <break time="500ms"/> to add natural pauses (e.g., after a greeting or before an important piece of information).

* Emphasis: Use <emphasis level="strong">important</emphasis> for key words.

* Pronunciation: Use <phoneme alphabet="ipa" ph="pɪˈkæntɪ">picante</phoneme> for uncommon words or names.

* Speaking Rate/Pitch: While ElevenLabs handles much of this automatically, SSML can offer further fine-tuning if needed.

* Action: Initially, focus on clean text. Introduce SSML strategically for specific phrases or scenarios where enhanced naturalness is required.
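For example, a greeting with an explicit pause and emphasis might be prepared as a Python string before synthesis; the wording is illustrative, and tags should only be added where the guidance above recommends them.

# Illustrative only: clean text with light SSML markup added where it helps naturalness.
greeting_ssml = (
    "Thank you for calling PantheraHive."
    '<break time="500ms"/> '
    'How may I <emphasis level="strong">help</emphasis> you today?'
)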

3.4. API Request Structure

Your backend application will make HTTP POST requests to the ElevenLabs TTS API.

Endpoint: https://api.elevenlabs.io/v1/text-to-speech/<voice_id>

Headers:

  • Accept: audio/mpeg (or audio/wav for higher quality/uncompressed)
  • xi-api-key: YOUR_ELEVENLABS_API_KEY
  • Content-Type: application/json

Request Body (JSON):


{
    "text": "Hello, thank you for calling PantheraHive. How may I assist you today?",
    "model_id": "eleven_multilingual_v2", // or "eleven_monolingual_v1" for English only
    "voice_settings": {
        "stability": 0.75, // Controls variability of voice. Higher = more consistent.
        "similarity_boost": 0.75, // Controls how closely the voice matches the original.
        "style": 0.0, // For expressive voices, controls expressiveness.
        "use_speaker_boost": true // Enhances speaker consistency.
    }
}

Example (Python using requests library):


import requests
import os

ELEVENLABS_API_KEY = os.getenv("ELEVENLABS_API_KEY")
VOICE_ID = "21m00Tcm4azwk8nxvUGp" # Example Voice ID (e.g., "Rachel")

def generate_speech(text_to_synthesize: str, voice_id: str = VOICE_ID):
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
    
    headers = {
        "Accept": "audio/mpeg",
        "xi-api-key": ELEVENLABS_API_KEY,
        "Content-Type": "application/json"
    }
    
    data = {
        "text": text_to_synthesize,
        "model_id": "eleven_multilingual_v2",
        "voice_settings": {
            "stability": 0.75,
            "similarity_boost": 0.75
        }
    }
    
    response = requests.post(url, headers=headers, json=data)
    
    if response.status_code == 200:
        # Save the audio to a file or stream it
        audio_content = response.content
        # For demonstration, save to a file
        # with open("output_speech.mp3", "wb") as f:
        #     f.write(audio_content)
        # print("Audio generated successfully.")
        return audio_content # Return the raw audio bytes
    else:
        print(f"Error generating speech: {response.status_code} - {response.text}")
        return None

# Example usage:
# audio_bytes = generate_speech("Your appointment is confirmed for tomorrow at 2 PM.")
# if audio_bytes:
#     # Now, integrate this audio_bytes with your calling platform (e.g., Twilio)
#     print("Audio bytes received, ready for streaming/playback.")

3.5. Real-time Audio Streaming (for Conversational AI)

For a truly interactive receptionist, minimizing latency is critical. Instead of waiting for the entire audio file to be generated and then playing it, ElevenLabs offers a streaming API.

  • Streaming Endpoint: https://api.elevenlabs.io/v1/text-to-speech/<voice_id>/stream
  • Integration with Calling Platforms: Platforms like Twilio or Vonage support real-time audio streaming (e.g., Twilio's <Stream> verb or WebSockets). Your backend will receive audio chunks from ElevenLabs and forward them to the calling platform's stream.
  • Action: Prioritize the streaming API for all dynamic, real-time responses. Pre-rendered audio can be used for static greetings or complex, unchanging informational prompts to reduce API calls and ensure consistent quality.
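A minimal sketch of consuming the streaming endpoint with the requests library, mirroring the non-streaming example in section 3.4; the forward_chunk callback is a placeholder for the calling-platform hand-off described in section 3.6.

import os
import requests

def stream_speech(text: str, voice_id: str, forward_chunk) -> None:
    # Request streamed audio from ElevenLabs and forward each chunk as it arrives.
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}/stream"
    headers = {
        "Accept": "audio/mpeg",
        "xi-api-key": os.getenv("ELEVENLABS_API_KEY"),
        "Content-Type": "application/json",
    }
    data = {"text": text, "model_id": "eleven_multilingual_v2"}
    with requests.post(url, headers=headers, json=data, stream=True) as response:
        response.raise_for_status()
        for chunk in response.iter_content(chunk_size=4096):
            if chunk:
                forward_chunk(chunk)  # e.g. hand off to the calling platform's audio stream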

3.6. Integration with Calling Platform (e.g., Twilio)

Once the audio is generated by ElevenLabs, it needs to be played to the caller.

  1. For Pre-rendered Audio (MP3/WAV files):

* Your backend saves the generated audio to a publicly accessible URL (e.g., an S3 bucket or your web server).

* The calling platform (e.g., Twilio) receives a TwiML instruction to <Play> the audio from that URL.

* Example (TwiML):


        <Response>
            <Play loop="1">https://your-server.com/audio/greeting.mp3</Play>
            <Gather input="speech dtmf" timeout="3" action="/handle-response"/>
        </Response>
  2. For Real-time Streaming Audio (WebSockets):

* When a call comes in, your calling platform initiates a WebSocket connection to your backend.

* Your backend receives the text response from the AI, sends it to ElevenLabs' streaming API.

* As ElevenLabs sends back audio chunks, your backend forwards these chunks directly over the WebSocket to the calling platform, which then plays them to the caller.

* Action: Implement real-time streaming for all dynamic interactions to ensure low latency and a smooth conversational experience. Use pre-rendered audio only for static, unchanging prompts if performance optimization or cost reduction is a concern.
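As a sketch of the pre-rendered-audio path, the webhook below returns the TwiML shown above using Flask and the Twilio helper library; the route names and audio URL are placeholders.

from flask import Flask, Response
from twilio.twiml.voice_response import VoiceResponse  # assumes the twilio Python helper library

app = Flask(__name__)

@app.route("/incoming-call", methods=["POST"])
def incoming_call():
    # Play a previously generated, publicly hosted greeting, then listen for the caller.
    twiml = VoiceResponse()
    twiml.play("https://your-server.com/audio/greeting.mp3")
    twiml.gather(input="speech dtmf", timeout=3, action="/handle-response")
    return Response(str(twiml), mimetype="application/xml")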

4. Best Practices for AI Receptionist TTS

  • Consistency: Maintain a consistent voice, tone, and speaking rate throughout the call to build trust and familiarity.
  • Natural Pauses: Implement SSML to add appropriate pauses, mimicking natural human speech patterns. Avoid rapid-fire responses.
  • Error Handling: Implement robust error handling for ElevenLabs API calls. If the API fails, have fallback mechanisms (e.g., a generic pre-recorded message, silent failure, or a human agent handover).
  • Performance Optimization:

* Cache frequently used static phrases (e.g., "Please wait while I connect you") as pre-rendered audio files.

* Utilize ElevenLabs' streaming API for dynamic, real-time conversational elements.

* Monitor API usage and latency.

  • Testing: Thoroughly test the TTS output in various scenarios, including different text lengths, complex words, and error conditions, to ensure clarity and naturalness.
  • Pronunciation Dictionary: For unique company names, product names, or jargon, consider creating a custom pronunciation dictionary (if ElevenLabs supports this directly or through SSML) to ensure correct articulation.
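One way to implement the caching recommendation above is a simple in-memory map from phrase text to audio bytes, falling back to the generate_speech function from section 3.4 for anything dynamic; the 120-character cutoff is an arbitrary illustrative threshold.

_audio_cache: dict[str, bytes] = {}

def speech_for(text: str) -> bytes | None:
    # Serve repeated static phrases from cache; synthesize everything else on demand.
    if text in _audio_cache:
        return _audio_cache[text]
    audio = generate_speech(text)  # defined in the section 3.4 example
    if audio is not None and len(text) < 120:  # assumption: only cache short, static prompts
        _audio_cache[text] = audio
    return audio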

5. Monitoring and Maintenance

  • API Usage: Monitor your ElevenLabs API usage to stay within plan limits and manage costs.
  • Latency Metrics: Track the latency of TTS generation to ensure real-time performance is maintained.
  • Voice Quality: Periodically review the voice quality and caller feedback to identify areas for improvement.
  • ElevenLabs Updates: Stay informed about new features or model updates from ElevenLabs that could further enhance the receptionist's voice.

6. Next Steps

Upon successful integration and testing of the ElevenLabs TTS, the AI Receptionist will be able to speak its responses. The next logical steps involve:

  • Integration with AI Core: Connecting the TTS output to the AI's natural language generation (NLG) engine to speak the generated responses.
  • Full Call Flow Testing: Comprehensive testing of the entire call flow, from speech recognition (ASR) to AI processing to TTS output.
  • Performance Tuning: Optimizing latency, voice naturalness, and error handling based on initial testing.