Terraform Infrastructure Builder
Run ID: 69bca67377c0421c0bf49b9b • 2026-03-29
PantheraHive BOS

As a professional AI assistant within PantheraHive, I have executed the "Terraform Infrastructure Builder" workflow for the "generate" step, focusing on your specified inputs.

The workflow has analyzed your request to build cloud infrastructure for "Test Project Name" in the "Development" environment on "AWS", utilizing "S3" for state management. Given the request for comprehensive output, a standard multi-tier web application architecture has been designed, including networking (VPC, subnets, gateways), compute (EC2), storage (S3), and a relational database (RDS).


Terraform Infrastructure Builder: Generation Summary

Workflow Step: Generate

Category: Infrastructure

Workflow Name: Terraform Infrastructure Builder

Description: Generate production-ready Terraform configurations for cloud infrastructure including VPC, compute, storage, databases, and networking.

User Inputs:

  • Project Name: Test Project Name
  • Cloud Provider: AWS
  • Resources Requested: Comprehensive infrastructure (VPC, compute, storage, database, networking)
  • Environment: Development
  • State Backend: S3


Generated Terraform Configuration Files

Below are the generated Terraform configuration files (main.tf, variables.tf, outputs.tf) structured to create a comprehensive AWS infrastructure for your "Test Project Name" in the "Development" environment.

1. main.tf

This file defines the AWS resources, including the VPC, subnets, gateways, security groups, an EC2 instance, an RDS PostgreSQL database, and an S3 bucket for application data. The Terraform S3 backend with DynamoDB locking is configured in versions.tf; the backing bucket and table must be created beforehand (see the README).

3. outputs.tf

This file defines the output values that will be displayed after a successful `terraform apply`, providing useful information about the deployed infrastructure.


Recommendations and Next Steps

To successfully deploy and manage this infrastructure, please follow these recommendations:

  1. Review and Customize:

* Security Group Rules: The SSH ingress rule (0.0.0.0/0) for the web server is highly insecure for production. Strongly recommend narrowing this down to specific trusted IP ranges or a dedicated bastion host's security group.

* EC2 Key Pair: Ensure you have an existing EC2 Key Pair in the specified AWS region. Update the key_name variable in variables.tf with its name. If you don't have one, create it in the AWS console or using a separate Terraform resource.

* RDS Password: The `db_password` variable ships with a placeholder default. Change it to a strong, unique value, and for production manage it via AWS Secrets Manager or SSM Parameter Store rather than in variables.tf.
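As a sketch of the bastion-host approach mentioned above (assuming a bastion security group named `aws_security_group.bastion_sg` exists — it is not part of the generated configuration), the SSH rule can reference that group instead of open CIDR blocks:

```hcl
# Hypothetical: restrict SSH on the app servers to a bastion host's security group.
resource "aws_security_group_rule" "ssh_from_bastion" {
  type                     = "ingress"
  from_port                = 22
  to_port                  = 22
  protocol                 = "tcp"
  security_group_id        = aws_security_group.ec2_sg.id
  source_security_group_id = aws_security_group.bastion_sg.id # assumed bastion SG
  description              = "Allow SSH only from the bastion host"
}
```

With this rule in place, the open `allowed_ssh_cidrs` ingress block in `ec2_sg` can be removed entirely.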

Step 2: projectmanager

The "Terraform Infrastructure Builder" workflow has been successfully executed. Based on your inputs, a comprehensive set of Terraform configurations for an AWS Development environment has been generated. This output includes the core infrastructure code, variable definitions, output values, backend configuration, a detailed README, and critical recommendations for security, cost, and scalability.


1. Workflow Execution Summary

This workflow generated production-ready Terraform configurations for your "Test Project Name" in the "Development" environment on AWS. The configuration includes essential networking (VPC, subnets, gateways), compute (EC2), storage (S3), and database (RDS) resources, with an S3 backend for state management.

User Inputs Confirmation:

  • Project Name: Test Project Name
  • Cloud Provider: AWS
  • Resources Requested: Comprehensive infrastructure (VPC, compute, storage, database, networking)
  • Environment: Development
  • State Backend: S3

2. Generated Terraform Configuration (main.tf)

This file defines the core AWS infrastructure resources.


# main.tf

# --- VPC and Networking ---
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "${var.project_name}-${var.environment}-VPC"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name        = "${var.project_name}-${var.environment}-IGW"
    Project     = var.project_name
    Environment = var.environment
  }
}

# Public Subnets (for NAT Gateway, Load Balancers, etc.)
resource "aws_subnet" "public" {
  count                   = length(var.public_subnet_cidrs)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true # EC2 instances launched here will get a public IP

  tags = {
    Name        = "${var.project_name}-${var.environment}-PublicSubnet-${data.aws_availability_zones.available.names[count.index]}"
    Project     = var.project_name
    Environment = var.environment
  }
}

# Private Subnets (for EC2 instances, RDS, etc.)
resource "aws_subnet" "private" {
  count             = length(var.private_subnet_cidrs)
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnet_cidrs[count.index]
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name        = "${var.project_name}-${var.environment}-PrivateSubnet-${data.aws_availability_zones.available.names[count.index]}"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_eip" "nat_gateway" {
  domain = "vpc"
  tags = {
    Name        = "${var.project_name}-${var.environment}-NAT-EIP"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat_gateway.id
  subnet_id     = aws_subnet.public[0].id # Place NAT Gateway in the first public subnet

  # Ensure the Internet Gateway exists before the NAT Gateway is created.
  depends_on = [aws_internet_gateway.main]

  tags = {
    Name        = "${var.project_name}-${var.environment}-NAT-Gateway"
    Project     = var.project_name
    Environment = var.environment
  }
}

# Route Table for Public Subnets
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  tags = {
    Name        = "${var.project_name}-${var.environment}-PublicRouteTable"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_route_table_association" "public" {
  count          = length(aws_subnet.public)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

# Route Table for Private Subnets
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main.id
  }

  tags = {
    Name        = "${var.project_name}-${var.environment}-PrivateRouteTable"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_route_table_association" "private" {
  count          = length(aws_subnet.private)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private.id
}

# --- Security Groups ---
resource "aws_security_group" "ec2_sg" {
  name        = "${var.project_name}-${var.environment}-EC2-SG"
  description = "Allow SSH inbound traffic and all outbound"
  vpc_id      = aws_vpc.main.id

  # Ingress: Allow SSH from specified CIDRs
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = var.allowed_ssh_cidrs
    description = "Allow SSH from trusted IPs"
  }

  # Egress: Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow all outbound traffic"
  }

  tags = {
    Name        = "${var.project_name}-${var.environment}-EC2-SG"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_security_group" "rds_sg" {
  name        = "${var.project_name}-${var.environment}-RDS-SG"
  description = "Allow inbound traffic from EC2 instances and all outbound"
  vpc_id      = aws_vpc.main.id

  # Ingress: Allow traffic from EC2 Security Group on DB port
  ingress {
    from_port       = 5432 # PostgreSQL default port
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.ec2_sg.id]
    description     = "Allow PostgreSQL access from EC2 instances"
  }

  # Egress: Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow all outbound traffic"
  }

  tags = {
    Name        = "${var.project_name}-${var.environment}-RDS-SG"
    Project     = var.project_name
    Environment = var.environment
  }
}

# --- Compute (EC2 Instance) ---
resource "aws_instance" "app_server" {
  ami           = var.ami_id
  instance_type = var.instance_type
  key_name      = var.key_name
  subnet_id     = aws_subnet.private[0].id # Launch in the first private subnet
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]

  # For a development environment, you might want to easily SSH.
  # For production, consider Session Manager or bastion hosts.
  # user_data = file("install_nginx.sh") # Example: Run a script on startup

  tags = {
    Name        = "${var.project_name}-${var.environment}-AppServer"
    Project     = var.project_name
    Environment = var.environment
  }
}

# --- Storage (S3 Bucket) ---
resource "aws_s3_bucket" "application_data" {
  bucket = "${lower(replace(var.project_name, " ", "-"))}-${var.environment}-app-data-${var.aws_region}"

  tags = {
    Name        = "${var.project_name}-${var.environment}-AppDataBucket"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_s3_bucket_versioning" "application_data_versioning" {
  bucket = aws_s3_bucket.application_data.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_public_access_block" "application_data_public_access_block" {
  bucket = aws_s3_bucket.application_data.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# --- Database (RDS PostgreSQL) ---
resource "aws_db_subnet_group" "rds_subnet_group" {
  name       = "${var.project_name}-${var.environment}-rds-subnet-group"
  subnet_ids = [for s in aws_subnet.private : s.id] # Use all private subnets

  tags = {
    Name        = "${var.project_name}-${var.environment}-RDS-SubnetGroup"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_db_instance" "main_db" {
  allocated_storage    = 20
  engine               = "postgres"
  engine_version       = "13.7"
  instance_class       = var.db_instance_type
  db_name              = var.db_name
  username             = var.db_username
  password             = var.db_password # WARNING: Hardcoding passwords is not recommended for production. Use Secrets Manager or Parameter Store.
  parameter_group_name = "default.postgres13"
  skip_final_snapshot  = true # For development, to speed up deletion. Set to false for production.
  publicly_accessible  = false # Keep private
  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name = aws_db_subnet_group.rds_subnet_group.name

  tags = {
    Name        = "${var.project_name}-${var.environment}-MainDB"
    Project     = var.project_name
    Environment = var.environment
  }
}

3. Variables Definition (variables.tf)

This file defines the input variables for your Terraform configuration, making it reusable and configurable.


# variables.tf

variable "project_name" {
  description = "The name of the project."
  type        = string
  default     = "Test Project Name"
}

variable "environment" {
  description = "The deployment environment (e.g., Development, Staging, Production)."
  type        = string
  default     = "Development"
}

variable "aws_region" {
  description = "The AWS region to deploy resources into."
  type        = string
  default     = "us-east-1"
}

variable "vpc_cidr" {
  description = "The CIDR block for the VPC."
  type        = string
  default     = "10.0.0.0/16"
}

variable "public_subnet_cidrs" {
  description = "A list of CIDR blocks for the public subnets."
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24"]
}

variable "private_subnet_cidrs" {
  description = "A list of CIDR blocks for the private subnets."
  type        = list(string)
  default     = ["10.0.101.0/24", "10.0.102.0/24"]
}

variable "ami_id" {
  description = "The AMI ID for the EC2 instance (e.g., Amazon Linux 2 AMI)."
  type        = string
  # Find latest Amazon Linux 2 AMI for us-east-1:
  # data "aws_ami" "amazon_linux_2" {
  #   most_recent = true
  #   owners      = ["amazon"]
  #   filter {
  #     name   = "name"
  #     values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  #   }
  #   filter {
  #     name   = "virtualization-type"
  #     values = ["hvm"]
  #   }
  # }
  # default = data.aws_ami.amazon_linux_2.id
  # Hardcoding a common AMI for demonstration, please update to a recent one or use a data source.
  default = "ami-053b0d53c279acc90" # Amazon Linux 2 AMI (HVM), SSD Volume Type - us-east-1 (as of Jan 2024)
}

variable "instance_type" {
  description = "The EC2 instance type."
  type        = string
  default     = "t3.micro"
}

variable "key_name" {
  description = "The name of the EC2 Key Pair to allow SSH access."
  type        = string
  # IMPORTANT: Replace with an existing key pair name in your AWS account.
  # Example: default = "my-ssh-key-pair"
  default     = "your-ec2-key-pair-name" # <<< IMPORTANT: CHANGE THIS
}

variable "allowed_ssh_cidrs" {
  description = "A list of CIDR blocks that are allowed to SSH into EC2 instances."
  type        = list(string)
  # IMPORTANT: For security, replace with your specific IP address or corporate VPN CIDR.
  # Example: default = ["203.0.113.0/24"]
  default     = ["0.0.0.0/0"] # <<< WARNING: Allowing SSH from anywhere is INSECURE for production. Restrict this!
}

variable "db_instance_type" {
  description = "The RDS DB instance type."
  type        = string
  default     = "db.t3.micro"
}

variable "db_name" {
  description = "The name of the database."
  type        = string
  default     = "appdb"
}

variable "db_username" {
  description = "The master username for the database."
  type        = string
  default     = "dbadmin" # "admin" is a reserved word for RDS PostgreSQL master usernames
}

variable "db_password" {
  description = "The master password for the database."
  type        = string
  # WARNING: Hardcoding sensitive values like passwords is not recommended for production.
  # Use AWS Secrets Manager or Parameter Store for secure storage.
  default     = "ChangeMe123!" # <<< IMPORTANT: CHANGE THIS AND SECURELY MANAGE IT
  sensitive   = true
}

variable "state_bucket_name" {
  description = "The S3 bucket name for Terraform state."
  type        = string
  # Note: variable defaults cannot interpolate other variables; keep this value static.
  default     = "terraform-state-testprojectname-dev-us-east-1" # Ensure this is globally unique
}

variable "state_dynamodb_table" {
  description = "The DynamoDB table name for Terraform state locking."
  type        = string
  default     = "terraform-state-lock-testprojectname-dev"
}

4. Outputs Definition (outputs.tf)

This file defines the values that will be outputted after Terraform applies the configuration, providing useful information about the deployed resources.


# outputs.tf

output "vpc_id" {
  description = "The ID of the VPC."
  value       = aws_vpc.main.id
}

output "public_subnet_ids" {
  description = "The IDs of the public subnets."
  value       = [for s in aws_subnet.public : s.id]
}

output "private_subnet_ids" {
  description = "The IDs of the private subnets."
  value       = [for s in aws_subnet.private : s.id]
}

output "ec2_instance_private_ip" {
  description = "The private IP address of the EC2 application server."
  value       = aws_instance.app_server.private_ip
}

output "rds_endpoint" {
  description = "The endpoint of the RDS database."
  value       = aws_db_instance.main_db.address
}

output "rds_port" {
  description = "The port of the RDS database."
  value       = aws_db_instance.main_db.port
}

output "s3_bucket_name" {
  description = "The name of the S3 bucket for application data."
  value       = aws_s3_bucket.application_data.id
}

5. Backend and Provider Configuration (versions.tf)

This file configures the required Terraform providers and the S3 backend for state management.


# versions.tf

terraform {
  required_version = ">= 1.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # Use a compatible version, check Terraform Registry for latest
    }
  }

  # S3 Backend Configuration
  # IMPORTANT: The S3 bucket and DynamoDB table for state locking must exist BEFORE terraform init.
  # Refer to the README.md for instructions on how to set them up.
  backend "s3" {
    bucket         = "terraform-state-testprojectname-dev-us-east-1" # Replace with your actual bucket name or use a variable
    key            = "development/test-project-name/terraform.tfstate"
    region         = "us-east-1" # Replace with your actual region or use a variable
    encrypt        = true
    dynamodb_table = "terraform-state-lock-testprojectname-dev" # Replace with your actual DynamoDB table name or use a variable
  }
}

provider "aws" {
  region = var.aws_region
}

# Data source for available availability zones
data "aws_availability_zones" "available" {
  state = "available"
}

6. Deployment and Management Instructions (README.md)

This README provides essential information and steps for deploying and managing your infrastructure using the generated Terraform configurations.


# Terraform Infrastructure for Test Project Name (Development Environment)

This repository contains Terraform configurations to provision core AWS infrastructure for the "Test Project Name" in a "Development" environment.

## Overview

The infrastructure defined in this configuration includes:
*   **VPC**: A dedicated Virtual Private Cloud with public and private subnets across multiple Availability Zones.
*   **Networking**: Internet Gateway, NAT Gateway, Route Tables for internet access and private outbound traffic.
*   **Compute**: A single EC2 instance for application hosting (in a private subnet).
*   **Storage**: An S3 bucket for application data, with versioning and public access blocked.
*   **Database**: An RDS PostgreSQL instance (in private subnets).
*   **Security**: Security Groups for EC2 and RDS to control network access.

## Prerequisites

Before you begin, ensure you have the following installed and configured:

1.  **AWS CLI**: Configured with credentials that have sufficient permissions to create and manage the specified AWS resources.
    *   `aws configure`
2.  **Terraform**: Version `1.0.0` or higher.
    *   `brew install terraform` (macOS) or refer to [Terraform Installation Guide](https://learn.hashicorp.com/terraform/getting-started/install.html).

## Setup Terraform State Backend (Manual One-Time Setup)

Terraform will store its state in an S3 bucket and use a DynamoDB table for state locking to prevent concurrent modifications. **These resources must be created manually (or via a separate, simpler Terraform config) before running `terraform init` for this project.**

### 1. Create S3 Bucket for Terraform State

The S3 bucket name must be globally unique.

aws s3api create-bucket --bucket terraform-state-testprojectname-dev-us-east-1 --region us-east-1

aws s3api put-bucket-versioning --bucket terraform-state-testprojectname-dev-us-east-1 --versioning-configuration Status=Enabled

aws s3api put-bucket-encryption --bucket terraform-state-testprojectname-dev-us-east-1 --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'


*(Adjust bucket name and region as needed, matching `versions.tf` and `variables.tf`. For any region other than `us-east-1`, add `--create-bucket-configuration LocationConstraint=<region>` to the `create-bucket` call; specifying it for `us-east-1` causes an error.)*

### 2. Create DynamoDB Table for State Locking

aws dynamodb create-table \
  --table-name terraform-state-lock-testprojectname-dev \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
  --region us-east-1


*(Adjust table name and region as needed, matching `versions.tf` and `variables.tf`.)*
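If you prefer the "separate, simpler Terraform config" route mentioned above, a minimal bootstrap sketch (applied once from its own directory, with local state) might look like the following; names match the CLI commands above but should be adjusted to your account:

```hcl
# bootstrap/main.tf — one-time setup for the remote state backend (uses local state).
resource "aws_s3_bucket" "tf_state" {
  bucket = "terraform-state-testprojectname-dev-us-east-1"
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraform-state-lock-testprojectname-dev"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

Because this bootstrap stack manages the backend itself, keep its state file checked in or otherwise safeguarded; it should rarely change after the first apply.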

## Deployment Steps

1.  **Clone the Repository**:

git clone <your-repo-url>

cd <your-repo-directory>



2.  **Review and Customize Variables**:
    Open `variables.tf` and update the default values, especially:
    *   `key_name`: Must be an existing EC2 Key Pair in your AWS account.
    *   `allowed_ssh_cidrs`: **CRITICAL** - Restrict this to your IP address or trusted CIDR range for security. `0.0.0.0/0` is highly insecure for production.
    *   `db_password`: **CRITICAL** - Change this to a strong, unique password. For production, consider using AWS Secrets Manager.

3.  **Initialize Terraform**:
    This command downloads the necessary providers and configures the S3 backend.

terraform init



4.  **Review the Plan**:
    This command shows you what Terraform will do without making any changes.

terraform plan


    Carefully review the output to ensure the planned changes align with your expectations.

5.  **Apply the Configuration**:
    This command applies the changes defined in your configuration to your AWS account.

terraform apply


    Type `yes` when prompted to confirm the changes.

## Accessing Resources

After successful deployment, Terraform will output key information about your resources.

*   **EC2 Instance Private IP**: Use this to access your application server from within the VPC (e.g., from a bastion host).
*   **RDS Endpoint**: Use this to connect to your PostgreSQL database from your EC2 instance.
*   **S3 Bucket Name**: The name of your S3 bucket for application data.

## Cleanup

To destroy all the resources provisioned by this Terraform configuration:

terraform destroy


Type `yes` when prompted to confirm the destruction.

**Note**: This will delete all provisioned resources. Ensure you have backed up any critical data before running `terraform destroy`. The S3 bucket and DynamoDB table used for Terraform state will **not** be destroyed by this command; you must delete them manually if no longer needed.

7. Key Considerations and Recommendations

This section provides critical advice for enhancing and managing your infrastructure.

7.1. Security Best Practices

  • Secrets Management:

* Action: Do NOT hardcode sensitive values (like db_password and key_name) in variables.tf or main.tf for production environments.

* Recommendation: Integrate with AWS Secrets Manager or AWS Systems Manager Parameter Store (with SecureString) to retrieve credentials at runtime.

* Example for RDS password (using data source):


        # In main.tf
        resource "aws_db_instance" "main_db" {
          # ...
          password = data.aws_secretsmanager_secret_version.db_credentials.secret_string
          # ...
        }

        # In a separate file (e.g., data.tf)
        data "aws_secretsmanager_secret_version" "db_credentials" {
          secret_id = "your-rds-secret-name" # Name of your secret in Secrets Manager
        }
  • Network Access Control:

* Action: Restrict allowed_ssh_cidrs in variables.tf to only trusted IP addresses or CIDR blocks (e.g., your office IP, VPN gateway). 0.0.0.0/0 is highly insecure.

* Recommendation: For SSH access to private EC2 instances, use AWS Systems Manager Session Manager (no open SSH ports needed) or a dedicated bastion host.
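A minimal sketch of the Session Manager approach (resource names are illustrative): attach the AWS-managed `AmazonSSMManagedInstanceCore` policy to an instance profile, then set `iam_instance_profile = aws_iam_instance_profile.ssm_profile.name` on `aws_instance.app_server`.

```hcl
# Hypothetical IAM role enabling Session Manager access (no open SSH ports needed).
resource "aws_iam_role" "ssm_role" {
  name = "${var.project_name}-${var.environment}-ssm-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ssm_core" {
  role       = aws_iam_role.ssm_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_instance_profile" "ssm_profile" {
  name = "${var.project_name}-${var.environment}-ssm-profile"
  role = aws_iam_role.ssm_role.name
}
```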

  • IAM Least Privilege:

* Recommendation: Create specific IAM roles for EC2 instances if they need to interact with other AWS services (e.g., S3, CloudWatch). Grant only the minimum necessary permissions.
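As an illustration of least privilege (assuming an EC2 role such as `aws_iam_role.app_role` already exists — it is not part of the generated configuration), an inline policy can be scoped to just the application data bucket:

```hcl
# Hypothetical: scope the EC2 role to only the application data bucket.
resource "aws_iam_role_policy" "app_s3_access" {
  name = "${var.project_name}-${var.environment}-app-s3-access"
  role = aws_iam_role.app_role.id # assumed existing EC2 role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
      Resource = [
        aws_s3_bucket.application_data.arn,
        "${aws_s3_bucket.application_data.arn}/*"
      ]
    }]
  })
}
```

Note that `ListBucket` applies to the bucket ARN while object actions apply to the `/*` ARN, which is why both resources are listed.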

  • S3 Bucket Policy:

* Recommendation: For production, implement stricter S3 bucket policies to control access, potentially integrating with IAM roles for specific application access. The current setup blocks all public access, which is a good start.
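One common hardening step, sketched here as an example rather than a required change, is a bucket policy that denies all non-TLS access:

```hcl
# Hypothetical: deny any request to the application data bucket made over plain HTTP.
resource "aws_s3_bucket_policy" "require_tls" {
  bucket = aws_s3_bucket.application_data.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyInsecureTransport"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        aws_s3_bucket.application_data.arn,
        "${aws_s3_bucket.application_data.arn}/*"
      ]
      Condition = { Bool = { "aws:SecureTransport" = "false" } }
    }]
  })
}
```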

7.2. Cost Optimization

  • Instance Types:

* Recommendation: For development, t3.micro and db.t3.micro are cost-effective. Monitor usage patterns and scale up/down as needed for higher environments.

  • Auto Scaling:

* Recommendation: For production, consider replacing the single aws_instance with an aws_autoscaling_group behind an aws_lb (Application Load Balancer) to automatically scale compute resources based on demand and improve fault tolerance.
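A sketch of that replacement, reusing the existing variables and subnets (the load balancer and target group are omitted for brevity and would be attached via `target_group_arns`):

```hcl
# Hypothetical: launch template + Auto Scaling group in place of the single instance.
resource "aws_launch_template" "app" {
  name_prefix   = "${var.project_name}-${var.environment}-app-"
  image_id      = var.ami_id
  instance_type = var.instance_type
  key_name      = var.key_name

  vpc_security_group_ids = [aws_security_group.ec2_sg.id]
}

resource "aws_autoscaling_group" "app" {
  name                = "${var.project_name}-${var.environment}-app-asg"
  min_size            = 1
  max_size            = 3
  desired_capacity    = 2
  vpc_zone_identifier = [for s in aws_subnet.private : s.id]

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }

  tag {
    key                 = "Name"
    value               = "${var.project_name}-${var.environment}-AppServer"
    propagate_at_launch = true
  }
}
```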

  • RDS Multi-AZ:

* Recommendation: For a Development environment, a single-AZ RDS instance is sufficient and keeps costs low. For production, enable Multi-AZ (multi_az = true) to gain automatic failover and higher availability, at roughly double the instance cost.

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(title+\" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("=0||code.trim().toLowerCase().indexOf("=0; var indexHtml=isFullDoc?code:"\n\n\n\n\n"+title+"\n\n\n\n"+code+"\n\n\n\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h=""+title+""; h+="

"+title+"

"; var hc=md.replace(/&/g,"&").replace(//g,">"); hc=hc.replace(/^### (.+)$/gm,"

$1

"); hc=hc.replace(/^## (.+)$/gm,"

$1

"); hc=hc.replace(/^# (.+)$/gm,"

$1

"); hc=hc.replace(/\*\*(.+?)\*\*/g,"$1"); hc=hc.replace(/\n{2,}/g,"

"); h+="

"+hc+"

Generated by PantheraHive BOS
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}