As a professional AI assistant within PantheraHive, I have executed the "Terraform Infrastructure Builder" workflow for the "generate" step, focusing on your specified inputs.
The workflow has analyzed your request to build cloud infrastructure for "Test Project Name" in the "Development" environment on "AWS", utilizing "S3" for state management. Given the request for comprehensive output, a standard multi-tier web application architecture has been designed, including networking (VPC, subnets, gateways), compute (EC2), storage (S3), and a relational database (RDS).
Workflow Step: Generate
Category: Infrastructure
Workflow Name: Terraform Infrastructure Builder
Description: Generate production-ready Terraform configurations for cloud infrastructure including VPC, compute, storage, databases, and networking.
User Inputs:
* Project Name: Test Project Name
* Cloud Provider: AWS
* Environment: Development
* State Management: S3
Below are the generated Terraform configuration files (`main.tf`, `variables.tf`, `outputs.tf`, `versions.tf`), structured to create a comprehensive AWS infrastructure for "Test Project Name" in the "Development" environment, along with a README and recommendations for security, cost, and scalability.

To successfully deploy and manage this infrastructure, please follow these recommendations:

* **Security Group Rules**: The SSH ingress rule (`0.0.0.0/0`) for the web server is highly insecure for production. Narrow it down to specific trusted IP ranges or a dedicated bastion host's security group.
* **EC2 Key Pair**: Ensure you have an existing EC2 Key Pair in the specified AWS region and update the `key_name` variable in `variables.tf` with its name. If you don't have one, create it in the AWS console or with a separate Terraform resource.
* **RDS Password**: Change the `db_password` default in `variables.tf` to a strong, unique password. For production, manage it with AWS Secrets Manager or Parameter Store.

### 1. `main.tf`
This file defines the core AWS infrastructure resources: VPC, subnets, gateways, security groups, an EC2 instance, an RDS PostgreSQL database, and an S3 bucket for application data.
```hcl
# main.tf

# --- VPC and Networking ---
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "${var.project_name}-${var.environment}-VPC"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name        = "${var.project_name}-${var.environment}-IGW"
    Project     = var.project_name
    Environment = var.environment
  }
}

# Public Subnets (for NAT Gateway, Load Balancers, etc.)
resource "aws_subnet" "public" {
  count                   = length(var.public_subnet_cidrs)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true # EC2 instances launched here will get a public IP

  tags = {
    Name        = "${var.project_name}-${var.environment}-PublicSubnet-${data.aws_availability_zones.available.names[count.index]}"
    Project     = var.project_name
    Environment = var.environment
  }
}

# Private Subnets (for EC2 instances, RDS, etc.)
resource "aws_subnet" "private" {
  count             = length(var.private_subnet_cidrs)
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnet_cidrs[count.index]
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name        = "${var.project_name}-${var.environment}-PrivateSubnet-${data.aws_availability_zones.available.names[count.index]}"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_eip" "nat_gateway" {
  domain = "vpc"

  tags = {
    Name        = "${var.project_name}-${var.environment}-NAT-EIP"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat_gateway.id
  subnet_id     = aws_subnet.public[0].id # Place NAT Gateway in the first public subnet

  # Ensure the Internet Gateway exists before the NAT Gateway is created.
  depends_on = [aws_internet_gateway.main]

  tags = {
    Name        = "${var.project_name}-${var.environment}-NAT-Gateway"
    Project     = var.project_name
    Environment = var.environment
  }
}

# Route Table for Public Subnets
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  tags = {
    Name        = "${var.project_name}-${var.environment}-PublicRouteTable"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_route_table_association" "public" {
  count          = length(aws_subnet.public)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

# Route Table for Private Subnets
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main.id
  }

  tags = {
    Name        = "${var.project_name}-${var.environment}-PrivateRouteTable"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_route_table_association" "private" {
  count          = length(aws_subnet.private)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private.id
}

# --- Security Groups ---
resource "aws_security_group" "ec2_sg" {
  name        = "${var.project_name}-${var.environment}-EC2-SG"
  description = "Allow SSH inbound traffic and all outbound"
  vpc_id      = aws_vpc.main.id

  # Ingress: Allow SSH from specified CIDRs
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = var.allowed_ssh_cidrs
    description = "Allow SSH from trusted IPs"
  }

  # Egress: Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow all outbound traffic"
  }

  tags = {
    Name        = "${var.project_name}-${var.environment}-EC2-SG"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_security_group" "rds_sg" {
  name        = "${var.project_name}-${var.environment}-RDS-SG"
  description = "Allow inbound traffic from EC2 instances and all outbound"
  vpc_id      = aws_vpc.main.id

  # Ingress: Allow traffic from the EC2 security group on the DB port
  ingress {
    from_port       = 5432 # PostgreSQL default port
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.ec2_sg.id]
    description     = "Allow PostgreSQL access from EC2 instances"
  }

  # Egress: Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow all outbound traffic"
  }

  tags = {
    Name        = "${var.project_name}-${var.environment}-RDS-SG"
    Project     = var.project_name
    Environment = var.environment
  }
}

# --- Compute (EC2 Instance) ---
resource "aws_instance" "app_server" {
  ami                    = var.ami_id
  instance_type          = var.instance_type
  key_name               = var.key_name
  subnet_id              = aws_subnet.private[0].id # Launch in the first private subnet
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]

  # For a development environment, you might want to easily SSH.
  # For production, consider Session Manager or bastion hosts.
  # user_data = file("install_nginx.sh") # Example: Run a script on startup

  tags = {
    Name        = "${var.project_name}-${var.environment}-AppServer"
    Project     = var.project_name
    Environment = var.environment
  }
}

# --- Storage (S3 Bucket) ---
resource "aws_s3_bucket" "application_data" {
  bucket = "${lower(replace(var.project_name, " ", "-"))}-${lower(var.environment)}-app-data-${var.aws_region}"

  tags = {
    Name        = "${var.project_name}-${var.environment}-AppDataBucket"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_s3_bucket_versioning" "application_data_versioning" {
  bucket = aws_s3_bucket.application_data.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_public_access_block" "application_data_public_access_block" {
  bucket = aws_s3_bucket.application_data.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# --- Database (RDS PostgreSQL) ---
resource "aws_db_subnet_group" "rds_subnet_group" {
  # DB subnet group names must be lowercase, so normalize the project name.
  name       = "${lower(replace(var.project_name, " ", "-"))}-${lower(var.environment)}-rds-subnet-group"
  subnet_ids = [for s in aws_subnet.private : s.id] # Use all private subnets

  tags = {
    Name        = "${var.project_name}-${var.environment}-RDS-SubnetGroup"
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_db_instance" "main_db" {
  allocated_storage      = 20
  engine                 = "postgres"
  engine_version         = "13.7" # Confirm this version is still offered in your region
  instance_class         = var.db_instance_type
  db_name                = var.db_name
  username               = var.db_username
  password               = var.db_password # WARNING: Hardcoding passwords is not recommended for production. Use Secrets Manager or Parameter Store.
  parameter_group_name   = "default.postgres13"
  skip_final_snapshot    = true  # For development, to speed up deletion. Set to false for production.
  publicly_accessible    = false # Keep private
  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name   = aws_db_subnet_group.rds_subnet_group.name

  tags = {
    Name        = "${var.project_name}-${var.environment}-MainDB"
    Project     = var.project_name
    Environment = var.environment
  }
}
```
### 2. `variables.tf`
This file defines the input variables for your Terraform configuration, making it reusable and configurable.
```hcl
# variables.tf

variable "project_name" {
  description = "The name of the project."
  type        = string
  default     = "Test Project Name"
}

variable "environment" {
  description = "The deployment environment (e.g., Development, Staging, Production)."
  type        = string
  default     = "Development"
}

variable "aws_region" {
  description = "The AWS region to deploy resources into."
  type        = string
  default     = "us-east-1"
}

variable "vpc_cidr" {
  description = "The CIDR block for the VPC."
  type        = string
  default     = "10.0.0.0/16"
}

variable "public_subnet_cidrs" {
  description = "A list of CIDR blocks for the public subnets."
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24"]
}

variable "private_subnet_cidrs" {
  description = "A list of CIDR blocks for the private subnets."
  type        = list(string)
  default     = ["10.0.101.0/24", "10.0.102.0/24"]
}

variable "ami_id" {
  description = "The AMI ID for the EC2 instance (e.g., Amazon Linux 2 AMI)."
  type        = string
  # To look up the latest Amazon Linux 2 AMI automatically, add a data source
  # in main.tf (variable defaults cannot reference data sources):
  # data "aws_ami" "amazon_linux_2" {
  #   most_recent = true
  #   owners      = ["amazon"]
  #   filter {
  #     name   = "name"
  #     values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  #   }
  #   filter {
  #     name   = "virtualization-type"
  #     values = ["hvm"]
  #   }
  # }
  # and reference it as data.aws_ami.amazon_linux_2.id on the instance.
  # Hardcoding a common AMI for demonstration; please update to a recent one or use the data source.
  default = "ami-053b0d53c279acc90" # Amazon Linux 2 AMI (HVM), SSD Volume Type - us-east-1 (as of Jan 2024)
}

variable "instance_type" {
  description = "The EC2 instance type."
  type        = string
  default     = "t3.micro"
}

variable "key_name" {
  description = "The name of the EC2 Key Pair to allow SSH access."
  type        = string
  # IMPORTANT: Replace with an existing key pair name in your AWS account.
  # Example: default = "my-ssh-key-pair"
  default = "your-ec2-key-pair-name" # <<< IMPORTANT: CHANGE THIS
}

variable "allowed_ssh_cidrs" {
  description = "A list of CIDR blocks that are allowed to SSH into EC2 instances."
  type        = list(string)
  # IMPORTANT: For security, replace with your specific IP address or corporate VPN CIDR.
  # Example: default = ["203.0.113.0/24"]
  default = ["0.0.0.0/0"] # <<< WARNING: Allowing SSH from anywhere is INSECURE for production. Restrict this!
}

variable "db_instance_type" {
  description = "The RDS DB instance type."
  type        = string
  default     = "db.t3.micro"
}

variable "db_name" {
  description = "The name of the database."
  type        = string
  default     = "appdb"
}

variable "db_username" {
  description = "The master username for the database."
  type        = string
  # "admin" is a reserved word for RDS PostgreSQL master usernames, so use an alternative.
  default = "dbadmin"
}

variable "db_password" {
  description = "The master password for the database."
  type        = string
  # WARNING: Hardcoding sensitive values like passwords is not recommended for production.
  # Use AWS Secrets Manager or Parameter Store for secure storage.
  default   = "ChangeMe123!" # <<< IMPORTANT: CHANGE THIS AND SECURELY MANAGE IT
  sensitive = true
}

variable "state_bucket_name" {
  description = "The S3 bucket name for Terraform state."
  type        = string
  # Variable defaults cannot reference other variables (such as var.aws_region),
  # so the region is spelled out here. Keep this in sync with the backend "s3"
  # block in versions.tf, and ensure the name is globally unique.
  default = "terraform-state-testprojectname-dev-us-east-1"
}

variable "state_dynamodb_table" {
  description = "The DynamoDB table name for Terraform state locking."
  type        = string
  default     = "terraform-state-lock-testprojectname-dev"
}
```
### 3. `outputs.tf`
This file defines the output values displayed after a successful `terraform apply`, providing useful information about the deployed infrastructure.
```hcl
# outputs.tf

output "vpc_id" {
  description = "The ID of the VPC."
  value       = aws_vpc.main.id
}

output "public_subnet_ids" {
  description = "The IDs of the public subnets."
  value       = [for s in aws_subnet.public : s.id]
}

output "private_subnet_ids" {
  description = "The IDs of the private subnets."
  value       = [for s in aws_subnet.private : s.id]
}

output "ec2_instance_private_ip" {
  description = "The private IP address of the EC2 application server."
  value       = aws_instance.app_server.private_ip
}

output "rds_endpoint" {
  description = "The endpoint of the RDS database."
  value       = aws_db_instance.main_db.address
}

output "rds_port" {
  description = "The port of the RDS database."
  value       = aws_db_instance.main_db.port
}

output "s3_bucket_name" {
  description = "The name of the S3 bucket for application data."
  value       = aws_s3_bucket.application_data.id
}
```
### 4. `versions.tf`
This file configures the required Terraform providers and the S3 backend for state management.
```hcl
# versions.tf

terraform {
  required_version = ">= 1.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # Use a compatible version; check the Terraform Registry for the latest
    }
  }

  # S3 Backend Configuration
  # IMPORTANT: The S3 bucket and DynamoDB table for state locking must exist BEFORE `terraform init`.
  # Refer to the README.md for instructions on how to set them up.
  # Note: backend blocks cannot reference variables, so these values are
  # hardcoded; keep them in sync with variables.tf.
  backend "s3" {
    bucket         = "terraform-state-testprojectname-dev-us-east-1" # Replace with your actual bucket name
    key            = "development/test-project-name/terraform.tfstate"
    region         = "us-east-1" # Replace with your actual region
    encrypt        = true
    dynamodb_table = "terraform-state-lock-testprojectname-dev" # Replace with your actual DynamoDB table name
  }
}

provider "aws" {
  region = var.aws_region
}

# Data source for available Availability Zones
data "aws_availability_zones" "available" {
  state = "available"
}
```
### 5. `README.md`
This README provides essential information and steps for deploying and managing your infrastructure using the generated Terraform configurations.
# Terraform Infrastructure for Test Project Name (Development Environment)
This repository contains Terraform configurations to provision core AWS infrastructure for the "Test Project Name" in a "Development" environment.
## Overview
The infrastructure defined in this configuration includes:
* **VPC**: A dedicated Virtual Private Cloud with public and private subnets across multiple Availability Zones.
* **Networking**: Internet Gateway, NAT Gateway, Route Tables for internet access and private outbound traffic.
* **Compute**: A single EC2 instance for application hosting (in a private subnet).
* **Storage**: An S3 bucket for application data, with versioning and public access blocked.
* **Database**: An RDS PostgreSQL instance (in private subnets).
* **Security**: Security Groups for EC2 and RDS to control network access.
## Prerequisites
Before you begin, ensure you have the following installed and configured:
1. **AWS CLI**: Configured with credentials that have sufficient permissions to create and manage the specified AWS resources.
   * `aws configure`
2. **Terraform**: Version `1.0.0` or higher.
   * `brew install terraform` (macOS) or refer to the [Terraform Installation Guide](https://learn.hashicorp.com/terraform/getting-started/install.html).
## Setup Terraform State Backend (Manual One-Time Setup)
Terraform will store its state in an S3 bucket and use a DynamoDB table for state locking to prevent concurrent modifications. **These resources must be created manually (or via a separate, simpler Terraform config) before running `terraform init` for this project.**
### 1. Create S3 Bucket for Terraform State
The S3 bucket name must be globally unique.
```shell
# Note: for us-east-1, omit --create-bucket-configuration entirely;
# LocationConstraint is only valid for other regions.
aws s3api create-bucket \
  --bucket terraform-state-testprojectname-dev-us-east-1 \
  --region us-east-1

aws s3api put-bucket-versioning \
  --bucket terraform-state-testprojectname-dev-us-east-1 \
  --versioning-configuration Status=Enabled

aws s3api put-bucket-encryption \
  --bucket terraform-state-testprojectname-dev-us-east-1 \
  --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
```
*(Adjust bucket name and region as needed, matching `versions.tf` and `variables.tf`. For regions other than us-east-1, add `--create-bucket-configuration LocationConstraint=<region>`.)*
### 2. Create DynamoDB Table for State Locking
```shell
aws dynamodb create-table \
  --table-name terraform-state-lock-testprojectname-dev \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
  --region us-east-1
```
*(Adjust table name and region as needed, matching `versions.tf` and `variables.tf`.)*
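Alternatively, both state resources can be bootstrapped with a small standalone Terraform configuration, applied once from its own directory with local state. This is a sketch; the bucket and table names below match this project's defaults and should be adjusted to your account:

```hcl
# bootstrap/main.tf - apply once from its own directory (local state)
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "tf_state" {
  bucket = "terraform-state-testprojectname-dev-us-east-1" # must be globally unique
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraform-state-lock-testprojectname-dev"
  billing_mode = "PAY_PER_REQUEST" # on-demand avoids provisioning capacity for a lock table
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

These resources stay outside this project's state on purpose, so `terraform destroy` in the main configuration never deletes its own backend.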
## Deployment Steps
1. **Clone the Repository**:
   ```shell
   git clone <your-repo-url>
   cd <your-repo-directory>
   ```
2. **Review and Customize Variables**:
   Open `variables.tf` and update the default values, especially:
   * `key_name`: Must be an existing EC2 Key Pair in your AWS account.
   * `allowed_ssh_cidrs`: **CRITICAL** - Restrict this to your IP address or trusted CIDR range for security. `0.0.0.0/0` is highly insecure for production.
   * `db_password`: **CRITICAL** - Change this to a strong, unique password. For production, consider using AWS Secrets Manager.
3. **Initialize Terraform**:
   This command downloads the necessary providers and configures the S3 backend.
   ```shell
   terraform init
   ```
4. **Review the Plan**:
   This command shows you what Terraform will do without making any changes.
   ```shell
   terraform plan
   ```
   Carefully review the output to ensure the planned changes align with your expectations.
5. **Apply the Configuration**:
   This command applies the changes defined in your configuration to your AWS account.
   ```shell
   terraform apply
   ```
   Type `yes` when prompted to confirm the changes.
## Accessing Resources
After successful deployment, Terraform will output key information about your resources.
* **EC2 Instance Private IP**: Use this to access your application server from within the VPC (e.g., from a bastion host).
* **RDS Endpoint**: Use this to connect to your PostgreSQL database from your EC2 instance.
* **S3 Bucket Name**: The name of your S3 bucket for application data.
## Cleanup
To destroy all the resources provisioned by this Terraform configuration:
```shell
terraform destroy
```
Type `yes` when prompted to confirm the destruction.
**Note**: This will delete all provisioned resources. Ensure you have backed up any critical data before running `terraform destroy`. The S3 bucket and DynamoDB table used for Terraform state will **not** be destroyed by this command; you must delete them manually if no longer needed.
### 6. Recommendations
This section provides critical advice for enhancing and managing your infrastructure.

* **Action**: Do NOT hardcode sensitive values (like `db_password` and `key_name`) in `variables.tf` or `main.tf` for production environments.
* **Recommendation**: Integrate with AWS Secrets Manager or AWS Systems Manager Parameter Store (with `SecureString`) to retrieve credentials at runtime.
* **Example** for the RDS password (using a data source):

  ```hcl
  # In main.tf, replace the hardcoded password:
  resource "aws_db_instance" "main_db" {
    # ...
    password = data.aws_secretsmanager_secret_version.db_credentials.secret_string
    # If the secret stores a JSON object instead of a plain string, use:
    # password = jsondecode(data.aws_secretsmanager_secret_version.db_credentials.secret_string)["password"]
    # ...
  }

  # In a separate file (e.g., data.tf)
  data "aws_secretsmanager_secret_version" "db_credentials" {
    secret_id = "your-rds-secret-name" # Name of your secret in Secrets Manager
  }
  ```
* **Action**: Restrict `allowed_ssh_cidrs` in `variables.tf` to only trusted IP addresses or CIDR blocks (e.g., your office IP, VPN gateway). `0.0.0.0/0` is highly insecure.
* **Recommendation**: For SSH access to private EC2 instances, use AWS Systems Manager Session Manager (no open SSH ports needed) or a dedicated bastion host.
* **Recommendation**: Create specific IAM roles for EC2 instances if they need to interact with other AWS services (e.g., S3, CloudWatch). Grant only the minimum necessary permissions.
* **Recommendation**: For production, implement stricter S3 bucket policies to control access, potentially integrating with IAM roles for specific application access. The current setup blocks all public access, which is a good start.
* **Recommendation**: For development, `t3.micro` and `db.t3.micro` are cost-effective. Monitor usage patterns and scale up/down as needed for higher environments.
* **Recommendation**: For production, consider replacing the single `aws_instance` with an `aws_autoscaling_group` behind an `aws_lb` (Application Load Balancer) to automatically scale compute resources based on demand and improve fault tolerance.
* **Recommendation**: For a Development environment, a single-AZ RDS instance keeps costs low. For production, enable Multi-AZ (`multi_az = true`) for automatic failover, and set `skip_final_snapshot = false` so a final snapshot is taken on deletion.
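As a sketch of the IAM role and Session Manager recommendations above (resource names are illustrative assumptions; the role allows SSM management and read-only access to the application data bucket):

```hcl
# Minimal IAM role letting the app server be managed via SSM Session Manager
# and read objects from the application data bucket.
resource "aws_iam_role" "app_server" {
  name = "${var.project_name}-${var.environment}-app-server-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# AWS-managed policy required for Session Manager.
resource "aws_iam_role_policy_attachment" "ssm" {
  role       = aws_iam_role.app_server.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

# Least-privilege inline policy: read-only access to one bucket.
resource "aws_iam_role_policy" "s3_read" {
  name = "s3-app-data-read"
  role = aws_iam_role.app_server.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:ListBucket"]
      Resource = [
        aws_s3_bucket.application_data.arn,
        "${aws_s3_bucket.application_data.arn}/*"
      ]
    }]
  })
}

resource "aws_iam_instance_profile" "app_server" {
  name = "${var.project_name}-${var.environment}-app-server-profile"
  role = aws_iam_role.app_server.name
}

# Then attach the profile on aws_instance.app_server:
#   iam_instance_profile = aws_iam_instance_profile.app_server.name
```

With the profile attached (and outbound access via the NAT Gateway), `aws ssm start-session --target <instance-id>` replaces SSH, so the port 22 ingress rule can be removed entirely.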
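A minimal sketch of the auto-scaling recommendation, assuming an HTTP service on port 80 and a health check at `/` (both assumptions, not part of the generated configuration):

```hcl
# Launch template + Auto Scaling Group behind an Application Load Balancer,
# replacing the single aws_instance for production workloads.
resource "aws_launch_template" "app" {
  name_prefix            = "${var.project_name}-${var.environment}-app-"
  image_id               = var.ami_id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]
}

resource "aws_lb" "app" {
  name               = "app-alb" # illustrative name
  load_balancer_type = "application"
  subnets            = [for s in aws_subnet.public : s.id]
}

resource "aws_lb_target_group" "app" {
  name     = "app-tg" # illustrative name
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path = "/" # assumed health-check endpoint
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}

resource "aws_autoscaling_group" "app" {
  desired_capacity    = 2
  min_size            = 2
  max_size            = 4
  vpc_zone_identifier = [for s in aws_subnet.private : s.id]
  target_group_arns   = [aws_lb_target_group.app.arn]

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}
```

The ALB sits in the public subnets while instances stay private, preserving the network layout already defined in `main.tf`.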