This document outlines the detailed Kubernetes configurations generated to support the robust, scalable, and observable deployment of your microservices. This deliverable covers essential aspects including deployment manifests, Helm charts, service mesh integration, advanced scaling policies, and comprehensive monitoring setups.
Kubernetes Deployment manifests are the foundational building blocks for deploying and managing stateless applications within your cluster. They define the desired state for your application, including the container image, resource requirements, and replica count.
Key Components & Considerations:
apiVersion, kind, metadata: Standard Kubernetes object definition.
spec.replicas: Defines the desired number of identical pods running the application.
spec.selector: Labels used by the Deployment to identify and manage its pods.
spec.template.metadata.labels: Labels applied to the pods created by this Deployment, matching the selector.
spec.template.spec.containers:
* name: Unique name for the container within the pod.
* image: Docker image name and tag (e.g., my-registry/my-service:v1.0.0).
* ports: Container ports exposed (e.g., containerPort: 8080).
* resources:
* limits: Maximum CPU and memory the container can use. Essential for resource management and stability.
* requests: Minimum CPU and memory guaranteed to the container. Used by the scheduler.
* env: Environment variables passed to the container (e.g., database connection strings, API keys).
* livenessProbe: Defines how Kubernetes checks if your application is running correctly. If a probe fails, Kubernetes restarts the container.
* readinessProbe: Defines how Kubernetes checks if your application is ready to serve traffic. If a probe fails, Kubernetes removes the pod from service endpoints.
* securityContext: Defines privilege and access control settings for a Pod or Container.
spec.template.spec.volumes: Defines volumes for persistent data storage, configuration maps, or secrets.
spec.strategy: Defines the deployment strategy (e.g., RollingUpdate for zero-downtime updates).
Actionable Insight: Each microservice will have its dedicated Deployment manifest, tailored with appropriate resource requests/limits, health probes, and environment variables for its specific operational needs.
Helm charts are packages of pre-configured Kubernetes resources. They provide a standardized way to define, install, and upgrade even the most complex Kubernetes applications.
Benefits of Using Helm:
Configuration is centralized in values.yaml files, enabling easy environment-specific adjustments.
Standard Helm Chart Structure:
my-service-chart/
├── Chart.yaml          # Information about the chart (name, version, description)
├── values.yaml         # Default configuration values for the chart
├── templates/          # Directory containing Kubernetes manifest templates
│   ├── deployment.yaml # Template for the Kubernetes Deployment
│   ├── service.yaml    # Template for the Kubernetes Service
│   ├── ingress.yaml    # Template for the Kubernetes Ingress (if applicable)
│   ├── configmap.yaml  # Template for ConfigMaps
│   └── _helpers.tpl    # Helper templates/definitions (e.g., common labels)
└── charts/             # Directory for chart dependencies (optional)
This document outlines a detailed marketing strategy for the "Kubernetes Deployment Planner" service, designed to generate Kubernetes deployment manifests, Helm charts, service meshes, scaling policies, and monitoring configurations for microservices. The strategy focuses on identifying the target audience, recommending effective channels, crafting compelling messages, and defining measurable Key Performance Indicators (KPIs).
The "Kubernetes Deployment Planner" addresses critical pain points in the microservices deployment lifecycle on Kubernetes. Our primary target audience consists of technical professionals and decision-makers within organizations leveraging or migrating to Kubernetes.
* Complexity & Manual Effort: Spending significant time writing and debugging complex Kubernetes YAML manifests, Helm charts, and service mesh configurations from scratch.
* Inconsistency: Difficulty in maintaining consistent configurations across multiple teams, projects, and environments, leading to "configuration drift" and errors.
* Learning Curve: Keeping up with the rapidly evolving Kubernetes ecosystem, service mesh technologies (e.g., Istio, Linkerd), and monitoring tools (e.g., Prometheus, Grafana).
* Time-to-Market: Slow deployment cycles due to manual configuration and validation steps.
* Security & Compliance: Ensuring all deployments adhere to security best practices and organizational compliance standards.
* Team Productivity: Observing engineers spending too much time on infrastructure configuration rather than core product development.
* Operational Costs: High operational overhead associated with managing complex Kubernetes deployments.
* Scalability Challenges: Difficulty scaling engineering teams and deployments efficiently without introducing errors or inconsistencies.
* Risk Mitigation: Concerns about security vulnerabilities, downtime, and compliance issues arising from misconfigured deployments.
* Talent Acquisition/Retention: Difficulty attracting and retaining talent for highly specialized Kubernetes roles.
A multi-channel approach is recommended to effectively reach and engage our diverse target audience.
* Focus: Thought leadership, practical guides, best practices, problem/solution narratives.
* Topics: "Simplifying Kubernetes YAML," "Automating Helm Chart Generation," "Implementing Service Mesh Best Practices," "Advanced Kubernetes Scaling Strategies," "Monitoring Microservices on K8s," "Reduce K8s Deployment Errors by X%."
* Deliverables: Regular blog posts, in-depth whitepapers on specific K8s challenges, customer success stories demonstrating ROI.
* Strategy: Optimize website and content for high-intent keywords related to Kubernetes deployment automation, Helm chart generation, service mesh configuration tools, K8s scaling policies, and monitoring solutions.
* Keywords: "Kubernetes deployment automation," "Helm chart generator," "Istio configuration tool," "Linkerd manifest generator," "HPA VPA automation," "K8s monitoring setup," "microservices deployment best practices."
* Google Ads: Target specific technical keywords and competitor terms.
* LinkedIn Ads: Target by job title (DevOps Engineer, Platform Engineer, SRE, CTO), industry (Software Development, IT Services), and skills (Kubernetes, Helm, Istio). Use clear calls-to-action (CTAs) for demo requests or whitepaper downloads.
* Focus: Share blog posts, industry news, product updates, engage in relevant discussions, and highlight success stories.
* LinkedIn: Professional networking, company page updates, sponsored content.
* Twitter: Real-time updates, community engagement, participation in relevant hashtags (#Kubernetes, #DevOps, #CloudNative).
* Reddit: Participate in subreddits like r/kubernetes, r/devops, r/cloudnative, providing value and subtly promoting the solution where appropriate.
* Strategy: Nurture leads generated from content downloads, webinars, and demo requests.
* Content: Product updates, new features, exclusive content, invitations to webinars, case studies, tailored solution briefs.
* Focus: Live demonstrations of the "Kubernetes Deployment Planner," technical deep-dives into specific features (e.g., Helm chart generation, service mesh integration), and best practices sessions.
* Topics: "Automating Your Entire K8s Stack," "From Zero to Production: K8s Deployments in Minutes," "Mastering Service Mesh with [Planner Name]."
* Presence: Sponsorships, speaking slots, booth presence for direct engagement with the target audience.
* Goal: Brand awareness, lead generation, thought leadership.
Our messaging will emphasize the value proposition of automation, standardization, efficiency, and reliability, tailored to the specific pain points of our target personas.
"Streamline your Kubernetes deployments with the Kubernetes Deployment Planner: Automated, standardized, and production-ready configurations for your microservices, from manifest to mesh."
* Message: "Reduce manual effort and accelerate deployment cycles by automatically generating production-grade Kubernetes manifests, Helm charts, and configurations."
* Benefit: Faster time-to-market, increased developer velocity, engineers focus on innovation, not YAML.
* Tagline: "Deploy faster. Innovate more."
* Message: "Enforce consistent configurations across all your microservices and environments, embedding best practices for reliability, security, and performance."
* Benefit: Eliminate configuration drift, reduce errors, improve system stability, simplify onboarding for new teams.
* Tagline: "Consistency built-in. Complexity out."
* Message: "Go beyond basic manifests. Generate complete configurations including service meshes (Istio, Linkerd), advanced scaling policies (HPA/VPA), and robust monitoring setups (Prometheus, Grafana)."
* Benefit: A single source for all your K8s deployment needs, reducing tool sprawl and integration headaches.
* Tagline: "Your complete K8s deployment co-pilot."
* Message: "Ensure your deployments meet security standards and compliance requirements with automatically generated configurations that adhere to industry best practices."
* Benefit: Mitigate risks, pass audits with confidence, secure your cloud-native applications from day one.
* Tagline: "Secure by design. Compliant by default."
* Message: "Empower your development teams to deploy with confidence, abstracting away Kubernetes complexity while providing granular control when needed."
* Benefit: Lower barrier to entry for Kubernetes, faster adoption, reduced need for deep K8s expertise across all teams.
* Tagline: "Kubernetes, simplified for everyone."
"For DevOps and Platform Engineers struggling with the complexity and inconsistency of Kubernetes deployments, the Kubernetes Deployment Planner is an intelligent automation platform that generates production-ready manifests, Helm charts, service mesh, scaling, and monitoring configurations. Unlike manual YAML creation or generic templates, our planner ensures standardization, embeds best practices, and significantly reduces deployment time and errors, allowing your teams to focus on innovation."
To measure the success of our marketing efforts, we will track a combination of marketing, sales, and product engagement metrics.
* Total unique visitors.
* Traffic sources (organic, paid, direct, referral, social).
* Bounce rate, time on page.
* Ranking for target keywords.
* Organic search impressions and clicks.
* Blog post views, shares, comments.
* Whitepaper/case study downloads.
* Webinar registrations and attendance rates.
* Follower growth.
* Impressions, likes, shares, comments on posts.
* Number of mentions across blogs, news, forums.
* Number of leads generated through content downloads, webinar registrations, etc.
* Conversion rate from website visitor to MQL.
* Number of MQLs converted into SQLs by the sales team.
* Conversion rate from MQL to SQL.
* Number of scheduled and completed product demonstrations.
* Conversion rate from SQL to demo.
* Total marketing and sales expenses divided by the number of new customers acquired.
* Average time from initial lead contact to deal closure.
Actionable Insight: A dedicated Helm chart will be developed for each microservice or logical group of services. This will include parameterized configurations for image versions, resource limits, environment variables, ingress rules, and scaling parameters, making deployments consistent and repeatable across environments.
Istio is an open-source service mesh that layers transparently onto existing distributed applications. It provides a uniform way to connect, secure, control, and observe services.
Key Capabilities & Configurations:
* VirtualService: Defines how requests are routed to services within the mesh, enabling features like A/B testing, canary deployments, and traffic shifting.
* DestinationRule: Configures policies that apply to traffic for a service after routing has occurred, such as load balancing algorithms, connection pool settings, and outlier detection.
* Gateway: Manages inbound and outbound traffic for the mesh, acting as an entry point for external traffic into the cluster.
* Mutual TLS (mTLS): Automatically encrypts communication between services.
* Authorization Policies: Controls who can access which services and under what conditions.
* Authentication Policies: Verifies the identity of clients and services.
* Telemetry: Collects metrics, logs, and traces for all service traffic.
* Distributed Tracing: Provides end-to-end visibility into requests across multiple services (integrates with Jaeger/Zipkin).
* Service Graph: Visualizes service dependencies and traffic flow (integrates with Kiali).
* Retries, Timeouts, Circuit Breakers: Configures policies to handle service failures gracefully.
Actionable Insight: Each microservice will be automatically injected with an Istio sidecar proxy upon deployment. VirtualService and DestinationRule resources will be created to manage traffic routing, enabling advanced deployment patterns and robust traffic control. Authorization policies will be configured to enforce mTLS and fine-grained access control between services.
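To make the security configuration concrete, the following is a minimal sketch of strict mTLS plus a fine-grained access rule. All names (namespace, services, service account) are hypothetical placeholders, not part of the original configuration set:

```yaml
# Enforce strict mTLS for all workloads in a namespace (namespace name is illustrative).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-service-namespace
spec:
  mtls:
    mode: STRICT
---
# Allow only a specific caller's service account to reach my-service (hypothetical principal).
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: my-service-authz
  namespace: my-service-namespace
spec:
  selector:
    matchLabels:
      app: my-service
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/caller-namespace/sa/caller-sa"]
```

With STRICT mode, plaintext traffic to the selected workloads is rejected; the AuthorizationPolicy then narrows which authenticated identities may connect.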
To ensure optimal performance and resource utilization, appropriate scaling policies will be implemented for your microservices.
HPA automatically scales the number of pod replicas in a Deployment or ReplicaSet based on observed CPU utilization, memory utilization, or custom metrics.
Configuration Details:
apiVersion, kind, metadata: Standard Kubernetes object definition.
spec.scaleTargetRef: References the Deployment, ReplicaSet, or StatefulSet to be scaled.
spec.minReplicas: Minimum number of replicas to maintain.
spec.maxReplicas: Maximum number of replicas allowed.
spec.metrics: Defines the metrics to use for scaling decisions:
* type: Resource: Scales based on CPU or memory utilization.
* resource.name: cpu or memory.
* target.type: Utilization (percentage of requested resource) or AverageValue (absolute value).
* target.averageUtilization / target.averageValue: The target value for the metric.
* type: Pods: Scales based on a metric across all pods (e.g., requests per second).
* type: Object: Scales based on a metric from a specific Kubernetes object (e.g., messages in a queue).
Actionable Insight: HPA will be configured for all stateless microservices, primarily using CPU utilization as the initial scaling metric. For services with specific performance indicators (e.g., message queue depth, request latency), custom metrics will be integrated via Prometheus Adapter to enable more precise scaling.
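The fields described above can be sketched as a minimal HPA manifest using the autoscaling/v2 API; the target names are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service-deployment   # the Deployment to scale
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization       # percentage of the pod's CPU request
          averageUtilization: 80
```

When average CPU utilization across the pods exceeds 80% of the requested CPU, the HPA adds replicas (up to 10); it scales back down to the 3-replica floor as load subsides.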
VPA automatically adjusts the CPU and memory requests and limits for containers in a pod, based on their actual usage. This helps in right-sizing resources and reducing costs.
Configuration Details:
apiVersion, kind, metadata: Standard Kubernetes object definition.
spec.targetRef: References the Deployment, ReplicaSet, or StatefulSet to be scaled.
spec.updatePolicy.updateMode:
* Off: VPA only provides recommendations without applying them.
* Initial: VPA applies recommendations only when a pod is created.
* Recreate: VPA recreates pods to apply new recommendations (disruptive).
* Auto: VPA applies recommendations by recreating pods if necessary (most common for non-critical workloads).
spec.resourcePolicy.containerPolicies: Allows setting specific policies per container, such as minimum/maximum resources and controlled resources (cpu, memory).
Actionable Insight: VPA will be deployed in "recommendation mode" (updateMode: Off) initially for all services to gather resource usage data and provide insights without immediate disruption. For stable, non-critical services, VPA can be configured for Initial or Auto mode to optimize resource allocation automatically. VPA complements HPA: HPA handles horizontal scaling while VPA right-sizes per-pod resources. Note that HPA and VPA should not both act on the same CPU/memory metrics for a given workload.
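A minimal VPA manifest in recommendation mode, matching the fields described above (target names and resource bounds are illustrative):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-service-vpa            # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service-deployment
  updatePolicy:
    updateMode: "Off"             # recommendation mode; quoted so YAML doesn't parse it as boolean false
  resourcePolicy:
    containerPolicies:
      - containerName: "*"        # apply to all containers in the pod
        minAllowed:
          cpu: 100m
          memory: 64Mi
        maxAllowed:
          cpu: "1"
          memory: 1Gi
        controlledResources: ["cpu", "memory"]
```

Recommendations then appear in the object's status (e.g., via kubectl describe vpa my-service-vpa) and can inform request/limit tuning before switching to Initial or Auto mode.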
Robust monitoring is crucial for maintaining the health and performance of your microservices. This section details the configuration of Prometheus and Grafana.
Prometheus is an open-source monitoring system with a time-series database. It is the de-facto standard for Kubernetes monitoring.
Key Configurations:
ServiceMonitor / PodMonitor: Custom Resource Definitions (CRDs) used by the Prometheus Operator to discover and scrape metrics from services and pods.
* selector: Defines which services/pods to scrape based on labels.
* endpoints: Specifies the port and path for metrics (e.g., /metrics on port http-metrics).
PrometheusRule: Defines alerting rules that Prometheus evaluates. When a rule's condition is met, an alert is fired.
Alertmanager: Handles alerts sent by Prometheus, including deduplicating, grouping, and routing them to various notification channels (e.g., Slack, PagerDuty, email).
* alertmanager.yaml: Configures receivers, routes, and inhibition rules.
Actionable Insight: For each microservice, a ServiceMonitor resource will be configured to automatically discover and scrape metrics from the /metrics endpoint (standard for Prometheus exposition). Essential PrometheusRules will be created to monitor critical metrics (e.g., error rates, latency, resource saturation) and trigger alerts via Alertmanager to the designated notification channels.
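A sketch of the ServiceMonitor and an error-rate PrometheusRule described above. Names, labels, and the metric name (http_requests_total) are assumptions; the release label must match your Prometheus Operator's serviceMonitorSelector:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-service-monitor        # hypothetical name
  labels:
    release: prometheus           # must match the Operator's selector
spec:
  selector:
    matchLabels:
      app: my-service
  endpoints:
    - port: http-metrics          # named port on the Service
      path: /metrics
      interval: 30s
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-service-alerts
spec:
  groups:
    - name: my-service.rules
      rules:
        - alert: HighErrorRate
          # 5xx responses above 5% of total traffic over 5 minutes
          expr: sum(rate(http_requests_total{job="my-service",status=~"5.."}[5m])) / sum(rate(http_requests_total{job="my-service"}[5m])) > 0.05
          for: 10m
          labels:
            severity: critical
          annotations:
            summary: "my-service 5xx error rate above 5%"
```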
Grafana is an open-source platform for monitoring and observability, allowing you to query, visualize, alert on, and explore your metrics, logs, and traces.
Key Configurations:
* Cluster Overview: CPU, memory, network usage, pod status.
* Service-Specific Dashboards: Request rates, error rates, latency, resource consumption, active connections, database queries, etc.
* Istio Dashboards: Traffic flow, mTLS status, request metrics through the service mesh.
* Alerting Integration: Grafana can also trigger alerts based on dashboard panels, complementing Prometheus Alertmanager.
Actionable Insight: A set of standardized Grafana dashboards will be provided, including a cluster-wide overview and dedicated dashboards for each microservice. These dashboards will offer real-time visibility into application health, performance, and resource utilization, leveraging the metrics collected by Prometheus.
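Grafana can pick up Prometheus as a data source automatically through file-based provisioning; a minimal sketch (the in-cluster Prometheus URL is an assumption that depends on your install):

```yaml
# Mounted at /etc/grafana/provisioning/datasources/prometheus.yaml (illustrative)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-operated.monitoring.svc:9090   # assumed in-cluster service name
    isDefault: true
```

Dashboards can be provisioned the same way, so the standardized dashboard set ships with the cluster rather than being imported by hand.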
This comprehensive suite of configurations ensures that your microservices are deployed efficiently, scaled effectively, secured robustly, and monitored proactively, providing a solid foundation for operational excellence in Kubernetes.
This document provides a comprehensive and detailed set of Kubernetes deployment artifacts and configurations for your microservices, designed for production-readiness, scalability, and observability. This output covers core Kubernetes manifests, Helm chart structures, service mesh integration, scaling policies, and monitoring configurations, presented with best practices and actionable examples.
This deliverable outlines the foundational components required to deploy, manage, scale, and monitor your microservices on Kubernetes. We've focused on creating a robust, maintainable, and observable environment.
We'll use a hypothetical user-service as an example to illustrate the core Kubernetes manifests. This service will handle user-related operations within your microservice architecture.
Deployment (deployment.yaml)
This manifest defines how your user-service pods are created and managed, ensuring desired replica counts and rolling updates.
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service-deployment
labels:
app: user-service
spec:
replicas: 3 # Start with 3 replicas for high availability
selector:
matchLabels:
app: user-service
template:
metadata:
labels:
app: user-service
spec:
serviceAccountName: user-service-sa # Recommended for fine-grained permissions
containers:
- name: user-service
image: your-registry/user-service:v1.0.0 # Replace with your actual image
ports:
- containerPort: 8080 # The port your application listens on
envFrom:
- configMapRef:
name: user-service-config # Reference ConfigMap for non-sensitive config
- secretRef:
name: user-service-secrets # Reference Secret for sensitive config
resources:
requests: # Minimum resources required
memory: "128Mi"
cpu: "200m"
limits: # Maximum resources allowed
memory: "512Mi"
cpu: "500m"
livenessProbe: # Checks if the container is still running and healthy
httpGet:
path: /actuator/health/liveness # Example health endpoint
port: 8080
initialDelaySeconds: 15
periodSeconds: 20
timeoutSeconds: 5
failureThreshold: 3
readinessProbe: # Checks if the container is ready to serve traffic
httpGet:
path: /actuator/health/readiness # Example readiness endpoint
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 3
failureThreshold: 3
securityContext: # Best practice for container security
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 10001 # Example non-root user ID
# imagePullSecrets: # Uncomment if using a private registry that requires authentication
# - name: regcred
Service (service.yaml)
This manifest exposes your user-service pods via a stable IP address and DNS name within the cluster.
apiVersion: v1
kind: Service
metadata:
name: user-service
labels:
app: user-service
spec:
selector:
app: user-service # Selects pods with this label
ports:
- protocol: TCP
port: 80 # The port clients will connect to
targetPort: 8080 # The port your application listens on inside the pod
type: ClusterIP # Exposes the service only within the cluster. For external access, use Ingress or a LoadBalancer service.
ConfigMap (configmap.yaml)
This manifest stores non-sensitive configuration data for your user-service, allowing for easy updates without redeploying the application image.
apiVersion: v1
kind: ConfigMap
metadata:
name: user-service-config
data:
APPLICATION_NAME: "User Service"
DATABASE_HOST: "database-cluster-ip" # Example database host
DATABASE_PORT: "5432"
LOG_LEVEL: "INFO"
Secret (secret.yaml)
This manifest stores sensitive data (e.g., database credentials, API keys) securely. For production, it's recommended to use a secrets management solution such as HashiCorp Vault or Kubernetes External Secrets.
apiVersion: v1
kind: Secret
metadata:
name: user-service-secrets
type: Opaque # Or kubernetes.io/dockerconfigjson for image pull secrets
data:
DATABASE_USERNAME: "dXNlcm5hbWU=" # base64 encoded "username"
DATABASE_PASSWORD: "cGFzc3dvcmQ=" # base64 encoded "password"
API_KEY: "YWJjZGUxMjM0NQ==" # base64 encoded "abcde12345"
Note: For production environments, consider using Kubernetes External Secrets or a secrets management solution like HashiCorp Vault to inject secrets dynamically, rather than storing them directly in Git.
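As an illustration of that approach, an ExternalSecret can materialize the same Kubernetes Secret from an external backend. This sketch assumes the External Secrets Operator is installed and a ClusterSecretStore named vault-backend exists; the remote key path is hypothetical:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: user-service-secrets
spec:
  refreshInterval: 1h             # re-sync from the backend hourly
  secretStoreRef:
    name: vault-backend           # hypothetical ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: user-service-secrets    # Kubernetes Secret the operator creates
  data:
    - secretKey: DATABASE_PASSWORD
      remoteRef:
        key: user-service/db      # hypothetical path in the backend
        property: password
```

The generated Secret is consumed by the Deployment exactly as before (via secretRef), so nothing sensitive needs to live in Git.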
Helm streamlines the deployment and management of applications on Kubernetes by packaging them into "Charts." A Helm Chart defines all the Kubernetes resources for an application, along with templating capabilities and configuration management.
Configuration is centralized in values.yaml for environment-specific configurations.
user-service-chart/
├── Chart.yaml # Information about the chart
├── values.yaml # Default configuration values for the chart
├── templates/ # Directory for Kubernetes manifest templates
│ ├── deployment.yaml # Deployment definition
│ ├── service.yaml # Service definition
│ ├── configmap.yaml # ConfigMap definition
│ ├── secret.yaml # Secret definition (consider external secrets for production)
│ ├── hpa.yaml # Horizontal Pod Autoscaler definition
│ ├── _helpers.tpl # Reusable template snippets
│ └── serviceaccount.yaml # Service Account definition
├── charts/ # Directory for chart dependencies (optional)
└── README.md # Documentation for the chart
user-service-chart/Chart.yaml
apiVersion: v2
name: user-service
description: A Helm chart for the User Service microservice
type: application # Chart type must be lowercase: "application" or "library"
version: 0.1.0 # Chart version
appVersion: "1.0.0" # Application version
user-service-chart/values.yaml
replicaCount: 3
image:
repository: your-registry/user-service
tag: v1.0.0
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 80
targetPort: 8080
config:
applicationName: "User Service"
databaseHost: "database-cluster-ip"
databasePort: "5432"
logLevel: "INFO"
secrets:
databaseUsername: "username" # base64-encoded in the Secret template via b64enc
databasePassword: "password"
apiKey: "abcde12345"
resources:
requests:
cpu: 200m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
serviceAccount:
create: true
name: "user-service-sa"
livenessProbe:
path: /actuator/health/liveness
port: 8080
initialDelaySeconds: 15
periodSeconds: 20
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
path: /actuator/health/readiness
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 3
failureThreshold: 3
# HPA configuration (enabled: false by default, enable when needed)
autoscaling:
enabled: false
minReplicas: 3
maxReplicas: 10
targetCPUUtilizationPercentage: 80
targetMemoryUtilizationPercentage: 70
user-service-chart/templates/deployment.yaml (simplified example using values)
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "user-service.fullname" . }}
labels:
{{- include "user-service.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "user-service.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "user-service.selectorLabels" . | nindent 8 }}
spec:
serviceAccountName: {{ include "user-service.serviceAccountName" . }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.service.targetPort }}
env:
- name: APPLICATION_NAME
value: {{ .Values.config.applicationName | quote }}
- name: DATABASE_HOST
value: {{ .Values.config.databaseHost | quote }}
- name: DATABASE_PORT
value: {{ .Values.config.databasePort | quote }}
- name: LOG_LEVEL
value: {{ .Values.config.logLevel | quote }}
- name: DATABASE_USERNAME
valueFrom:
secretKeyRef:
name: {{ include "user-service.fullname" . }}-secrets
key: databaseUsername
- name: DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: {{ include "user-service.fullname" . }}-secrets
key: databasePassword
resources:
{{- toYaml .Values.resources | nindent 10 }}
livenessProbe:
httpGet:
path: {{ .Values.livenessProbe.path }}
port: {{ .Values.livenessProbe.port }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
readinessProbe:
httpGet:
path: {{ .Values.readinessProbe.path }}
port: {{ .Values.readinessProbe.port }}
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
Note: The _helpers.tpl file would contain reusable Go template functions like include "user-service.fullname" . to generate consistent names and labels.
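The templates/hpa.yaml listed in the chart layout would render the autoscaling values shown earlier. A minimal sketch, assuming the same helper names as the deployment template:

```yaml
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "user-service.fullname" . }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "user-service.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
```

Because the whole manifest is wrapped in the autoscaling.enabled conditional, flipping that single value (e.g., --set autoscaling.enabled=true) turns HPA on per environment without touching the templates.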
A service mesh like Istio provides traffic management, security, and observability features without requiring changes to application code.
Istio Configuration for user-service
Assuming Istio is already installed in your cluster and the user-service namespace is enabled for sidecar injection.
gateway.yaml (for external ingress)
This defines an Istio Gateway, allowing external traffic into the mesh.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: user-service-gateway
namespace: user-service-namespace # Replace with your namespace
spec:
selector:
istio: ingressgateway # Selects the default Istio ingress gateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "user-api.your-domain.com" # Replace with your domain
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: user-service-tls-cert # Kubernetes Secret containing TLS certs
hosts:
- "user-api.your-domain.com"
virtualservice.yaml (traffic routing)
This routes traffic from the Gateway to your user-service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: user-service-vs
namespace: user-service-namespace
spec:
hosts:
- "user-api.your-domain.com"
- "user-service" # Also allows internal routing within the mesh
gateways:
- user-service-gateway # Binds to the Gateway defined above
- mesh # Allows internal mesh traffic
http:
- match:
- uri:
prefix: /api/v1/users # Example path prefix
route:
- destination:
host: user-service # Refers to the Kubernetes Service name
port:
number: 80 # Service port
subset: v1 # Route to a specific subset (if defined in DestinationRule)
weight: 100
- route: # Default route for internal calls
- destination:
host: user-service
port:
number: 80
weight: 100
destinationrule.yaml (load balancing, subsets)
This defines policies that apply to traffic for a service after routing has occurred. It's crucial for advanced traffic management like canary deployments.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: user-service-dr
namespace: user-service-namespace
spec:
host: user-service # Matches the Kubernetes Service