This document provides detailed, professional CI/CD pipeline configurations for GitHub Actions, GitLab CI, and Jenkins. These configurations are designed to be comprehensive, covering common stages such as linting, testing, building, and deployment, and are tailored for a generic web application scenario (e.g., Node.js based, containerized with Docker).
This deliverable provides ready-to-adapt CI/CD pipeline configurations, enabling you to automate your software development lifecycle efficiently. We've focused on clarity, best practices, and actionable examples for three leading CI/CD platforms: GitHub Actions, GitLab CI, and Jenkins. Each configuration includes stages for code quality (linting), validation (testing), artifact creation (building a Docker image), and environment promotion (deployment to staging and production).
A robust CI/CD pipeline typically orchestrates several key stages to ensure code quality, reliability, and rapid delivery.
**Linting**

* Purpose: Automatically checks code for stylistic errors, potential bugs, and adherence to coding standards without executing the code.
* Benefits: Improves code consistency, readability, and maintainability; catches issues early in the development cycle.
* Examples: ESLint (JavaScript), Flake8 (Python), Checkstyle (Java).
**Testing**

* Purpose: Executes various tests to validate the functionality, performance, and reliability of the application.
* Types:
* Unit Tests: Verify individual components or functions in isolation.
* Integration Tests: Ensure different modules or services work correctly together.
* End-to-End (E2E) Tests: Simulate user interaction to validate the entire application flow.
* Benefits: Reduces bugs, ensures new features don't break existing ones, provides confidence in changes.
**Build**

* Purpose: Compiles source code, resolves dependencies, and packages the application into a deployable artifact. For containerized applications, this involves building a Docker image.
* Benefits: Creates consistent, versioned artifacts that can be deployed across environments.
**Deployment**

* Purpose: Automates the release of the application to various environments (e.g., development, staging, production).
* Stages:
* Staging Deployment: Deploys to a pre-production environment for final testing and validation. Often fully automated.
* Production Deployment: Deploys to the live environment, making the application accessible to end-users. Often requires manual approval for critical releases.
* Benefits: Accelerates time-to-market, reduces manual errors, ensures consistent deployments.
For these configurations, we assume a common scenario:
* A Node.js web application, containerized with Docker.
* npm for package management.

GitHub Actions allows you to automate, customize, and execute your software development workflows directly in your repository.
**File**: `.github/workflows/main.yml`
---

## 5. GitLab CI Configuration

GitLab CI/CD is a powerful tool integrated directly into GitLab for continuous integration, delivery, and deployment.

**File**: `.gitlab-ci.yml`
This document presents a comprehensive analysis of the foundational infrastructure requirements for establishing a robust, scalable, and secure CI/CD pipeline. This analysis is crucial for ensuring that the generated pipeline configurations (in subsequent steps) are tailored, efficient, and aligned with modern DevOps best practices.
A successful CI/CD pipeline is built upon a well-architected infrastructure that supports every stage from code commit to production deployment. This initial analysis identifies the critical infrastructure components and considerations necessary to deliver automated, reliable, and frequent software releases. Our goal is to lay the groundwork for a pipeline that not only automates tasks but also enhances collaboration, improves code quality, and accelerates time-to-market.
The following components are fundamental to nearly any modern CI/CD pipeline. Understanding these areas is the first step in defining your specific needs.
**Source Control Management (SCM)**

* Options: GitHub, GitLab, Bitbucket, Azure DevOps Repos.
* Infrastructure Needs: Reliable hosting (cloud-managed is typical), integration with CI/CD tools via webhooks/APIs, access control.
**CI/CD Platform**

* Options: GitHub Actions, GitLab CI, Jenkins, Azure DevOps Pipelines, CircleCI, Travis CI.
* Infrastructure Needs: Runner/agent infrastructure (self-hosted or cloud-managed), secure access to SCM, artifact storage, secret management.
**Build Environments**

* Options: Docker containers, Virtual Machines (VMs), language-specific SDKs/runtimes (Node.js, Python, Java, .NET, Go, Ruby).
* Infrastructure Needs: Consistent, reproducible environments; sufficient compute (CPU, RAM) and storage; pre-installed tools. Docker is highly recommended for consistency and isolation.
**Artifact Repository**

* Options: Nexus, Artifactory, AWS S3/ECR, Azure Blob Storage/ACR, GCP Cloud Storage/Artifact Registry, Docker Hub.
* Infrastructure Needs: High availability, scalability, secure access, versioning, retention policies.
**Testing Infrastructure**

* Options: Dedicated test environments (ephemeral or persistent), testing frameworks (JUnit, Pytest, Jest, Selenium, Cypress), test data management.
* Infrastructure Needs: Isolated environments, parallel test execution capabilities, reporting mechanisms.
**Deployment Targets**

* Options: Kubernetes (EKS, AKS, GKE, OpenShift), Virtual Machines (AWS EC2, Azure VMs, GCP Compute Engine), Serverless (AWS Lambda, Azure Functions, GCP Cloud Functions), Platform-as-a-Service (PaaS) (Heroku, AWS Elastic Beanstalk, Azure App Service), On-premise servers.
* Infrastructure Needs: Network access, secure credentials, scaling capabilities, monitoring hooks, rollback mechanisms.
**Secrets Management**

* Options: HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, GCP Secret Manager, CI/CD platform built-in secrets.
* Infrastructure Needs: Strong encryption, access control (least privilege), audit trails, integration with CI/CD platform.
**Monitoring & Logging**

* Options: Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Datadog, CloudWatch, Azure Monitor, GCP Operations.
* Infrastructure Needs: Centralized log aggregation, metric collection, alerting, dashboarding.
**Security Tooling**

* Options: SAST (Static Application Security Testing - SonarQube), DAST (Dynamic Application Security Testing - OWASP ZAP), SCA (Software Composition Analysis - Snyk, Trivy, Aqua Security), container image scanning.
* Infrastructure Needs: Integration with CI/CD, reporting, policy enforcement.
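As a concrete illustration of CI integration and policy enforcement, container image scanning can be wired into a pipeline with only a few lines. A sketch of a GitHub Actions step using the Trivy action follows; the image name and severity threshold are placeholders to adapt:

```yaml
# Sketch: scan a locally built image and fail the build on serious findings.
# 'my-app:latest' is a placeholder for an image built earlier in the job.
- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: my-app:latest
    severity: HIGH,CRITICAL
    exit-code: '1'  # Non-zero exit fails the pipeline (policy enforcement)
```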
Based on general best practices and typical client requirements, here's an analysis of critical infrastructure considerations:
* Need: CI/CD pipelines must scale horizontally to handle concurrent builds and deployments, especially in growing organizations with multiple teams and projects.
* Analysis: Cloud-native CI/CD platforms (GitHub Actions, GitLab CI's shared runners) offer inherent scalability. For self-hosted solutions (Jenkins), dynamic provisioning of build agents (e.g., using Kubernetes or cloud VMs) is essential. The artifact repository must also scale to handle increasing storage demands.
* Need: Pipeline infrastructure must be resilient to failures to ensure continuous delivery.
* Analysis: Distributed CI/CD platforms (like cloud-based ones) typically offer high availability out-of-the-box. For self-hosted Jenkins, implementing master-agent architecture with redundant masters and persistent storage is critical. Deployment targets (e.g., Kubernetes clusters) should be configured for high availability across multiple availability zones.
* Need: Protecting code, build artifacts, credentials, and deployment processes from unauthorized access or tampering.
* Analysis: Strict access control (RBAC), network segmentation, regular vulnerability scanning of build environments and container images, and dedicated secrets management solutions are non-negotiable. All communication should be encrypted (TLS).
* Need: Balancing performance and reliability with operational expenditure.
* Analysis: Cloud services offer pay-as-you-go models, but costs can escalate without proper management. Utilizing ephemeral build environments (e.g., Docker containers that are spun up and torn down), optimizing build times, and implementing intelligent resource provisioning for self-hosted runners can significantly reduce costs. Leveraging managed services where appropriate can reduce operational overhead.
* Need: Ease of managing, updating, and monitoring the CI/CD infrastructure.
* Analysis: Infrastructure as Code (IaC) principles (e.g., Terraform) should be applied to manage CI/CD infrastructure. Centralized logging and monitoring solutions provide critical insights into pipeline performance, failures, and resource utilization, enabling proactive maintenance and rapid troubleshooting.
Modern CI/CD infrastructure is rapidly evolving, driven by cloud adoption, containerization, and a strong focus on security and efficiency.
* Insight: A significant shift towards cloud-managed CI/CD services (e.g., GitHub Actions, GitLab CI, Azure DevOps) is observed. These platforms offer superior scalability, reduced operational overhead, and tighter integration with cloud ecosystems compared to traditional self-hosted solutions.
* Data Point: According to the Cloud Native Computing Foundation (CNCF) 2022 survey, 96% of organizations are using or evaluating containers, with Kubernetes as the orchestration layer for 89% of them, directly impacting CI/CD deployment targets.
* Insight: Docker and containerization have become the de-facto standard for creating consistent, reproducible, and isolated build environments. This eliminates "it works on my machine" issues and simplifies dependency management.
* Data Point: Docker reported over 14 billion image pulls monthly, indicating pervasive adoption in development and CI/CD workflows.
* Insight: Integrating security scanning (SAST, DAST, SCA, secret scanning) directly into the CI pipeline is a critical trend. Identifying vulnerabilities early significantly reduces remediation costs and risks.
* Data Point: A Snyk report showed that fixing vulnerabilities in production costs 30x more than fixing them during development.
* Insight: Managing not just application code but also infrastructure (CI/CD runners, deployment targets, network configurations) as code using tools like Terraform, CloudFormation, or Pulumi ensures consistency, version control, and auditability.
* Insight: Extending IaC to deployments, where the desired state of infrastructure and applications is declared in Git and continuously synchronized by an automated operator (e.g., Argo CD, Flux CD). This enhances traceability and simplifies rollbacks.
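To make the GitOps model concrete, here is a minimal Argo CD `Application` manifest sketch; the repository URL, path, and namespaces are illustrative placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app            # Placeholder application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/my-app-manifests.git  # Placeholder manifest repo
    targetRevision: main
    path: k8s/overlays/staging                             # Placeholder manifest path
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app-staging                              # Placeholder target namespace
  syncPolicy:
    automated:
      prune: true     # Delete resources that were removed from Git
      selfHeal: true  # Revert manual drift back to the Git-declared state
```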
Based on the analysis and current industry trends, we recommend the following for optimal CI/CD infrastructure:
* Recommendation: Choose a single, primary SCM (e.g., GitHub or GitLab) and leverage its integrated CI/CD capabilities (GitHub Actions or GitLab CI) for seamless integration, reduced context switching, and often lower operational overhead.
* Rationale: Reduces complexity, leverages existing platform features, and simplifies access control.
* Recommendation: Utilize Docker containers for all build and test stages. Define build environments using Dockerfiles to ensure consistency across all pipeline runs.
* Rationale: Guarantees reproducible builds, isolates dependencies, and simplifies environment management.
* Recommendation: Adopt a dedicated artifact repository manager (e.g., Nexus, Artifactory) or cloud-native services (AWS ECR/S3, Azure ACR/Blob, GCP Artifact Registry) for storing all build outputs, including Docker images.
* Rationale: Provides a single source of truth for binaries, enables robust versioning, and enhances security by controlling access to deployable assets.
* Recommendation: Utilize a dedicated secrets management solution (e.g., HashiCorp Vault, cloud-native secret managers) tightly integrated with your CI/CD platform. Avoid hardcoding secrets.
* Rationale: Enhances security by centralizing, encrypting, and auditing access to sensitive credentials.
* Recommendation: Manage all deployment targets (Kubernetes clusters, VMs, networking) and CI/CD runner infrastructure using IaC tools (e.g., Terraform).
* Rationale: Ensures environment consistency, enables version control of infrastructure, facilitates rapid provisioning, and supports disaster recovery.
* Recommendation: Integrate security scanning tools (SAST, SCA, DAST, container image scanning) directly into your CI pipeline stages. Configure gates to fail builds if critical vulnerabilities are detected.
* Rationale: Catches security issues early, reduces costs, and builds security into the development lifecycle.
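In GitLab, for example, baseline SAST and dependency scanning can be enabled by including the platform's managed job templates; a minimal `.gitlab-ci.yml` fragment (verify template availability on your GitLab tier):

```yaml
# .gitlab-ci.yml fragment: enable GitLab's managed security scanning jobs.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
```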
* Recommendation: Implement centralized logging and monitoring for both the CI/CD pipeline (runner health, build times, success/failure rates) and the deployed applications.
* Rationale: Provides visibility into pipeline performance, helps in rapid troubleshooting, and ensures application health post-deployment.
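As one illustration of alerting on pipeline health, a Prometheus alerting rule might look like the following sketch; the metric `ci_pipeline_failures_total` is hypothetical and depends on which exporter you run:

```yaml
groups:
  - name: cicd-pipeline
    rules:
      - alert: HighPipelineFailureRate
        # 'ci_pipeline_failures_total' is a hypothetical metric from your CI exporter.
        expr: rate(ci_pipeline_failures_total[15m]) > 0.2
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "CI pipeline failure rate is elevated"
```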
To generate the most effective and tailored CI/CD pipeline configurations, we require specific details about your project. Please provide the following information:
* Are you currently using GitHub, GitLab, Bitbucket, Azure DevOps, or something else?
* Is it cloud-hosted or self-hosted?
* Do you have a preference for GitHub Actions, GitLab CI, or Jenkins? (If no preference, we will recommend based on SCM and other factors).
* Are there existing Jenkins instances or runners that need to be leveraged?
* Programming Languages (e.g., Python, Java, Node.js, Go, .NET, Ruby).
* Frameworks (e.g., Spring Boot, React, Angular, Django, Flask).
* Build Tools (e.g., Maven, Gradle, npm, Yarn, Pip, Go Modules).
* Cloud Provider: AWS, Azure, GCP, On-premise?
* Environment Type: Kubernetes (EKS, AKS, GKE, OpenShift), VMs, Serverless (Lambda, Azure Functions), PaaS (App Service, Elastic Beanstalk)?
* Are these environments already provisioned? If so, what are the details (cluster names, VM types, etc.)?
* Do you have
```yaml
stages:
  - lint
  - test
  - build
  - deploy_staging
  - deploy_production

variables:
  NODE_VERSION: '18'  # Must match a published node image tag (node:18-alpine)
  DOCKER_IMAGE_NAME: $CI_REGISTRY_IMAGE  # Uses GitLab's built-in image name variable
  DOCKER_REGISTRY: $CI_REGISTRY          # Uses GitLab's built-in registry variable

default:
  image: node:${NODE_VERSION}-alpine  # Base image for linting and testing
  tags:
    - docker  # Use a GitLab Runner configured with the 'docker' executor
  cache:
    paths:
      - node_modules/

lint_job:
  stage: lint
  script:
    - npm ci
    - npm run lint

test_job:
  stage: test
  script:
    - npm ci
    - npm test

build_docker_image:
  stage: build
  image: docker:latest  # Use a Docker image for building Docker images
  services:
    - docker:dind  # Docker-in-Docker for building images
  variables:
    DOCKER_HOST: tcp://docker:2375  # Required for dind
    DOCKER_TLS_CERTDIR: ""          # Required for dind
  script:
    # Built-in GitLab CI/CD variables; --password-stdin avoids exposing the token in process lists
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker build -t $DOCKER_IMAGE_NAME:$CI_COMMIT_SHA -t $DOCKER_IMAGE_NAME:latest .
    - docker push $DOCKER_IMAGE_NAME:$CI_COMMIT_SHA
    - docker push $DOCKER_IMAGE_NAME:latest
  rules:
    - if: $CI_COMMIT_BRANCH == "main" || $CI_PIPELINE_SOURCE == "merge_request_event"

deploy_staging_job:
  stage: deploy_staging
  image: alpine/git  # A lightweight image for deployment scripts
  script:
    - echo "Deploying $DOCKER_IMAGE_NAME:$CI_COMMIT_SHA to Staging..."
    # Replace this with your actual deployment commands.
    # Access secrets via GitLab CI/CD Variables (Settings -> CI/CD -> Variables).
    # Example: Using SSH to deploy
    # -
```
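The configuration breaks off in the staging job's script. For completeness, a production deployment job in GitLab CI would typically be gated on manual approval; a hedged sketch (the image and deployment commands are placeholders):

```yaml
deploy_production_job:
  stage: deploy_production
  image: alpine/git  # Placeholder; use an image with your deployment tooling
  script:
    - echo "Deploying $DOCKER_IMAGE_NAME:$CI_COMMIT_SHA to Production..."
    # Replace with your actual deployment commands.
  environment:
    name: production
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual  # Requires explicit approval in the GitLab UI
```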
We have generated example CI/CD pipeline configurations for the following popular platforms: GitHub Actions, GitLab CI, and Jenkins. Each pipeline template includes four core stages: linting, testing, building, and deployment.
Before diving into platform-specific configurations, let's detail the purpose and best practices for each core stage:
**Linting**

* Run early in the pipeline to fail fast.
* Integrate with pre-commit hooks for immediate feedback.
* Use language-specific linters (e.g., ESLint for JavaScript, Black/Flake8 for Python, Checkstyle for Java, golangci-lint for Go).
* Configure linters with strict rules suitable for your team's standards.
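As a minimal illustration for the Node.js scenario used here, an ESLint configuration can be kept in `.eslintrc.yml` (the legacy config format); the rule choices below are placeholders for your team's standards:

```yaml
# .eslintrc.yml — minimal ESLint configuration (legacy config format)
env:
  node: true
  es2022: true
extends:
  - eslint:recommended
rules:
  no-unused-vars: error    # Placeholder rule choices; align with team standards
  eqeqeq: [error, always]
```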
**Testing**

* Unit Tests: Test individual components or functions in isolation.
* Integration Tests: Test the interaction between different components or services.
* End-to-End (E2E) Tests: Simulate user scenarios across the entire application stack.
* Security Scans (SAST/DAST): Static/Dynamic Application Security Testing (optional, but highly recommended).
* Run unit tests first as they are fastest.
* Ensure comprehensive test coverage.
* Generate test reports for visibility (e.g., JUnit XML, Cobertura).
* Integrate security scanning tools where applicable.
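For instance, GitLab can surface test results in merge requests when a job exports a JUnit-format report; a sketch, assuming your test runner is configured (e.g., via a reporter such as jest-junit) to write `junit.xml`:

```yaml
test_job:
  stage: test
  script:
    - npm ci
    - npm test  # Assumes the runner writes a JUnit XML report
  artifacts:
    when: always
    reports:
      junit: junit.xml  # GitLab parses this to show results in merge requests
```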
**Build**

* Use consistent build environments (e.g., Docker images for build agents).
* Cache dependencies to speed up subsequent builds.
* Tag artifacts/images with meaningful versions (e.g., Git commit SHA, semantic version).
* Store build artifacts in a secure, accessible repository (e.g., Docker Registry, S3 bucket, Nexus).
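In GitHub Actions, tag generation along these lines is commonly delegated to `docker/metadata-action`; a sketch of two steps (the image name is a placeholder):

```yaml
- name: Extract image metadata
  id: meta
  uses: docker/metadata-action@v5
  with:
    images: registry.example.com/my-app  # Placeholder image name
    tags: |
      type=sha                         # Tag with the Git commit SHA
      type=semver,pattern={{version}}  # Tag with the semantic version on releases
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    push: true
    tags: ${{ steps.meta.outputs.tags }}
```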
**Deployment**

* Automated Staging: Automatically deploy to a staging environment upon successful build and test.
* Manual Production Approval: Require manual approval for production deployments.
* Immutable Infrastructure: Deploy new instances rather than updating existing ones.
* Rollback Strategy: Ensure a clear and tested rollback plan.
* Environment Variables & Secrets: Manage environment-specific configurations and sensitive data securely (e.g., using vault, Kubernetes secrets, cloud secrets managers).
* Monitoring & Health Checks: Integrate post-deployment checks and monitoring.
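On Kubernetes, several of these practices map directly onto the Deployment spec; a hedged sketch in which the names, image, and probe path are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app  # Placeholder
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0  # Keep full capacity during rollout
      maxSurge: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app-container
          image: registry.example.com/my-app:abc123  # Placeholder image:tag
          readinessProbe:           # Gate traffic on a passing health check
            httpGet:
              path: /healthz        # Placeholder health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
```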
GitHub Actions provides a flexible way to automate workflows directly within your GitHub repository. Workflows are defined in YAML files (.github/workflows/your-workflow.yml).
**Triggers**: pushes to the `main` branch, pull requests, manual dispatch. **File**: `.github/workflows/main.yml`
```yaml
name: CI/CD Pipeline

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
  workflow_dispatch: # Allows manual trigger

env:
  NODE_VERSION: '18.x'                        # Specify Node.js version
  AWS_REGION: 'us-east-1'                     # Specify your AWS region
  ECR_REPOSITORY: 'my-node-app'               # Your ECR repository name
  S3_BUCKET_NAME: 'my-static-website-bucket'  # Your S3 bucket name for static assets

jobs:
  lint:
    name: Lint Code
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run Lint
        run: npm run lint # Assumes a 'lint' script in package.json

  test:
    name: Run Tests
    runs-on: ubuntu-latest
    needs: lint # This job depends on the lint job
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run Unit Tests
        run: npm test # Assumes a 'test' script in package.json
        env:
          CI: true # Prevents interactive watch mode
      - name: Upload test results (optional)
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: junit.xml # Or whatever your test reporter outputs

  build-and-push-image:
    name: Build & Push Docker Image
    runs-on: ubuntu-latest
    needs: test # This job depends on the test job
    outputs:
      image_url: ${{ steps.build-image.outputs.image_url }} # Expose the image URL to downstream jobs
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push Docker image
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }} # Use commit SHA as image tag
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          # $GITHUB_ENV is scoped to a single job, so a job output is used
          # to pass the image URL to the deploy jobs.
          echo "image_url=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT
      - name: Output Image URL
        run: echo "Docker image pushed: ${{ steps.build-image.outputs.image_url }}"

  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build-and-push-image
    environment: staging # Define a 'staging' environment in GitHub repo settings
    if: github.ref == 'refs/heads/main' # Only deploy main branch to staging
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      # Example: Deploying static assets to S3
      - name: Deploy static assets to S3
        run: |
          # Assuming your build process creates a 'dist' directory with static files
          aws s3 sync ./dist s3://${{ env.S3_BUCKET_NAME }}/ --delete
      # Example: Deploying the container image to EKS (Kubernetes)
      - name: Update K8s deployment for staging
        uses: actions-hub/kubectl@master
        env:
          KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_STAGING }} # Kubeconfig for staging cluster
        with:
          args: set image deployment/my-app my-app-container=${{ needs.build-and-push-image.outputs.image_url }} -n my-namespace-staging
      - name: Verify Staging Deployment
        run: echo "Staging deployment initiated for image ${{ needs.build-and-push-image.outputs.image_url }}"
        # Add actual verification steps here, e.g., curl to check a health endpoint

  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: [build-and-push-image, deploy-staging] # Production depends on staging; the build job is listed so its outputs are available
    environment: production # Define a 'production' environment in GitHub repo settings
    if: github.ref == 'refs/heads/main' # Only deploy main branch to production
    # Requires manual approval in GitHub Environments
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      # Example: Deploying the container image to EKS (Kubernetes)
      - name: Update K8s deployment for production
        uses: actions-hub/kubectl@master
        env:
          KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_PRODUCTION }} # Kubeconfig for production cluster
        with:
          args: set image deployment/my-app my-app-container=${{ needs.build-and-push-image.outputs.image_url }} -n my-namespace-production
      - name: Verify Production Deployment
        run: echo "Production deployment initiated for image ${{ needs.build-and-push-image.outputs.image_url }}"
        # Add actual verification steps here, e.g., curl to check a health endpoint
```
Customization notes:

* Update `NODE_VERSION` in `env` to match your project.
* Adapt `npm ci`, `npm run lint`, `npm test`, and `npm run build` to your project's specific commands.
* Change `ECR_REPOSITORY` and the `docker build` command if you're not using Docker or have a different registry.
* AWS S3: Change `S3_BUCKET_NAME` and the `aws s3 sync` source path (`./dist`).
* Kubernetes (EKS): Update deployment/my-app, my-app-container, and my-namespace-staging/production to match your Kubernetes deployment specifics. Ensure KUBE_CONFIG_STAGING and KUBE_CONFIG_PRODUCTION secrets are configured.
* Other Targets: Replace AWS/Kubernetes steps with commands for your target environment (e.g., Azure App Service, Google Cloud Run, Heroku, SSH deployment).
GitLab CI/CD is tightly integrated with GitLab repositories, using a .gitlab-ci.yml file at the root of your project.
**Triggers**: pushes to the `main` branch, merge requests, scheduled pipelines. **File**: `.gitlab-ci.yml`
```yaml
stages:
  - lint
  - test
  - build
  - deploy:staging
  - deploy:production

variables:
  NODE_VERSION: '18.x'
  AWS_REGION: 'us-east-1'
  ECR_REPOSITORY: 'my-node-app'
  S3_BUCKET_NAME: 'my-static-website-bucket'
  DOCKER_HOST: tcp://docker:2375  # Required for Docker-in-Docker
  DOCKER_TLS_CERTDIR: ""          # Required for Docker-in-Docker
```