This deliverable provides detailed, ready-to-use boilerplate CI/CD pipeline configurations for three popular platforms: GitHub Actions, GitLab CI, and Jenkins. Each configuration automates the software delivery lifecycle for a typical full-stack web application, covering linting, testing, building, and deployment, and is structured to support a modern development workflow from code commit to production.
To provide concrete and actionable examples, the following assumptions have been made regarding the application stack and deployment target. These can be easily customized to fit your specific project requirements.
* Frontend (Node.js):
  * Linting: ESLint
  * Testing: Jest
  * Building: `npm run build` generates static assets
* Backend (Python):
  * Linting: Flake8 or Black
  * Testing: Pytest
  * Building: Poetry/Pipenv/pip for dependency management; no specific build step beyond dependency installation, and perhaps a simple setup.py if packaging.
```
.
├── frontend/
│   ├── package.json
│   └── ...
├── backend/
│   ├── requirements.txt (or pyproject.toml)
│   └── ...
├── Dockerfile
├── docker-compose.yml
└── ...
```
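The docker-compose.yml in the layout above is assumed but not shown; a minimal sketch consistent with these assumptions (service names, ports, and the nginx-served frontend are illustrative choices, not part of the original spec) might look like:

```yaml
# docker-compose.yml — illustrative sketch; adjust service names and ports to your project
version: "3.8"
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    env_file:
      - .env                # assumed to hold database credentials etc.
  frontend:
    image: nginx:alpine
    volumes:
      - ./frontend/dist:/usr/share/nginx/html:ro   # serve the built static assets
    ports:
      - "80:80"
    depends_on:
      - backend
```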
This document provides a comprehensive analysis of the infrastructure needs required to support a robust and efficient DevOps CI/CD pipeline. This foundational analysis is crucial for designing and implementing configurations for GitHub Actions, GitLab CI, or Jenkins that incorporate testing, linting, building, and deployment stages.
The successful implementation of a modern CI/CD pipeline hinges on a well-defined and adequately provisioned infrastructure. This analysis identifies the core components, key considerations, and best practices for establishing a scalable, secure, reliable, and cost-effective infrastructure. While specific choices will depend on the project's unique requirements, this report outlines a common set of infrastructure elements and strategic recommendations to ensure the pipeline can effectively automate software delivery from code commit to production deployment across various environments.
When planning the infrastructure for a DevOps pipeline, several critical factors must be evaluated to ensure optimal performance, security, and maintainability; the platform, hosting, and workload considerations discussed later in this section address these factors.
Based on the general requirement for a "DevOps Pipeline Generator," the following core infrastructure components are typically required:
* Source Code Management (SCM)
  * Purpose: Host application source code, manage version control, facilitate collaboration (e.g., pull requests, branching strategies).
  * Examples: GitHub, GitLab (self-hosted or cloud), Bitbucket, Azure DevOps Repos.
* CI/CD Orchestration Platform
  * Purpose: Define, trigger, and manage pipeline workflows (build, test, lint, deploy). Orchestrate agents and tasks.
  * Examples: GitHub Actions, GitLab CI, Jenkins, Azure DevOps Pipelines.
* Build and Test Agents
  * Purpose: Provide isolated and consistent environments for compiling code, running unit/integration/end-to-end tests, and linting.
  * Examples: Docker containers (for consistent environments), Virtual Machines (VMs), cloud-hosted build agents (e.g., GitHub Actions runners, GitLab shared runners, Jenkins agents).
* Artifact and Container Registry
  * Purpose: Securely store and version build artifacts (e.g., JARs, WARs, npm packages) and Docker images.
  * Examples: Docker Hub, Amazon Elastic Container Registry (ECR), Azure Container Registry (ACR), Google Container Registry (GCR), Nexus Repository Manager, Artifactory.
* Deployment Environments
  * Purpose: Environments where the application will be deployed (e.g., development, staging, production).
  * Examples:
    * Cloud-based:
      * Container Orchestration: Kubernetes (Amazon EKS, Azure AKS, Google GKE), AWS ECS, Azure Container Apps.
      * Virtual Machines: AWS EC2, Azure Virtual Machines, Google Compute Engine.
      * Serverless: AWS Lambda, Azure Functions, Google Cloud Functions, AWS Fargate.
      * Platform as a Service (PaaS): AWS Elastic Beanstalk, Azure App Service, Google App Engine.
    * On-Premise: VMs, bare metal servers, on-premise Kubernetes clusters.
* Secrets Management
  * Purpose: Securely store and manage sensitive information (API keys, database credentials, environment variables) used by the pipeline and applications.
  * Examples: HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, Google Secret Manager.
* Monitoring and Logging
  * Purpose: Collect, aggregate, analyze, and visualize logs and metrics from the pipeline and deployed applications for performance, health, and security insights.
  * Examples: Prometheus & Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Datadog, CloudWatch (AWS), Azure Monitor, Google Cloud Operations.
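As a concrete illustration of the monitoring component, a minimal Prometheus scrape configuration could look like the sketch below (the job names and target addresses are assumptions, not part of the original analysis):

```yaml
# prometheus.yml — illustrative sketch; target addresses are placeholders
global:
  scrape_interval: 15s          # how often to pull metrics
scrape_configs:
  - job_name: my-webapp-backend
    metrics_path: /metrics      # assumes the app exposes Prometheus metrics here
    static_configs:
      - targets: ["backend:8000"]
  - job_name: node-exporter     # host-level metrics
    static_configs:
      - targets: ["node-exporter:9100"]
```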
* Security Tooling
  * Purpose: Integrate security checks throughout the pipeline (shift-left security).
  * Examples:
    * Static Application Security Testing (SAST): SonarQube, Checkmarx.
    * Software Composition Analysis (SCA): Snyk, Trivy, Dependabot, Renovate.
    * Dynamic Application Security Testing (DAST): OWASP ZAP.
    * Container Image Scanning: Trivy, Clair, Aqua Security.
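For example, a container image scan can run as an ordinary pipeline step. The GitHub Actions sketch below uses the aquasecurity/trivy-action; the image reference is a placeholder, and you should pin the action to a released version rather than a branch:

```yaml
# Illustrative step for a build job; image-ref is an assumed placeholder
- name: Scan Docker image with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: my-registry.example.com/my-webapp:latest
    format: table
    exit-code: '1'              # fail the job when findings match the severity filter
    severity: CRITICAL,HIGH
```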
* Infrastructure as Code (IaC)
  * Purpose: Define and provision infrastructure components declaratively, ensuring consistency and repeatability.
  * Examples: Terraform, AWS CloudFormation, Azure Resource Manager (ARM) Templates/Bicep, Google Cloud Deployment Manager, Pulumi.
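As a small illustration, the ECR repository and S3 bucket assumed by the pipeline configurations in this document could be declared with CloudFormation; the resource and bucket names below are assumptions matching the example project:

```yaml
# cloudformation.yml — illustrative sketch of the pipeline's AWS prerequisites
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  AppRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: my-webapp-repo          # assumed repository name
  StaticAssetsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-webapp-static-assets     # assumed bucket name (must be globally unique)
```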
* Platform selection:
  * GitHub Actions/GitLab CI: Offer tight integration with their SCM, YAML-based configuration (Infrastructure as Code for pipelines), and managed runners, reducing operational overhead. Ideal for cloud-native projects.
  * Jenkins: Highly customizable, extensible, and suitable for complex, on-premise, or hybrid environments. Requires more management overhead for agents and scaling.
* Hosting model:
  * Cloud: Offers unparalleled scalability, global reach, and a vast array of managed services (Kubernetes, Serverless, PaaS) that simplify operations.
  * On-premise: May be required for specific data sovereignty, compliance, or existing infrastructure investments.
* Workload architecture:
  * Kubernetes: Dominant for microservices architectures, providing robust orchestration, scaling, and self-healing capabilities.
  * Serverless: Ideal for event-driven, stateless workloads, offering extreme scalability and pay-per-execution cost models.
```yaml
name: CI/CD Pipeline

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
  workflow_dispatch: # Allows manual triggering

env:
  PROJECT_NAME: my-webapp
  AWS_REGION: us-east-1
  # Workflow-level env entries cannot reference each other, so the defaults
  # below are spelled out rather than derived from PROJECT_NAME.
  ECR_REPOSITORY: ${{ vars.ECR_REPOSITORY_NAME || 'my-webapp-repo' }} # Repository name from repo/org variables, or default
  S3_BUCKET: ${{ vars.S3_BUCKET_NAME || 'my-webapp-static-assets' }} # Bucket name from repo/org variables, or default

jobs:
  lint:
    name: Lint Code
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js for Frontend Linting
        uses: actions/setup-node@v4
        with:
          node-version: '18'
      - name: Install Frontend Dependencies
        working-directory: ./frontend
        run: npm ci
      - name: Run Frontend Lint
        working-directory: ./frontend
        run: npm run lint
      - name: Set up Python for Backend Linting
        uses: actions/setup-python@v5
        with:
          python-version: '3.9'
      - name: Install Backend Dependencies
        working-directory: ./backend
        run: |
          python -m pip install --upgrade pip
          pip install flake8 black # Or poetry install, pipenv install
          pip install -r requirements.txt
      - name: Run Backend Lint (Flake8)
        working-directory: ./backend
        run: flake8 .
      - name: Run Backend Lint (Black)
        working-directory: ./backend
        run: black --check .

  test:
    name: Run Tests
    runs-on: ubuntu-latest
    needs: lint # Ensure linting passes before testing
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js for Frontend Testing
        uses: actions/setup-node@v4
        with:
          node-version: '18'
      - name: Install Frontend Dependencies
        working-directory: ./frontend
        run: npm ci
      - name: Run Frontend Tests
        working-directory: ./frontend
        run: npm test -- --coverage # Example with coverage
      - name: Set up Python for Backend Testing
        uses: actions/setup-python@v5
        with:
          python-version: '3.9'
      - name: Install Backend Dependencies
        working-directory: ./backend
        run: |
          python -m pip install --upgrade pip
          pip install pytest # Or poetry install, pipenv install
          pip install -r requirements.txt
      - name: Run Backend Tests
        working-directory: ./backend
        run: pytest # Already in ./backend via working-directory

  build:
    name: Build Application and Docker Image
    runs-on: ubuntu-latest
    needs: test # Ensure tests pass before building
    outputs:
      image_tag: ${{ steps.set_image_tag.outputs.image_tag }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      # --- Build Frontend ---
      - name: Setup Node.js for Frontend Build
        uses: actions/setup-node@v4
        with:
          node-version: '18'
      - name: Install Frontend Dependencies
        working-directory: ./frontend
        run: npm ci
      - name: Build Frontend
        working-directory: ./frontend
        run: npm run build
      - name: Archive Frontend Build Artifacts
        uses: actions/upload-artifact@v4
        with:
          name: frontend-build
          path: frontend/dist # Adjust to your build output directory
      # --- Build Backend ---
      # Python has no separate build step here beyond dependency installation;
      # this example assumes the Dockerfile installs the backend dependencies.
      # --- Build Docker Image ---
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Amazon ECR
        uses: docker/login-action@v3
        with:
          registry: ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com
          username: ${{ secrets.AWS_ACCESS_KEY_ID }}
          password: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - name: Set Docker Image Tag
        id: set_image_tag
        run: |
          TIMESTAMP=$(date +%Y%m%d%H%M%S)
          GIT_SHA=${{ github.sha }}
          IMAGE_TAG="${TIMESTAMP}-${GIT_SHA::7}" # Tag only; the repository name is prepended wherever the image is referenced
          echo "image_tag=$IMAGE_TAG" >> $GITHUB_OUTPUT
      - name: Build and Push Docker Image to ECR
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com/${{ env.ECR_REPOSITORY }}:${{ steps.set_image_tag.outputs.image_tag }}
            ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com/${{ env.ECR_REPOSITORY }}:latest
          build-args: |
            NODE_ENV=production # Example build arg
            PYTHONUNBUFFERED=1

  deploy:
    name: Deploy Application
    runs-on: ubuntu-latest
    needs: build # Ensure build passes before deploying
    environment: production # Use a GitHub environment for protection rules and secrets
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      # --- Deploy Frontend Static Assets to S3 ---
      - name: Download Frontend Build Artifacts
        uses: actions/download-artifact@v4
        with:
          name: frontend-build
          path: frontend-build-output
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Sync Frontend to S3
        run: aws s3 sync frontend-build-output s3://${{ env.S3_BUCKET }} --delete # download-artifact extracts the dist contents directly into this directory
      # --- Deploy Containerized Application to Server (Example via SSH) ---
      - name: Deploy to Server via SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            echo "Deploying new Docker image..."
            # Login to ECR on the remote server
            aws ecr get-login-password --region ${{ env.AWS_REGION }} | docker login --username AWS --password-stdin ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com
            # Pull the new image
            docker pull ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com/${{ env.ECR_REPOSITORY }}:${{ needs.build.outputs.image_tag }}
            # Stop existing container (if any)
            docker stop ${{ env.PROJECT_NAME }} || true
            docker rm ${{ env.PROJECT_NAME }} || true
            # Run new container
            docker run -d --name ${{ env.PROJECT_NAME }} -p 80:80 \
              ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com/${{ env.ECR_REPOSITORY }}:${{ needs.build.outputs.image_tag }}
```
Automated CI/CD pipelines are critical for modern software development, enabling faster, more reliable, and consistent delivery of applications. This deliverable outlines example configurations for three leading CI/CD platforms, tailored to a generic containerized application (e.g., Node.js, Python, Java application packaged with Docker) to demonstrate a full lifecycle. Each configuration is modular, allowing for easy adaptation to your specific technology stack and deployment targets.
Regardless of the platform, a well-structured CI/CD pipeline typically follows these core stages: lint, test, build, and deploy.
Below are detailed configurations for GitHub Actions, GitLab CI, and Jenkins, demonstrating the common stages for a generic Dockerized application.
GitHub Actions provides a flexible way to automate workflows directly within your GitHub repository.
File: .github/workflows/main.yml
```yaml
name: CI/CD Pipeline

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main
      - develop

jobs:
  lint-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js (example for a Node.js app)
        uses: actions/setup-node@v4
        with:
          node-version: '18'
      - name: Install dependencies
        run: npm ci
      - name: Run ESLint
        run: npm run lint # Assumes 'lint' script in package.json
      - name: Run Unit and Integration Tests
        run: npm test # Assumes 'test' script in package.json

  build-and-push-docker:
    needs: lint-test # Ensure linting and tests pass before building
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Login to Docker Hub (or other registry)
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
        # For AWS ECR, GCP GCR, etc., use respective login actions or configure directly.
      - name: Build Docker image
        id: docker_build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ${{ secrets.DOCKER_REGISTRY_URL }}/my-app:${{ github.sha }}
            ${{ secrets.DOCKER_REGISTRY_URL }}/my-app:latest # Consider gating 'latest' to the main branch
          build-args: |
            APP_ENV=production
          # For multi-arch builds: platforms: linux/amd64,linux/arm64
      - name: Output Docker image digest
        run: echo "Image built: ${{ steps.docker_build.outputs.digest }}"

  deploy-to-environment:
    needs: build-and-push-docker # Ensure image is built before deployment
    runs-on: ubuntu-latest
    environment:
      name: ${{ github.ref == 'refs/heads/main' && 'production' || 'staging' }}
      url: https://my-app-${{ github.ref_name }}.example.com
    # Conditional deployment based on branch
    if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/develop'
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Deploy to Kubernetes (example)
        if: github.ref == 'refs/heads/main'
        uses: azure/k8s-set-context@v3 # Or use actions for other cloud providers
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBE_CONFIG_PROD }} # Production Kubeconfig
      # Apply Kubernetes manifests or Helm charts
      - name: Apply Kubernetes manifests (Production)
        if: github.ref == 'refs/heads/main'
        run: |
          kubectl apply -f k8s/deployment.yml
          kubectl rollout status deployment/my-app-deployment
        env:
          KUBECONFIG: ${{ secrets.KUBE_CONFIG_PROD }}
      - name: Deploy to Staging (example)
        if: github.ref == 'refs/heads/develop'
        uses: azure/k8s-set-context@v3
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBE_CONFIG_STAGING }} # Staging Kubeconfig
      - name: Apply Kubernetes manifests (Staging)
        if: github.ref == 'refs/heads/develop'
        run: |
          kubectl apply -f k8s/staging-deployment.yml
          kubectl rollout status deployment/my-app-staging
        env:
          KUBECONFIG: ${{ secrets.KUBE_CONFIG_STAGING }}
      - name: Trigger external deployment (e.g., Lambda, Serverless)
        # Note: the job-level 'if' above must also allow this ref for the step to run
        if: github.ref == 'refs/heads/feature-branch' # Example for feature branches
        run: echo "Deployment for feature branch triggered via webhook..."
        # Add curl or other commands to trigger specific deployments
```
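The kubectl apply commands in the workflow above are placeholders; if you deploy with Helm instead, a sketch of the production step might look like the following (the chart path, release name, and namespace are assumptions):

```yaml
# Illustrative replacement for the 'Apply Kubernetes manifests (Production)' step
- name: Deploy with Helm (Production)
  if: github.ref == 'refs/heads/main'
  run: |
    helm upgrade --install my-app ./helm/my-app \
      --set image.tag=${{ github.sha }} \
      --namespace production
  env:
    KUBECONFIG: ${{ secrets.KUBE_CONFIG_PROD }}
```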
Key Assumptions & Customization for GitHub Actions:
* package.json scripts: Assumes `npm run lint` and `npm test` scripts exist. Adjust for your language/framework (e.g., `mvn test`, `pytest`).
* Dockerfile: A Dockerfile must be present at the root of your repository.
* Registry: Replace `${{ secrets.DOCKER_REGISTRY_URL }}` with your registry (e.g., docker.io, ghcr.io, your-aws-account.dkr.ecr.eu-west-1.amazonaws.com).
* Secrets:
  * DOCKER_USERNAME, DOCKER_PASSWORD: For Docker Hub or other registries.
  * KUBE_CONFIG_PROD, KUBE_CONFIG_STAGING: Kubernetes cluster credentials. Store these securely in GitHub repository secrets.
* Deployment commands: The kubectl apply commands are placeholders. Replace them with commands specific to your deployment target (e.g., Helm charts, AWS CLI, Azure CLI, Google Cloud SDK, serverless deployment tools).
* Environments: The environment block helps manage environment-specific variables and secrets, and provides deployment protection rules.

GitLab CI is deeply integrated with GitLab repositories, providing powerful CI/CD capabilities directly within your project.
File: .gitlab-ci.yml
```yaml
image: docker:latest # Default image; used by the Docker build job

variables:
  DOCKER_HOST: tcp://docker:2375 # Required for 'docker in docker'
  DOCKER_TLS_CERTDIR: "" # Disable TLS for DIND for simplicity; adjust for production

stages:
  - lint
  - test
  - build
  - deploy

.node_template: &node_base
  image: node:18-alpine # Base image for Node.js tasks
  before_script:
    - npm ci --cache .npm --prefer-offline # Install dependencies

lint:
  <<: *node_base
  stage: lint
  script:
    - npm run lint # Assumes 'lint' script in package.json
  cache:
    key: ${CI_COMMIT_REF_SLUG}-node-modules
    paths:
      - .npm/
      - node_modules/
    policy: pull-push

test:
  <<: *node_base
  stage: test
  script:
    - npm test # Assumes 'test' script in package.json
  cache:
    key: ${CI_COMMIT_REF_SLUG}-node-modules
    paths:
      - .npm/
      - node_modules/
    policy: pull

build_docker_image:
  stage: build
  services:
    - docker:dind # Docker in Docker service
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest # Tag 'latest'
    - docker push $CI_REGISTRY_IMAGE:latest
  rules:
    - if: $CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH == "develop"

deploy_staging:
  stage: deploy
  image: alpine/helm:3.12.3 # Example: using Helm for deployment
  script:
    - echo "Deploying to Staging environment..."
    - apk add --no-cache kubectl # Install kubectl if needed
    - kubectl config set-cluster kubernetes --server="$KUBE_URL_STAGING" --certificate-authority="$KUBE_CA_CERT_STAGING"
    - kubectl config set-credentials gitlab --token="$KUBE_TOKEN_STAGING"
    - kubectl config set-context kubernetes --cluster=kubernetes --user=gitlab --namespace=staging
    - kubectl config use-context kubernetes
    - helm upgrade --install my-app-staging ./helm/my-app -f ./helm/my-app/values-staging.yaml --set image.tag=$CI_COMMIT_SHA
  environment:
    name: staging
    url: https://my-app-staging.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"

deploy_production:
  stage: deploy
  image: alpine/helm:3.12.3 # Example: using Helm for deployment
  script:
    - echo "Deploying to Production environment..."
    - apk add --no-cache kubectl
    - kubectl config set-cluster kubernetes --server="$KUBE_URL_PROD" --certificate-authority="$KUBE_CA_CERT_PROD"
    - kubectl config set-credentials gitlab --token="$KUBE_TOKEN_PROD"
    - kubectl config set-context kubernetes --cluster=kubernetes --user=gitlab --namespace=production
    - kubectl config use-context kubernetes
    - helm upgrade --install my-app ./helm/my-app -f ./helm/my-app/values-prod.yaml --set image.tag=$CI_COMMIT_SHA
  environment:
    name: production
    url: https://my-app.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual # Manual deployment to production
```
Key Assumptions & Customization for GitLab CI:
* package.json scripts: Assumes `npm run lint` and `npm test` scripts exist.
* Dockerfile: A Dockerfile must be present at the root of your repository.
* Registry: `$CI_REGISTRY_USER`, `$CI_REGISTRY_PASSWORD`, and `$CI_REGISTRY_IMAGE` are built-in GitLab CI variables for its integrated container registry.
* Secrets:
  * KUBE_URL_STAGING, KUBE_CA_CERT_STAGING, KUBE_TOKEN_STAGING: Kubernetes credentials for staging.
  * KUBE_URL_PROD, KUBE_CA_CERT_PROD, KUBE_TOKEN_PROD: Kubernetes credentials for production.
  * Store these securely in GitLab project CI/CD Variables (masked and protected).
* Deployment commands: The kubectl and helm commands are examples. Adjust them to your specific deployment strategy.
* rules: Use rules for conditional job execution based on branch, tags, or other criteria.
* Manual jobs: For critical environments like production, consider `when: manual` to require a manual approval before deployment.
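The Jenkins configuration promised above is not included in this excerpt. A minimal declarative Jenkinsfile covering the same stages might look like the sketch below; the registry URL, credentials ID, manifest path, and the use of the Docker Pipeline plugin are all assumptions to be adapted to your setup:

```groovy
// Jenkinsfile — illustrative declarative pipeline sketch (requires the Docker Pipeline plugin)
pipeline {
    agent any
    environment {
        REGISTRY = 'registry.example.com'            // placeholder registry
        IMAGE    = "${REGISTRY}/my-app:${env.GIT_COMMIT}"
    }
    stages {
        stage('Lint') { steps { sh 'npm ci && npm run lint' } }
        stage('Test') { steps { sh 'npm test' } }
        stage('Build') {
            steps {
                script {
                    // 'registry-creds-id' is an assumed Jenkins credentials ID
                    docker.withRegistry("https://${REGISTRY}", 'registry-creds-id') {
                        docker.build(env.IMAGE).push()
                    }
                }
            }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps {
                input message: 'Deploy to production?'   // manual approval gate
                sh 'kubectl apply -f k8s/deployment.yml'
            }
        }
    }
}
```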