This document provides comprehensive, detailed, and professional CI/CD pipeline configurations for GitHub Actions, GitLab CI, and Jenkins. Each configuration includes essential stages: linting, testing, building, and deployment, designed to ensure code quality, reliability, and efficient delivery.
For demonstration purposes, the examples will primarily focus on a Node.js application. However, the principles and structure are easily adaptable to other technology stacks (e.g., Python, Java, .NET, Go).
Before diving into specific configurations, let's define the core stages included in these pipelines:
1. Lint
* Purpose: Static code analysis to identify programmatic errors, bugs, stylistic errors, and suspicious constructs. It helps enforce coding standards and maintain code quality.
* Tools: ESLint (JavaScript), Flake8/Pylint (Python), Checkstyle (Java), RuboCop (Ruby), etc.

2. Test
* Purpose: Verify that the code behaves as expected. This stage typically includes several types of tests:
* Unit Tests: Test individual components or functions in isolation.
* Integration Tests: Test the interaction between different components or services.
* End-to-End (E2E) Tests: Simulate real user scenarios to exercise the entire application flow.
* Tools: Jest/Mocha/Cypress (JavaScript), Pytest (Python), JUnit (Java), NUnit (.NET), GoConvey (Go), etc.

3. Build
* Purpose: Compile source code, resolve dependencies, and package the application into a deployable artifact. This may include transpilation, minification, and creating Docker images.
* Output: Executables, JAR files, WAR files, Docker images, static web assets, etc.
* Tools: npm/yarn (Node.js), Maven/Gradle (Java), pip (Python), dotnet build (.NET), Docker.

4. Deploy
* Purpose: Release the built application artifact to a target environment (e.g., development, staging, production). This can involve updating servers, deploying to cloud services, or pushing Docker images to a registry.
* Strategies: Blue/Green, Canary, Rolling Updates, Immutable Deployments.
* Target Environments: AWS EC2/S3/Lambda, Azure App Service/AKS, Google Cloud Run/GKE, Heroku, Netlify, Kubernetes clusters, etc.
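The Node.js examples throughout this document assume npm scripts named lint, test, and build. As an illustration only (script contents and tool choices are assumptions, not prescriptions), a package.json might wire these stages up roughly like this:

```json
{
  "name": "your-app",
  "scripts": {
    "lint": "eslint .",
    "test": "jest",
    "build": "webpack --mode production"
  },
  "devDependencies": {
    "eslint": "^8.0.0",
    "jest": "^29.0.0",
    "webpack": "^5.0.0",
    "webpack-cli": "^5.0.0"
  }
}
```

With scripts like these in place, the pipeline steps below (npm run lint, npm test, npm run build) map one-to-one onto the stages just described.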
GitHub Actions provides a flexible way to automate workflows directly within your GitHub repository.
Workflows are defined in YAML files (.yml) within the .github/workflows/ directory of your repository. Key concepts:
* on: Defines the events that trigger the workflow.
* jobs: A workflow run is made up of one or more jobs. Each job runs on a fresh instance of the virtual environment.
* steps: A job contains a sequence of tasks called steps. Steps can run commands, set up tools, or run an action.
* uses: Runs a community or custom action.
* run: Executes a command-line script.

This example assumes a Node.js application that uses ESLint for linting and Jest for testing, and that deploys either static assets to an AWS S3 bucket (frontend) or a Docker image to a registry (backend service).
File: .github/workflows/main.yml
```yaml
name: Node.js CI/CD Pipeline

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main
      - develop
  workflow_dispatch: # Allows manual trigger

env:
  NODE_VERSION: '18.x' # Specify Node.js version
  AWS_REGION: 'us-east-1'
  S3_BUCKET_NAME: 'your-app-frontend-bucket' # For static website deployment
  ECR_REPOSITORY_NAME: 'your-app-backend-repo' # For Docker image deployment

jobs:
  lint:
    name: Lint Code
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm' # Cache npm dependencies to speed up builds
      - name: Install dependencies
        run: npm ci
      - name: Run ESLint
        run: npm run lint

  test:
    name: Run Tests
    needs: lint # This job depends on the 'lint' job completing successfully
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run Unit and Integration Tests
        run: npm test
      # Optional: Run E2E tests if configured (e.g., using Cypress)
      # - name: Run E2E tests
      #   run: npm run cypress:run

  build:
    name: Build Application
    needs: test # This job depends on the 'test' job completing successfully
    runs-on: ubuntu-latest
    outputs:
      # Build artifacts path for the deployment stage (if not using Docker)
      build_path: ${{ steps.get_build_path.outputs.path }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Build frontend/backend
        run: npm run build
        # For a frontend app, this might generate a 'dist' folder.
        # For a backend app, this might compile TypeScript to JavaScript.
      - name: Record build output path
        id: get_build_path
        run: echo "path=build/" >> "$GITHUB_OUTPUT"
      - name: Upload build artifact (for static deployments like S3)
        uses: actions/upload-artifact@v4
        with:
          name: build-artifact
          path: build/ # Or 'dist/', depending on your build output
          retention-days: 5
      # Optional: Build and push a Docker image (for backend services)
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push Docker image
        id: docker_build_push
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ env.ECR_REPOSITORY_NAME }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t "$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" .
          docker push "$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"

  deploy:
    name: Deploy to Environment
    needs: build # This job depends on the 'build' job completing successfully
    runs-on: ubuntu-latest
    environment: # Use environments for protection rules and secrets
      name: production # Or 'staging', 'development'
      url: https://your-app.com # Optional: URL to environment
    if: github.ref == 'refs/heads/main' # Deploy only from 'main' branch
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      # Deployment for static frontend applications (e.g., to S3)
      - name: Download build artifact
        uses: actions/download-artifact@v4
        with:
          name: build-artifact
          path: build/ # Download to the same path as it was uploaded
      - name: Deploy to S3
        run: |
          aws s3 sync build/ "s3://${{ env.S3_BUCKET_NAME }}/" --delete
          # Invalidate the CloudFront cache so clients see the new build
          aws cloudfront create-invalidation --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} --paths "/*"
      # OR: Deployment for backend services (e.g., updating an ECS service).
      # This is a simplified example; usually you would register a full task
      # definition JSON and point the service at the new revision.
      # - name: Update ECS service (example)
      #   run: |
      #     aws ecs update-service \
      #       --cluster your-ecs-cluster \
      #       --service your-ecs-service \
      #       --force-new-deployment
      # OR: Deploy to a generic server via SSH (example)
      # - name: Deploy via SSH
      #   uses: appleboy/ssh-action@v1.0.3
      #   with:
      #     host: ${{ secrets.SSH_HOST }}
      #     username: ${{ secrets.SSH_USERNAME }}
      #     key: ${{ secrets.SSH_PRIVATE_KEY }}
      #     script: |
      #       cd /var/www/your-app
      #       sudo rsync -avzh --delete build/ . # Copy new build artifacts
      #       sudo systemctl restart your-app-service # Restart the service
```
This document outlines a comprehensive analysis of the infrastructure needs critical for generating a robust and efficient CI/CD pipeline. The goal of this initial step is to identify the core components, environments, and strategic considerations that will directly influence the design and configuration of your automated build, test, and deployment processes. A thorough understanding of these needs ensures the generated pipeline is not only functional but also optimized for performance, security, scalability, and cost-effectiveness.
This analysis serves as a foundational assessment, highlighting key decision points and information required to tailor the subsequent pipeline generation to your specific project and organizational requirements.
To effectively generate a CI/CD pipeline, we must consider the interplay of several infrastructure components. The sections below detail the critical factors influencing pipeline infrastructure design.

The nature of your application and its underlying technologies profoundly impacts pipeline design.
The environment where your application will be deployed is a primary driver for pipeline configuration:
* Kubernetes (K8s): Requires kubectl or Helm for deployments, often leveraging container registries.
* Serverless: Involves deploying code packages to FaaS platforms (e.g., AWS Lambda, Azure Functions).
* Virtual Machines (VMs) / Bare Metal: Often uses configuration management tools (Ansible, Chef) or custom scripts for deployment.
* Platform as a Service (PaaS): (e.g., Heroku, Google App Engine, Azure App Service) typically has simplified deployment models.
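For the Kubernetes target above, a deployment usually comes down to applying a manifest (or Helm chart) that pins the image tag produced by the build stage. A minimal, illustrative Deployment manifest follows; the app name, registry path, tag, and port are placeholders, not values from your project:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
        - name: your-app
          image: registry.example.com/your-app:abc1234 # tag substituted by the CI build stage
          ports:
            - containerPort: 8080
```

A pipeline would typically substitute the image tag (e.g., the commit SHA) and run kubectl apply -f deployment.yml, or pass the tag to Helm via --set image.tag=....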
Leveraging existing tools and organizational preferences can streamline adoption. Integrating security early and consistently is paramount, as is designing for future growth and responsiveness while balancing performance and features against budget constraints. Several industry trends (e.g., containerization, GitOps, and ephemeral build environments) are also shaping modern CI/CD infrastructure.
Based on the analysis of infrastructure needs and current industry trends, we recommend the following for optimal pipeline generation:
* Build job: installs dependencies and runs the npm run build command.
* For static sites: uploads the generated build/ (or dist/) directory as an artifact.
* For containerized apps: builds a Docker image and pushes it to Amazon ECR (or Docker Hub/GHCR).
* Deploy job:
* For static sites: downloads the build artifact and syncs it to an S3 bucket, then invalidates a CloudFront distribution cache.
* For containerized apps: (commented-out example) updates an AWS ECS service with the newly pushed Docker image.
* Includes an if condition so deployment runs only from the main branch.
* Uses GitHub Environments for better management of deployment secrets and access.
Customization points:
* Replace secrets.AWS_ACCESS_KEY_ID, secrets.AWS_SECRET_ACCESS_KEY, secrets.CLOUDFRONT_DISTRIBUTION_ID, secrets.SSH_HOST, etc., with your actual secrets configured in your GitHub repository settings.
* Adjust env.NODE_VERSION as needed.
* Change path: build/ to path: dist/ or your specific build output directory.
* Adapt the deploy job to your specific hosting provider (Azure, GCP, Heroku, Netlify, custom servers, Kubernetes, etc.).
* Adjust the on block to suit your branching strategy and preferred triggers.
* The actions/setup-node action includes dependency caching, which significantly speeds up subsequent runs.

GitLab CI/CD is an integral part of GitLab, offering powerful and highly configurable pipelines directly within your repository.
Pipelines are defined in a single file, .gitlab-ci.yml, at the root of your repository. Key concepts:
* stages: Defines the order of jobs.
* jobs: Each job has a unique name and defines a set of steps.
* script: The core of a job, containing shell commands to execute.
* image: Specifies the Docker image to use as the base for the job's environment.
* artifacts: Specifies files to be attached to the job and passed between stages.
* cache: Defines files and directories to cache between job runs.

This document serves as the final deliverable for the "DevOps Pipeline Generator" workflow, presenting detailed, validated, and professionally documented CI/CD pipeline configurations tailored for your project. We have generated robust pipelines for GitHub Actions, GitLab CI, and Jenkins, encompassing essential stages such as linting, testing, building, and multi-environment deployment.
Our goal is to provide you with actionable, production-ready templates that can be seamlessly integrated into your existing development workflows, accelerating your release cycles and enhancing software quality.
The generated pipelines are designed with best practices in mind, focusing on automation, reliability, and security. Each pipeline configuration is structured to provide a clear, sequential flow that transforms your source code into deployable artifacts and then deploys them to target environments.
Key Principles Applied:
All generated pipelines follow a similar logical flow, adapted to the specific syntax and features of each CI/CD platform:
* Test stage:
* Unit Tests: verifies individual components/functions of the code.
* Integration Tests: checks the interaction between different parts of the system.
* Deploy stage:
* Staging Environment: deploys the built artifact to a pre-production environment for further testing and validation.
* Production Environment: deploys the validated artifact to the live production environment, often with a manual approval step.
Below are the detailed configurations for GitHub Actions, GitLab CI, and Jenkins, complete with explanations and customization guidelines.
GitHub Actions provides a flexible and powerful way to automate workflows directly within your GitHub repository.
File Location: .github/workflows/main.yml
Key Features:
Generated Configuration Structure (Example for a Node.js application with Docker):
```yaml
# .github/workflows/main.yml
name: CI/CD Pipeline

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main
      - develop

env:
  DOCKER_IMAGE_NAME: your-app-name
  DOCKER_REGISTRY: ghcr.io/${{ github.repository_owner }} # Or your preferred registry

jobs:
  lint_and_test:
    name: Lint & Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '18' # Specify your Node.js version
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run ESLint
        run: npm run lint # Assumes a 'lint' script in package.json
      - name: Run Unit Tests
        run: npm test # Assumes a 'test' script in package.json
      # Uncomment if you have integration tests:
      # - name: Run Integration Tests
      #   run: npm run test:integration

  build_and_push_docker:
    name: Build & Push Docker Image
    runs-on: ubuntu-latest
    needs: lint_and_test # This job depends on lint_and_test
    if: success() && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/develop')
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.DOCKER_REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }} # Use GITHUB_TOKEN for GHCR, or a custom secret for others
      - name: Build and push Docker image
        id: docker_build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ${{ env.DOCKER_REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:latest
            ${{ env.DOCKER_REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Write Docker image tag to a file
        run: echo "${{ github.sha }}" > ./.docker_image_tag
      - name: Store Docker Image Tag as Artifact
        uses: actions/upload-artifact@v4
        with:
          name: docker-image-tag
          path: ./.docker_image_tag

  deploy_staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build_and_push_docker
    if: success() && github.ref == 'refs/heads/develop'
    environment:
      name: Staging
      url: ${{ steps.deploy.outputs.url }} # Optional: URL from deployment tool
    steps:
      - name: Download Docker Image Tag
        uses: actions/download-artifact@v4
        with:
          name: docker-image-tag
          path: .
      - name: Read Docker Image Tag
        id: get_tag
        run: echo "IMAGE_TAG=$(cat ./.docker_image_tag)" >> "$GITHUB_ENV"
      - name: Configure AWS Credentials (Example)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1 # Specify your AWS region
      - name: Deploy to Staging (e.g., ECS, Kubernetes via Helm)
        id: deploy
        run: |
          # Replace with your actual deployment commands
          echo "Deploying ${{ env.DOCKER_REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:${{ env.IMAGE_TAG }} to Staging"
          # Example: helm upgrade --install your-app-staging ./helm-chart --set image.tag=${{ env.IMAGE_TAG }}
          # Example: kubectl apply -f k8s/staging-deployment.yml

  deploy_production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    # Depends on the image build rather than on 'deploy_staging', because
    # staging deploys only from 'develop' and would be skipped on 'main'.
    needs: build_and_push_docker
    if: success() && github.ref == 'refs/heads/main'
    environment:
      name: Production
      url: ${{ steps.deploy.outputs.url }}
    steps:
      - name: Download Docker Image Tag
        uses: actions/download-artifact@v4
        with:
          name: docker-image-tag
          path: .
      - name: Read Docker Image Tag
        id: get_tag
        run: echo "IMAGE_TAG=$(cat ./.docker_image_tag)" >> "$GITHUB_ENV"
      - name: Manual Approval for Production Deployment
        uses: trstringer/manual-approval@v1
        with:
          secret: ${{ secrets.APPROVER_TOKEN }} # A PAT with repo access, or use GitHub environments' approval rules
          approvers: 'your-github-username,another-username' # Comma-separated list of GitHub usernames
          minimum-approvals: 1
          issue-title: "Approve Production Deployment for ${{ github.sha }}"
          issue-body: "Please approve the deployment of version ${{ env.IMAGE_TAG }} to Production."
          exclude-pull-requests: true
      - name: Configure AWS Credentials (Example)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Deploy to Production (e.g., ECS, Kubernetes via Helm)
        id: deploy
        run: |
          echo "Deploying ${{ env.DOCKER_REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:${{ env.IMAGE_TAG }} to Production"
          # Example: helm upgrade --install your-app-prod ./helm-chart --set image.tag=${{ env.IMAGE_TAG }}
          # Example: kubectl apply -f k8s/production-deployment.yml
```
Customization Points for GitHub Actions:
* on: trigger: Adjust branches and events (e.g., workflow_dispatch for manual runs).
* env: variables: Update DOCKER_IMAGE_NAME, DOCKER_REGISTRY, and any other environment-specific variables.
* runs-on: Choose your desired runner (e.g., ubuntu-latest, windows-latest, macos-latest).
* actions/setup-node@v4: Change node-version to your project's requirement. For other languages, use actions/setup-python, actions/setup-java, etc.
* npm run lint, npm test: Update these commands to match your project's linting and testing scripts.
* Docker build and push: Adjust context and tags. Ensure DOCKER_REGISTRY and secrets.GITHUB_TOKEN (for GHCR) or a specific registry secret are correctly configured.
* Replace aws-actions/configure-aws-credentials@v4 with credentials for your cloud provider (Azure, GCP, etc.).
* Update deployment commands (helm upgrade, kubectl apply, aws ecs deploy, az webapp deploy, etc.) to match your deployment strategy and environment.
* Configure GitHub Environments for staging and production, including approval rules.
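The docker/build-push-action step above builds from a Dockerfile at the repository root (context: .). As an illustration only, a minimal multi-stage Dockerfile for a Node.js service might look like the following; the build output directory (dist/) and the start command are assumptions about your project layout:

```dockerfile
# Build stage: install all dependencies and compile the app
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: copy only what is needed to run
FROM node:18-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
EXPOSE 8080
CMD ["node", "dist/index.js"]
```

The multi-stage split keeps build-only tooling out of the final image, which shrinks the artifact that the pipeline pushes and deploys.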
GitLab CI is deeply integrated into GitLab, offering a comprehensive solution for continuous integration and delivery.
File Location: .gitlab-ci.yml
Key Features:
* Declarative stages and jobs model.
* rules for conditional job execution.

Generated Configuration Structure (Example for a Python application with Docker):
```yaml
# .gitlab-ci.yml
image: docker:latest # Docker image with the Docker CLI for building images

services:
  - docker:dind # Docker-in-Docker for building images

variables:
  DOCKER_TLS_CERTDIR: "/certs" # Required for Docker-in-Docker
  DOCKER_IMAGE_NAME: your-python-app
  DOCKER_REGISTRY: $CI_REGISTRY # Use GitLab's built-in registry

stages:
  - lint
  - test
  - build
  - deploy

cache:
  paths:
    - .venv/ # Cache the Python virtual environment

lint:
  stage: lint
  image: python:3.9-slim-buster # Use a Python image for linting
  before_script:
    - python -m venv .venv
    - source .venv/bin/activate
    - pip install flake8
    - pip install -r requirements.txt
  script:
    - flake8 . --max-line-length=120
  rules:
    - if: $CI_COMMIT_BRANCH

test:
  stage: test
  image: python:3.9-slim-buster
  before_script:
    - python -m venv .venv
    - source .venv/bin/activate
    - pip install -r requirements.txt
    - pip install pytest
  script:
    - pytest # Assumes pytest is configured
  rules:
    - if: $CI_COMMIT_BRANCH

build_docker_image:
  stage: build
  before_script:
    # Authenticate against the registry before pushing
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  script:
    - docker build -t "$DOCKER_REGISTRY/$DOCKER_IMAGE_NAME:$CI_COMMIT_SHORT_SHA" .
    - docker push "$DOCKER_REGISTRY/$DOCKER_IMAGE_NAME:$CI_COMMIT_SHORT_SHA"
```
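The GitLab test job above simply runs pytest, which discovers test files named test_*.py. As a minimal, self-contained illustration of what such a module might contain (the slugify function is a hypothetical stand-in for your application code, not part of any project referenced here):

```python
# test_app.py: a minimal module that the `pytest` command in the
# GitLab 'test' job would discover and run automatically.

def slugify(title: str) -> str:
    """Hypothetical app code: lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())


def test_slugify_basic():
    # A simple two-word title becomes a lowercase hyphenated slug.
    assert slugify("Hello World") == "hello-world"


def test_slugify_collapses_whitespace():
    # Leading, trailing, and repeated whitespace is collapsed by split().
    assert slugify("  CI   CD  Pipelines ") == "ci-cd-pipelines"
```

Because pytest needs no registration boilerplate, keeping tests in this shape lets the pipeline's bare pytest invocation pick them up without extra configuration.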