This deliverable provides detailed, professional, and actionable CI/CD pipeline configurations for GitHub Actions, GitLab CI, and Jenkins. Each configuration includes essential stages such as linting, testing, building, and deployment, designed for a typical web application development workflow.
A robust Continuous Integration/Continuous Delivery (CI/CD) pipeline is fundamental for modern software development, enabling automated code quality checks, rapid builds, and streamlined deployments. This document offers example configurations to kickstart your DevOps journey, focusing on common patterns and best practices across popular platforms.
For the purpose of these examples, we will assume a generic Node.js application.
Key Assumptions:
* The project has a `package.json` with `lint`, `test`, and `build` scripts defined.
* ESLint is configured (e.g., via `.eslintrc.js`).
* Jest is configured (e.g., via `jest.config.js`).
* A `Dockerfile` exists at the project root for containerization.
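For concreteness, a minimal `package.json` consistent with these assumptions might look like the following; the package name and exact tool invocations are illustrative, not prescribed by this document:

```json
{
  "name": "example-app",
  "version": "1.0.0",
  "scripts": {
    "lint": "eslint .",
    "test": "jest --ci",
    "build": "node scripts/build.js"
  },
  "devDependencies": {
    "eslint": "^8.0.0",
    "jest": "^29.0.0"
  }
}
```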
### 3. GitHub Actions Configuration
GitHub Actions allows you to automate, customize, and execute your software development workflows directly in your repository. Workflows are defined by a YAML file in the `.github/workflows` directory.
#### 3.1. Example Workflow (`.github/workflows/main.yml`)
#### 3.2. Explanation of Stages and Features
* **`on` Trigger:** Defines when the workflow runs (e.g., `push` to `main`/`develop`, `pull_request` to `main`/`develop`).
* **`env` Global Environment Variables:** Sets variables accessible across all jobs.
* **`jobs`:**
* **`lint-and-test`:**
* `runs-on: ubuntu-latest`: Specifies the runner environment.
* `actions/checkout@v3`: Checks out your repository code.
* `actions/setup-node@v3`: Configures the Node.js environment with caching.
* `npm ci`: Installs dependencies cleanly.
* `npm run lint` / `npm test`: Executes linting and testing scripts defined in `package.json`.
* **`build-and-push-docker`:**
* `needs: lint-and-test`: Ensures this job only runs if `lint-and-test` succeeds.
* `if`: Conditional execution based on the branch.
* `docker/setup-buildx-action@v2`: Configures Docker Buildx for advanced build features.
* `docker/login-action@v2`: Logs into the specified Docker registry using GitHub Secrets.
* `docker/build-push-action@v4`: Builds the Docker image and pushes it to the registry, tagging with both `github.sha` (unique commit hash) and `github.ref_name` (branch name). Utilizes GitHub Actions cache for faster builds.
* **`deploy`:**
* `needs: build-and-push-docker`: Ensures deployment only happens after a successful image build.
* `if`: Conditional execution, typically only for the `main` branch for production deployments.
* `environment: production`: Links the job to a GitHub Environment, allowing for protection rules (e.g., manual approval, specific reviewers).
* `aws-actions/configure-aws-credentials@v2`: Example for setting up AWS CLI. Similar actions exist for Azure, GCP, etc.
* **Deployment Placeholder:** This section needs to be replaced with your specific deployment commands (e.g., `kubectl`, `aws cli`, Helm charts, Terraform).
* `sarisia/actions-slack-notify@v1.2.1`: Optional step for sending notifications to Slack.
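The stages explained above can be summarized in a minimal workflow skeleton. This is a sketch only: the registry name, cluster details, secret names, and deployment command are placeholders to adapt to your project.

```yaml
# .github/workflows/main.yml — skeleton matching the explanation above
name: CI/CD

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]

env:
  NODE_VERSION: '18'

jobs:
  lint-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: npm
      - run: npm ci
      - run: npm run lint
      - run: npm test

  build-and-push-docker:
    needs: lint-and-test
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/setup-buildx-action@v2
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: your-registry/your-app:${{ github.sha }},your-registry/your-app:${{ github.ref_name }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    needs: build-and-push-docker
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      # Replace with your actual deployment commands (kubectl, Helm, Terraform, ...)
      - run: echo "deploying ${{ github.sha }}"
```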
#### 3.3. Best Practices for GitHub Actions
* **Secrets Management:** Store sensitive information (API keys, tokens) in GitHub Secrets. Never hardcode them.
* **Reusable Workflows:** For complex or repetitive logic, create reusable workflows.
* **Matrix Strategies:** Use `strategy.matrix` to run jobs across different versions, operating systems, or configurations.
* **Artifacts:** Use `actions/upload-artifact` and `actions/download-artifact` to pass files between jobs or store build outputs.
* **Self-Hosted Runners:** For specific hardware requirements or on-premise environments, use self-hosted runners.
* **Conditional Logic:** Leverage `if` conditions for fine-grained control over job and step execution.
* **Environments:** Use GitHub Environments for deployment protection rules and environment-specific secrets.
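As an example of the matrix strategy mentioned above, a test job can fan out across Node.js versions and operating systems; the versions listed here are an assumption, not a recommendation:

```yaml
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        node-version: [18, 20]
        os: [ubuntu-latest, windows-latest]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm test   # runs once per (node-version, os) combination
```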
---
### 4. GitLab CI Configuration
GitLab CI/CD is built directly into GitLab, using a `.gitlab-ci.yml` file in the root of your repository to define your pipeline.
#### 4.1. Example GitLab CI Pipeline (`.gitlab-ci.yml`)
This analysis identifies the core infrastructure requirements and considerations for generating robust and efficient CI/CD pipelines across GitHub Actions, GitLab CI, and Jenkins, with a focus on testing, linting, building, and deployment stages.
The success of a CI/CD pipeline heavily relies on a well-defined and adequately provisioned infrastructure. This initial analysis focuses on identifying the critical infrastructure pillars and common considerations across various CI/CD platforms. Without specific project details, this report outlines a comprehensive framework for assessing infrastructure needs, providing a foundation for subsequent pipeline generation.
Our goal is to ensure the generated pipelines are not only functional but also performant, secure, scalable, and cost-effective, aligning with modern DevOps best practices.
Regardless of the chosen CI/CD platform, several fundamental infrastructure categories must be addressed to support the various stages of a pipeline:
* **Compute (Runners/Agents):**
  * Purpose: Execute pipeline jobs (e.g., compile code, run tests, lint, build Docker images).
  * Considerations: CPU, RAM, OS (Linux, Windows, macOS), availability of specific tools/runtimes.
* **Storage:**
  * Purpose: Store artifacts (build outputs), cache dependencies, logs, temporary files, Docker images.
  * Considerations: Performance, capacity, retention policies, cost, accessibility.
* **Networking:**
  * Purpose: Allow CI/CD runners to fetch code, download dependencies, push artifacts, communicate with internal services, and deploy to target environments.
  * Considerations: Bandwidth, latency, firewall rules, private network access, proxy configurations.
* **Secrets Management:**
  * Purpose: Securely store and access sensitive information (API keys, database credentials, access tokens) required during pipeline execution and deployment.
  * Considerations: Encryption, access control (IAM roles), audit trails, integration with secret management systems.
* **Monitoring and Logging:**
  * Purpose: Observe pipeline health and performance, identify bottlenecks, and troubleshoot failures.
  * Considerations: Centralized logging, metrics collection, alerting mechanisms, dashboarding.
* **Deployment Targets:**
  * Purpose: The environment(s) where the built application will be deployed.
  * Considerations: Cloud provider (AWS, Azure, GCP), Kubernetes clusters, virtual machines, serverless platforms, on-premise servers.
Each CI/CD platform offers distinct approaches and capabilities regarding infrastructure management.
**GitHub Actions**
* GitHub-hosted Runners: Managed by GitHub, offering various OS (Ubuntu, Windows, macOS) and pre-installed tools. Ideal for public repositories or projects with standard requirements. Scalability is handled by GitHub.
* Self-hosted Runners: Deployable on user-managed infrastructure (VMs, containers, on-premise). Necessary for private network access, custom hardware, specific software requirements, or cost optimization at scale. Requires user to manage OS, patching, and scaling.
* Artifacts: Stored by GitHub for a defined retention period (default 90 days). Capacity limits apply.
* Caches: GitHub Actions caching mechanism for dependencies (e.g., npm, Maven) to speed up subsequent runs.
* Container Registry: Integration with GitHub Container Registry (GHCR) or external registries (Docker Hub, AWS ECR, Azure CR, GCP Artifact Registry).
* GitHub Secrets: Encrypted environment variables for sensitive data, scoped to repository or organization.
* OpenID Connect (OIDC): Securely authenticate and authorize workflows to cloud providers (AWS, Azure, GCP) without long-lived credentials.
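As a sketch of the OIDC approach, a job can assume a short-lived AWS role instead of using static keys. The role ARN below is a placeholder; the IAM role must be configured to trust GitHub's OIDC provider:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # allow the job to request an OIDC token
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy
          aws-region: us-east-1
      - run: aws sts get-caller-identity   # verify the assumed role
```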
**GitLab CI**
* GitLab.com Shared Runners: Managed by GitLab, offering a pool of shared compute resources. Suitable for public and private projects on GitLab.com, with usage quotas.
* Specific Runners: User-managed gitlab-runner instances, deployable on VMs, Kubernetes, or Docker. Essential for private network access, custom environments, or dedicated performance. Offers greater control over scaling, environment, and cost.
* Job Artifacts: Stored on GitLab's object storage (GitLab.com) or user-configured storage (self-managed GitLab). Configurable retention.
* Dependency Proxy/Package Registry: Built-in features to cache and manage dependencies and packages within GitLab.
* Container Registry: Integrated Docker registry within GitLab, allowing storage of Docker images directly alongside repositories.
* CI/CD Variables: Encrypted variables (project or group level) for sensitive data.
* HashiCorp Vault Integration: Native integration for external secret management.
* Cloud Provider IAM: Leveraging cloud provider IAM roles for deployment.
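With the native HashiCorp Vault integration (a GitLab Premium feature), a job can pull a secret directly in `.gitlab-ci.yml`; the Vault path and secrets engine name below are placeholders:

```yaml
deploy_job:
  stage: deploy
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password@ops   # field 'password' under the 'ops' engine at 'production/db'
  script:
    - ./deploy.sh   # by default the secret is written to a temp file whose path is in $DATABASE_PASSWORD
```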
**Jenkins**
* Jenkins Master: The central orchestrator, managing jobs, plugins, and agents. Requires stable compute (VM, container) with sufficient resources.
* Jenkins Agents (Nodes): Distributed workers executing jobs. Can be provisioned as static VMs, dynamically provisioned containers (Docker, Kubernetes), or cloud instances (EC2, Azure VM, GCP Compute Engine). Offers maximum flexibility in environment setup.
* Master Storage: For Jenkins configuration, job history, plugin data. Requires persistent, performant storage.
* Agent Workspace: Temporary storage for job execution.
* Artifacts: Stored on master or external artifact repositories (e.g., Nexus, Artifactory, S3).
* Caches: Plugin-dependent caching mechanisms.
* Credentials Plugin: Securely stores various credential types (username/password, SSH keys, secret text).
* Plugins for External Secret Managers: Integration with HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, etc.
* Role-Based Access Control (RBAC): Granular control over who can access and configure Jenkins.
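Although a full Jenkins example is outside the scope of this analysis, a declarative `Jenkinsfile` mirroring the lint/test/build/deploy stages used throughout this document might look like the following sketch; the credential ID, registry name, and deployment command are placeholders:

```groovy
// Jenkinsfile — declarative pipeline sketch; adapt names to your setup
pipeline {
    agent { docker { image 'node:18-alpine' } }
    stages {
        stage('Lint') {
            steps { sh 'npm ci && npm run lint' }
        }
        stage('Test') {
            steps { sh 'npm test' }
        }
        stage('Build & Push Image') {
            agent any  // requires a node with the Docker CLI available
            steps {
                withCredentials([usernamePassword(credentialsId: 'docker-registry',
                                                  usernameVariable: 'REG_USER',
                                                  passwordVariable: 'REG_PASS')]) {
                    sh '''
                      docker login -u "$REG_USER" -p "$REG_PASS"
                      docker build -t your-docker-repo/your-app:${GIT_COMMIT} .
                      docker push your-docker-repo/your-app:${GIT_COMMIT}
                    '''
                }
            }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps {
                sh 'echo "Replace with kubectl/Helm deployment commands"'
            }
        }
    }
}
```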
**Scaling Strategies**
* Horizontal Scaling: Adding more runners/agents to handle increased load.
* Vertical Scaling: Increasing resources of existing runners/agents for larger, more intensive tasks.
* Elasticity: Dynamically provisioning and de-provisioning runners/agents based on demand (e.g., Kubernetes-based runners, cloud auto-scaling groups).
* Deployment tooling: Kubernetes clients (`kubectl` or client libraries) or GitOps tools (Argo CD, Flux CD).

To generate a truly effective and tailored CI/CD pipeline, specific project details are required: language and runtime, build tooling, test frameworks, target environments, and registry and secret-management conventions.
```yaml
stages:
  - lint
  - test
  - build
  - deploy

variables:
  NODE_VERSION: "18.x"
  DOCKER_IMAGE_NAME: $CI_REGISTRY_IMAGE     # Automatically uses GitLab's built-in registry
  DOCKER_TAG: $CI_COMMIT_SHA                # Tag with commit SHA for uniqueness
  DOCKER_BRANCH_TAG: $CI_COMMIT_REF_SLUG    # Tag with branch name (slugified)

default:
  image: node:${NODE_VERSION}-alpine        # Lightweight Node.js image by default
  before_script:
    - npm ci --cache .npm --prefer-offline  # Install dependencies, use cache
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .npm/
      - node_modules/

lint_job:
  stage: lint
  script:
    - npm run lint
  artifacts:
    when: always
    reports:
      junit: gl-lint-report.xml             # Example if your linter can generate JUnit reports
  allow_failure: false                      # Fail the pipeline if linting fails

test_job:
  stage: test
  script:
    - npm test
  artifacts:
    when: always
    reports:
      junit: gl-test-report.xml             # For displaying test results in the GitLab UI
  allow_failure: false                      # Fail the pipeline if tests fail

build_docker_image:
  stage: build
  image: docker:latest                      # Image with the Docker CLI
  services:
    - docker:dind                           # Docker-in-Docker service provides the daemon
  variables:
    DOCKER_TLS_CERTDIR: "/certs"            # Required for Docker-in-Docker
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t ${DOCKER_IMAGE_NAME}:${DOCKER_TAG} -t ${DOCKER_IMAGE_NAME}:${DOCKER_BRANCH_TAG} .
    - docker push ${DOCKER_IMAGE_NAME}:${DOCKER_TAG}
    - docker push ${DOCKER_IMAGE_NAME}:${DOCKER_BRANCH_TAG}
  only:
    - main
    - develop
  # Optionally, store the Docker image as a GitLab artifact (pushing to the registry is primary):
  # artifacts:
  #   paths:
  #     - image.tar
  #   expire_in: 1 week

deploy_staging:
  stage: deploy
  image: alpine/git                         # Lightweight image; swap in one carrying your deployment tooling
  script:
    - echo "Deploying ${DOCKER_IMAGE_NAME}:${DOCKER_BRANCH_TAG} to Staging environment..."
    # Replace with your actual staging deployment commands, e.g.:
    # - kubectl config use-context my-staging-cluster
    # - kubectl set image deployment/my-app my-container=${DOCKER_IMAGE_NAME}:${DOCKER_BRANCH_TAG} -n staging
    - echo "Deployment to Staging initiated."
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - develop                               # Deploy the develop branch to staging
  when: manual                              # Manual deployment to staging

deploy_production:
  stage: deploy
  image: alpine/git
  script:
    - echo "Deploying ${DOCKER_IMAGE_NAME}:${DOCKER_TAG} to Production environment..."
    # Replace with your actual production deployment commands
  environment:
    name: production
    url: https://example.com
  only:
    - main                                  # Deploy the main branch to production
  when: manual
```
A Continuous Integration/Continuous Delivery (CI/CD) pipeline automates the steps in your software delivery process, from code commit to deployment. This automation reduces manual errors, speeds up delivery, and improves code quality.
Key Stages of a CI/CD Pipeline:
* **Lint:** Static analysis to catch style and correctness issues early.
* **Test:** Automated unit and integration tests.
* **Build:** Produce deployable artifacts (e.g., a Docker image).
* **Deploy:** Release to staging and production environments.
This document will provide template configurations for each platform, along with explanations and best practices, allowing you to adapt them to your specific project needs.
Regardless of the chosen platform, several core principles and components are crucial for effective CI/CD:
* **Triggers:** Pipelines run automatically on repository events (`push`, `pull_request`, `merge_request`).
* **Pipeline as Code:** Configuration files (`.github/workflows/*.yml`, `.gitlab-ci.yml`, `Jenkinsfile`) are stored alongside your source code. This ensures versioning, reviewability, and consistency.
* **Caching:** Cache dependencies (e.g., `node_modules`, Maven repositories) to speed up subsequent pipeline runs.
* **Conditional Deployment:** Deploy only under specific conditions (e.g., on the `main` branch, or on a tag).

GitHub Actions allows you to automate, customize, and execute your software development workflows directly in your repository.
Create a workflow file at `.github/workflows/main.yml` (or any other descriptive name).
```yaml
name: CI/CD Pipeline

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main
      - develop

jobs:
  lint:
    name: Lint Code
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '18'
      - name: Install dependencies
        run: npm ci
      - name: Run linter
        run: npm run lint

  test:
    name: Run Tests
    runs-on: ubuntu-latest
    needs: lint  # This job depends on the 'lint' job completing successfully
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '18'
      - name: Install dependencies
        run: npm ci
      - name: Run unit and integration tests
        run: npm test

  build:
    name: Build Application
    runs-on: ubuntu-latest
    needs: test  # This job depends on the 'test' job completing successfully
    outputs:
      image_tag: ${{ steps.set_image_tag.outputs.tag }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Generate image tag
        id: set_image_tag
        run: echo "tag=${GITHUB_SHA::7}" >> $GITHUB_OUTPUT  # Short commit SHA as tag
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: your-docker-repo/your-app:${{ steps.set_image_tag.outputs.tag }},your-docker-repo/your-app:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Upload build artifact (e.g., JAR/WAR if not Dockerized)
        uses: actions/upload-artifact@v4
        with:
          name: application-artifact
          path: ./path/to/your/build/output

  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build
    environment: Staging  # Link to a GitHub Environment for secrets/protection rules
    if: github.ref == 'refs/heads/develop'  # Only deploy to staging on 'develop' pushes
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Download application artifact
        uses: actions/download-artifact@v4
        with:
          name: application-artifact
          path: ./downloaded-artifact-path  # Only needed if not Dockerized
      - name: Log in to cloud provider (e.g., AWS ECR/GCP GKE)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Update Kubernetes deployment on Staging
        run: |
          aws eks update-kubeconfig --name your-eks-cluster-staging --region us-east-1
          kubectl set image deployment/your-app your-app-container=your-docker-repo/your-app:${{ needs.build.outputs.image_tag }} -n your-staging-namespace
          kubectl rollout status deployment/your-app -n your-staging-namespace
        env:
          KUBECONFIG: /home/runner/.kube/config  # Ensure kubectl uses the correct config
      - name: Notify Slack on Staging Deployment
        uses: rtCamp/action-slack-notify@v2
        env:
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}  # rtCamp's action reads SLACK_WEBHOOK
          SLACK_MESSAGE: "Application deployed to Staging (${{ needs.build.outputs.image_tag }})"
          SLACK_COLOR: good

  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: deploy-staging  # Optional: ensure the staging deploy passed first
    environment: Production  # Link to a GitHub Environment for protection rules (e.g., manual approval)
    # On 'main', deploy-staging is skipped, which would normally skip this job too;
    # the status-check functions below let it run anyway as long as nothing failed.
    if: ${{ !failure() && !cancelled() && github.ref == 'refs/heads/main' }}
    # Optional: use the expanded environment form to attach a URL:
    # environment:
    #   name: Production
    #   url: https://your-prod-app.com
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Log in to cloud provider (e.g., AWS ECR/GCP GKE)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Update Kubernetes deployment on Production
        run: |
          aws eks update-kubeconfig --name your-eks-cluster-production --region us-east-1
          kubectl set image deployment/your-app your-app-container=your-docker-repo/your-app:${{ needs.build.outputs.image_tag }} -n your-production-namespace
          kubectl rollout status deployment/your-app -n your-production-namespace
        env:
          KUBECONFIG: /home/runner/.kube/config
      - name: Notify Slack on Production Deployment
        uses: rtCamp/action-slack-notify@v2
        env:
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
          SLACK_MESSAGE: "Application deployed to Production (${{ needs.build.outputs.image_tag }})"
          SLACK_COLOR: good
```
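The `${GITHUB_SHA::7}` expression used when generating the image tag is ordinary Bash substring expansion, nothing GitHub-specific. A quick illustration with a made-up commit SHA:

```shell
# Bash substring expansion as used in the "Generate image tag" step
GITHUB_SHA="0123456789abcdef0123456789abcdef01234567"  # hypothetical full commit SHA
TAG="${GITHUB_SHA::7}"  # first 7 characters — the "short SHA"
echo "tag=${TAG}"       # prints: tag=0123456
```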
**Key concepts:**
* `on`: Defines the events that trigger the workflow.
* `jobs`: A workflow run is made up of one or more jobs. Jobs run in parallel by default, but you can define dependencies using `needs`.
* `runs-on`: Specifies the type of machine to run the job on (e.g., `ubuntu-latest`).
* `steps`: A sequence of tasks within a job; each step either `uses` a reusable action or `run`s shell commands.
* `secrets`: Securely store sensitive information, accessed via `${{ secrets.NAME }}`.
* `environment`: Links jobs to specific GitHub Environments for protection rules (e.g., manual approval, required reviewers) and environment-specific secrets.
* **Artifacts:** `actions/upload-artifact` and `actions/download-artifact` pass files between jobs or store build outputs.
* **Caching:** `actions/cache` can significantly speed up dependency installation.

GitLab CI is an integrated part of GitLab, allowing you to define your CI/CD pipelines directly within your GitLab repository.
Create a `.gitlab-ci.yml` file in the root of your repository.
```yaml
stages:
  - lint
  - test
  - build
  - deploy_staging
  - deploy_production

variables:
  DOCKER_IMAGE_NAME: your-docker-repo/your-app
  DOCKER_TAG_SHA: $CI_COMMIT_SHORT_SHA
  DOCKER_TAG_LATEST: latest

cache:
  paths:
    - node_modules/  # Example for Node.js projects

lint_job:
  stage: lint
  image: node:18-alpine
  script:
    - npm ci
    - npm run lint
  tags:
    - docker-runner  # Optional: use specific runners if available

test_job:
  stage: test
  image: node:18-alpine
  script:
    - npm ci
    - npm test
  tags:
    - docker-runner
  artifacts:
    when: always
    reports:
      junit: junit.xml  # Example for JUnit test reports

build_job:
  stage: build
  image: docker:latest
  services:
    - docker:dind  # Docker-in-Docker for building images
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY  # For GitLab Container Registry
    - docker build -t $DOCKER_IMAGE_NAME:$DOCKER_TAG_SHA .
    - docker push $DOCKER_IMAGE_NAME:$DOCKER_TAG_SHA
    - docker tag $DOCKER_IMAGE_NAME:$DOCKER_TAG_SHA $DOCKER_IMAGE_NAME:$DOCKER_TAG_LATEST
    - docker push $DOCKER_IMAGE_NAME:$DOCKER_TAG_LATEST
    # Example for a non-Dockerized artifact:
    # - npm run build
    # - mv build/ dist/
  artifacts:
    paths:
      - dist/  # Store non-Dockerized build artifacts
    expire_in: 1 week
  tags:
    - docker-runner

deploy_staging_job:
  stage: deploy_staging
  image: alpine/helm:3.10.0  # Or any image with kubectl, aws-cli, gcloud-cli
  script:
    # Example for AWS EKS deployment
    - apk add --no-cache curl python3 py3-pip aws-cli  # Install necessary tools
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set default.region $AWS_DEFAULT_REGION
    - aws eks update-kubeconfig --name your-eks-cluster-staging --region $AWS_DEFAULT_REGION
    - kubectl config use-context arn:aws:eks:$AWS_DEFAULT_REGION:123456789012:cluster/
```