This document provides complete CI/CD pipeline configurations for GitHub Actions, GitLab CI, and Jenkins. The configurations are robust templates that incorporate the essential stages of linting, testing, building, and deployment; they demonstrate best practices and serve as a solid foundation for your specific application requirements.
The objective of this step is to deliver actionable CI/CD pipeline configurations that can be directly integrated into your version control system or CI/CD server. Each configuration example is structured to be modular and easy to understand, allowing for straightforward customization to fit your project's technology stack, testing frameworks, and deployment targets.
For these examples, we will consider a generic Node.js application that utilizes:
* `npm` for dependency management and build scripts
* ESLint for linting (`npm run lint`)
* Jest for testing (`npm test`)
* `npm run build` (for frontend assets, if applicable) and a Docker image build

Before diving into the specific configurations, it's crucial to understand the common principles that guide effective CI/CD pipeline design:

* **Pipeline as code:** Pipeline definitions live in the repository alongside the application (`.github/workflows`, `.gitlab-ci.yml`, `Jenkinsfile`).
* **Sequential stages:** Pipelines are organized into stages (`lint`, `test`, `build`, `deploy`) that execute sequentially.
* **Dependency caching:** Dependencies (e.g., `node_modules`, Maven repositories) should be cached to accelerate job execution.
* **Environment promotion:** Deployments progress through environments (`dev`, `staging`, `production`), often requiring manual approval for critical environments.

### 3. GitHub Actions Configuration

GitHub Actions allows you to automate, customize, and execute your software development workflows directly in your repository. Workflows are defined in YAML files (`.github/workflows/*.yml`).
#### `github-ci-cd.yml`
**Key Features & Customization for GitHub Actions:**
* **Triggers (`on`):** Configured for `push` to `main` and `develop` branches, and `pull_request` events.
* **Environment Variables (`env`):** Define common variables like `NODE_VERSION`, `DOCKER_IMAGE_NAME`, and `DOCKER_REGISTRY`.
* **Jobs (`jobs`):**
* `lint`: Installs dependencies and runs `npm run lint`.
* `test`: Installs dependencies and runs `npm test`. `needs: lint` ensures sequential execution.
* `build`: Builds a Docker image, logs into GitHub Container Registry (`ghcr.io`), and pushes the tagged image. The tag is dynamically generated based on the branch and commit SHA.
* `deploy`: Conditionally runs only on `main` branch pushes. It uses AWS credentials (stored as GitHub Secrets) to update Kubeconfig and then applies Kubernetes manifests, replacing the image tag dynamically.
* **Secrets:** `GITHUB_TOKEN` is automatically provided for GitHub Container Registry. `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` must be configured as repository secrets.
* **Artifacts:** Test results can be uploaded as artifacts for later inspection.
* **Environments:** The `deploy` job uses a GitHub Environment (`Production`) for added protection and visibility, including optional manual approval.
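The `github-ci-cd.yml` workflow file itself is not reproduced in this section, so the following is a minimal sketch consistent with the features described above. The AWS region, EKS cluster name, Kubernetes manifest path, and the `IMAGE_TAG_PLACEHOLDER` token are assumptions for illustration:

```yaml
# Sketch of github-ci-cd.yml based on the feature list above (not the original file).
name: CI/CD

on:
  push:
    branches: [main, develop]
  pull_request:

env:
  NODE_VERSION: '18.x'
  DOCKER_IMAGE_NAME: my-nodejs-app
  DOCKER_REGISTRY: ghcr.io

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: npm
      - run: npm ci
      - run: npm run lint

  test:
    needs: lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: npm
      - run: npm ci
      - run: npm test

  build:
    needs: test
    if: github.event_name == 'push' # PR ref names may contain characters invalid in image tags
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push image
        run: |
          # Tag derived from the branch name and short commit SHA
          IMAGE_TAG="${GITHUB_REF_NAME}-${GITHUB_SHA::7}"
          IMAGE="${DOCKER_REGISTRY}/${{ github.repository_owner }}/${DOCKER_IMAGE_NAME}:${IMAGE_TAG}"
          docker build -t "$IMAGE" .
          docker push "$IMAGE"

  deploy:
    needs: build
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: Production
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1 # assumed region
      - name: Deploy to Kubernetes
        run: |
          aws eks update-kubeconfig --name my-cluster # assumed cluster name
          IMAGE_TAG="${GITHUB_REF_NAME}-${GITHUB_SHA::7}"
          # Substitute the image tag into the manifest before applying it
          sed "s|IMAGE_TAG_PLACEHOLDER|${IMAGE_TAG}|g" k8s/deployment.yaml | kubectl apply -f -
```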
---
### Infrastructure Needs Analysis

*Workflow: DevOps Pipeline Generator, step `gemini → analyze_infrastructure_needs`*

This section outlines the initial analysis of infrastructure needs for generating your comprehensive CI/CD pipeline. The goal of this step is to identify the foundational components and environmental considerations crucial for designing an effective, scalable, and secure DevOps pipeline using GitHub Actions, GitLab CI, or Jenkins.

Given the broad request for a "DevOps Pipeline Generator," this analysis focuses on defining the critical data points required to tailor the pipeline to your specific application, team, and operational context. Without these specifics, we cannot precisely define the compute, storage, networking, and security infrastructure required. This deliverable therefore serves as a structured framework to gather that essential information.

A robust CI/CD pipeline relies on several interconnected infrastructure components. Understanding these categories is the first step in identifying your specific requirements:

**Compute (CI/CD runners):**

* Managed Runners: Provided by the CI/CD platform (e.g., GitHub-hosted runners, GitLab.com shared runners).
* Self-Hosted Runners: Virtual machines, containers, or Kubernetes clusters managed by you, offering more control and custom environments.

**Artifact storage:**

* Cloud Storage: (e.g., AWS S3, Azure Blob Storage, Google Cloud Storage) for cost-effective, scalable, and durable storage.
* Dedicated Artifact Repositories: (e.g., JFrog Artifactory, Sonatype Nexus) for advanced artifact management, versioning, and security.

**Container registries:**

* Cloud-Native Registries: (e.g., Amazon ECR, Azure Container Registry, Google Container Registry, GitLab Container Registry).
* Self-Hosted Registries: (e.g., Docker Registry).

**Deployment targets:**

* Virtual Machines (VMs): (e.g., AWS EC2, Azure VMs, GCP Compute Engine, on-premise servers).
* Container Orchestration Platforms: (e.g., Kubernetes (EKS, AKS, GKE), OpenShift).
* Serverless Platforms: (e.g., AWS Lambda, Azure Functions, Google Cloud Functions).
* Platform-as-a-Service (PaaS): (e.g., AWS Elastic Beanstalk, Azure App Service, Google App Engine, Heroku).

**Secrets management:**

* Cloud Secret Managers: (e.g., AWS Secrets Manager, Azure Key Vault, GCP Secret Manager).
* Dedicated Solutions: (e.g., HashiCorp Vault).
* CI/CD Platform Integrations: (e.g., GitHub Secrets, GitLab CI/CD variables with masked/protected flags, Jenkins Credentials).

**Monitoring and logging:**

* Cloud-Native: (e.g., AWS CloudWatch, Azure Monitor, GCP Cloud Logging/Monitoring).
* Third-Party: (e.g., Prometheus, Grafana, ELK Stack, Datadog, Splunk).

To precisely analyze your infrastructure needs, we require detailed information across several critical dimensions. These factors directly impact the choice of tools, architecture, and cost:

**Application characteristics:**

* Application Type: (e.g., web application (frontend/backend), API, microservice, mobile backend, data processing, machine learning model, static site).
* Programming Languages & Frameworks: (e.g., Node.js, Python, Java (Spring Boot), .NET Core, Go, Ruby on Rails, PHP, React, Angular, Vue.js).
* Database Technologies: (e.g., PostgreSQL, MySQL, MongoDB, Redis, DynamoDB).
* Containerization: Is your application already containerized (Docker, Podman)? Are you planning to containerize it?
* Build System: (e.g., Maven, Gradle, npm, yarn, pip, Go Modules).

**Cloud and environment:**

* Cloud Provider(s): (e.g., AWS, Azure, Google Cloud Platform (GCP), multi-cloud, hybrid cloud, on-premise).
* Target Environments: development, staging, production. Do these environments have different infrastructure needs?
* Existing Infrastructure: What services are you currently using? (e.g., specific Kubernetes clusters, existing VMs, PaaS services).

**CI/CD platform preference:**

* Primary Choice: GitHub Actions, GitLab CI, or Jenkins.
* Secondary/Backup Choice: If any.
* Rationale: Why is this platform preferred (e.g., existing ecosystem, team familiarity, specific features)?

**Scale and frequency:**

* Expected Build Frequency: How many builds per day/week?
* Peak Load Requirements: Are there specific times when build/deployment activity will be high?
* Deployment Frequency: How often do you expect to deploy to production?
* Geographic Distribution: Are your users or deployment targets geographically distributed?

**Compliance and security:**

* Industry Regulations: (e.g., HIPAA, GDPR, PCI DSS, SOC 2).
* Security Scans: Requirements for static application security testing (SAST), dynamic application security testing (DAST), software composition analysis (SCA), and vulnerability scanning.
* Access Control: Specific requirements for who can trigger builds, approve deployments, or access secrets.

**Team and operations:**

* Team Size & Expertise: Level of familiarity with CI/CD tools, cloud platforms, and infrastructure as code.
* Maintenance & Support: Desired level of operational overhead (managed services vs. self-managed).
* Budget Constraints: Any specific budget limitations for infrastructure and tooling.

Analyzing your infrastructure needs also involves weighing current industry trends and best practices to ensure a future-proof and efficient pipeline; concrete preliminary recommendations can be made once the details below are confirmed.

To proceed with generating your tailored CI/CD pipeline configuration, we require the specific details outlined above. Please provide comprehensive answers to the following:

* What is the primary programming language and framework of your application?
* Is the application containerized (e.g., Docker)? If not, is containerization planned?
* What database technologies are you using or planning to use?
* What is your primary cloud provider (AWS, Azure, GCP, on-premise, hybrid)?
* What is your target deployment environment (e.g., Kubernetes, VMs, serverless, PaaS)?
* Are there any existing infrastructure components (e.g., specific Kubernetes clusters, existing artifact repositories) that must be integrated?
* Please confirm your preferred CI/CD platform: GitHub Actions, GitLab CI, or Jenkins.
* Briefly state why this platform is preferred.
* Are there any specific industry regulations or compliance requirements (e.g., HIPAA, PCI DSS) that your pipeline and infrastructure must adhere to?
* Do you require specific security scanning tools to be integrated (SAST, DAST, SCA)?
* What are your estimated daily/weekly build and deployment frequencies?
* What is your team's familiarity level with the chosen CI/CD platform and cloud provider?
* Are there any specific budget constraints that should influence infrastructure choices?

Upon receiving this information, we will proceed to Step 2: Define Pipeline Stages & Logic, where we will translate your requirements into a detailed pipeline structure and logic, ready for configuration generation.

---

### 4. GitLab CI Configuration

GitLab CI/CD is built into GitLab and uses a `.gitlab-ci.yml` file at the root of your repository to define your pipeline.

#### `.gitlab-ci.yml`
```yaml
stages:
  - lint
  - test
  - build
  - deploy

variables:
  NODE_VERSION: '18.x'
  DOCKER_IMAGE_NAME: 'my-nodejs-app'
  DOCKER_REGISTRY: '${CI_REGISTRY}' # GitLab Container Registry

default:
  image: node:${NODE_VERSION}-alpine # Default image for all jobs unless overridden
  before_script:
    - npm ci --cache .npm --prefer-offline # Use npm ci for clean installs and cache
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .npm/
      - node_modules/

lint_job:
  stage: lint
  script:
    - echo "Running ESLint..."
    - npm run lint # Assuming a 'lint' script in package.json
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH == "develop"

test_job:
  stage: test
  script:
    - echo "Running Jest Tests..."
    - npm test # Assuming a 'test' script in package.json
  artifacts:
    when: always
    reports:
      junit: junit.xml # Path to your test results file (e.g., generated by jest-junit)
    paths:
      - coverage/ # Optional: upload coverage reports
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH == "develop"

build_docker_image:
  stage: build
  image: docker:latest # Use an image that has the Docker client installed
  services:
    - docker:dind # Docker-in-Docker service for building images
  variables:
    DOCKER_TLS_CERTDIR: "/certs" # Required for the dind service
  script:
    - echo "Building Docker image..."
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - |
      if [ "$CI_COMMIT_BRANCH" = "main" ]; then
        IMAGE_TAG="latest"
      elif [ "$CI_COMMIT_BRANCH" = "develop" ]; then
        IMAGE_TAG="develop-${CI_COMMIT_SHORT_SHA}"
      else
        IMAGE_TAG="${CI_COMMIT_SHORT_SHA}"
      fi
    - docker build -t ${DOCKER_REGISTRY}/${DOCKER_IMAGE_NAME}:${IMAGE_TAG} .
    - docker push ${DOCKER_REGISTRY}/${DOCKER_IMAGE_NAME}:${IMAGE_TAG}
    - echo "Docker image pushed as ${DOCKER_REGISTRY}/${DOCKER_IMAGE_NAME}:${IMAGE_TAG}"
```
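For local sanity checks, the branch-to-tag selection used by the `build_docker_image` job can be mirrored in a small POSIX shell function. This helper is illustrative only and not part of the pipeline:

```shell
#!/bin/sh
# Mirror the image-tag selection used by the build_docker_image job:
# main -> "latest", develop -> "develop-<short sha>", otherwise "<short sha>".
image_tag() {
  branch="$1"
  short_sha="$2"
  if [ "$branch" = "main" ]; then
    echo "latest"
  elif [ "$branch" = "develop" ]; then
    echo "develop-${short_sha}"
  else
    echo "${short_sha}"
  fi
}

image_tag main abc1234      # prints: latest
image_tag develop abc1234   # prints: develop-abc1234
image_tag feature/x abc1234 # prints: abc1234
```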
---

### 5. Complete Pipeline Configurations

This deliverable provides validated and documented CI/CD pipeline configurations tailored for modern software development workflows. The example pipeline definitions below cover three leading CI/CD platforms: GitHub Actions, GitLab CI, and Jenkins. Each configuration supports a typical application lifecycle, encompassing linting, automated testing, building, and deployment to staging and production environments.

These configurations are designed for clarity, maintainability, and security, leveraging platform-specific features like secrets management, caching, and environment controls. The examples assume a generic application (e.g., a Node.js or Python application), but the structure is adaptable to most languages and frameworks.
#### GitHub Actions: `.github/workflows/main.yml`

This GitHub Actions workflow is triggered on pushes to `main` and on pull requests targeting `main`. It includes linting, testing, building, and conditional deployment stages.

```yaml
name: CI/CD Pipeline

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

env:
  NODE_VERSION: '18.x' # Example: specify the Node.js version
  # Add other environment variables as needed, e.g., for Python, Java, Go

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm' # Or 'yarn', 'pnpm'
      - name: Install dependencies
        run: npm ci # Or yarn install --frozen-lockfile
      - name: Run Linter (e.g., ESLint)
        run: npm run lint # Or your specific lint command

  test:
    needs: lint
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run Unit and Integration Tests
        run: npm test # Or your specific test command
      # Example: upload test results for reporting
      - name: Upload test results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-results
          path: ./test-results.xml # Adjust path to your test report file

  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Build Application
        run: npm run build # Or your specific build command (e.g., webpack, maven, go build)
      - name: Archive Build Artifacts
        uses: actions/upload-artifact@v4
        with:
          name: application-build-${{ github.sha }}
          path: ./dist # Adjust path to your build output directory

  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    environment: Staging # GitHub Environments for protection rules and secrets
    if: github.ref == 'refs/heads/main' # Only deploy the main branch to staging
    steps:
      - name: Download Build Artifacts
        uses: actions/download-artifact@v4
        with:
          name: application-build-${{ github.sha }}
          path: ./build-output
      - name: Deploy to Staging Environment
        run: |
          echo "Deploying to Staging..."
          # Your actual deployment commands here, e.g.:
          # aws s3 sync ./build-output s3://${{ secrets.AWS_S3_BUCKET_STAGING }}
          # kubectl apply -f kubernetes/staging.yaml
          # ssh user@staging-server "deploy.sh"
          echo "Deployment to Staging complete."
        env:
          # Access environment-specific secrets
          STAGING_API_KEY: ${{ secrets.STAGING_API_KEY }}

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: Production # GitHub Environments for protection rules and secrets
    if: github.ref == 'refs/heads/main' # Only deploy the main branch
    # Requires manual approval in the GitHub UI if the 'Production' environment is configured for it
    steps:
      - name: Download Build Artifacts
        uses: actions/download-artifact@v4
        with:
          name: application-build-${{ github.sha }}
          path: ./build-output
      - name: Deploy to Production Environment
        run: |
          echo "Deploying to Production..."
          # Your actual production deployment commands here
          echo "Deployment to Production complete."
        env:
          # Access environment-specific secrets
          PROD_API_KEY: ${{ secrets.PROD_API_KEY }}
```
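The `test-results.xml` file uploaded in the `test` job is not something Jest produces by default. One way to generate it, assuming Jest with the `jest-junit` reporter package, is a `package.json` fragment like the following:

```json
{
  "scripts": {
    "test": "jest --ci --reporters=default --reporters=jest-junit"
  },
  "jest-junit": {
    "outputDirectory": ".",
    "outputName": "test-results.xml"
  }
}
```

Other test runners (Mocha, Vitest, pytest) have equivalent JUnit-format reporters; adjust the `path` in the upload step to match whatever file your reporter writes.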
#### GitLab CI: `.gitlab-ci.yml`

This GitLab CI pipeline defines stages for linting, testing, building, and deployment, with specific rules for branches and environments.

```yaml
image: node:18 # Example: specify a base Docker image

variables:
  # Add global variables here
  NPM_CACHE_DIR: "$CI_PROJECT_DIR/.npm" # Cache directory for npm

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm/
    - node_modules/
  policy: pull-push # Cache dependencies between jobs

stages:
  - lint
  - test
  - build
  - deploy

# Hidden key holding a reusable command list (YAML anchor); GitLab
# flattens the aliased list when it is referenced inside script:.
.install_dependencies: &install_dependencies
  - npm config set cache ${NPM_CACHE_DIR}
  - npm ci --cache ${NPM_CACHE_DIR} --prefer-offline # Use cached dependencies

lint_job:
  stage: lint
  script:
    - *install_dependencies
    - npm run lint # Your specific lint command
  tags:
    - docker # Example: use a specific runner tag

test_job:
  stage: test
  script:
    - *install_dependencies
    - npm test # Your specific test command
  artifacts:
    when: always
    reports:
      junit:
        - junit.xml # Adjust path to your test report file (e.g., generated by Jest, Mocha)
  tags:
    - docker

build_job:
  stage: build
  script:
    - *install_dependencies
    - npm run build # Your specific build command
  artifacts:
    paths:
      - dist/ # Adjust path to your build output directory
    expire_in: 1 week
  tags:
    - docker

deploy_staging_job:
  stage: deploy
  image: alpine/git:latest # Example: a lightweight image for deployment scripts
  script:
    - echo "Deploying to Staging..."
    # Your actual deployment commands here, e.g.:
    # apk add --no-cache python3 py3-pip
    # pip install awscli
    # aws s3 sync ./dist s3://$AWS_S3_BUCKET_STAGING
    - echo "Deployment to Staging complete."
  environment:
    name: staging
    url: https://staging.example.com
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"' # Only deploy the main branch
  variables:
    # Access environment-specific variables defined in GitLab CI/CD settings
    STAGING_API_KEY: $STAGING_API_KEY
  tags:
    - docker

deploy_production_job:
  stage: deploy
  image: alpine/git:latest
  script:
    - echo "Deploying to Production..."
    # Your actual production deployment commands here
    - echo "Deployment to Production complete."
  environment:
    name: production
    url: https://app.example.com
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual # Requires manual approval in the GitLab UI
  variables:
    # Access environment-specific variables defined in GitLab CI/CD settings
    PROD_API_KEY: $PROD_API_KEY
  tags:
    - docker
```
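The hidden `.install_dependencies` key above shares commands via a YAML anchor; GitLab's native `extends:` keyword is an alternative for reusing whole job fragments. A brief sketch under the same npm assumptions:

```yaml
# Reuse via extends: instead of a YAML anchor (equivalent npm setup assumed).
.node_defaults:
  image: node:18
  before_script:
    - npm config set cache "$CI_PROJECT_DIR/.npm"
    - npm ci --cache "$CI_PROJECT_DIR/.npm" --prefer-offline

lint_job:
  extends: .node_defaults
  stage: lint
  script:
    - npm run lint
```

`extends:` merges the referenced keys into the job, which survives `include:`d files and is generally easier to read than anchors for multi-key reuse.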
#### Jenkins: `Jenkinsfile`

This Jenkins declarative pipeline, written in the Groovy DSL, defines stages for agent selection, build steps, and conditional deployment.

```groovy
// Jenkinsfile (Declarative Pipeline)
pipeline {
    agent {
        // Example: use a Docker agent for consistency
        docker {
            image 'node:18-alpine' // Or 'maven:3-jdk-11', 'python:3.9-slim'
            args '-v /var/run/docker.sock:/var/run/docker.sock' // If Docker-in-Docker is needed
        }
        // Alternatively, use a label for a specific Jenkins agent:
        // agent { label 'my-build-agent' }
    }

    environment {
        // Global environment variables
        NODE_OPTIONS = '--max-old-space-size=4096'
        // Add other environment variables as needed
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Lint') {
            steps {
                sh 'npm ci' // Install dependencies
                sh 'npm run lint' // Your specific lint command
            }
        }
        stage('Test') {
            steps {
                sh 'npm test' // Your specific test command
                // Example: publish JUnit test results
                junit '**/target/surefire-reports/*.xml' // Adjust path for your test reports
            }
        }
        stage('Build') {
            steps {
                sh 'npm run build' // Your specific build command
                // Archive artifacts
                archiveArtifacts artifacts: 'dist/**/*', fingerprint: true // Adjust path
            }
        }
        stage('Deploy to Staging') {
            when {
                branch 'main' // Only deploy the main branch to staging
            }
            steps {
                script {
                    echo "Downloading artifacts for deployment..."
                    // Retrieve archived artifacts (often unnecessary if they are already in the workspace)
                    // sh 'cp -r $(find . -name "dist" -type d) .' // Example: bring 'dist' to the workspace root
                    echo "Deploying to Staging..."
                    // Your actual deployment commands here, e.g.:
                    // withCredentials([usernamePassword(credentialsId: 'aws-deploy-user', usernameVariable: 'AWS_ACCESS_KEY_ID', passwordVariable: 'AWS_SECRET_ACCESS_KEY')]) {
                    //     sh 'aws s3 sync ./dist s3://staging-app-bucket'
                    // }
                    // sh 'ssh -i /var/lib/jenkins/.ssh/id_rsa user@staging-server "deploy_script.sh"'
                    echo "Deployment to Staging complete."
                }
            }
        }
        stage('Deploy to Production') {
            when {
                branch 'main' // Only deploy the main branch to production
            }
            steps {
                script {
                    // Manual approval step
                    input message: 'Proceed with deployment to Production?', ok: 'Deploy'
                    echo "Downloading artifacts for deployment..."
                    // Your artifact retrieval here, if needed
                    echo "Deploying to Production..."
                    // Your actual production deployment commands here
                    echo "Deployment to Production complete."
                }
            }
        }
    }

    post {
        always {
            cleanWs() // Clean up the workspace after pipeline execution
        }
        success {
            echo 'Pipeline finished successfully!'
            // Example: send a Slack notification
            // slackSend channel: '#devops-alerts', message: "Pipeline ${env.JOB_NAME} Build ${env.BUILD_NUMBER} SUCCESS"
        }
        failure {
            echo 'Pipeline failed!'
            // slackSend channel: '#devops-alerts', message: "Pipeline ${env.JOB_NAME} Build ${env.BUILD_NUMBER} FAILED"
        }
    }
}
```