This document provides comprehensive, detailed, and professional CI/CD pipeline configurations tailored for GitHub Actions, GitLab CI, and Jenkins. These configurations are designed to automate the software delivery process, incorporating essential stages such as linting, testing, building, and deployment to both staging and production environments.
The examples provided are based on a common scenario: a Node.js application that is containerized using Docker and deployed to a Kubernetes cluster. While specific commands are provided for this technology stack, the structure and principles are broadly applicable and can be adapted to other languages, frameworks, and deployment targets.
**Assumed Project Setup:**

* A `package.json`, `src/`, `test/`, and a `Dockerfile` at the repository root.
* A lint script (`npm run lint`) and a test script (`npm test`).
* A Kubernetes cluster for deployment (with `kubectl` access and a kubeconfig).
* A `main` branch for production deployments and feature branches for development; Pull Requests (GitHub) / Merge Requests (GitLab) trigger CI.

### 1. GitHub Actions Configuration

GitHub Actions provides a flexible way to automate workflows directly within your GitHub repository. The pipeline is defined in a YAML file within the `.github/workflows/` directory.
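For reference, a minimal `Dockerfile` consistent with this assumed layout might look like the following. This is a sketch only: the `node:18-alpine` base tag, port `3000`, and the `src/index.js` entrypoint are illustrative assumptions, not part of the generated configurations.

```dockerfile
# Sketch of a minimal Dockerfile for the assumed Node.js layout.
# Base image tag, port, and entrypoint are illustrative assumptions.
FROM node:18-alpine
WORKDIR /app
# Install production dependencies first to take advantage of layer caching
COPY package*.json ./
RUN npm ci --omit=dev
# Copy application sources
COPY src/ ./src/
EXPOSE 3000
CMD ["node", "src/index.js"]
```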
* **File Location**: `.github/workflows/ci-cd.yml`

**Key GitHub Actions Concepts Used:**

* **`on`**: Defines when the workflow runs (push to `main`, pull requests to `main`).
* **`env`**: Defines environment variables for the entire workflow.
* **`jobs`**: Independent units of work, run in parallel by default unless dependencies (`needs`) are specified.
* **`runs-on`**: Specifies the runner environment (e.g., `ubuntu-latest`).
* **`steps`**: A sequence of tasks within a job.
* **`uses`**: Reuses existing actions from the GitHub Marketplace (e.g., `actions/checkout@v4`, `docker/login-action@v3`).
* **`run`**: Executes shell commands.
* **`secrets`**: Securely accesses sensitive information (e.g., `secrets.KUBECONFIG_STAGING`). These must be configured in your GitHub repository settings.
* **`environment`**: Links a job to an environment, allowing for environment-specific secrets and manual approval gates.
* **`if`**: Conditional execution of jobs.

---

### 2. GitLab CI Configuration

GitLab CI/CD is tightly integrated with GitLab repositories, using a `.gitlab-ci.yml` file to define your pipeline.

* **File Location**: `.gitlab-ci.yml`
Workflow Step: 1 of 3 - Analyze Infrastructure Needs
Description: Generate complete CI/CD pipeline configurations for GitHub Actions, GitLab CI, or Jenkins with testing, linting, building, and deployment stages.
This document outlines the critical infrastructure and operational considerations required to design and implement a robust, efficient, and secure DevOps CI/CD pipeline. As the first step in generating your tailored pipeline configuration, this analysis focuses on gathering essential information about your current environment, application stack, deployment targets, and operational preferences. A thorough understanding of these elements is paramount to creating a pipeline that perfectly aligns with your technical requirements and business objectives.
The primary goal of this "Analyze Infrastructure Needs" step is to systematically identify all technical and non-technical factors that will influence the structure, tools, and configuration of your CI/CD pipeline. By detailing these needs, we can ensure the generated pipeline is robust, efficient, secure, and aligned with your business objectives.
To generate an effective CI/CD pipeline, we require detailed insights across several key domains. Below are the critical areas we need to assess, along with specific questions to guide the information gathering process.
The choice of CI/CD platform is foundational. This section identifies which platform best suits your existing ecosystem and team's expertise.
* Do you have a strong preference for GitHub Actions, GitLab CI, Jenkins, or another platform?
* What are the primary reasons for this preference (e.g., existing investment, team familiarity, specific features, cost model)?
* Where is your source code hosted (e.g., GitHub, GitLab, Bitbucket, Azure DevOps Repos)?
* What is your primary branching strategy (e.g., GitFlow, GitHub Flow, GitLab Flow, Trunk-Based Development)?
* Are there any existing CI/CD tools or build servers currently in use that need to be integrated or considered for migration?
* Are there specific notification channels (e.g., Slack, Microsoft Teams, email) for pipeline status updates?
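As an illustration of the notification question above, pipeline status can often be posted to Slack through a plain incoming-webhook call. The sketch below uses GitHub Actions step syntax, and `SLACK_WEBHOOK_URL` is an assumed repository secret (not part of your current setup):

```yaml
# Illustrative notification step (GitHub Actions syntax).
# SLACK_WEBHOOK_URL is an assumed secret pointing at a Slack incoming webhook.
- name: Notify Slack of pipeline status
  if: always() # Run whether earlier steps succeeded or failed
  run: |
    curl -X POST -H 'Content-type: application/json' \
      --data "{\"text\":\"Pipeline ${{ job.status }} for ${{ github.repository }} on ${{ github.ref_name }}\"}" \
      "${{ secrets.SLACK_WEBHOOK_URL }}"
```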
Understanding your application's core technologies is crucial for configuring appropriate build, test, and linting steps.
* What type of application are we building (e.g., Web Application, API Service, Mobile Backend, Desktop Application, Microservice, Serverless Function, Static Site)?
* Examples: Node.js (React, Angular, Vue), Python (Django, Flask), Java (Spring Boot, Quarkus), .NET (C#, ASP.NET Core), Go, Ruby (Rails), PHP (Laravel, Symfony), etc.
* Which build tools are used (e.g., npm, yarn, Maven, Gradle, pip, Poetry, make, Docker Compose, webpack)?
* What testing frameworks are employed (e.g., Jest, Pytest, JUnit, NUnit, RSpec, Cypress, Selenium)?
* Are unit, integration, end-to-end (E2E) tests, or performance tests part of the current workflow?
* Are there specific linters or code quality analysis tools in use (e.g., ESLint, Black, Flake8, RuboCop, Checkstyle, SonarQube)?
* Are there defined quality gates (e.g., minimum test coverage, maximum cyclomatic complexity)?
* Is the application containerized (e.g., Docker)? If so, are Dockerfiles already in place?
* What is the desired container registry (e.g., Docker Hub, AWS ECR, Azure Container Registry, GitLab Container Registry, GitHub Container Registry)?
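To make the quality-gate question above concrete: with Jest, a minimum coverage threshold can be enforced directly in `package.json`, so the test job fails whenever coverage drops below the limit. This is a sketch; the 80%/70% figures are illustrative examples, not recommendations:

```json
{
  "scripts": {
    "test": "jest --coverage"
  },
  "jest": {
    "coverageThreshold": {
      "global": {
        "lines": 80,
        "branches": 70
      }
    }
  }
}
```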
Detailed information about your deployment targets is essential for configuring deployment stages.
* Which cloud provider(s) are used (e.g., AWS, Azure, GCP, DigitalOcean, Vultr)?
* Are there any on-premise deployments?
* Where will the application be deployed (e.g., Virtual Machines/EC2, Kubernetes/EKS/AKS/GKE, Serverless/Lambda/Azure Functions/Cloud Run, PaaS/Elastic Beanstalk/Azure App Service, On-premise servers)?
* Are there multiple environments (e.g., Dev, Staging, Production)?
* Is IaC used for provisioning infrastructure (e.g., Terraform, CloudFormation, ARM Templates, Pulumi)? If so, where are these configurations stored?
* What is the desired deployment strategy (e.g., Rolling Update, Blue/Green, Canary, Immutable)?
* How are sensitive credentials and secrets managed for deployment (e.g., AWS Secrets Manager, Azure Key Vault, HashiCorp Vault, Kubernetes Secrets, environment variables)?
* Are database schema migrations required as part of the deployment process? If so, what tools are used (e.g., Flyway, Liquibase, Alembic, Django Migrations)?
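For context on the deployment-strategy question above, a rolling update is expressed in a Kubernetes Deployment manifest roughly as follows. This is an excerpt-style sketch: the `my-app` name, replica count, and surge settings are illustrative, and a complete manifest also needs `selector` and pod `template` sections:

```yaml
# Excerpt of a Kubernetes Deployment showing a RollingUpdate strategy.
# Name, replica count, and surge settings are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # At most one extra pod during rollout
      maxUnavailable: 0  # Never drop below the desired replica count
```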
Integrating security and quality checks throughout the pipeline is a modern best practice.
* Are Static Application Security Testing (SAST) tools (e.g., SonarQube, Snyk, Checkmarx) required?
* Are Dependency Scanning tools (e.g., Snyk, OWASP Dependency-Check) required?
* Are Container Image Scanning tools (e.g., Clair, Trivy, Aqua Security) required?
* Are Dynamic Application Security Testing (DAST) tools (e.g., OWASP ZAP, Burp Suite) required, possibly in a later stage?
* Are there specific industry compliance standards (e.g., PCI DSS, HIPAA, GDPR, SOC 2) that the pipeline needs to support or demonstrate adherence to?
* What are the key metrics or criteria that must be met for a build to progress to the next stage (e.g., 80% test coverage, zero critical security vulnerabilities)?
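As one concrete example of such a gate, a Trivy-based container-scan job in GitLab CI might look like the sketch below. The job fails the pipeline on critical findings via `--exit-code 1`; `$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHORT_SHA` are GitLab's built-in variables, and the stage name is an assumption to adapt to your pipeline:

```yaml
# Illustrative GitLab CI job: fail the pipeline on critical vulnerabilities.
container_scan:
  stage: test
  image:
    name: aquasec/trivy:latest
    entrypoint: [""] # Override the image entrypoint so GitLab can run the script
  script:
    - trivy image --exit-code 1 --severity CRITICAL "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```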
Efficient artifact management and caching significantly impact pipeline performance.
* Where should build artifacts (e.g., compiled binaries, Docker images, npm packages) be stored (e.g., AWS S3, Azure Blob Storage, JFrog Artifactory, GitLab/GitHub Package Registry)?
* Is there a need for dependency caching to speed up builds (e.g., npm cache, Maven local repo, pip cache)?
Integration with existing observability platforms ensures visibility into application health post-deployment.
* Are there existing monitoring tools that the deployment process should integrate with (e.g., Prometheus, Grafana, Datadog, New Relic, Splunk)?
* How are application logs currently collected and analyzed (e.g., ELK Stack, Splunk, CloudWatch Logs, Azure Monitor Logs)?
Understanding the team's capabilities and operational preferences helps tailor the pipeline for usability.
* What is the size and general CI/CD expertise level of the development and operations teams?
* Who will be responsible for maintaining and evolving the CI/CD pipeline?
* What are the desired build and deployment times? Are there specific performance bottlenecks to address?
Modern DevOps pipelines are evolving rapidly. Our analysis incorporates current industry best practices and trends:
* **Pipeline-as-Code**: Defining pipelines in version-controlled files (`.github/workflows`, `.gitlab-ci.yml`, `Jenkinsfile`) for version control, collaboration, and simplified management.

Based on the general scope of "DevOps Pipeline Generator," here are initial recommendations to ensure a successful pipeline generation:
* **Adopt Pipeline-as-Code**: Keep all pipeline definitions in the repository (`.github/workflows`, `.gitlab-ci.yml`, `Jenkinsfile`). This promotes version control, collaboration, and self-documentation.

To proceed with generating your bespoke CI/CD pipeline configuration, we require your input on the questions detailed in Section 3 ("Key Areas of Infrastructure Assessment").
Please provide specific answers and details for each category. The more precise and comprehensive your responses, the more accurately and effectively we can configure your pipeline.
Once we receive your detailed infrastructure needs, we will move to Step 2: Define Pipeline Stages & Logic, where we will translate these requirements into a concrete pipeline structure, including specific jobs, steps, and conditional logic.
Example GitLab CI configuration skeleton (`.gitlab-ci.yml`):
stages:
  - lint
  - test
  - build
  - deploy_staging
  - deploy_production

variables:
  # Common variables for the pipeline
  NODE_VERSION: "18"              # Major version; used in the node image tag below
  DOCKER_REGISTRY: $CI_REGISTRY   # Uses GitLab's built-in registry
  IMAGE_NAME: $CI_REGISTRY_IMAGE  # Automatically set by GitLab, e.g., registry.gitlab.com/user/repo
  KUBECTL_VERSION: "1.28.0"       # Specify kubectl version

cache:
  paths:
    - node_modules/

.install_dependencies: &install_dependencies_template
  before_script:
    # The node:*-alpine job images below already ship Node.js and npm,
    # so only a clean dependency install is needed here.
    - node -v
    - npm -v
    - npm ci # Clean install to ensure consistent dependencies

lint_job:
  stage: lint
  image: node:$NODE_VERSION-alpine # Use a specific Node.js image
  <<: *install_dependencies_template
  script:
    - npm run lint
  rules:
    - if: $CI_COMMIT_BRANCH == "main" || $CI_PIPELINE_SOURCE == "merge_request_event"

test_job:
  stage: test
  image: node:$NODE_VERSION-alpine
  <<: *install_dependencies_template
  script:
    - npm test
  rules:
    - if: $CI_COMMIT_BRANCH == "main" || $CI_PIPELINE_SOURCE == "merge_request_event"

build_docker_image_job:
  stage: build
  image: docker:latest # Use a Docker image with Docker CLI
  services:
    - docker:dind # Docker-in-Docker service
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_NAME:$CI_COMMIT_SHORT_SHA -t $IMAGE_NAME:latest .
    - docker push $IMAGE_NAME:$CI_COMMIT_SHORT_SHA
    - docker push $IMAGE_NAME:latest
  rules:
    - if: $CI_COMMIT_BRANCH == "main" # Only build main branch for deployment

deploy_staging_job:
  stage: deploy_staging
  image: alpine/helm:3.10.0 # Or any image with kubectl, e.g., lachlanevenson/k8s-kubectl
  before_script:
    # Install kubectl if the chosen image does not already provide it (sketch)
    - apk add --no-cache curl
    - curl -LO "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
    - chmod +x kubectl && mv kubectl /usr/local/bin/
  script:
    # KUBECONFIG_STAGING is an assumed CI/CD variable holding the staging kubeconfig;
    # "my-app" is a placeholder Deployment/container name - replace with your own.
    - echo "$KUBECONFIG_STAGING" > kubeconfig.yaml && export KUBECONFIG=kubeconfig.yaml
    - kubectl set image deployment/my-app my-app=$IMAGE_NAME:$CI_COMMIT_SHORT_SHA
  environment:
    name: staging
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
---

Project: DevOps Pipeline Generator
Step: 3 of 3 - validate_and_document
Date: October 26, 2023
Deliverable: Detailed CI/CD Pipeline Configurations
This document provides a comprehensive set of CI/CD pipeline configurations tailored for your project, covering GitHub Actions, GitLab CI, and Jenkins (Declarative Pipeline). Each configuration is designed to automate the software delivery lifecycle, incorporating essential stages such as linting, testing, building, and deployment.
Our goal is to deliver robust, maintainable, and secure pipeline definitions that can be directly integrated into your version control system or Jenkins instance. Each section includes a detailed overview, key features, the generated configuration code, validation notes, and guidance for usage and customization.
We have generated pipeline configurations based on a common scenario: a Node.js application that builds a Docker image and deploys it. This scenario allows us to demonstrate a wide range of common CI/CD practices applicable to many modern applications.
Common Pipeline Stages: linting, testing, building (Docker image), and deployment.
### 1. GitHub Actions Pipeline

Scenario: Node.js application, Dockerized, deploying to a generic server/cloud service.
Overview:
This GitHub Actions workflow automates the CI/CD process for a Node.js application. It leverages GitHub's native CI/CD capabilities to lint, test, build a Docker image, push it to Docker Hub, and trigger a deployment.
Key Features:
* Triggers on pushes to `main` and on pull requests targeting `main`.
* Docker image push and deployment run only on `main` branch pushes.

Generated Configuration Code (`.github/workflows/ci-cd.yml`):
name: Node.js Docker CI/CD

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

env:
  DOCKER_IMAGE_NAME: my-node-app
  DOCKER_REGISTRY: docker.io # Or your specific registry like ghcr.io, public.ecr.aws/<account-id>

jobs:
  build_and_test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20' # Specify your Node.js version

      - name: Cache Node.js modules
        uses: actions/cache@v4
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install dependencies
        run: npm ci

      - name: Run ESLint
        run: npm run lint # Assuming you have a 'lint' script in package.json

      - name: Run tests
        run: npm test -- --coverage # Assuming you have a 'test' script in package.json

      - name: Build Docker image
        run: docker build -t ${{ env.DOCKER_REGISTRY }}/${{ github.repository_owner }}/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }} .

      - name: Log in to Docker Hub
        if: github.event_name == 'push' && github.ref == 'refs/heads/main'
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}

      - name: Push Docker image to Docker Hub
        if: github.event_name == 'push' && github.ref == 'refs/heads/main'
        run: |
          docker push ${{ env.DOCKER_REGISTRY }}/${{ github.repository_owner }}/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }}
          docker tag ${{ env.DOCKER_REGISTRY }}/${{ github.repository_owner }}/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }} ${{ env.DOCKER_REGISTRY }}/${{ github.repository_owner }}/${{ env.DOCKER_IMAGE_NAME }}:latest
          docker push ${{ env.DOCKER_REGISTRY }}/${{ github.repository_owner }}/${{ env.DOCKER_IMAGE_NAME }}:latest

  deploy:
    runs-on: ubuntu-latest
    needs: build_and_test
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    steps:
      - name: Checkout code (for deployment scripts if needed)
        uses: actions/checkout@v4

      - name: Deploy to Staging/Production
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.DEPLOY_HOST }}
          username: ${{ secrets.DEPLOY_USERNAME }}
          key: ${{ secrets.DEPLOY_SSH_KEY }}
          script: |
            # Example deployment commands:
            # 1. Login to Docker registry on the server
            echo "${{ secrets.DOCKER_TOKEN }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin ${{ env.DOCKER_REGISTRY }}
            # 2. Pull the latest image
            docker pull ${{ env.DOCKER_REGISTRY }}/${{ github.repository_owner }}/${{ env.DOCKER_IMAGE_NAME }}:latest
            # 3. Stop and remove old container (if exists)
            docker stop ${{ env.DOCKER_IMAGE_NAME }} || true
            docker rm ${{ env.DOCKER_IMAGE_NAME }} || true
            # 4. Run new container
            docker run -d --name ${{ env.DOCKER_IMAGE_NAME }} -p 80:3000 ${{ env.DOCKER_REGISTRY }}/${{ github.repository_owner }}/${{ env.DOCKER_IMAGE_NAME }}:latest
            # 5. Clean up old images
            docker image prune -f
            echo "Deployment complete for ${{ env.DOCKER_IMAGE_NAME }}"
Validation Notes:
* Action versions are pinned (`@v4`, `@v1.0.0`) to prevent unexpected changes.
* The `needs` keyword ensures the `deploy` job runs only after `build_and_test` succeeds.
* Deployment commands use `docker stop ... || true` to be robust against containers not existing.
* `if` conditions restrict image pushes and deployment to `main` branch pushes.

Usage & Customization:
* **File Placement**: Create a `.github/workflows/` directory in your repository and save the content as `ci-cd.yml`.
* **`package.json` Scripts**: Ensure your `package.json` contains `lint` and `test` scripts.
* **Dockerfile**: Ensure a `Dockerfile` exists in your repository root.
* **Required Secrets** (configure in your repository settings):
  * `DOCKER_USERNAME`: Your Docker Hub username.
  * `DOCKER_TOKEN`: A Docker Hub Personal Access Token with push permissions.
  * `DEPLOY_HOST`: IP address or hostname of your deployment server.
  * `DEPLOY_USERNAME`: SSH username for the deployment server.
  * `DEPLOY_SSH_KEY`: Private SSH key for authentication to the deployment server.
* **Variables**: Adjust `DOCKER_IMAGE_NAME` and `DOCKER_REGISTRY` as needed.
* **Deployment Logic**: Customize the `script` in the "Deploy to Staging/Production" step to match your specific deployment strategy (e.g., Kubernetes, AWS ECS, serverless).

### 2. GitLab CI Pipeline

Scenario: Node.js application, Dockerized, deploying to a generic server/cloud service.
Overview:
This GitLab CI pipeline automates the CI/CD process for a Node.js application, leveraging GitLab's built-in container registry and robust pipeline features. It defines stages for linting, testing, building, pushing to the GitLab Container Registry, and deploying.
Key Features:
* Two pipeline stages (`build_test`, `deploy`).
* Uses `cache` for Node.js modules and a reusable `before_script` anchor for registry login and setup.
* Docker image build and push restricted to the default branch (`main`).

Generated Configuration Code (`.gitlab-ci.yml`):
image: docker:latest # Use a Docker image with Docker CLI pre-installed

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "" # Disable TLS for Docker-in-Docker to avoid certificate issues in some runners
  DOCKER_IMAGE_NAME: $CI_REGISTRY_IMAGE # Uses GitLab's built-in variable for image name
  NODE_VERSION: '20' # Specify your Node.js version

services:
  - docker:dind # Required for Docker-in-Docker operations

stages:
  - build_test
  - deploy

cache:
  paths:
    - node_modules/
  key:
    files:
      - package-lock.json

.npm_base: &npm_base
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY" # Login to GitLab Container Registry
    - apk add --no-cache nodejs npm # Install Node.js and npm in the Docker container (for lint/test)
    - npm ci

lint:
  stage: build_test
  extends: .npm_base
  script:
    - npm run lint # Assuming you have a 'lint' script in package.json
  rules:
    - if: $CI_COMMIT_BRANCH

test:
  stage: build_test
  extends: .npm_base
  script:
    - npm test -- --coverage # Assuming you have a 'test' script in package.json
  artifacts:
    when: always
    reports:
      junit: junit.xml # Example for JUnit XML test reports
    paths:
      - coverage/
  rules:
    - if: $CI_COMMIT_BRANCH

build_docker_image:
  stage: build_test
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY" # Needed here; this job does not extend .npm_base
    - docker build -t $DOCKER_IMAGE_NAME:$CI_COMMIT_SHA -t $DOCKER_IMAGE_NAME:latest .
    - docker push $DOCKER_IMAGE_NAME:$CI_COMMIT_SHA
    - docker push $DOCKER_IMAGE_NAME:latest
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

deploy_staging:
  stage: deploy
  image: alpine/git # A lightweight image; SSH client installed below
  before_script:
    - apk add --no-cache openssh-client rsync # Install SSH client for deployment
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - # Add your SSH private key
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan -H "$DEPLOY_HOST" >> ~/.ssh/known_hosts # Add host to known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - echo "Deploying to Staging environment..."
    - ssh "$DEPLOY_USER@$DEPLOY_HOST" "
        docker login -u \"$CI_REGISTRY_USER\" -p \"$CI_REGISTRY_PASSWORD\" \"$CI_REGISTRY\";
        docker pull $DOCKER_IMAGE_NAME:latest;
        docker stop $DOCKER_IMAGE_NAME || true;
        docker rm $DOCKER_IMAGE_NAME || true;
        docker run -d --name $DOCKER_IMAGE_NAME -p 80:3000 $DOCKER_IMAGE_NAME:latest;
        docker image prune -f;
        echo 'Deployment complete for $DOCKER_IMAGE_NAME to Staging'"
  environment:
    name: staging
  rules:
    - if: $CI_COMMIT_BRANCH == "main"