This deliverable provides ready-to-use, production-oriented CI/CD pipeline configurations for three leading platforms: GitHub Actions, GitLab CI/CD, and Jenkins. Each pipeline incorporates the essential stages of linting, testing, building, and deployment, and each section pairs an example configuration with a breakdown of its components and guidance for customizing it to your project. Together, these templates form a robust foundation for automating your software delivery process while ensuring code quality, reliability, and efficient deployment.
Before diving into platform-specific configurations, it's crucial to understand the universal principles that underpin a robust CI/CD pipeline:

* **Fail fast**: Run the cheapest checks (linting, unit tests) first so broken changes are caught as early as possible.
* **Automate everything**: Every step from commit to deployment should be repeatable without manual intervention.
* **Build once, deploy many**: Produce a single versioned, immutable artifact and promote that same artifact through environments.
* **Keep secrets out of source control**: Credentials belong in the platform's secret store, never in the repository.
* **Gate production deployments**: Require passing tests and, where appropriate, explicit approval before anything reaches production.
### 2. GitHub Actions Configuration
GitHub Actions provides a flexible, event-driven automation platform directly within your GitHub repository. Workflows are defined using YAML files (`.github/workflows/*.yml`).
#### Overview
This GitHub Actions workflow is triggered on `push` to the `main` branch and on `pull_request` events. It includes stages for linting, testing, building a Docker image, and deploying to a generic cloud provider (e.g., AWS ECR/ECS, Azure Container Registry/App Service, Google Cloud Run).
#### `github-ci-cd.yml` Example
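As a starting point, a minimal skeleton consistent with the component breakdown below might look like the following sketch. The Node.js commands, registry value, and image names are placeholder assumptions, not a definitive implementation:

```yaml
# .github/workflows/github-ci-cd.yml — minimal skeleton; adapt commands to your stack
name: CI/CD

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  PROJECT_NAME: my-app        # placeholder
  DOCKER_IMAGE_NAME: my-app   # placeholder
  DOCKER_REGISTRY: ghcr.io    # placeholder

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run lint   # replace with your linter

  test:
    runs-on: ubuntu-latest
    needs: lint
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test       # replace with your test runner

  build:
    runs-on: ubuntu-latest
    needs: test
    steps:
      - uses: actions/checkout@v4
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: false                 # set true once registry login is configured

  deploy:
    runs-on: ubuntu-latest
    needs: build
    environment: production
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    steps:
      - run: echo "Replace with your deployment commands"
```

The explanation that follows walks through each of these sections in turn.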
#### Explanation and Customization
* **`on:`**: Defines triggers. `push` to `main` and `pull_request` against `main`.
* **`env:`**: Global environment variables for the workflow. Customize `PROJECT_NAME`, `DOCKER_IMAGE_NAME`, and `DOCKER_REGISTRY`.
* **`jobs:`**:
* **`lint`**:
* **Purpose**: Enforce code style and identify potential issues early.
* **Customization**: Replace `npm ci` and `npm run lint` with commands relevant to your technology stack (e.g., `pip install -r requirements.txt && flake8 .`, `go fmt ./... && golangci-lint run`).
* **`test`**:
* **Purpose**: Verify functionality and prevent regressions.
* **Customization**: Replace `npm ci` and `npm test` with your project's test runner commands (e.g., `pytest`, `mvn test`, `go test ./...`).
* **`build`**:
* **Purpose**: Create a deployable artifact, typically a Docker image for modern applications.
* **Customization**:
* **`DOCKER_REGISTRY`**: Update to your preferred registry (e.g., Docker Hub, AWS ECR, Azure CR, Google CR).
* **Registry credentials**: For GitHub Container Registry (GHCR), the built-in `secrets.GITHUB_TOKEN` usually suffices (grant the workflow `packages: write` permission). For other registries, you'll need specific credentials (e.g., `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` for ECR).
* **`docker/build-push-action`**: Adjust `context` if your Dockerfile is not in the root, and add `build-args` if needed.
* **`deploy`**:
* **Purpose**: Push the built artifact to a production environment.
* **`needs: build`**: Ensures deployment only happens if the build is successful.
* **`environment: production`**: GitHub Environments allow for protection rules (e.g., manual approval, required reviewers) and environment-specific secrets.
* **`if:`**: Ensures deployment only runs on pushes to `main`, not on pull requests.
* **Deployment Steps**: The example shows an AWS ECS deployment. **You MUST replace this with your actual deployment strategy.** This could involve:
* **Cloud Provider CLI**: AWS CLI, Azure CLI, gcloud CLI.
* **Infrastructure as Code (IaC)**: Terraform, Pulumi, AWS CloudFormation, Azure ARM Templates, Google Cloud Deployment Manager.
* **Kubernetes**: `kubectl apply -f your-deployment.yaml`.
* **Serverless**: AWS Lambda, Azure Functions, Google Cloud Functions.
* **PaaS**: Heroku, Vercel, Netlify.
* **Secrets**: Ensure all sensitive information (API keys, credentials) is stored in GitHub Secrets.
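As one concrete illustration of the Kubernetes option above, a final deploy step might look like the following sketch. The cluster, deployment, and region names are hypothetical:

```yaml
# Hypothetical deploy step for the Kubernetes option — all names are placeholders
- name: Deploy to Kubernetes
  run: |
    # Point kubectl at the target EKS cluster
    aws eks update-kubeconfig --name my-cluster --region us-east-1
    # Roll the deployment to the freshly built image tag
    kubectl set image deployment/my-app my-app="$DOCKER_REGISTRY/$DOCKER_IMAGE_NAME:$GITHUB_SHA"
    # Block until the rollout completes (or fail the job)
    kubectl rollout status deployment/my-app --timeout=120s
```

The `rollout status` call is what surfaces a failed deployment back to the pipeline rather than reporting success after merely submitting the change.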
### 3. GitLab CI/CD Configuration
GitLab CI/CD is tightly integrated with GitLab repositories, using a `.gitlab-ci.yml` file in the root of your project. It offers powerful features like Docker-in-Docker, services, and artifact management.
#### Overview
This GitLab CI/CD pipeline is designed for a typical application, with stages for linting, testing, building a Docker image, and deploying to a Kubernetes cluster or a generic cloud environment.
#### `.gitlab-ci.yml` Example
## Infrastructure Needs Analysis
This document provides a comprehensive analysis of the foundational infrastructure considerations required to design and implement robust, efficient, and scalable CI/CD pipelines. This initial assessment is crucial for tailoring the pipeline configurations (GitHub Actions, GitLab CI, or Jenkins) to your specific operational environment, technology stack, and business objectives.
To generate optimal CI/CD pipeline configurations, a clear understanding of the underlying infrastructure is paramount. This analysis identifies key areas such as Source Code Management (SCM), target deployment environments, artifact management, security, and monitoring.
Our preliminary assessment indicates a need for a modular, scalable, and secure infrastructure foundation that can support modern development practices like containerization, Infrastructure as Code (IaC), and automated testing. Recommendations focus on leveraging cloud-native capabilities where applicable, standardizing tools, and integrating security throughout the pipeline. The next steps involve gathering specific details about your current environment and preferences to refine these recommendations and proceed with pipeline generation.
The goal of this step is to lay the groundwork for effective CI/CD pipeline generation. By thoroughly analyzing infrastructure needs, we ensure that the generated pipelines are not only functional but also optimized for performance, security, and maintainability within your specific ecosystem. This analysis will guide decisions regarding runner types, deployment strategies, artifact storage, and integration with existing tools.
A robust CI/CD pipeline relies on a well-defined and integrated infrastructure. We've broken down the critical components into categories:
#### Source Code Management (SCM)
* Platform: GitHub, GitLab, Bitbucket, Azure DevOps, or an on-premise solution. (User input suggests GitHub Actions or GitLab CI, implying GitHub or GitLab as the primary SCM.)
* Branching Strategy: GitFlow, GitHub Flow, GitLab Flow, Trunk-Based Development. This impacts trigger configurations and release management.
* Access Control: Granular permissions for repositories and branches.
* Webhooks: Essential for triggering CI/CD pipelines on code changes.
#### CI/CD Execution Environment
* Platform Choice: GitHub Actions, GitLab CI, or Jenkins.
* Runner Type:
* Managed/Shared Runners: Provided by the CI/CD platform (e.g., GitHub-hosted runners, GitLab Shared Runners). Convenient for quick starts, but can have limitations on resources, custom tooling, and security for sensitive workloads.
* Self-Hosted/Private Runners: Dedicated machines (VMs, containers) within your infrastructure. Offers greater control over environment, resources, security, and custom tooling.
* Containerized Runners (e.g., Docker in Docker): Ideal for isolating build environments and ensuring consistency.
* Scalability: How will runners scale to meet demand during peak development cycles? (Auto-scaling groups, Kubernetes operators).
* Network Access: What internal and external resources do runners need to access (e.g., artifact repositories, deployment targets, databases)?
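Moving a workload onto self-hosted capacity is often a small configuration change. For GitHub Actions, for example, a job targets self-hosted runners via the labels assigned at registration — a sketch, assuming a Linux x64 runner is already registered:

```yaml
# Sketch: route a job to a self-hosted runner instead of a GitHub-hosted one
jobs:
  build:
    # 'self-hosted' plus any custom labels assigned when the runner was registered
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v4
      - run: echo "Running on a self-hosted runner"
```

The same job definition otherwise stays identical, which makes it practical to mix managed and self-hosted runners per workload sensitivity.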
#### Target Deployment Environments
* Virtual Machines (VMs): AWS EC2, Azure VMs, Google Compute Engine, on-premise servers. Requires provisioning, configuration management (e.g., Ansible, Chef, Puppet), and potentially an agent for deployment.
* Container Orchestration: Kubernetes (EKS, AKS, GKE, OpenShift), Docker Swarm. Requires containerization of applications (Dockerfiles), Helm charts or Kubernetes manifests for deployment.
* Serverless: AWS Lambda, Azure Functions, Google Cloud Functions. Requires specific deployment packages and API Gateway configurations.
* Platform as a Service (PaaS): Heroku, AWS Elastic Beanstalk, Azure App Service, Google App Engine. Simplifies deployment but offers less control.
* On-Premise vs. Cloud: Hybrid cloud strategies are also common.
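To make the container-orchestration target concrete, a minimal Kubernetes Deployment manifest that the pipeline would apply might look like this. The image name, port, and replica count are illustrative assumptions:

```yaml
# Minimal Deployment sketch — image, port, and replica count are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest  # CI substitutes the real, versioned tag
          ports:
            - containerPort: 8080
```

In practice the pipeline templates the `image:` field with the immutable tag produced by the build stage rather than `latest`.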
#### Artifact Management
* Container Registry: Docker Hub, AWS ECR, Azure Container Registry, Google Container Registry, GitLab Container Registry. Essential for containerized applications.
* Package Managers/Repositories: Nexus, Artifactory, npm registry, Maven Central. For language-specific packages.
* Object Storage: AWS S3, Azure Blob Storage, Google Cloud Storage. For generic binaries, logs, and static assets.
* Versioning & Immutability: Ensure artifacts are uniquely versioned and immutable once published.
* Retention Policies: Define how long artifacts are stored.
#### Environment Provisioning & Configuration
* Infrastructure as Code (IaC): Terraform, CloudFormation, Azure Resource Manager, Pulumi. Essential for provisioning and managing environments reproducibly.
* Configuration Management: Ansible, Chef, Puppet, SaltStack. For configuring VMs and installed software.
* Environment Parity: Strive for environments that are as similar as possible to minimize discrepancies.
* Secrets Management: Dedicated solutions for each environment.
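As a small illustration of the IaC approach, a CloudFormation fragment provisioning a versioned artifact bucket might look like this. The bucket name is a placeholder and must be globally unique:

```yaml
# Hypothetical CloudFormation sketch — bucket name is a placeholder
AWSTemplateFormatVersion: '2010-09-09'
Description: Versioned S3 bucket for CI build artifacts
Resources:
  BuildArtifactsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-ci-build-artifacts   # must be globally unique
      VersioningConfiguration:
        Status: Enabled                        # supports the artifact-immutability goal above
```

Checking templates like this into the repository lets environment changes flow through the same review and pipeline process as application code.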
#### Security & Secrets Management
* Secrets Management Tools: AWS Secrets Manager, Azure Key Vault, Google Secret Manager, HashiCorp Vault, Kubernetes Secrets (with encryption).
* Least Privilege: CI/CD pipelines and deployment agents should only have the minimum necessary permissions.
* Static Application Security Testing (SAST): Integrate tools (e.g., SonarQube, Snyk, Checkmarx) into the CI pipeline.
* Dynamic Application Security Testing (DAST): Run against deployed applications in staging.
* Container Image Scanning: Scan Docker images for vulnerabilities (e.g., Clair, Trivy, Snyk Container).
* Supply Chain Security: Verifying dependencies and build processes.
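For example, container image scanning can be wired into a GitHub Actions build job with Trivy's official action. The image reference is a placeholder, and you should verify the action's inputs and pin a released version against its documentation:

```yaml
# Sketch: fail the build on high-severity vulnerabilities (verify inputs/version)
- name: Scan image for vulnerabilities
  uses: aquasecurity/trivy-action@master   # pin a released tag in real pipelines
  with:
    image-ref: registry.example.com/my-app:${{ github.sha }}
    severity: CRITICAL,HIGH
    exit-code: '1'   # non-zero exit fails the job when findings match the severities above
```

Placing the scan between build and push ensures vulnerable images never reach the registry.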
#### Monitoring, Logging & Observability
* Logging Platforms: ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Datadog, Sumo Logic, AWS CloudWatch Logs, Azure Monitor Logs, Google Cloud Logging.
* Monitoring Tools: Prometheus, Grafana, Datadog, New Relic, AppDynamics, AWS CloudWatch, Azure Monitor, Google Cloud Monitoring.
* Alerting: Define thresholds and notification channels (Slack, PagerDuty, email).
* Traceability: Distributed tracing (e.g., Jaeger, Zipkin) for microservices architectures.
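As an example of the alerting piece, a Prometheus rule file that pages on an elevated error ratio might look like this. The metric name and threshold are assumptions to adapt to your instrumentation:

```yaml
# Sketch of a Prometheus alerting rule — metric names and thresholds are assumptions
groups:
  - name: app-alerts
    rules:
      - alert: HighErrorRate
        # Fraction of requests returning 5xx over the last 5 minutes
        expr: sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m          # only fire if sustained for 10 minutes
        labels:
          severity: page
        annotations:
          summary: "5xx error ratio above 5% for 10 minutes"
```

Rules like this close the loop after deployment: a bad release surfaces as an alert minutes after rollout rather than via user reports.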
#### Databases & Data Management
* Database Type: Relational (PostgreSQL, MySQL, SQL Server), NoSQL (MongoDB, Cassandra, DynamoDB).
* Managed Services: AWS RDS, Azure SQL Database, Google Cloud SQL. Simplifies operations.
* Database Migrations: Tools like Flyway or Liquibase for schema versioning and automated application.
* Data Seeding: Strategies for populating development/test environments.
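With Flyway, for instance, migrations are versioned SQL files applied automatically by a pipeline job. A sketch in GitLab CI syntax follows; the image tag and connection variables (`DB_JDBC_URL`, `DB_USER`, `DB_PASSWORD`) are assumptions to define as masked CI/CD variables:

```yaml
# Sketch: run Flyway migrations as a deploy-stage job (verify the image tag you need)
migrate_db_job:
  stage: deploy
  image: flyway/flyway:10   # official Flyway image; pin the version you test against
  script:
    # Applies any pending V<version>__<description>.sql files, in order
    - flyway -url="$DB_JDBC_URL" -user="$DB_USER" -password="$DB_PASSWORD" migrate
```

Running migrations from the pipeline (rather than by hand) keeps schema state reproducible across environments.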
#### Networking
* Virtual Private Clouds (VPCs)/Virtual Networks: Network isolation in cloud environments.
* Security Groups/Network Security Groups: Firewall rules for instances/containers.
* Load Balancers/API Gateways: For distributing traffic and securing endpoints.
* VPN/Direct Connect: For secure connectivity between on-premise and cloud resources.
* DNS Management: For service discovery.
Based on this analysis and common industry best practices, we recommend the following strategic approaches:

* Leverage cloud-native capabilities (managed runners, registries, and secret stores) where applicable.
* Standardize on a small, consistent toolset across teams to reduce maintenance overhead.
* Integrate security (SAST, image scanning, least-privilege credentials) throughout the pipeline rather than as a final gate.
* Containerize build environments for reproducibility, and manage infrastructure as code.
To proceed with generating tailored CI/CD pipeline configurations, we require specific details about your current environment and preferences. Please provide information on the following:
**Source Code Management**
* Which platform do you primarily use (GitHub, GitLab, Bitbucket, other)?
* Are your repositories public or private?
**CI/CD Platform**
* Which platform would you like to use for the pipelines (GitHub Actions, GitLab CI, Jenkins)?
* If Jenkins, is it self-hosted or cloud-managed?
**Technology Stack**
* Programming Languages (e.g., Python, Node.js, Java, Go, .NET, Ruby).
* Frameworks (e.g., React, Angular, Spring Boot, Django, Flask).
* Build Tools (e.g., npm/yarn, Maven, Gradle, Make).
```yaml
image: docker:latest # Default image for all jobs, can be overridden per job

variables:
  # General variables
  PROJECT_NAME: my-app
  DOCKER_IMAGE_NAME: $CI_REGISTRY_IMAGE # Automatically set to your project's registry image path
  DOCKER_REGISTRY: $CI_REGISTRY # Automatically set to your GitLab project's registry

stages:
  - lint
  - test
  - build
  - deploy

cache:
  paths:
    - node_modules/ # Example for Node.js projects
    - .cache/pip/ # Example for Python projects
  key: ${CI_COMMIT_REF_SLUG} # Cache per branch/tag

lint_job:
  stage: lint
  image: node:18-alpine # Example for Node.js linting
  script:
    - npm ci # Or yarn install, pip install, etc.
    - npm run lint # Or your project's lint command
  only:
    - merge_requests
    - main

test_job:
  stage: test
  image: node:18-alpine # Example for Node.js testing
  script:
    - npm ci
    - npm test
  only:
    - merge_requests
    - main
  artifacts:
    when: always
    reports:
      junit: junit.xml # Example for JUnit test reports

build_job:
  stage: build
  image: docker:latest # Use docker:latest for building Docker images
  services:
    - docker:dind # Docker in Docker for building images
```
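The example cuts off before `build_job`'s script. Based on the registry variables it defines, a plausible completion is the following sketch (not taken from the original):

```yaml
# Sketch of the remainder of build_job — adapt tags and branch rules to your project
build_job:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    # CI_REGISTRY_USER / CI_REGISTRY_PASSWORD are predefined GitLab CI variables
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$DOCKER_REGISTRY"
    - docker build -t "$DOCKER_IMAGE_NAME:$CI_COMMIT_SHORT_SHA" .
    - docker push "$DOCKER_IMAGE_NAME:$CI_COMMIT_SHORT_SHA"
  only:
    - main
```

Tagging with `$CI_COMMIT_SHORT_SHA` keeps each pushed image uniquely versioned, in line with the artifact-immutability guidance elsewhere in this document.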
## Project Deliverable: Complete CI/CD Pipeline Configurations
This document provides a detailed, validated set of CI/CD pipeline configurations tailored for your project, covering GitHub Actions, GitLab CI, and Jenkins. Each pipeline is designed to automate the software delivery lifecycle, incorporating essential stages such as linting, testing, building, and deployment, ensuring a robust, efficient, and reliable development workflow.
We understand the critical role of streamlined CI/CD processes in modern software development. This deliverable presents three distinct, fully-featured CI/CD pipeline configurations, each optimized for a specific platform: GitHub Actions, GitLab CI, and Jenkins. These pipelines are built upon industry best practices, focusing on maintainability, security, and performance.
The configurations provided are comprehensive examples for a typical web application (e.g., Node.js), demonstrating how to integrate various stages from code commit to deployment. They serve as a robust starting point, designed for easy adaptation to your specific application requirements and deployment targets.
For each platform, a complete CI/CD pipeline has been generated, encompassing the following core stages:

* **Lint**: Enforce code style and catch issues early.
* **Test**: Run the automated test suite to prevent regressions.
* **Build**: Produce a versioned Docker image and push it to a registry.
* **Deploy**: Roll the built image out to the target environment.
The specific implementation details vary by platform, leveraging each system's unique features and syntax.
Our generation process includes a rigorous validation and documentation phase to ensure the quality and usability of the provided configurations:
* **Syntax validation**: Each configuration file (`.yml`, `Jenkinsfile`) has been syntactically validated against its respective platform's schema and best practices.

#### Platform: GitHub Actions
**File**: `.github/workflows/main.yml`
This pipeline is designed for projects hosted on GitHub, leveraging GitHub Actions' native integration for a seamless CI/CD experience.
This GitHub Actions workflow automates the build, test, and deployment process for a Node.js application. It triggers on pushes to the main branch and pull requests, and includes dedicated steps for linting, testing, building a Docker image, and deploying it to a cloud provider.
```yaml
# .github/workflows/main.yml
name: CI/CD Pipeline

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
  workflow_dispatch: # Allows manual triggering

env:
  NODE_VERSION: '18.x'
  DOCKER_IMAGE_NAME: my-nodejs-app
  AWS_REGION: us-east-1 # Example for AWS deployment

jobs:
  lint:
    name: Lint Code
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
      - name: Restore Node modules cache
        uses: actions/cache@v4
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-
      - name: Install dependencies
        run: npm ci
      - name: Run ESLint
        run: npm run lint

  test:
    name: Run Tests
    runs-on: ubuntu-latest
    needs: lint # This job depends on 'lint' completing successfully
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
      - name: Restore Node modules cache
        uses: actions/cache@v4
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-
      - name: Install dependencies
        run: npm ci
      - name: Run Jest tests
        run: npm test

  build_and_push_docker_image:
    name: Build & Push Docker Image
    runs-on: ubuntu-latest
    needs: test # This job depends on 'test' completing successfully
    if: github.ref == 'refs/heads/main' # Only run on pushes to the main branch
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: docker/login-action@v3
        with:
          registry: ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com
      - name: Build and push Docker image
        id: docker_build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Verify image push
        run: echo "Docker image pushed: ${{ steps.docker_build.outputs.digest }}"

  deploy:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build_and_push_docker_image # This job depends on 'build_and_push_docker_image'
    if: github.ref == 'refs/heads/main' # Only run on pushes to the main branch
    environment:
      name: Staging
      url: https://staging.example.com # Example URL for environment
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Deploy to AWS ECS (Example)
        # Replace with your actual deployment steps (e.g., update ECS service, deploy to Kubernetes, etc.)
        run: |
          echo "Deploying image ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }} to Staging"
          # Example: Update an ECS service task definition
          # aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task-definition --force-new-deployment
          echo "Deployment to Staging initiated successfully."
```
#### Key Features

* **Triggers**: `push` to `main`, `pull_request` targeting `main`, and `workflow_dispatch` for manual runs.
* **Shared configuration**: An `env` block for common variables, enhancing readability and maintainability.
* **Job ordering**: The `needs` keyword ensures jobs run in a specific order (`lint` -> `test` -> `build` -> `deploy`).
* **Caching**: `actions/cache` persists the npm download cache (`~/.npm`), significantly speeding up subsequent runs.
* **Secrets**: Sensitive values are referenced via GitHub Secrets (`${{ secrets.SECRET_NAME }}`).
* **Conditional execution**: `if` statements control job execution based on branch (`github.ref == 'refs/heads/main'`).
* **Docker**: `docker/setup-buildx-action` and `docker/build-push-action` enable efficient image building and pushing to ECR.
* **AWS authentication**: `aws-actions/configure-aws-credentials` simplifies authentication with AWS services.
* **Environments**: The `deploy` job utilizes GitHub Environments, providing deployment protection rules and tracking.

#### Setup

* Place the `main.yml` file in the `.github/workflows/` directory of your GitHub repository.
* Navigate to your GitHub repository -> Settings -> Secrets and variables -> Actions -> New repository secret.
* Add the following secrets: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_ACCOUNT_ID.
* Update env.DOCKER_IMAGE_NAME and env.AWS_REGION to match your project.
* Modify the deploy job's run script to reflect your actual deployment commands (e.g., for ECS, EKS, Lambda, etc.).
* Ensure your project contains a `package.json` with `lint` and `test` scripts, and a `Dockerfile` for containerization.

#### Platform: GitLab CI
**File**: `.gitlab-ci.yml`
This pipeline is tailored for projects hosted on GitLab, leveraging its integrated CI/CD capabilities.
This GitLab CI pipeline automates the linting, testing, Docker image build, and deployment for a Node.js application. It is configured to run on pushes to feature branches and main, and includes distinct stages for each CI/CD phase.
```yaml
# .gitlab-ci.yml
image: node:18-alpine # Base image for all jobs

variables:
  DOCKER_IMAGE_NAME: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG # Example: project/my-nodejs-app/main
  DOCKER_IMAGE_TAG: $CI_COMMIT_SHA
  # AWS_REGION: us-east-1 # Uncomment and set if deploying to AWS

stages:
  - lint
  - test
  - build
  - deploy

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/
  policy: pull # Default cache policy

lint_job:
  stage: lint
  script:
    - npm ci
    - npm run lint
  cache:
    policy: push # Push cache after installing dependencies

test_job:
  stage: test
  script:
    - npm ci
    - npm test
  dependencies:
    - lint_job # Controls artifact download only; stage order already runs lint first
  cache:
    policy: pull # Pull cache for testing

build_docker_image_job:
  stage: build
  image: docker:20.10.16 # Docker CLI; pair with the dind service below
  services:
    - docker:20.10.16-dind # Docker-in-Docker for building images
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t "$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG" .
    - docker push "$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG"
  rules:
    - if: $CI_COMMIT_BRANCH == "main" || $CI_COMMIT_TAG # Only build and push on main or tags
  dependencies:
    - test_job

deploy_staging_job:
  stage: deploy
  image: python:3.9-slim-buster # Example image for AWS CLI, adjust as needed
  script:
    - pip install awscli # Install AWS CLI
    - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID # Defined as masked GitLab CI/CD variables
    - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
    - export AWS_DEFAULT_REGION=${AWS_REGION:-us-east-1} # Use default or variable
    - echo "Deploying image $DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG to Staging"
    # Replace with your actual deployment steps (e.g., update ECS service, deploy to K8s)
    # aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task
```