This document provides detailed and professional CI/CD pipeline configurations tailored for GitHub Actions, GitLab CI, and Jenkins. These configurations are designed to establish robust and automated workflows for your software development lifecycle, encompassing linting, testing, building, and deployment stages. Each pipeline example is structured to be immediately actionable, with clear placeholders for your specific project details.
All generated pipelines adhere to a standard set of stages to ensure code quality, reliability, and efficient delivery. These stages are:

* **Lint**: Static analysis that enforces code style and catches common errors early.
* **Test**: Automated tests (unit, integration, etc.) that verify correctness, with results published as reports.
* **Build**: Packaging the application and producing a deployable artifact, typically a Docker image pushed to a registry.
* **Deploy**: Releasing the built artifact to staging and, with appropriate gating, to production.
Workflow: DevOps Pipeline Generator
Step: gemini → analyze_infrastructure_needs
Description: Generate complete CI/CD pipeline configurations for GitHub Actions, GitLab CI, or Jenkins with testing, linting, building, and deployment stages.
This initial phase focuses on a comprehensive analysis of your existing and desired infrastructure to lay a robust foundation for your CI/CD pipeline. A well-designed pipeline is intrinsically linked to the underlying infrastructure it operates on and interacts with. Without a clear understanding of these elements, the generated pipeline configurations may not be optimal, secure, or scalable.
Our goal in this step is to identify key infrastructure components, assess their current state, and define the requirements for a future-proof CI/CD ecosystem. This analysis will directly inform the selection of the appropriate CI/CD platform (GitHub Actions, GitLab CI, Jenkins) and dictate the specific configurations for build environments, testing, artifact storage, and deployment targets.
To generate an effective CI/CD pipeline, we must analyze the following critical infrastructure pillars:
**Source Code Management (SCM)**

* Current State: Identify your primary SCM (e.g., GitHub, GitLab, Bitbucket, Azure DevOps Repos).
* Integration Needs: How tightly integrated should the CI/CD pipeline be with your SCM (e.g., webhook triggers, status updates, pull request checks)?
* Data Insight: GitHub and GitLab offer native CI/CD solutions (GitHub Actions, GitLab CI) that provide seamless integration, often simplifying setup and management compared to external tools like Jenkins.
**CI/CD Platform & Runners**

* Platform Choice: Do you have a preference for GitHub Actions, GitLab CI, or Jenkins?
* GitHub Actions/GitLab CI: Often preferred for cloud-native projects due to ease of setup, managed runners, and strong SCM integration.
* Jenkins: Highly customizable, suitable for complex on-premise environments, hybrid clouds, or when extensive plugin ecosystems are required. Requires self-managed infrastructure for runners.
* Runner Requirements:
  * Managed vs. Self-hosted: Will you leverage managed runners (e.g., GitHub Actions hosted runners, GitLab.com shared runners) or require self-hosted runners/agents (e.g., on VMs, Kubernetes clusters, on-premise servers)?
  * Operating System: Linux, Windows, macOS?
  * Resource Allocation: CPU, RAM, disk space for builds.
  * Scalability: How many concurrent builds are expected? What are the peak load requirements?
* Trend: The adoption of managed CI/CD services (GitHub Actions, GitLab CI) is increasing due to reduced operational overhead and inherent scalability. However, self-hosted runners remain crucial for specific compliance, network, or performance needs.
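To make the managed-vs-self-hosted distinction concrete, here is a minimal GitHub Actions sketch. The job names are illustrative, and the self-hosted labels depend on how your runners were registered:

```yaml
jobs:
  build-managed:
    runs-on: ubuntu-latest            # GitHub-hosted (managed) runner
    steps:
      - uses: actions/checkout@v4
      - run: echo "Built on a managed runner"

  build-internal:
    # Self-hosted runner, selected by the labels assigned at registration time
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v4
      - run: echo "Built on a self-hosted runner"
```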
**Build Environment**

* Application Technology Stack:
  * Languages: Java (Maven/Gradle), Node.js (npm/yarn), Python (pip), Go, .NET, Ruby, PHP, etc.
  * Frameworks: Spring Boot, React, Angular, Django, etc.
  * Build Tools: Specific compilers, package managers, dependency resolution tools.
* Containerization: Will Docker be used for consistent build environments? (Strongly recommended.)
* Environment Variables & Configuration: How are build-time configurations managed?
**Testing Strategy**

* Types of Tests: Unit, Integration, End-to-End (E2E), Performance, Security (SAST/DAST).
* Testing Frameworks: JUnit, Jest, Pytest, Cypress, Selenium, JMeter, SonarQube, etc.
* Test Environments: Are dedicated environments needed for integration or E2E tests? (e.g., temporary Docker containers, ephemeral cloud environments).
* Reporting: How should test results be collected and displayed?
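For integration tests that need a throwaway backing service, GitLab CI can attach service containers that live only for the duration of the job. A minimal sketch, assuming a `test:integration` script exists in `package.json`; the database name and credentials are placeholders:

```yaml
integration_test:
  stage: test
  image: node:18-alpine
  services:
    - postgres:16-alpine            # ephemeral database, destroyed when the job ends
  variables:
    POSTGRES_DB: app_test
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: example      # illustrative only; use CI/CD variables for real credentials
    DATABASE_URL: "postgres://runner:example@postgres:5432/app_test"
  script:
    - npm ci
    - npm run test:integration      # assumed script; adjust to your test runner
```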
**Artifact Management**

* Type of Artifacts: Docker images, compiled binaries, JARs, NuGet packages, npm packages.
* Storage Location:
  * Container Registries: Docker Hub, GitHub Container Registry, GitLab Container Registry, AWS ECR, Azure Container Registry, Google Container Registry.
  * Binary Repositories: Nexus, Artifactory, AWS S3, Azure Blob Storage.
* Retention Policies: How long should artifacts be stored?
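In GitLab CI, a retention policy can be enforced per job with `expire_in`; a small sketch (the path and duration are placeholders for your project):

```yaml
build:
  stage: build
  script:
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 30 days   # artifacts are deleted automatically after 30 days
```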
**Deployment Environments & Targets**

* Environment Types: Development, Staging, Production.
* Deployment Platforms:
  * Container Orchestration: Kubernetes (EKS, AKS, GKE, OpenShift).
  * Virtual Machines: AWS EC2, Azure VMs, Google Compute Engine, on-premise VMs.
  * Serverless: AWS Lambda, Azure Functions, Google Cloud Functions.
  * Platform as a Service (PaaS): AWS Elastic Beanstalk, Azure App Service, Google App Engine, Heroku.
* Deployment Tools: kubectl, Helm, Terraform, Ansible, AWS CLI, Azure CLI, Serverless Framework.
* Deployment Strategies: Rolling updates, Blue/Green, Canary deployments.
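As one example of a deployment strategy, a Kubernetes Deployment can declare a rolling update directly in its manifest. The names, replica count, and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most one extra pod during the rollout
      maxUnavailable: 0   # never drop below the desired replica count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.2.3   # updated by the pipeline on each release
```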
**Security & Secrets Management**

* Secrets Storage: How are sensitive credentials (API keys, database passwords) managed and injected into the pipeline? (e.g., CI/CD platform built-in secrets, HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, Kubernetes Secrets).
* Access Control: Who has access to pipeline configurations and secrets?
* Security Scanning: Integration of SAST/DAST tools, dependency scanning, container image scanning.
* Compliance: Any specific regulatory compliance requirements (e.g., GDPR, HIPAA, SOC 2)?
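As an example of platform-native secrets handling, GitHub Actions injects repository or environment secrets at runtime, so credentials never appear in the workflow file. The secret names and bucket are placeholders:

```yaml
deploy:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Deploy static assets to S3
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}          # defined in repo/environment settings
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_DEFAULT_REGION: us-east-1
      run: aws s3 sync ./dist "s3://your-bucket-name" --delete
```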
**Monitoring & Observability**

* Existing Tools: Do you have existing observability tools (e.g., Prometheus, Grafana, ELK Stack, Splunk, Datadog, New Relic) that the pipeline should integrate with for deployment metrics, logs, or alerts?
* Pipeline Health: How will the health and performance of the CI/CD pipeline itself be monitored?
Beyond the technical pillars, several organizational and strategic factors will influence the optimal infrastructure choices:
* **Team Size & Release Velocity**: Organizations with frequent commits and multiple development teams will require highly scalable CI/CD runner infrastructure to avoid build queues and ensure rapid feedback cycles. Cloud-native solutions often excel here.
* **Budget & Cost Model**: Managed cloud CI/CD services offer pay-as-you-go models, reducing upfront capital expenditure. Self-hosted solutions, while requiring initial investment, can offer cost efficiencies at extreme scale or for specific use cases.
* **Team Skills & Existing Tooling**: Leveraging existing team skills (e.g., if a team is already proficient in Jenkins, migrating to a new platform might introduce a learning curve) and integrating with tools already in use can accelerate adoption and reduce friction.
* **Regulatory Environment**: Industries with stringent compliance requirements (e.g., finance, healthcare) may necessitate on-premise or highly controlled self-hosted runner environments and specific secrets management solutions.
* **Multi-Cloud & Hybrid Strategy**: For organizations operating across multiple cloud providers or a mix of cloud and on-premise, solutions like Jenkins with its distributed architecture or cloud-agnostic tools like Terraform become more relevant.
Based on general industry trends and best practices, we offer the following initial recommendations, which will be refined upon receiving your specific input:
* **Prefer a managed, SCM-native platform (GitHub Actions or GitLab CI)**: These platforms offer excellent developer experience, deep SCM integration, managed runners, and robust security features out-of-the-box, significantly reducing operational overhead.
* **Containerize builds with Docker**: Containerization ensures consistent, isolated, and reproducible build environments across all stages and prevents "it works on my machine" issues. It also simplifies dependency management.
* **Never hardcode credentials**: Utilize the CI/CD platform's built-in secrets management or integrate with dedicated tools like HashiCorp Vault or cloud provider secret services.
* **Adopt Infrastructure as Code (IaC)**: IaC ensures idempotent, version-controlled, and auditable infrastructure provisioning, making deployments reliable and repeatable.
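A minimal sketch of wiring IaC into a pipeline, here as a GitHub Actions job running Terraform. The `infra` directory, backend configuration, and approval gating are assumptions about your setup:

```yaml
provision:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: hashicorp/setup-terraform@v3
    - run: terraform init                        # remote state backend assumed to be configured in ./infra
      working-directory: infra
    - run: terraform plan -out=tfplan
      working-directory: infra
    - run: terraform apply -auto-approve tfplan  # in practice, gate this behind an environment approval
      working-directory: infra
```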
To proceed with generating detailed and accurate CI/CD pipeline configurations, we require the following specific information from your team. Please provide as much detail as possible for each point:
**Application Stack**

* Primary Application Language(s) & Framework(s): (e.g., Python/Django, Java/Spring Boot, Node.js/React, Go, .NET Core)
* Database Technologies: (e.g., PostgreSQL, MySQL, MongoDB, DynamoDB)
* Any specific build tools or compilers required: (e.g., Maven, Gradle, npm, yarn, pipenv, dotnet CLI)
**SCM & CI/CD Platform**

* Which SCM platform do you currently use? (e.g., GitHub.com, GitHub Enterprise, GitLab.com, GitLab Self-Managed, Bitbucket Cloud, Azure DevOps Repos)
* Do you have a strong preference for GitHub Actions, GitLab CI, or Jenkins?
  * If Jenkins, is it an existing instance or a new deployment?
  * If no strong preference, what are your primary considerations (e.g., cost, ease of use, compliance, integration with existing tools)?
**Runners**

* Will you use managed runners (if available for your chosen platform) or self-hosted runners?
  * If self-hosted, what is the desired OS (Linux, Windows, macOS) and deployment environment (VMs, Kubernetes)?
* Approximate number of concurrent builds expected during peak times?
**Cloud & Deployment Targets**

* Which cloud provider(s) are you using (AWS, Azure, GCP, On-Premise)?
* What is your primary deployment target? (e.g., Kubernetes, EC2 VMs, Azure App Service, AWS Lambda, on-premise servers)
* Do you have specific deployment strategies in mind? (e.g., Blue/Green, Canary, Rolling Updates)
**Artifact Storage**

* Which container registry will you use for Docker images?
* Which binary repository/storage will you use for other artifacts?
**Security & Compliance**

* How are secrets currently managed? (e.g., environment variables, a dedicated secrets manager like Vault, cloud provider secrets services)
* Are there any specific security scanning tools you wish to integrate? (e.g., SonarQube, Snyk, Trivy)
* Are there any specific compliance requirements?
**Testing**

* Which types of tests are critical for your pipeline? (e.g., unit, integration, E2E, performance)
* Are there specific testing frameworks you currently use or prefer?
**Integrations**

* Are there any other tools or services that the CI/CD pipeline needs to integrate with? (e.g., Jira, Slack, PagerDuty, specific monitoring tools)
This detailed infrastructure analysis forms the bedrock of an efficient and effective CI/CD pipeline. Once we receive your comprehensive responses to the questions above, we will proceed to Step 2: Define Pipeline Stages and Logic. In this subsequent step, we will leverage your infrastructure details to design the specific stages (linting, testing, building, deploying) and the underlying logic that will form your complete CI/CD pipeline configuration.
We look forward to your input to move forward in generating a tailored and robust DevOps pipeline solution for your needs.
---

### GitLab CI/CD Configuration

GitLab CI/CD is tightly integrated with GitLab repositories, offering a seamless experience for building, testing, and deploying.

**File Location**: `.gitlab-ci.yml`

**Example Configuration (Node.js Application)**:

```yaml
stages:
  - lint
  - test
  - build
  - deploy

variables:
  NODE_VERSION: '18'                 # Specify your desired Node.js major version (tag node:18-alpine exists; node:18.x-alpine does not)
  DOCKER_IMAGE_NAME: 'my-node-app'
  DOCKER_REGISTRY: '$CI_REGISTRY'    # Use GitLab's built-in container registry
  AWS_REGION: 'us-east-1'            # Example AWS region

.node_base:
  image: node:${NODE_VERSION}-alpine # Lightweight Node.js image
  cache:
    key: ${CI_COMMIT_REF_SLUG}       # Cache per branch
    paths:
      - node_modules/
  before_script:
    - npm ci

lint:
  extends: .node_base
  stage: lint
  script:
    - npm run lint                   # Assumes a 'lint' script in package.json

test:
  extends: .node_base
  stage: test
  script:
    - npm test                       # Assumes a 'test' script in package.json
  artifacts:
    when: always
    reports:
      junit:
        - junit.xml                  # If using a JUnit reporter, configure its output path

build:
  stage: build
  image: docker:20.10.16             # Docker CLI image for building Docker images
  services:
    - docker:20.10.16-dind           # Docker-in-Docker daemon
  variables:
    DOCKER_HOST: tcp://docker:2375   # Required for dind
    DOCKER_TLS_CERTDIR: ""           # Required for dind
  script:
    - apk add --no-cache nodejs npm  # The docker image has no Node.js; install it for the app build
    - npm ci
    - echo "Building application..."
    - npm run build                  # Assumes a 'build' script in package.json
    - echo "Building and pushing Docker image..."
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY # Login to GitLab Container Registry
    - export IMAGE_TAG="$CI_REGISTRY/$CI_PROJECT_PATH/$DOCKER_IMAGE_NAME:$CI_COMMIT_SHORT_SHA"
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
    - echo "DOCKER_IMAGE_TAG=$IMAGE_TAG" >> build.env  # Store image tag for subsequent stages
  artifacts:
    paths:
      - dist/                        # Adjust path to your build output
    reports:
      dotenv: build.env              # Exposes DOCKER_IMAGE_TAG to later stages automatically

deploy_staging:
  stage: deploy
  image: alpine/helm:3.12.0          # Example: using Helm for Kubernetes deployments
  # image: python:3.9-slim-buster    # Example: using Python for AWS CLI deployments
  environment:
    name: staging
    url: https://staging.your-app.com
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'
  script:
    - echo "Deploying to Staging environment..."
    # DOCKER_IMAGE_TAG is injected automatically via the build job's dotenv report
    - echo "Deploying image: $DOCKER_IMAGE_TAG"
    # Example: AWS S3 deployment
    # - apk add --no-cache curl python3 py3-pip
    # - pip install awscli
    # - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID_STAGING
    # - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY_STAGING
    # - aws configure set default.region $AWS_REGION
    # - aws s3 sync ./dist s3://your-staging-bucket-name --delete
    # Example: Kubernetes deployment using Helm
    # - helm upgrade --install my-app-staging ./helm-charts/my-app --namespace staging --set image.tag=$DOCKER_IMAGE_TAG --set ingress.host=staging.your-app.com
  # AWS_ACCESS_KEY_ID_STAGING and AWS_SECRET_ACCESS_KEY_STAGING should be defined as
  # protected CI/CD variables in GitLab (Settings > CI/CD > Variables), not in this file.

deploy_production:
  stage: deploy
  image: alpine/helm:3.12.0          # Example: using Helm for Kubernetes deployments
  environment:
    name: production
    url: https://your-app.com
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual                   # Manual deployment for production
      allow_failure: false
  script:
    - echo "Deploying to Production environment..."
    # DOCKER_IMAGE_TAG is injected automatically via the build job's dotenv report
    - echo "Deploying image: $DOCKER_IMAGE_TAG"
    # Example: AWS S3 deployment
    # - apk add --no-cache curl python3 py3-pip
    # - pip install awscli
    # - aws s3 sync ./dist s3://your-production-bucket-name --delete
```
Automating your build, test, and deployment processes is crucial for modern software development. This deliverable provides robust templates for a complete CI/CD pipeline, tailored for three popular platforms: GitHub Actions, GitLab CI, and Jenkins. Each pipeline is designed to be adaptable, covering common development workflows and offering a solid foundation for your project's specific needs.
All provided pipeline configurations incorporate the same essential stages, ensuring a comprehensive and robust CI/CD process: linting, automated testing, building (including Docker image creation), and deployment to staging and production.
---

### GitHub Actions Configuration

This configuration is suitable for projects hosted on GitHub. It uses a Node.js application as an example, building a Docker image and pushing it to a container registry, then simulating a deployment.

**File Location**: `.github/workflows/main.yml`

**Example Configuration (Node.js Application)**:
```yaml
name: CI/CD Pipeline

on:
  push:
    branches:
      - main
      - develop
    tags:
      - 'v*.*.*'             # Trigger on version tags
  pull_request:
    branches:
      - main
      - develop

env:
  NODE_VERSION: '18.x'       # Specify Node.js version
  DOCKER_IMAGE_NAME: my-app  # Your application's Docker image name
  REGISTRY: ghcr.io          # Or your Docker Hub username, AWS ECR URL, etc.
  # For GHCR, the full image name is ghcr.io/<owner>/<image>; see the metadata step below.

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'       # Cache npm dependencies
      - name: Install dependencies
        run: npm ci
      - name: Run ESLint
        run: npm run lint    # Assumes 'lint' script in package.json

  test:
    runs-on: ubuntu-latest
    needs: lint              # This job depends on 'lint' completing successfully
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test        # Assumes 'test' script in package.json

  build:
    runs-on: ubuntu-latest
    needs: test              # This job depends on 'test' completing successfully
    outputs:
      image_tag: ${{ steps.set_image_tag.outputs.tag }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GH_CR_TOKEN }} # For GHCR you can also use secrets.GITHUB_TOKEN; use DOCKER_HUB_TOKEN for Docker Hub
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ github.repository_owner }}/${{ env.DOCKER_IMAGE_NAME }}
          tags: |
            type=schedule
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=sha,format=long
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Set image tag output
        id: set_image_tag
        # 'version' is a single tag; 'tags' can be multi-line and would break $GITHUB_OUTPUT
        run: echo "tag=${{ steps.meta.outputs.version }}" >> $GITHUB_OUTPUT

  deploy-staging:
    runs-on: ubuntu-latest
    needs: build             # This job depends on 'build' completing successfully
    environment:
      name: Staging
      url: https://staging.your-app.com   # Optional: URL for the environment
    if: github.ref == 'refs/heads/develop' || github.event_name == 'pull_request' # Deploy to staging on develop branch or PRs
    steps:
      - name: Deploy to Staging Environment
        run: |
          echo "Deploying image ${{ env.REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:${{ needs.build.outputs.image_tag }} to Staging..."
          # Example: Use AWS CLI, kubectl, or SSH to deploy
          # For AWS ECS/EKS:
          # aws ecs update-service --cluster your-cluster --service your-service --force-new-deployment
          # For Kubernetes:
          # kubectl set image deployment/your-deployment your-container=${{ env.REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:${{ needs.build.outputs.image_tag }} --namespace staging
          echo "Deployment to Staging complete!"

  deploy-production:
    runs-on: ubuntu-latest
    needs: build             # This job depends on 'build' completing successfully
    environment:
      name: Production
      url: https://your-app.com
    if: github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/v') # Deploy to production on main branch or tags
    # Manual approval for production is configured on the 'Production' environment itself
    # (repository Settings > Environments > Production > Required reviewers), not in this file.
    steps:
      - name: Deploy to Production Environment
        run: |
          echo "Deploying image ${{ env.REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:${{ needs.build.outputs.image_tag }} to Production..."
          # Example: Use AWS CLI, kubectl, or SSH to deploy
          # aws ecs update-service --cluster your-prod-cluster --service your-prod-service --force-new-deployment
          echo "Deployment to Production complete!"
```
Key Features:
* **Triggers**: Runs on `push` to `main`, `develop`, or version tags, and on `pull_request` targeting `main` or `develop`.
* **Job dependencies**: `test` depends on `lint`, and `build` depends on `test`, enforcing sequential execution of stages.
* **Caching**: `npm` dependencies are cached to speed up subsequent runs.
* **Docker**: Uses `docker/build-push-action` for efficient Docker image building and pushing to a registry (e.g., GitHub Container Registry).
* **Secrets**: Uses GitHub secrets (e.g., `secrets.GH_CR_TOKEN`) for secure credentials.

---

This configuration is for projects hosted on GitLab. It also uses a Node.js application example, building a Docker image and pushing it to GitLab's Container Registry, then simulating a Kubernetes deployment.
**File Location**: `.gitlab-ci.yml`
```yaml
stages:
  - lint
  - test
  - build
  - deploy

variables:
  NODE_VERSION: '18'                    # Major version only (tag node:18-alpine exists; node:18.x-alpine does not)
  DOCKER_IMAGE_NAME: $CI_REGISTRY_IMAGE # Predefined GitLab variable for the image name
  DOCKER_TAG: $CI_COMMIT_SHA            # Use commit SHA as the default tag

default:
  image: node:${NODE_VERSION}-alpine    # Default image for all jobs
  before_script:
    - npm ci --cache .npm --prefer-offline # Install dependencies once, use cache
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .npm/
      - node_modules/

lint:
  stage: lint
  script:
    - npm run lint # Assumes 'lint' script in package.json

test:
  stage: test
  script:
    - npm test # Assumes 'test' script in package.json
  artifacts:
    when: always
    reports:
      junit: junit.xml # Example for JUnit test reports

build_image:
  stage: build
  image: docker:24.0.5       # Docker CLI image for building
  services:
    - docker:24.0.5-dind     # Docker in Docker for building images
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""   # Disable TLS for DIND
  before_script: []          # Override the default 'npm ci'; this image has no Node.js
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $DOCKER_IMAGE_NAME:$DOCKER_TAG .
    - docker push $DOCKER_IMAGE_NAME:$DOCKER_TAG
  rules:
    - if: $CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH == "develop" || $CI_COMMIT_TAG =~ /^v\d+\.\d+\.\d+$/ # Run on main, develop, or version tags

deploy_staging:
  stage: deploy
  image: alpine/git          # Or a kubectl image for Kubernetes deployments
  before_script: []          # No npm needed for deployment
  environment:
    name: staging
    url: https://staging.your-app.com
  script:
    - echo "Deploying $DOCKER_IMAGE_NAME:$DOCKER_TAG to Staging..."
    # Example:
```