This deliverable provides detailed, professional CI/CD pipeline configurations tailored for common development workflows, covering linting, testing, building, and deployment stages. Ready-to-use YAML files are provided for GitHub Actions and GitLab CI, along with a foundational Jenkinsfile example, incorporating best practices for code quality, build integrity, and reliable deployment. These templates are designed to be actionable starting points for your project.
Core stages covered: linting, testing, building, and deployment.
To provide concrete and actionable examples, we'll assume a common scenario:
* src/: Application source code.
* Dockerfile: Defines the Docker image for the application.
* package.json (or similar, e.g., requirements.txt, pom.xml): Defines dependencies and scripts for linting (npm run lint) and testing (npm test).
* Container Registry: Pushing the built Docker image to a container registry (e.g., Docker Hub, AWS ECR, GitLab Container Registry, Azure ACR, Google Container Registry).
* Compute Service: A placeholder for deploying the container to a service like Kubernetes (EKS, GKE, AKS), AWS ECS, Azure App Service, or a virtual machine. Actual deployment commands will require customization.
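For reference, the Dockerfile in this scenario might look like the following minimal sketch for a Node.js service. The base image, port, and entry point are assumptions; adapt them to your application:

```dockerfile
# Hypothetical multi-stage Dockerfile for a Node.js app (adjust versions/paths).
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY src/ ./src/
# If your project has a build step (e.g., TypeScript), run it here:
# RUN npm run build

FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app ./
EXPOSE 3000
CMD ["node", "src/index.js"]
```

The multi-stage layout keeps build-time dependencies out of the final image, which shrinks the artifact pushed by the pipelines below.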
GitHub Actions provides a flexible way to automate workflows directly within your GitHub repository.
File Location: .github/workflows/main.yml
```yaml
# .github/workflows/main.yml
name: CI/CD Pipeline

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

env:
  # Replace with your Docker Hub username or registry host
  DOCKER_REGISTRY: docker.io
  DOCKER_IMAGE_NAME: your-org/your-app
  # For Docker Hub, this is your username. For ECR/ACR/GCR, it's the registry host.
  DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}

jobs:
  lint:
    name: Lint Code
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js (example for JS/TS projects)
        uses: actions/setup-node@v4
        with:
          node-version: '20' # Or your specific Node.js version
      - name: Install dependencies
        run: npm ci # Use npm ci for clean installs in CI
      - name: Run linter
        run: npm run lint # Assumes 'lint' script in package.json

  test:
    name: Run Tests
    runs-on: ubuntu-latest
    needs: lint # Ensure linting passes before testing
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js (example for JS/TS projects)
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test # Assumes 'test' script in package.json
      # - name: Upload test results (e.g., for Jest/JUnit reports)
      #   uses: actions/upload-artifact@v4
      #   if: always()
      #   with:
      #     name: test-results
      #     path: ./junit.xml # Adjust path to your test report file

  build:
    name: Build Docker Image
    runs-on: ubuntu-latest
    needs: [lint, test] # Ensure linting and tests pass before building
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Log in to Docker Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.DOCKER_REGISTRY }}
          username: ${{ env.DOCKER_USERNAME }}
          # For ECR/ACR/GCR, use a registry-specific token instead
          # (e.g., the output of 'aws ecr get-login-password' for ECR).
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.DOCKER_REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}
          tags: |
            type=sha,format=long,prefix=
            type=raw,value=latest,enable=${{ github.ref == format('refs/heads/{0}', github.event.repository.default_branch) }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    name: Deploy to Environment
    runs-on: ubuntu-latest
    needs: build # Ensure image is built and pushed before deployment
    if: github.ref == format('refs/heads/{0}', github.event.repository.default_branch) # Only deploy on push to the default branch
    environment: production # Define an environment for better visibility and protection rules
    steps:
      - name: Checkout code (optional, if deployment scripts are in repo)
        uses: actions/checkout@v4
      # Example: Deploy to AWS ECS/EKS using AWS CLI
      # Requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as GitHub Secrets
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1 # Replace with your AWS region
      - name: Deploy to AWS ECS (example)
        run: |
          # Replace with your actual deployment commands
          # e.g., update an ECS service definition, deploy to EKS with kubectl/helm
          # aws ecs update-service --cluster your-cluster --service your-service --force-new-deployment
          # kubectl apply -f k8s/deployment.yaml
          echo "Deploying ${{ env.DOCKER_REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }}"
          echo "Deployment script for production environment would run here."
          # Example: Call a custom deployment script
          # ./scripts/deploy_to_production.sh "${{ env.DOCKER_REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }}"
      # Example: Deploy to Azure Web App (requires AZURE_CREDENTIALS secret)
      # - name: Azure Login
      #   uses: azure/login@v1
      #   with:
      #     creds: ${{ secrets.AZURE_CREDENTIALS }}
      # - name: Deploy to Azure Web App
      #   uses: azure/webapps-deploy@v2
      #   with:
      #     app-name: 'your-webapp-name'
      #     slot-name: 'production'
      #     images: '${{ env.DOCKER_REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }}'
      - name: Deployment successful
        run: echo "Application deployed successfully to production!"
```
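The commented-out scripts/deploy_to_production.sh call above assumes a small wrapper script in the repository. A minimal sketch, written as a shell function; the cluster and service names, and the actual rollout command, are placeholders to replace with your own:

```shell
# Hypothetical sketch of scripts/deploy_to_production.sh.
# Takes the fully qualified image reference as its only argument;
# the real rollout command is left commented out.
deploy_to_production() {
    image_ref="${1:?usage: deploy_to_production <image-ref>}"
    echo "Deploying ${image_ref}"
    # aws ecs update-service --cluster your-cluster --service your-service --force-new-deployment
    echo "Deployment triggered"
}
```

Keeping deployment logic in a versioned script like this makes it testable locally and reusable across CI platforms.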
DevOps Pipeline Generator, Step 1: Analyze Infrastructure Needs
This document outlines a comprehensive analysis of the infrastructure needs required to generate a robust and efficient CI/CD pipeline. The goal of this initial step is to identify and categorize the critical infrastructure components and dependencies that will inform the design and configuration of your tailored GitHub Actions, GitLab CI, or Jenkins pipeline. A thorough understanding of your existing and desired infrastructure ensures the generated pipeline is not only functional but also optimized for performance, security, and scalability.
To generate an effective CI/CD pipeline, we must consider several foundational infrastructure categories. Each category plays a vital role in the pipeline's execution, from code commit to production deployment.
Source Code Management (SCM)
* Purpose: Hosts your application's source code and triggers pipeline events.
* Considerations:
* Provider: GitHub, GitLab, Bitbucket, Azure DevOps Repos.
* Repository Structure: Monorepo vs. Polyrepo.
* Branching Strategy: GitFlow, GitHub Flow, GitLab Flow, Trunk-Based Development.
* Webhooks/Integrations: How the SCM communicates with the CI/CD platform.
CI/CD Platform
* Purpose: The engine that orchestrates and executes your pipeline stages.
* Considerations:
* Preferred Platform: GitHub Actions, GitLab CI, Jenkins, Azure Pipelines, CircleCI.
* Runner/Agent Strategy: Self-hosted vs. cloud-hosted (managed) runners/agents.
* Scalability: How the platform scales to handle concurrent builds.
* Integration Ecosystem: Availability of plugins/actions for other tools.
Build Environment
* Purpose: Provides the necessary tools and environment to compile, package, and containerize your application.
* Considerations:
* Operating System: Linux (Ubuntu, Alpine), Windows, macOS.
* Programming Languages/Runtimes: Java (JDK versions), Node.js (npm/yarn), Python (pip), Go, .NET, Ruby, PHP.
* Build Tools: Maven, Gradle, npm, yarn, pip, dotnet CLI, Go modules.
* Containerization: Docker daemon, Podman.
* Tooling Versions: Specific versions required for compatibility.
Testing Infrastructure
* Purpose: Executes various types of tests to ensure code quality and functionality.
* Considerations:
* Test Frameworks: JUnit, Jest, Pytest, Cypress, Selenium, Playwright.
* Test Data Management: How test data is provisioned and managed.
* Reporting: Integration with test reporting tools (e.g., Allure, SonarQube).
* Environments: Dedicated environments for integration, staging, or E2E tests.
Artifact Repository
* Purpose: Stores compiled binaries, Docker images, and other build artifacts.
* Considerations:
* Type: Docker Registry (Docker Hub, ECR, GCR, Azure Container Registry), Maven/npm/PyPI repository (Nexus, Artifactory), GitHub Packages, GitLab Package Registry.
* Access Control: Permissions for pushing and pulling artifacts.
* Retention Policies: How long artifacts are stored.
Deployment Target
* Purpose: The environment(s) where the application will be deployed.
* Considerations:
* Cloud Provider: AWS, Azure, GCP, On-Premise.
* Deployment Model:
* Container Orchestration: Kubernetes (EKS, AKS, GKE, OpenShift), AWS ECS/Fargate.
* Serverless: AWS Lambda, Azure Functions, Google Cloud Functions.
* Virtual Machines/Servers: AWS EC2, Azure VMs, GCP Compute Engine, bare metal (SSH, Ansible, Chef, Puppet).
* Platform as a Service (PaaS): AWS Elastic Beanstalk, Azure App Service, Heroku.
* Deployment Strategy: Rolling updates, Blue/Green, Canary deployments.
* Network Access: Firewall rules, VPC/VNet peering, security groups.
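As an illustration of the rolling-update strategy mentioned above, a Kubernetes Deployment can express it declaratively. This fragment is a sketch; the field values are assumptions, not recommendations:

```yaml
# Fragment of a hypothetical Kubernetes Deployment manifest.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod during the rollout
      maxUnavailable: 0  # never drop below the desired replica count
```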
Secrets Management
* Purpose: Securely stores and manages sensitive information (API keys, database credentials, tokens).
* Considerations:
* Provider: AWS Secrets Manager, Azure Key Vault, GCP Secret Manager, HashiCorp Vault, CI/CD platform built-in secrets.
* Access Control: Least privilege access for pipeline agents.
* Rotation: How secrets are rotated and managed.
Monitoring and Logging
* Purpose: Observability of the application and the pipeline itself.
* Considerations:
* Logging: ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Datadog, CloudWatch Logs, Azure Monitor Logs, GCP Logging.
* Metrics: Prometheus, Grafana, Datadog, CloudWatch Metrics, Azure Monitor Metrics, GCP Monitoring.
* Alerting: PagerDuty, Slack, email integrations.
Security and Quality Tooling
* Purpose: Integrates security and quality checks throughout the pipeline.
* Considerations:
* Static Application Security Testing (SAST): SonarQube, Checkmarx, Fortify.
* Dynamic Application Security Testing (DAST): OWASP ZAP, Burp Suite.
* Software Composition Analysis (SCA): Snyk, WhiteSource, Trivy.
* Container Image Scanning: Clair, Trivy, Aqua Security.
* Linting/Code Formatting: ESLint, Prettier, Black, Flake8.
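Several of these checks drop into the pipelines shown in this document as single steps. For example, a container image scan with Trivy could be added to a GitHub Actions build job as a sketch like the following; the action reference and image name are assumptions to adapt:

```yaml
# Hypothetical GitHub Actions step: fail the build on HIGH/CRITICAL findings.
- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: ${{ env.DOCKER_REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }}
    severity: HIGH,CRITICAL
    exit-code: '1'
```

Placing the scan between build and deploy ensures vulnerable images never reach the registry-to-deployment path unreviewed.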
The landscape of CI/CD infrastructure is constantly evolving, so the categories above should be revisited as your stack changes. To generate a precise and effective CI/CD pipeline, we require specific information about your current and desired infrastructure setup.
Action Required from Customer: Please provide detailed answers to the following questions. If you are unsure about any item, please indicate that, and we can schedule a follow-up discussion.
* Which SCM platform do you currently use (e.g., GitHub, GitLab, Bitbucket, Azure DevOps Repos)?
* Do you have a preference for monorepo or polyrepo structures?
* What is your primary branching strategy?
* Which CI/CD platform do you prefer for the generated pipeline (GitHub Actions, GitLab CI, Jenkins)?
* Do you plan to use cloud-hosted runners/agents or self-hosted ones?
* What is the primary programming language(s) and framework(s) of your application (e.g., Java/Spring Boot, Node.js/React, Python/Django, .NET Core)?
* Are you containerizing your application (e.g., Docker)?
* What are the key build tools and dependencies (e.g., Maven, npm, pip, dotnet CLI, specific JDK version)?
* What types of tests are performed (unit, integration, E2E, performance)?
* Which testing frameworks are you using or planning to use?
* Do you have an existing artifact repository (e.g., Nexus, Artifactory, Docker Hub, ECR)? If not, do you have a preference?
* Which cloud provider(s) are you targeting for deployment (AWS, Azure, GCP, On-Premise)?
* What is your primary deployment model (e.g., Kubernetes, Serverless functions, VMs, PaaS)?
* Do you have specific environments (e.g., Dev, Staging, Production) with different configurations?
* How do you currently manage secrets and credentials within your organization?
* Are there any specific security scanning tools (SAST, SCA, container scanning) you wish to integrate?
* Do you have code quality requirements (e.g., SonarQube integration, linting rules)?
Next Action: Once this information is provided, we will proceed to Step 2: Define Pipeline Stages and Logic, where we will translate these infrastructure needs into concrete CI/CD pipeline configurations.
Key elements of the workflow above:
* name: The name of your workflow.
* on: Defines when the workflow runs (e.g., on push to main, pull_request).
* env: Global environment variables for the workflow.
* jobs:
  * lint: Checks out code, sets up Node.js (customize for Python, Java, etc.), installs dependencies, and runs npm run lint. runs-on specifies the runner environment.
  * test: needs: lint ensures this job only runs if lint succeeds; otherwise a similar setup to lint, but it runs npm test.
  * build: needs: [lint, test] ensures both lint and test pass. docker/login-action authenticates with your container registry using secrets; docker/metadata-action automatically generates image tags (e.g., based on the Git SHA, plus latest for the default branch); docker/build-push-action builds the image and pushes it to the registry, reusing the GitHub Actions cache for Docker layers.
The remainder of this document provides an expanded set of configurations for GitHub Actions, GitLab CI, and Jenkins. These again cover linting, testing, building, and deployment, but add separate staging and production deployment jobs, giving you a foundation for automated, efficient, and reliable releases with less manual effort and fewer errors.
Continuous Integration (CI) and Continuous Delivery/Deployment (CD) pipelines are fundamental to modern software development. They automate the processes of building, testing, and deploying applications, ensuring that code changes are integrated, validated, and released rapidly and reliably.
Key benefits: faster feedback on changes, consistent and repeatable builds, fewer manual errors, and safer, more frequent releases.
Common stages covered: linting, testing, building, and deployment.
Universal best practices, regardless of platform:
* Fail fast: run cheap checks (linting) before expensive ones (tests, builds).
* Cache dependencies to keep pipelines fast.
* Store credentials in the platform's secrets mechanism, never in the repository.
* Pin tool and action versions for reproducible builds.
GitHub Actions provides a flexible and powerful CI/CD solution directly within your GitHub repository. Workflows are defined using YAML files in the .github/workflows/ directory.
Workflows are triggered by events such as push and pull_request.
File Location: .github/workflows/main.yml
This example demonstrates a CI/CD pipeline for a Node.js application, including linting, testing, building a Docker image, and deploying to a generic environment.
```yaml
name: Node.js CI/CD Pipeline

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main
      - develop

env:
  NODE_VERSION: '18.x' # Specify Node.js version
  DOCKER_IMAGE_NAME: my-nodejs-app # Name for your Docker image
  AWS_REGION: us-east-1 # Example AWS region for deployment

jobs:
  lint:
    name: Lint Code
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm' # Cache npm dependencies
      - name: Install dependencies
        run: npm ci
      - name: Run ESLint
        run: npm run lint # Assuming you have a 'lint' script in package.json

  test:
    name: Run Tests
    needs: lint # This job depends on the 'lint' job
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run Jest tests
        run: npm test # Assuming you have a 'test' script in package.json

  build:
    name: Build Docker Image
    needs: test # This job depends on the 'test' job
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}
      - name: Build and push Docker image
        id: docker_build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ secrets.DOCKER_USERNAME }}/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }} # Tag with commit SHA
          cache-from: type=gha # Use GitHub Actions cache for Docker layers
          cache-to: type=gha,mode=max
      - name: Print Docker image digest
        run: echo "Docker image built and pushed: ${{ steps.docker_build.outputs.digest }}"

  deploy-staging:
    name: Deploy to Staging
    needs: build # This job depends on the 'build' job
    if: github.ref == 'refs/heads/develop' # Only deploy staging on 'develop' branch pushes
    runs-on: ubuntu-latest
    environment: staging # Define an environment for better visibility and protection rules
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Deploy to AWS ECS/EC2 (example)
        run: |
          # Replace with your actual deployment commands
          # e.g., using AWS CLI, kubectl, serverless framework, etc.
          echo "Deploying ${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }} to Staging environment..."
          # Example: Update an ECS service
          # aws ecs update-service --cluster my-cluster --service my-staging-service --force-new-deployment
          echo "Staging deployment complete."

  deploy-production:
    name: Deploy to Production
    needs: build # This job depends on the 'build' job
    if: github.ref == 'refs/heads/main' # Only deploy production on 'main' branch pushes
    runs-on: ubuntu-latest
    environment: production # Define an environment for better visibility and protection rules
    # To require manual approval before production deploys, add required reviewers
    # to the 'production' environment under Settings > Environments. You can also
    # give the environment a visible URL:
    # environment:
    #   name: production
    #   url: https://your-prod-app.com
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Deploy to AWS ECS/EC2 (example)
        run: |
          # Replace with your actual deployment commands
          echo "Deploying ${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }} to Production environment..."
          # Example: Update an ECS service
          # aws ecs update-service --cluster my-cluster --service my-production-service --force-new-deployment
          echo "Production deployment complete."
```
Setup steps:
* Create the .github/workflows/ directory in your repository's root.
* Create main.yml (or any .yml name) inside the .github/workflows/ directory.
* Configure secrets: navigate to your GitHub repository > Settings > Secrets and variables > Actions > New repository secret, and add DOCKER_USERNAME, DOCKER_TOKEN, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY with your respective values.
* Commit and push the main.yml file to your repository. The workflow will automatically trigger based on the on events defined.
* Use the Actions tab in your GitHub repository to monitor workflow execution.

Customization notes:
* Adapt the actions/setup-node, npm ci, npm run lint, and npm test commands to suit Python (setup-python, pip install, pylint, pytest), Java (setup-java, mvn install, mvn test), Go, etc.
* The deploy-staging and deploy-production jobs are placeholders. Replace the run commands with specific deployment scripts for AWS ECS, Kubernetes, Azure App Service, Google Cloud Run, Heroku, etc.
* Adjust cache keys for different package managers (e.g., npm, pip, gradle).

GitLab CI/CD is deeply integrated into GitLab, allowing you to define your pipeline using a .gitlab-ci.yml file in your repository's root.
Jobs are grouped into stages (e.g., build, test, deploy) that run in order.
File Location: .gitlab-ci.yml
This example mirrors the Node.js application pipeline for GitLab CI.
```yaml
stages:
  - lint
  - test
  - build
  - deploy-staging
  - deploy-production

variables:
  NODE_VERSION: '18' # Major version only, so node:18-alpine below is a valid image tag
  DOCKER_IMAGE_NAME: my-gitlab-nodejs-app
  AWS_REGION: us-east-1 # Example AWS region for deployment
  DOCKER_REGISTRY: registry.gitlab.com/${CI_PROJECT_PATH} # GitLab Container Registry

default:
  image: node:${NODE_VERSION}-alpine # Use a base image for all jobs by default
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull-push # Cache node_modules across jobs

lint_job:
  stage: lint
  script:
    - npm ci
    - npm run lint
  artifacts:
    expire_in: 1 week
    paths:
      - node_modules/ # Make node_modules available to subsequent stages

test_job:
  stage: test
  script:
    - npm ci
    - npm test # Assuming a 'test' script in package.json
```