As part of the PantheraHive "DevOps Pipeline Generator" workflow, this deliverable provides comprehensive CI/CD pipeline configurations for GitHub Actions, GitLab CI, and Jenkins, covering essential stages such as linting, testing, building, and deployment.

The goal is to provide actionable, production-ready templates that can be adapted to your specific application stack and infrastructure. Each configuration emphasizes best practices for security, efficiency, and maintainability.
This document provides detailed CI/CD pipeline configurations for three popular platforms: GitHub Actions, GitLab CI, and Jenkins. Each configuration is designed to be comprehensive, including stages for code quality (linting), validation (testing), artifact creation (building), and environment promotion (deployment).
All generated pipeline configurations observe the same core principles: sequential quality gates (lint -> test -> build -> deploy), dependency caching, secrets-based credential management, and environment-gated deployments.
### 1. GitHub Actions Configuration

GitHub Actions provides a flexible and powerful CI/CD solution directly integrated with your GitHub repositories. This example uses a Node.js application that builds a Docker image and deploys it.

**File Location**: `.github/workflows/main.yml`
**Key Features and Customization:**

* **`on` Triggers**: Configured for pushes to `main`, pull requests, and manual `workflow_dispatch`.
* **`env` Variables**: Centralized environment variables for easy modification.
* **`needs` Dependency**: Jobs run in sequence (lint -> test -> build -> deploy).
* **Caching**: `actions/setup-node` includes caching for `npm` dependencies.
* **Docker Build & Push**: Uses `docker/build-push-action` for efficient Docker image management.
* **Secrets Management**: Uses GitHub Secrets (`secrets.AWS_ACCESS_KEY_ID`, `secrets.DOCKER_PASSWORD`, etc.) for sensitive information.
* **Environment Protection**: Utilizes GitHub Environments (`Staging`, `Production`) for branch protection, required reviewers, and environment-specific secret management.
* **Manual Approval**: The `trstringer/manual-approval` action is used for production deployments, requiring explicit approval from specified GitHub users.
* **Deployment Commands**: Placeholder `aws` commands are provided for ECS. Adapt these to your cloud provider (AWS, Azure, GCP) and deployment strategy (Kubernetes, Serverless, VMs).

---

### 2. GitLab CI Configuration

GitLab CI is deeply integrated with GitLab repositories, offering robust CI/CD capabilities. This example also uses a Node.js application that builds a Docker image and deploys it.

**File Location**: `.gitlab-ci.yml`
This document outlines a comprehensive analysis of the infrastructure needs required to generate robust, scalable, and secure CI/CD pipelines. The goal is to identify the foundational components and strategies essential for automating the software delivery lifecycle, encompassing source code management, build, test, artifact management, and deployment.
Given the generic nature of the initial request, this analysis provides a strategic overview of common requirements, industry best practices, and actionable recommendations. The core principle is to establish a flexible and maintainable infrastructure that supports rapid development cycles while ensuring stability and security. By standardizing on modern tools and methodologies like containerization, Infrastructure as Code (IaC), and integrated security, organizations can significantly improve their DevOps maturity and accelerate time-to-market.
Without specific details regarding your current application landscape, technology stack, or existing infrastructure, this analysis operates under general assumptions about a typical modern application stack.
This analysis will provide a foundational understanding, which will be further refined with your specific inputs in the subsequent steps.
* Platform: GitHub or GitLab are the primary candidates as per the workflow description.
* Webhooks/API: The CI/CD system must effectively listen for SCM events (push, pull request, tag creation) and interact via API for status updates.
* Permissions: Granular access control for the CI/CD system to interact with repositories.
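To make the webhook requirement concrete, the snippet below sketches how a CI/CD system would verify a GitHub webhook signature (GitHub sends an HMAC-SHA256 of the payload in the `X-Hub-Signature-256` header). The secret and payload here are illustrative values, not real credentials:

```shell
# Sketch: verifying a GitHub webhook signature (X-Hub-Signature-256 header).
# SECRET and PAYLOAD are illustrative values; in practice the payload is the
# raw request body and SECRET is the webhook secret configured in the SCM.
SECRET='my-webhook-secret'
PAYLOAD='{"ref":"refs/heads/main"}'
SIGNATURE="sha256=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')"
echo "$SIGNATURE"
# The receiver compares this value, constant-time, against the header sent by GitHub.
```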
* GitHub Actions:
* Pros: Deeply integrated with GitHub, YAML-based definitions, extensive marketplace of pre-built actions, managed runners available, strong community support.
* Cons: Less mature for complex enterprise-level self-hosted runner scenarios compared to Jenkins.
* GitLab CI:
* Pros: Fully integrated into the GitLab ecosystem (SCM, registry, security scanning, issue tracking), YAML-based, powerful AutoDevOps features, managed and self-hosted runners.
* Cons: Tightly coupled with GitLab, which might be a limitation if using other SCMs.
* Jenkins:
* Pros: Highly extensible via a vast plugin ecosystem, supports complex workflows, can be self-hosted anywhere, very mature.
* Cons: Steeper learning curve, requires significant operational overhead for maintenance and scaling, configuration can become complex (Groovy DSL).
* Operating System: Linux (most common), Windows, macOS (for mobile/desktop apps).
* Runtimes/SDKs: Node.js, Python, Java (JDK), .NET, Go, Ruby, PHP, specific compilers (e.g., GCC, Clang).
* Build Tools: Maven, Gradle, npm, yarn, pip, go build, MSBuild.
* Testing Frameworks: Jest, JUnit, Pytest, Cypress, Selenium, Playwright.
* Linting/Static Analysis Tools: ESLint, SonarQube, Black, Flake8, CheckStyle.
* Containerization: Docker for encapsulating build and test environments.
* Containerized Build Agents: Utilize Docker containers for build and test environments to ensure reproducibility, isolate dependencies, and simplify environment setup. This allows pipelines to run consistently across different agents.
* Managed Runners: Leverage managed runners (GitHub-hosted runners, GitLab Shared Runners) for common technology stacks to reduce infrastructure management overhead.
* Self-Hosted Runners: Deploy self-hosted runners (on VMs or Kubernetes) for specific hardware requirements, custom environments, or enhanced security/performance needs.
* Docker Images:
* Cloud Registries: GitHub Container Registry, GitLab Container Registry, AWS ECR, Azure Container Registry (ACR), Google Container Registry (GCR).
* Self-Hosted: Harbor, Docker Registry.
* Language-Specific Packages:
* Maven/Gradle: Nexus Repository Manager, JFrog Artifactory, GitHub Packages.
* npm/Yarn: Nexus, Artifactory, GitHub Packages, npm registry.
* Python (PyPI): Nexus, Artifactory, GitHub Packages.
* Generic Binaries/Files: Cloud object storage (AWS S3, Azure Blob Storage, Google Cloud Storage).
* For Docker images, use the native container registry integrated with your SCM (GitHub Container Registry, GitLab Container Registry) or a cloud-specific registry (ECR, ACR, GCR) for tight integration with deployment targets.
* For other package types, consider a universal package manager like JFrog Artifactory or Nexus, or utilize GitHub Packages for deep SCM integration. Cloud object storage is ideal for generic artifacts.
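A common convention when pushing to any of the registries above is to tag images immutably with the commit SHA. The sketch below assumes GitHub Container Registry naming; the owner, app name, and SHA are placeholders:

```shell
# Sketch: deriving an immutable image reference from the commit SHA.
# OWNER and APP are placeholders; GIT_SHA would normally be $(git rev-parse HEAD).
OWNER="your-org"
APP="your-app-name"
GIT_SHA="0123456789abcdef0123456789abcdef01234567"
IMAGE="ghcr.io/${OWNER}/${APP}:$(printf '%s' "$GIT_SHA" | cut -c1-7)"
echo "$IMAGE"
# docker build -t "$IMAGE" . && docker push "$IMAGE"   # requires prior docker login
```

Tagging by SHA (rather than `latest`) makes every deployment traceable back to an exact commit.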
* Virtual Machines (VMs): AWS EC2, Azure VMs, Google Compute Engine.
* Tools: Ansible, Chef, Puppet, custom shell scripts, cloud-init.
* Container Orchestration (Kubernetes): AWS EKS, Azure AKS, Google GKE, OpenShift.
* Tools: Helm, Kustomize, Argo CD, Flux CD (for GitOps).
* Serverless: AWS Lambda, Azure Functions, Google Cloud Functions.
* Tools: Serverless Framework, AWS SAM.
* Platform as a Service (PaaS): AWS Elastic Beanstalk, Azure App Service, Google App Engine, Heroku.
* Modern Applications: Prioritize Kubernetes for microservices and scalable applications due to its robust orchestration capabilities, self-healing, and strong ecosystem. Implement GitOps using tools like Argo CD or Flux CD for declarative, Git-driven deployments.
* Simpler Applications/Cost Optimization: Serverless or PaaS solutions can significantly reduce operational overhead for applications that fit their model.
* Infrastructure as Code (IaC): Regardless of the chosen deployment target, define and provision environments declaratively (e.g., with Terraform, Pulumi, or CloudFormation) so that infrastructure changes are version-controlled, reviewable, and reproducible.
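The IaC recommendation can be sketched with a minimal, illustrative Terraform resource; the tool choice and all names here are assumptions for illustration, not part of the generated pipeline configurations:

```hcl
# Illustrative only: an S3 bucket for build artifacts, managed declaratively.
resource "aws_s3_bucket" "artifacts" {
  bucket = "your-app-artifacts" # assumption: replace with your bucket name
}
```

In a pipeline, `terraform plan` typically runs in a validation stage and `terraform apply` in a gated deploy stage.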
```yaml
image: node:18-alpine # Base image for all jobs unless overridden

variables:
  DOCKER_IMAGE_NAME: your-app-name
  DOCKER_REGISTRY: $CI_REGISTRY # GitLab's built-in registry
  AWS_REGION: us-east-1 # Specify your AWS region
  # For external registries like Docker Hub:
  # DOCKER_REGISTRY: docker.io/your-docker-username

stages:
  - lint
  - test
  - build
  - deploy_staging
  - deploy_production

lint_job:
  stage: lint
  script:
    - npm ci
    - npm run lint # Assumes 'lint' script in package.json
  cache:
    key: ${CI_COMMIT_REF_SLUG}-node-modules
    paths:
      - node_modules/
    policy: pull-push # Cache dependencies between jobs/pipelines

test_job:
  stage: test
  script:
    - npm ci
    - npm test # Assumes 'test' script in package.json
  cache:
    key: ${CI_COMMIT_REF_SLUG}-node-modules
    paths:
      - node_modules/
    policy: pull # Only pull cache, don't modify it after install
  # Optional: Integration with GitLab's SAST/DAST/Dependency Scanning
  # artifacts:
  #   reports:
  #     sast: gl-sast-report.json
  #     dast: gl-dast-report.json
  #     dependency_scanning: gl-dependency-scanning-report.json

build_docker_image_job:
  stage: build
  image: docker:24.0.5-git # Use a Docker-in-Docker enabled image
  services:
    - docker:24.0.5-dind # Required for Docker-in-Docker
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY # Login to GitLab's registry
    # For external registries:
    # - docker login -u $DOCKER_USERNAME -p $DOCK
```
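Inside the build job, the image is typically tagged with GitLab's predefined variables. The sketch below simulates the two predefined variables locally; on a runner, `CI_REGISTRY_IMAGE` and `CI_COMMIT_SHORT_SHA` are provided automatically:

```shell
# Sketch: tag construction with GitLab predefined variables (values simulated here;
# on a GitLab runner these variables are injected into the job environment).
CI_REGISTRY_IMAGE="registry.gitlab.com/your-group/your-app-name"
CI_COMMIT_SHORT_SHA="a1b2c3d4"
IMAGE_TAG="${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
echo "$IMAGE_TAG"
# docker build -t "$IMAGE_TAG" . && docker push "$IMAGE_TAG"
```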
This document provides detailed and professional CI/CD pipeline configurations tailored for your project, encompassing essential stages such as linting, testing, building, and deployment. We have generated configurations for GitHub Actions, GitLab CI, and Jenkins, offering flexibility and robust automation for your development workflow.
The generated pipelines are designed to automate the entire software delivery process, from code commit to production deployment. Each configuration leverages the best practices of its respective platform to ensure code quality, reliability, and efficient delivery. These templates serve as a strong foundation, ready for customization to fit your specific application architecture and deployment targets.
Each generated pipeline incorporates the following critical stages, executed sequentially upon a trigger (typically a code push to a specific branch):
**Linting (Code Quality)**
* Purpose: Static code analysis to identify stylistic issues, potential bugs, and adherence to coding standards.
* Tools: ESLint (JavaScript), Black/Flake8 (Python), Prettier (formatting), etc.
* Outcome: Fails the pipeline if code does not meet defined quality standards, preventing poorly formatted or potentially problematic code from progressing.
**Testing (Validation)**
* Purpose: Verifies the functionality and correctness of the application code.
* Types: Unit tests, integration tests, end-to-end (E2E) tests.
* Tools: Jest (JavaScript), Pytest (Python), JUnit (Java), Cypress/Selenium (E2E).
* Outcome: Fails the pipeline if any tests fail, ensuring only validated code moves forward.
**Building (Artifact Creation)**
* Purpose: Compiles source code, resolves dependencies, and packages the application into a deployable artifact (e.g., Docker image, JAR file, compiled static assets).
* Tools: npm/yarn (Node.js), Maven/Gradle (Java), pip (Python), Docker.
* Outcome: Produces a ready-to-deploy artifact, versioned and stored for subsequent stages.
**Deployment (Environment Promotion)**
* Purpose: Releases the built artifact to one or more environments (e.g., Staging, Production).
* Strategies: Blue/Green, Canary, Rolling updates, Recreate.
* Targets: Cloud providers (AWS, Azure, GCP), Kubernetes clusters, virtual machines, serverless functions, static hosting.
* Outcome: Makes the new version of the application available to users in the target environment.
Below are detailed configurations for GitHub Actions, GitLab CI, and Jenkins, demonstrating a common scenario (e.g., a Node.js web application) with placeholders for specific credentials and deployment targets.
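All three configurations invoke `npm run lint`, `npm test`, and `npm run build`, so they assume the project's `package.json` defines those scripts. A minimal sketch follows; the specific tools shown (ESLint, Jest, react-scripts) are illustrative choices, not prescribed by the pipelines:

```json
{
  "name": "your-app-name",
  "private": true,
  "scripts": {
    "lint": "eslint .",
    "test": "jest",
    "build": "react-scripts build"
  }
}
```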
GitHub Actions uses YAML files (.yml) stored in the .github/workflows/ directory of your repository.
Filename: .github/workflows/main.yml
```yaml
name: CI/CD Pipeline

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main
      - develop

jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    env:
      NODE_VERSION: '18.x' # Specify your Node.js version
      # Add other environment variables as needed, e.g.,
      # MY_APP_ENV: production
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm' # or 'yarn', 'pnpm'

      - name: Install Dependencies
        run: npm ci # or yarn install --frozen-lockfile, pnpm install --frozen-lockfile

      - name: Run Linting
        run: npm run lint # Assumes a 'lint' script in package.json
        # Add 'continue-on-error: true' to make linting non-blocking for later stages.
        # For professional setups, linting should typically fail the pipeline.

      - name: Run Tests
        run: npm test # Assumes a 'test' script in package.json
        env:
          CI: true # Important for some test runners (e.g., Jest)

      - name: Build Application
        run: npm run build # Assumes a 'build' script in package.json
        env:
          # Example: pass build-time environment variables
          REACT_APP_API_URL: ${{ secrets.REACT_APP_API_URL }}

      - name: Archive Build Artifacts (Optional)
        uses: actions/upload-artifact@v4
        with:
          name: build-artifact
          path: build/ # Adjust path to your build output directory

      - name: Deploy to Staging (Example: AWS S3/CloudFront)
        if: github.ref == 'refs/heads/develop'
        uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET_STAGING }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'us-east-1' # Adjust region
          SOURCE_DIR: 'build' # Adjust path to your build output directory

      - name: Deploy to Production (Example: AWS S3/CloudFront)
        if: github.ref == 'refs/heads/main'
        uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET_PRODUCTION }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'us-east-1' # Adjust region
          SOURCE_DIR: 'build' # Adjust path to your build output directory

      - name: Invalidate CloudFront Cache (Production)
        if: github.ref == 'refs/heads/main'
        uses: chetan/invalidate-cloudfront-action@v2
        env:
          DISTRIBUTION: ${{ secrets.AWS_CLOUDFRONT_DISTRIBUTION_ID_PRODUCTION }}
          PATHS: '/*'
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'us-east-1'
```
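The branch-to-environment mapping driving the `if:` conditions in the workflow can be expressed as a simple rule, sketched below. `GITHUB_REF` is set by GitHub Actions on a runner; it is simulated here:

```shell
# Sketch: which environment a given ref deploys to (mirrors the workflow's
# 'if:' conditions). GITHUB_REF is normally provided by GitHub Actions.
GITHUB_REF="refs/heads/develop"
case "$GITHUB_REF" in
  refs/heads/main)    TARGET="production" ;;
  refs/heads/develop) TARGET="staging" ;;
  *)                  TARGET="none" ;;   # pull requests build and test only
esac
echo "$TARGET"
```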
Integration Steps for GitHub Actions:
1. Commit this file as `.github/workflows/main.yml` in your repository.
2. Add the required secrets under **Settings → Secrets and variables → Actions** (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_S3_BUCKET_STAGING`, `AWS_S3_BUCKET_PRODUCTION`, `AWS_CLOUDFRONT_DISTRIBUTION_ID_PRODUCTION`, `REACT_APP_API_URL`).
3. Push to the `main` or `develop` branch to trigger the pipeline.

GitLab CI uses a single YAML file (`.gitlab-ci.yml`) at the root of your repository.
Filename: .gitlab-ci.yml
```yaml
image: node:18 # Use a Docker image with Node.js pre-installed

stages:
  - lint
  - test
  - build
  - deploy

cache:
  paths:
    - node_modules/

# Define global or per-job variables here as needed, e.g.:
# variables:
#   CI_DEBUG_TRACE: "true" # Uncomment for detailed job logs

lint_job:
  stage: lint
  script:
    - npm ci
    - npm run lint # Assumes a 'lint' script in package.json
  # For professional setups, linting should typically fail the pipeline.
  # allow_failure: true # Uncomment if you want linting to be non-blocking

test_job:
  stage: test
  script:
    - npm ci
    - npm test # Assumes a 'test' script in package.json
  # coverage: '/All files[^|]*\|[^|]*\s+([\d\.]+)/' # Example for Jest coverage

build_job:
  stage: build
  script:
    - npm ci
    - npm run build # Assumes a 'build' script in package.json
  artifacts:
    paths:
      - build/ # Adjust path to your build output directory
    expire_in: 1 week # How long to keep the artifacts
  variables:
    # Example: pass build-time environment variables
    REACT_APP_API_URL: $REACT_APP_API_URL # Defined as a CI/CD variable

deploy_staging_job:
  stage: deploy
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest # Image with the AWS CLI
  script:
    - aws s3 sync build/ s3://$AWS_S3_BUCKET_STAGING --delete # Sync build directory to S3
    - aws cloudfront create-invalidation --distribution-id $AWS_CLOUDFRONT_DISTRIBUTION_ID_STAGING --paths "/*" || true # Invalidate CloudFront cache
  only:
    - develop # Only run on the 'develop' branch
  environment:
    name: staging
    url: https://staging.yourdomain.com # Replace with your staging URL

deploy_production_job:
  stage: deploy
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest # Image with the AWS CLI
  script:
    - aws s3 sync build/ s3://$AWS_S3_BUCKET_PRODUCTION --delete # Sync build directory to S3
    - aws cloudfront create-invalidation --distribution-id $AWS_CLOUDFRONT_DISTRIBUTION_ID_PRODUCTION --paths "/*" # Invalidate CloudFront cache
  only:
    - main # Only run on the 'main' branch
  environment:
    name: production
    url: https://yourdomain.com # Replace with your production URL
```
Integration Steps for GitLab CI:
1. Commit this file as `.gitlab-ci.yml` at the root of your repository.
2. Add the required CI/CD variables under **Settings → CI/CD → Variables** (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_S3_BUCKET_STAGING`, `AWS_S3_BUCKET_PRODUCTION`, `AWS_CLOUDFRONT_DISTRIBUTION_ID_STAGING`, `AWS_CLOUDFRONT_DISTRIBUTION_ID_PRODUCTION`, `REACT_APP_API_URL`). Mark sensitive variables as "Protected" and "Masked."
3. Push to the `main` or `develop` branch to trigger the pipeline.

Jenkins Pipelines are defined using a Jenkinsfile written in Groovy DSL, stored at the root of your repository.
Filename: Jenkinsfile
```groovy
pipeline {
    agent { docker { image 'node:18' } } // Use a Docker agent with Node.js

    environment {
        // Define global environment variables
        NODE_VERSION = '18.x'
        BUILD_DIR = 'build'
        AWS_REGION = 'us-east-1'
        // Sensitive credentials should be managed via Jenkins Credentials:
        // AWS_ACCESS_KEY_ID = credentials('aws-access-key-id')
        // AWS_SECRET_ACCESS_KEY = credentials('aws-secret-access-key')
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }

        stage('Install Dependencies') {
            steps {
                sh 'npm ci'
            }
        }

        stage('Lint') {
            steps {
                sh 'npm run lint' // Assumes a 'lint' script in package.json
            }
        }

        stage('Test') {
            steps {
                sh 'npm test' // Assumes a 'test' script in package.json
            }
        }

        stage('Build') {
            steps {
                sh 'npm run build' // Assumes a 'build' script in package.json
            }
            post {
                success {
                    archiveArtifacts artifacts: "${BUILD_DIR}/**", fingerprint: true
                }
            }
        }

        stage('Deploy to Staging') {
            when {
                branch 'develop'
            }
            steps {
                // Requires the CloudBees AWS Credentials plugin; STAGING_* values
                // should be defined as environment variables or pipeline parameters.
                withCredentials([aws(credentialsId: 'aws-dev-credentials',
                                     accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                                     secretKeyVariable: 'AWS_SECRET_ACCESS_KEY')]) {
                    sh "aws s3 sync ${env.BUILD_DIR}/ s3://${STAGING_S3_BUCKET} --delete --region ${env.AWS_REGION}"
                    sh "aws cloudfront create-invalidation --distribution-id ${STAGING_CLOUDFRONT_DISTRIBUTION_ID} --paths '/*' || true"
                }
            }
        }

        stage('Deploy to Production') {
            when {
                branch 'main'
            }
            steps {
                // Mirrors the staging stage; the credential ID and PRODUCTION_*
                // variables are placeholders to replace with your own values.
                withCredentials([aws(credentialsId: 'aws-prod-credentials',
                                     accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                                     secretKeyVariable: 'AWS_SECRET_ACCESS_KEY')]) {
                    sh "aws s3 sync ${env.BUILD_DIR}/ s3://${PRODUCTION_S3_BUCKET} --delete --region ${env.AWS_REGION}"
                    sh "aws cloudfront create-invalidation --distribution-id ${PRODUCTION_CLOUDFRONT_DISTRIBUTION_ID} --paths '/*'"
                }
            }
        }
    }
}
```