DevOps Pipeline Generator
Run ID: 69caf79e26e01bf7c6786dfe · 2026-03-30

DevOps Pipeline Generator: Comprehensive CI/CD Configurations

This document provides detailed and actionable CI/CD pipeline configurations for GitHub Actions, GitLab CI, and Jenkins. These configurations are designed to automate the entire software delivery lifecycle, encompassing linting, testing, building, and deployment stages for a typical full-stack web application.


1. Introduction

This deliverable provides ready-to-use boilerplate CI/CD pipeline configurations tailored for three popular platforms: GitHub Actions, GitLab CI, and Jenkins. Each configuration is structured to support a modern development workflow, ensuring code quality, reliability, and efficient deployment. The examples are comprehensive, covering common stages from code commit to production deployment.

2. Assumptions & Customization Guide

To provide concrete and actionable examples, the following assumptions have been made regarding the application stack and deployment target. These can be easily customized to fit your specific project requirements.

2.1. Application Stack Assumptions

Frontend (Node.js):

* Linting: ESLint

* Testing: Jest

* Building: npm run build generates static assets

Backend (Python):

* Linting: Flake8 or Black

* Testing: Pytest

* Building: Poetry/Pipenv/pip for dependency management; no build step beyond dependency installation, aside from an optional setup.py when packaging.

2.2. Assumed Project Structure
    .
    ├── frontend/
    │   ├── package.json
    │   └── ...
    ├── backend/
    │   ├── requirements.txt (or pyproject.toml)
    │   └── ...
    ├── Dockerfile
    ├── docker-compose.yml
    └── ...
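
Given this layout, a minimal docker-compose.yml for local development might look like the following sketch. The service names, ports, mount paths, and dev commands are assumptions, not part of the original configuration.

```yaml
version: "3.9"

services:
  backend:
    build: .                     # Uses the root Dockerfile
    ports:
      - "8000:8000"
    environment:
      - PYTHONUNBUFFERED=1
    volumes:
      - ./backend:/app/backend   # Live-reload backend code during development

  frontend:
    image: node:18-alpine
    working_dir: /app
    command: sh -c "npm ci && npm run dev"
    ports:
      - "5173:5173"
    volumes:
      - ./frontend:/app          # Live-reload frontend code during development
```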
    

Step 1 of 3: Analyze Infrastructure Needs - DevOps Pipeline Generator

This document provides a comprehensive analysis of the infrastructure needs required to support a robust and efficient DevOps CI/CD pipeline. This foundational analysis is crucial for designing and implementing configurations for GitHub Actions, GitLab CI, or Jenkins that incorporate testing, linting, building, and deployment stages.


1. Executive Summary

The successful implementation of a modern CI/CD pipeline hinges on a well-defined and adequately provisioned infrastructure. This analysis identifies the core components, key considerations, and best practices for establishing a scalable, secure, reliable, and cost-effective infrastructure. While specific choices will depend on the project's unique requirements, this report outlines a common set of infrastructure elements and strategic recommendations to ensure the pipeline can effectively automate software delivery from code commit to production deployment across various environments.


2. Key Considerations for Infrastructure Selection & Design

When planning the infrastructure for a DevOps pipeline, several critical factors must be evaluated to ensure optimal performance, security, and maintainability:

  • Scalability: The infrastructure must be able to handle varying loads of builds, tests, and deployments, scaling up or down as needed to meet demand without performance degradation.
  • Reliability & High Availability: Core CI/CD services and artifact repositories should be highly available to prevent pipeline downtime and ensure continuous delivery.
  • Security: End-to-end security is paramount, encompassing secure access to source code, secrets management, vulnerability scanning, and secure deployment targets.
  • Cost Efficiency: Infrastructure choices should balance performance and reliability with cost-effectiveness, leveraging managed services where appropriate to reduce operational overhead.
  • Maintainability & Observability: The infrastructure should be easy to monitor, troubleshoot, update, and manage, with comprehensive logging and metrics for insights into pipeline health and application performance.
  • Integration Capabilities: All chosen components must integrate seamlessly with each other and with existing developer toolchains to create a cohesive workflow.
  • Compliance & Governance: Adherence to industry-specific regulations (e.g., GDPR, HIPAA, SOC 2) and internal governance policies must be factored into infrastructure design.
  • Geographic Distribution: For global applications, the ability to deploy and serve users from multiple regions may influence infrastructure choices.

3. Core Infrastructure Components Identified

Based on the general requirement for a "DevOps Pipeline Generator," the following core infrastructure components are typically required:

  1. Source Code Management (SCM) System:

* Purpose: Host application source code, manage version control, facilitate collaboration (e.g., pull requests, branching strategies).

* Examples: GitHub, GitLab (self-hosted or cloud), Bitbucket, Azure DevOps Repos.

  2. CI/CD Orchestration Platform:

* Purpose: Define, trigger, and manage pipeline workflows (build, test, lint, deploy). Orchestrate agents and tasks.

* Examples: GitHub Actions, GitLab CI, Jenkins, Azure DevOps Pipelines.

  3. Build & Test Environments:

* Purpose: Provide isolated and consistent environments for compiling code, running unit/integration/end-to-end tests, and linting.

* Examples: Docker containers (for consistent environments), Virtual Machines (VMs), cloud-hosted build agents (e.g., GitHub Actions runners, GitLab shared runners, Jenkins agents).

  4. Artifact & Image Registry:

* Purpose: Securely store and version build artifacts (e.g., JARs, WARs, npm packages) and Docker images.

* Examples: Docker Hub, Amazon Elastic Container Registry (ECR), Azure Container Registry (ACR), Google Container Registry (GCR), Nexus Repository Manager, Artifactory.

  5. Deployment Targets:

* Purpose: Environments where the application will be deployed (e.g., development, staging, production).

* Examples:

* Cloud-based:

* Container Orchestration: Kubernetes (Amazon EKS, Azure AKS, Google GKE), AWS ECS, Azure Container Apps.

* Virtual Machines: AWS EC2, Azure Virtual Machines, Google Compute Engine.

* Serverless: AWS Lambda, Azure Functions, Google Cloud Functions, AWS Fargate.

* Platform as a Service (PaaS): AWS Elastic Beanstalk, Azure App Service, Google App Engine.

* On-Premise: VMs, bare metal servers, on-premise Kubernetes clusters.

  6. Secrets Management System:

* Purpose: Securely store and manage sensitive information (API keys, database credentials, environment variables) used by the pipeline and applications.

* Examples: HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, Google Secret Manager.

  7. Monitoring & Logging Solutions:

* Purpose: Collect, aggregate, analyze, and visualize logs and metrics from the pipeline and deployed applications for performance, health, and security insights.

* Examples: Prometheus & Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Datadog, CloudWatch (AWS), Azure Monitor, Google Cloud Operations.

  8. Security Scanning Tools:

* Purpose: Integrate security checks throughout the pipeline (shift-left security).

* Examples:

* Static Application Security Testing (SAST): SonarQube, Checkmarx.

* Software Composition Analysis (SCA): Snyk, Trivy, Dependabot, Renovate.

* Dynamic Application Security Testing (DAST): OWASP ZAP.

* Container Image Scanning: Trivy, Clair, Aqua Security.

  9. Infrastructure as Code (IaC) Tools:

* Purpose: Define and provision infrastructure components declaratively, ensuring consistency and repeatability.

* Examples: Terraform, AWS CloudFormation, Azure Resource Manager (ARM) Templates/Bicep, Google Cloud Deployment Manager, Pulumi.
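
As a concrete illustration of the IaC purpose above, the container registry and static-asset bucket used by the pipelines later in this document could be provisioned declaratively. The following Terraform sketch assumes AWS; the resource names and region are illustrative, not prescribed by this document.

```hcl
provider "aws" {
  region = "us-east-1"
}

# Container image registry for pipeline build artifacts
resource "aws_ecr_repository" "app" {
  name = "my-webapp-repo"

  image_scanning_configuration {
    scan_on_push = true # Shift-left: scan images as they are pushed
  }
}

# Bucket for built frontend static assets
resource "aws_s3_bucket" "static_assets" {
  bucket = "my-webapp-static-assets"
}
```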


4. Analysis of Infrastructure Needs by Category

4.1. Source Code Management (SCM)

  • Need: Centralized, version-controlled repository for all application code, pipeline definitions, and IaC.
  • Insights: Modern SCM platforms (GitHub, GitLab) often integrate directly with their respective CI/CD services, simplifying setup. Branching strategies (e.g., GitFlow, GitHub Flow) require SCM features like protected branches and pull request reviews.
  • Recommendation: Utilize a cloud-hosted SCM for ease of use, scalability, and built-in integration with CI/CD tools.

4.2. CI/CD Orchestration Platform

  • Need: A robust platform to automate the entire software delivery lifecycle.
  • Insights:

* GitHub Actions/GitLab CI: Offer tight integration with their SCM, YAML-based configuration (Infrastructure as Code for pipelines), and managed runners, reducing operational overhead. Ideal for cloud-native projects.

* Jenkins: Highly customizable, extensible, and suitable for complex, on-premise, or hybrid environments. Requires more management overhead for agents and scaling.

  • Recommendation: Prioritize cloud-native, managed CI/CD solutions (GitHub Actions, GitLab CI) for projects targeting cloud deployments due to their ease of setup, maintenance, and scalability. Jenkins is a strong candidate for highly customized or legacy environments.

4.3. Build & Test Infrastructure

  • Need: Fast, consistent, and isolated environments for builds and tests.
  • Insights: Containerization (Docker) is the gold standard for build environments, ensuring dependency consistency across developer machines and pipeline stages. Cloud-based runners (e.g., GitHub Actions' Ubuntu/Windows/macOS runners, GitLab's shared runners) provide ephemeral, pre-configured environments that scale automatically. Self-hosted Jenkins agents require careful resource planning and management.
  • Recommendation: Standardize on Docker for build environments. Leverage managed cloud CI/CD runners for scalability and reduced operational burden. For specific, resource-intensive tasks, consider self-hosted runners or dedicated build VMs.

4.4. Artifact & Image Registry

  • Need: Secure, versioned, and highly available storage for build outputs and container images.
  • Insights: Cloud-native registries (ECR, ACR, GCR) offer high availability, integration with IAM/RBAC for fine-grained access control, and often lower latency when deploying to the same cloud provider. On-premise solutions like Nexus or Artifactory provide centralized artifact management across diverse project types.
  • Recommendation: Use the container registry provided by the target cloud provider for container images. For other artifacts (e.g., language-specific packages), consider a universal artifact repository like Nexus or Artifactory, or cloud-native object storage (S3, Blob Storage, GCS) for simple file storage.

4.5. Deployment Targets

  • Need: Production-like environments for testing and actual application deployment.
  • Insights:

* Cloud: Offers unparalleled scalability, global reach, and a vast array of managed services (Kubernetes, Serverless, PaaS) that simplify operations.

* On-premise: May be required for specific data sovereignty, compliance, or existing infrastructure investments.

* Kubernetes: Dominant for microservices architectures, providing robust orchestration, scaling, and self-healing capabilities.

* Serverless: Ideal for event-driven, stateless workloads, offering extreme scalability and pay-per-execution cost models.

  • Recommendation: Favor cloud-native deployment targets (Kubernetes, Serverless, PaaS) for new projects due to their agility, scalability, and reduced operational complexity. Use IaC tools (e.g., Terraform) to provision and manage these environments consistently and repeatably.

The following GitHub Actions workflow illustrates these recommendations for the assumed frontend/backend stack (File: .github/workflows/main.yml):

name: CI/CD Pipeline

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
  workflow_dispatch: # Allows manual triggering

env:
  PROJECT_NAME: my-webapp
  AWS_REGION: us-east-1
  # Workflow-level env entries cannot reference other env values, so the fallback defaults are literal
  ECR_REPOSITORY: ${{ vars.ECR_REPOSITORY_NAME || 'my-webapp-repo' }} # Use repository name from vars or a default
  S3_BUCKET: ${{ vars.S3_BUCKET_NAME || 'my-webapp-static-assets' }} # Use bucket name from vars or a default

jobs:
  lint:
    name: Lint Code
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js for Frontend Linting
        uses: actions/setup-node@v4
        with:
          node-version: '18'

      - name: Install Frontend Dependencies
        working-directory: ./frontend
        run: npm ci

      - name: Run Frontend Lint
        working-directory: ./frontend
        run: npm run lint

      - name: Set up Python for Backend Linting
        uses: actions/setup-python@v5
        with:
          python-version: '3.9'

      - name: Install Backend Dependencies
        working-directory: ./backend
        run: |
          python -m pip install --upgrade pip
          pip install flake8 black # Or poetry install, pipenv install
          pip install -r requirements.txt

      - name: Run Backend Lint (Flake8)
        working-directory: ./backend
        run: flake8 .

      - name: Run Backend Lint (Black)
        working-directory: ./backend
        run: black --check .

  test:
    name: Run Tests
    runs-on: ubuntu-latest
    needs: lint # Ensure linting passes before testing
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js for Frontend Testing
        uses: actions/setup-node@v4
        with:
          node-version: '18'

      - name: Install Frontend Dependencies
        working-directory: ./frontend
        run: npm ci

      - name: Run Frontend Tests
        working-directory: ./frontend
        run: npm test -- --coverage # Example with coverage

      - name: Set up Python for Backend Testing
        uses: actions/setup-python@v5
        with:
          python-version: '3.9'

      - name: Install Backend Dependencies
        working-directory: ./backend
        run: |
          python -m pip install --upgrade pip
          pip install pytest # Or poetry install, pipenv install
          pip install -r requirements.txt

      - name: Run Backend Tests
        working-directory: ./backend
        run: pytest # Runs from ./backend; adjust path if needed

  build:
    name: Build Application and Docker Image
    runs-on: ubuntu-latest
    needs: test # Ensure tests pass before building
    outputs:
      image_tag: ${{ steps.set_image_tag.outputs.image_tag }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      # --- Build Frontend ---
      - name: Setup Node.js for Frontend Build
        uses: actions/setup-node@v4
        with:
          node-version: '18'

      - name: Install Frontend Dependencies
        working-directory: ./frontend
        run: npm ci

      - name: Build Frontend
        working-directory: ./frontend
        run: npm run build

      - name: Archive Frontend Build Artifacts
        uses: actions/upload-artifact@v4
        with:
          name: frontend-build
          path: frontend/dist # Adjust to your build output directory

      # --- Build Backend ---
      # Python has no separate build step here beyond dependency installation;
      # the Dockerfile is assumed to install the backend dependencies.

      # --- Build Docker Image ---
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Amazon ECR
        uses: docker/login-action@v3
        with:
          registry: ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com
          username: ${{ secrets.AWS_ACCESS_KEY_ID }}
          password: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Set Docker Image Tag
        id: set_image_tag
        run: |
          TIMESTAMP=$(date +%Y%m%d%H%M%S)
          GIT_SHA=${{ github.sha }}
          # Tag only; the ECR repository name supplies the image name
          IMAGE_TAG="${TIMESTAMP}-${GIT_SHA::7}"
          echo "image_tag=$IMAGE_TAG" >> "$GITHUB_OUTPUT"

      - name: Build and Push Docker Image to ECR
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com/${{ env.ECR_REPOSITORY }}:${{ steps.set_image_tag.outputs.image_tag }}
            ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com/${{ env.ECR_REPOSITORY }}:latest
          # Example build args passed to the Dockerfile
          build-args: |
            NODE_ENV=production
            PYTHONUNBUFFERED=1

  deploy:
    name: Deploy Application
    runs-on: ubuntu-latest
    needs: build # Ensure build passes before deploying
    environment: production # Use a GitHub environment for protection rules and secrets
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      # --- Deploy Frontend Static Assets to S3 ---
      - name: Download Frontend Build Artifacts
        uses: actions/download-artifact@v4
        with:
          name: frontend-build
          path: frontend-build-output

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Sync Frontend to S3
        # download-artifact extracts the archived files directly into this directory
        run: aws s3 sync frontend-build-output s3://${{ env.S3_BUCKET }} --delete

      # --- Deploy Containerized Application to Server (Example via SSH) ---
      - name: Deploy to Server via SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            echo "Deploying new Docker image..."
            # Login to ECR on the remote server
            aws ecr get-login-password --region ${{ env.AWS_REGION }} | docker login --username AWS --password-stdin ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com
            # Pull the new image
            docker pull ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com/${{ env.ECR_REPOSITORY }}:${{ needs.build.outputs.image_tag }}
            # Stop and remove existing container (if any)
            docker stop ${{ env.PROJECT_NAME }} || true
            docker rm ${{ env.PROJECT_NAME }} || true
            # Run new container
            docker run -d --name ${{ env.PROJECT_NAME }} -p 80:80 \
              ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com/${{ env.ECR_REPOSITORY }}:${{ needs.build.outputs.image_tag }}


DevOps Pipeline Generator: Comprehensive CI/CD Configurations

This document provides detailed and actionable CI/CD pipeline configurations for GitHub Actions, GitLab CI, and Jenkins. These configurations are designed to establish robust continuous integration and continuous deployment practices for a typical web application, encompassing essential stages such as linting, testing, building, and deployment.


1. Introduction

Automated CI/CD pipelines are critical for modern software development, enabling faster, more reliable, and consistent delivery of applications. This deliverable outlines example configurations for three leading CI/CD platforms, tailored to a generic containerized application (e.g., Node.js, Python, Java application packaged with Docker) to demonstrate a full lifecycle. Each configuration is modular, allowing for easy adaptation to your specific technology stack and deployment targets.

2. Core CI/CD Principles & Stages

Regardless of the platform, a well-structured CI/CD pipeline typically follows these core stages:

  • Linting: Analyzes source code for programmatic errors, bugs, stylistic errors, and suspicious constructs. Ensures code quality and adherence to coding standards.
  • Testing: Executes various tests (unit, integration, end-to-end) to verify functionality, catch regressions, and ensure code correctness.
  • Building: Compiles source code, resolves dependencies, and packages the application into an executable artifact (e.g., Docker image, JAR, WAR, executable binary).
  • Deployment: Pushes the built artifact to a target environment (e.g., development, staging, production). This often involves pushing to a container registry and then deploying to a cloud provider, Kubernetes cluster, or virtual machines.

3. Platform-Specific CI/CD Pipeline Configurations

Below are detailed configurations for GitHub Actions, GitLab CI, and Jenkins, demonstrating the common stages for a generic Dockerized application.

3.1. GitHub Actions Configuration

GitHub Actions provides a flexible way to automate workflows directly within your GitHub repository.

File: .github/workflows/main.yml


name: CI/CD Pipeline

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main
      - develop

jobs:
  lint-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js (example for a Node.js app)
        uses: actions/setup-node@v4
        with:
          node-version: '18'

      - name: Install dependencies
        run: npm ci

      - name: Run ESLint
        run: npm run lint # Assumes 'lint' script in package.json

      - name: Run Unit and Integration Tests
        run: npm test # Assumes 'test' script in package.json

  build-and-push-docker:
    needs: lint-test # Ensure linting and tests pass before building
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Login to Docker Hub (or other registry)
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
          # For AWS ECR, GCP GCR, etc., use respective login actions or configure directly.

      - name: Build Docker image
        id: docker_build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ${{ secrets.DOCKER_REGISTRY_URL }}/my-app:${{ github.sha }}
            ${{ secrets.DOCKER_REGISTRY_URL }}/my-app:latest # Push 'latest' on main branch
          build-args: |
            APP_ENV=production
          # For multi-arch builds: platforms: linux/amd64,linux/arm64

      - name: Output Docker image name
        run: echo "Image built: ${{ steps.docker_build.outputs.digest }}"

  deploy-to-environment:
    needs: build-and-push-docker # Ensure image is built before deployment
    runs-on: ubuntu-latest
    environment:
      name: ${{ github.ref == 'refs/heads/main' && 'production' || 'staging' }}
      url: https://my-app-${{ github.ref_name }}.example.com
    # Conditional deployment based on branch
    if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/develop'
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Deploy to Kubernetes (example)
        if: github.ref == 'refs/heads/main'
        uses: azure/k8s-set-context@v3 # Or use actions for other cloud providers
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBE_CONFIG_PROD }} # Production Kubeconfig
        # Add a step to apply Kubernetes manifests or Helm charts
      - name: Apply Kubernetes manifests (Production)
        if: github.ref == 'refs/heads/main'
        run: |
          # Context and credentials already set by azure/k8s-set-context above
          kubectl apply -f k8s/deployment.yml
          kubectl rollout status deployment/my-app-deployment

      - name: Deploy to Staging (example)
        if: github.ref == 'refs/heads/develop'
        uses: azure/k8s-set-context@v3
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBE_CONFIG_STAGING }} # Staging Kubeconfig
      - name: Apply Kubernetes manifests (Staging)
        if: github.ref == 'refs/heads/develop'
        run: |
          # Context and credentials already set by azure/k8s-set-context above
          kubectl apply -f k8s/staging-deployment.yml
          kubectl rollout status deployment/my-app-staging

      - name: Trigger external deployment (e.g., Lambda, Serverless)
        # Note: the job-level 'if' above only allows main/develop; extend it before enabling this step
        if: startsWith(github.ref, 'refs/heads/feature-')
        run: echo "Deployment for feature branch triggered via webhook..."
        # Add curl or other commands to trigger specific deployments

Key Assumptions & Customization for GitHub Actions:

  • package.json scripts: Assumes npm run lint and npm test scripts exist. Adjust for your language/framework (e.g., mvn test, pytest).
  • Dockerfile: A Dockerfile must be present at the root of your repository.
  • Docker Registry: Replace ${{ secrets.DOCKER_REGISTRY_URL }} with your registry (e.g., docker.io, ghcr.io, your-aws-account.dkr.ecr.eu-west-1.amazonaws.com).
  • Secrets:

* DOCKER_USERNAME, DOCKER_PASSWORD: For Docker Hub or other registries.

* KUBE_CONFIG_PROD, KUBE_CONFIG_STAGING: Kubernetes cluster credentials. Store these securely in GitHub repository secrets.

  • Deployment Commands: The kubectl apply commands are placeholders. Replace them with commands specific to your deployment target (e.g., Helm charts, AWS CLI, Azure CLI, Google Cloud SDK, serverless deployment tools).
  • Environments: The environment block helps manage environment-specific variables, secrets, and provides deployment protection rules.
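
As one concrete substitution for the placeholder kubectl commands, the production deployment could use a Helm release. This is a sketch only: the chart path ./helm/my-app, the release name, and the values file are assumptions not present in the original workflow.

```yaml
      - name: Setup Helm
        if: github.ref == 'refs/heads/main'
        uses: azure/setup-helm@v4

      - name: Deploy with Helm (Production)
        if: github.ref == 'refs/heads/main'
        run: |
          # Write the kubeconfig secret to a file; KUBECONFIG expects a file path, not content
          echo "${{ secrets.KUBE_CONFIG_PROD }}" > "$RUNNER_TEMP/kubeconfig"
          export KUBECONFIG="$RUNNER_TEMP/kubeconfig"
          helm upgrade --install my-app ./helm/my-app \
            -f ./helm/my-app/values-prod.yaml \
            --set image.tag=${{ github.sha }}
```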

3.2. GitLab CI Configuration

GitLab CI is deeply integrated with GitLab repositories, providing powerful CI/CD capabilities directly within your project.

File: .gitlab-ci.yml


image: docker:latest # Use Docker executor for building images

variables:
  DOCKER_HOST: tcp://docker:2375 # Required for 'docker in docker'
  DOCKER_TLS_CERTDIR: "" # Disable TLS for DIND for simplicity, adjust for production

stages:
  - lint
  - test
  - build
  - deploy

.node_template: &node_base
  image: node:18-alpine # Base image for Node.js tasks
  before_script:
    - npm ci --cache .npm --prefer-offline # Install dependencies

lint:
  <<: *node_base
  stage: lint
  script:
    - npm run lint # Assumes 'lint' script in package.json
  cache:
    key: ${CI_COMMIT_REF_SLUG}-node-modules
    paths:
      - .npm/
      - node_modules/
    policy: pull-push

test:
  <<: *node_base
  stage: test
  script:
    - npm test # Assumes 'test' script in package.json
  cache:
    key: ${CI_COMMIT_REF_SLUG}-node-modules
    paths:
      - .npm/
      - node_modules/
    policy: pull

build_docker_image:
  stage: build
  services:
    - docker:dind # Docker in Docker service
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest # Tag 'latest'
    - docker push $CI_REGISTRY_IMAGE:latest
  rules:
    - if: $CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH == "develop"

deploy_staging:
  stage: deploy
  image: alpine/helm:3.12.3 # Example: using Helm for deployment
  script:
    - echo "Deploying to Staging environment..."
    - apk add --no-cache kubectl # Install kubectl if needed
    - kubectl config set-cluster kubernetes --server="$KUBE_URL_STAGING" --certificate-authority="$KUBE_CA_CERT_STAGING"
    - kubectl config set-credentials gitlab --token="$KUBE_TOKEN_STAGING"
    - kubectl config set-context kubernetes --cluster=kubernetes --user=gitlab --namespace=staging
    - kubectl config use-context kubernetes
    - helm upgrade --install my-app-staging ./helm/my-app -f ./helm/my-app/values-staging.yaml --set image.tag=$CI_COMMIT_SHA
  environment:
    name: staging
    url: https://my-app-staging.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"

deploy_production:
  stage: deploy
  image: alpine/helm:3.12.3 # Example: using Helm for deployment
  script:
    - echo "Deploying to Production environment..."
    - apk add --no-cache kubectl
    - kubectl config set-cluster kubernetes --server="$KUBE_URL_PROD" --certificate-authority="$KUBE_CA_CERT_PROD"
    - kubectl config set-credentials gitlab --token="$KUBE_TOKEN_PROD"
    - kubectl config set-context kubernetes --cluster=kubernetes --user=gitlab --namespace=production
    - kubectl config use-context kubernetes
    - helm upgrade --install my-app ./helm/my-app -f ./helm/my-app/values-prod.yaml --set image.tag=$CI_COMMIT_SHA
  environment:
    name: production
    url: https://my-app.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual # Manual deployment to production

Key Assumptions & Customization for GitLab CI:

  • package.json scripts: Assumes npm run lint and npm test scripts exist.
  • Dockerfile: A Dockerfile must be present at the root of your repository.
  • GitLab Container Registry: $CI_REGISTRY_USER, $CI_REGISTRY_PASSWORD, $CI_REGISTRY_IMAGE are built-in GitLab CI variables for its integrated container registry.
  • Variables:

* KUBE_URL_STAGING, KUBE_CA_CERT_STAGING, KUBE_TOKEN_STAGING: Kubernetes credentials for staging.

* KUBE_URL_PROD, KUBE_CA_CERT_PROD, KUBE_TOKEN_PROD: Kubernetes credentials for production.

* Store these securely in GitLab project CI/CD Variables (masked and protected).

  • Deployment Commands: The kubectl and helm commands are examples. Adjust them to your specific deployment strategy.
  • rules: Use rules for conditional job execution based on branch, tags, or other criteria.
  • manual jobs: For critical environments like production, consider when: manual to require a manual approval before the deployment job runs.
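
3.3. Jenkins Configuration

Jenkins pipelines are defined in a Jenkinsfile checked into the repository root. The following declarative pipeline is a minimal sketch of the same lint/test/build/deploy stages; the registry URL, the credentials ID, and the deploy command are placeholders, and the Docker Pipeline plugin is assumed for docker.build/docker.withRegistry.

File: Jenkinsfile

```groovy
pipeline {
    agent any

    environment {
        REGISTRY = 'registry.example.com' // Placeholder registry URL
    }

    stages {
        stage('Lint') {
            steps {
                sh 'npm ci'
                sh 'npm run lint' // Assumes 'lint' script in package.json
            }
        }
        stage('Test') {
            steps {
                sh 'npm test' // Assumes 'test' script in package.json
            }
        }
        stage('Build & Push Image') {
            steps {
                script {
                    // Requires the Docker Pipeline plugin and a stored credentials entry
                    def image = docker.build("${env.REGISTRY}/my-app:${env.GIT_COMMIT.take(7)}")
                    docker.withRegistry("https://${env.REGISTRY}", 'registry-credentials-id') {
                        image.push()
                        image.push('latest')
                    }
                }
            }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps {
                // Placeholder: replace with kubectl/helm/cloud CLI deployment commands
                sh 'echo "Deploying ${REGISTRY}/my-app ..."'
            }
        }
    }
}
```

Production deployments can additionally be gated with an input step, mirroring GitLab's when: manual behavior.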
\n\n\n"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react'\nimport ReactDOM from 'react-dom/client'\nimport App from './App'\nimport './index.css'\n\nReactDOM.createRoot(document.getElementById('root')!).render(\n \n \n \n)\n"); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react'\nimport './App.css'\n\nfunction App(){\n return(\n
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n
\n )\n}\nexport default App\n"); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e}\n.app{min-height:100vh;display:flex;flex-direction:column}\n.app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px}\nh1{font-size:2.5rem;font-weight:700}\n"); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\n## Open in IDE\nOpen the project folder in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "type": "module",\n "scripts": {\n "dev": "vite",\n "build": "vue-tsc -b && vite build",\n "preview": "vite preview"\n },\n "dependencies": {\n "vue": "^3.5.13",\n "vue-router": "^4.4.5",\n "pinia": "^2.3.0",\n "axios": "^1.7.9"\n },\n "devDependencies": {\n "@vitejs/plugin-vue": "^5.2.1",\n "typescript": "~5.7.3",\n "vite": "^6.0.5",\n "vue-tsc": "^2.2.0"\n }\n}\n'); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite'\nimport vue from '@vitejs/plugin-vue'\nimport { resolve } from 'path'\n\nexport default defineConfig({\n plugins: [vue()],\n resolve: { alias: { '@': resolve(__dirname,'src') } }\n})\n"); 
zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]}\n'); zip.file(folder+"tsconfig.app.json",'{\n "compilerOptions":{\n "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"],\n "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true,\n "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue",\n "strict":true,"paths":{"@/*":["./src/*"]}\n },\n "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"]\n}\n'); zip.file(folder+"env.d.ts","/// \n"); zip.file(folder+"index.html","\n\n\n \n \n "+slugTitle(pn)+"\n\n\n
\n \n\n\n"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue'\nimport { createPinia } from 'pinia'\nimport App from './App.vue'\nimport './assets/main.css'\n\nconst app = createApp(App)\napp.use(createPinia())\napp.mount('#app')\n"); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue","\n\n\n\n\n"); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547}\n"); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\nOpen in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "scripts": {\n "ng": "ng",\n "start": "ng serve",\n "build": "ng build",\n "test": "ng test"\n },\n "dependencies": {\n "@angular/animations": "^19.0.0",\n "@angular/common": "^19.0.0",\n "@angular/compiler": "^19.0.0",\n "@angular/core": "^19.0.0",\n "@angular/forms": "^19.0.0",\n "@angular/platform-browser": "^19.0.0",\n "@angular/platform-browser-dynamic": "^19.0.0",\n "@angular/router": "^19.0.0",\n "rxjs": "~7.8.0",\n "tslib": "^2.3.0",\n "zone.js": "~0.15.0"\n },\n "devDependencies": {\n 
"@angular-devkit/build-angular": "^19.0.0",\n "@angular/cli": "^19.0.0",\n "@angular/compiler-cli": "^19.0.0",\n "typescript": "~5.6.0"\n }\n}\n'); zip.file(folder+"angular.json",'{\n "$schema": "./node_modules/@angular/cli/lib/config/schema.json",\n "version": 1,\n "newProjectRoot": "projects",\n "projects": {\n "'+pn+'": {\n "projectType": "application",\n "root": "",\n "sourceRoot": "src",\n "prefix": "app",\n "architect": {\n "build": {\n "builder": "@angular-devkit/build-angular:application",\n "options": {\n "outputPath": "dist/'+pn+'",\n "index": "src/index.html",\n "browser": "src/main.ts",\n "tsConfig": "tsconfig.app.json",\n "styles": ["src/styles.css"],\n "scripts": []\n }\n },\n "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"}\n }\n }\n }\n}\n'); zip.file(folder+"tsconfig.json",'{\n "compileOnSave": false,\n "compilerOptions": {"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]},\n "references":[{"path":"./tsconfig.app.json"}]\n}\n'); zip.file(folder+"tsconfig.app.json",'{\n "extends":"./tsconfig.json",\n "compilerOptions":{"outDir":"./dist/out-tsc","types":[]},\n "files":["src/main.ts"],\n "include":["src/**/*.d.ts"]\n}\n'); zip.file(folder+"src/index.html","<!doctype html>\n<html lang=\"en\">\n<head>\n <meta charset=\"utf-8\" />\n <title>"+slugTitle(pn)+"</title>\n <base href=\"/\" />\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n <link rel=\"icon\" href=\"favicon.ico\" />\n</head>\n<body>\n <app-root></app-root>\n</body>\n</html>\n"); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser';\nimport { appConfig } from 
'./app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch(err => console.error(err));\n"); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; }\nbody { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; }\n"); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = '"+pn+"';\n}\n"); zip.file(folder+"src/app/app.component.html","
<div class=\"app-header\">\n <h1>"+slugTitle(pn)+"</h1>\n <p>Built with PantheraHive BOS</p>\n</div>\n<router-outlet></router-outlet>
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(\""+title+" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("<!doctype")>=0||code.trim().toLowerCase().indexOf("<html")>=0; var indexHtml=isFullDoc?code:"<!doctype html>\n<html lang=\"en\">\n<head>\n<meta charset=\"UTF-8\" />\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n<title>"+title+"</title>\n<link rel=\"stylesheet\" href=\"style.css\" />\n</head>\n<body>\n"+code+"\n<script src=\"script.js\"><\/script>\n</body>\n</html>\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h="<!doctype html><html lang=\"en\"><head><meta charset=\"utf-8\"><title>"+title+"</title><style>body{font-family:system-ui,sans-serif;max-width:800px;margin:40px auto;padding:0 20px;color:#1a1a2e;line-height:1.6}h1,h2,h3{color:#6366f1}</style></head><body>"; h+="<header><h1>"+title+"</h1></header>"; var hc=md.replace(/&/g,"&amp;").replace(/</g,"&lt;").replace(/>/g,"&gt;"); hc=hc.replace(/^### (.+)$/gm,"<h3>$1</h3>"); hc=hc.replace(/^## (.+)$/gm,"<h2>$1</h2>"); hc=hc.replace(/^# (.+)$/gm,"<h1>$1</h1>"); hc=hc.replace(/\*\*(.+?)\*\*/g,"<strong>$1</strong>"); hc=hc.replace(/\n{2,}/g,"</p><p>"); h+="<main><p>"+hc+"</p></main><footer>Generated by PantheraHive BOS</footer></body></html>
"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='<iframe src="'+embedUrl+'" width="100%" height="600" frameborder="0"><\/iframe>';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}
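For reference, the Markdown-to-HTML pass used by the document/content workflow can be lifted out into a standalone helper. This is a minimal sketch, not part of the page script itself: the name `mdToHtml` is illustrative. It mirrors the regex pipeline above: escape HTML entities first, then convert `###`/`##`/`#` headings (longest prefix first, so `###` is not consumed by the `#` rule), `**bold**`, and blank-line paragraph breaks.

```javascript
// Hedged sketch of the document workflow's Markdown-to-HTML conversion.
// mdToHtml is an illustrative name; the page inlines these steps instead.
function mdToHtml(md) {
  // Escape entities first so raw < > & in the Markdown cannot inject HTML.
  var hc = md.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  // Headings: match longest prefix first so "# " does not swallow "### ".
  hc = hc.replace(/^### (.+)$/gm, "<h3>$1</h3>");
  hc = hc.replace(/^## (.+)$/gm, "<h2>$1</h2>");
  hc = hc.replace(/^# (.+)$/gm, "<h1>$1</h1>");
  // Bold text, non-greedy so "**a** and **b**" becomes two <strong> runs.
  hc = hc.replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>");
  // Blank lines become paragraph breaks; wrap the whole result in <p>…</p>.
  hc = hc.replace(/\n{2,}/g, "</p>\n<p>");
  return "<p>" + hc + "</p>";
}

// Example:
// mdToHtml("# Hi\n\n**bold** text")
// → "<p><h1>Hi</h1></p>\n<p><strong>bold</strong> text</p>"
```

Note the order matters: because escaping runs before the heading and bold rules, literal angle brackets in the source Markdown survive as `&lt;`/`&gt;` while the generated tags do not get double-escaped.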