DevOps Pipeline Generator
Run ID: 69cc2cf1fdffe128046c5427 (2026-03-31)

DevOps Pipeline Generator: Comprehensive CI/CD Pipeline Configurations

This document provides detailed and professional CI/CD pipeline configurations for GitHub Actions, GitLab CI, and Jenkins. These configurations are designed to be comprehensive, covering essential stages such as linting, testing, building, and deployment, and are tailored for a typical modern application development workflow.

We will provide a generic structure and then specific examples for each platform, using a Node.js application with Docker as a common illustrative technology stack. These examples are designed to be highly adaptable to various other programming languages, frameworks, and deployment targets.


1. Core CI/CD Principles and Stages

A robust CI/CD pipeline automates the software delivery process, ensuring quality, speed, and reliability. The common stages include:

  • Linting: static analysis of source code to enforce coding standards and catch stylistic errors and suspicious constructs early.
  • Testing: automated unit, integration, and end-to-end tests to validate functionality and guard against regressions.
  • Building: compiling source code, resolving dependencies, and packaging the application into a deployable artifact (e.g., a Docker image, JAR, or static bundle).
  • Deployment: releasing the built artifact to target environments such as staging and production.


2. Platform-Specific CI/CD Pipeline Configurations

Below, you will find detailed configurations for GitHub Actions, GitLab CI, and Jenkins. Each example demonstrates the core stages and includes best practices for secrets management, caching, and environment-specific deployments.


2.1. GitHub Actions Configuration

GitHub Actions provides a flexible and powerful way to automate workflows directly within your GitHub repository.

**Key GitHub Actions Concepts & Best Practices:**

*   **`on`**: Defines when the workflow runs (push, pull request, manual `workflow_dispatch`).
*   **`jobs`**: Independent units of work that run in parallel by default. `needs` defines dependencies.
*   **`runs-on`**: Specifies the runner environment (e.g., `ubuntu-latest`).
*   **`uses`**: Reuses existing actions from the GitHub Marketplace (e.g., `actions/checkout`, `actions/setup-node`).
*   **`secrets`**: Securely store sensitive information (e.g., API keys, cloud credentials) accessible via `secrets.MY_SECRET`. `GITHUB_TOKEN` is a special token provided by GitHub.
*   **`environment`**: Links jobs to specific environments, enabling environment protection rules and providing a deployment URL.
*   **Caching**: Use `actions/cache` or built-in caching for package managers (like `npm`) to speed up builds.
*   **Conditional Logic**: `if` statements allow jobs or steps to run only under specific conditions (e.g., branch names, tag pushes).
*   **Artifacts**: `actions/upload-artifact` and `actions/download-artifact` to pass files between jobs or store build outputs.
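As an illustrative sketch only (the job names, npm scripts, `deploy.sh`, and the secret name `MY_SECRET` are placeholders, not part of any specific project), these concepts combine in a workflow file roughly like this:

```yaml
# .github/workflows/ci.yml -- minimal sketch; names and scripts are placeholders
name: CI

on:
  push:
    branches: [main]
  workflow_dispatch:          # allows manual runs

jobs:
  lint-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'        # built-in dependency caching for npm
      - run: npm ci
      - run: npm test

  deploy:
    needs: lint-test                      # dependency: runs only after lint-test succeeds
    if: github.ref == 'refs/heads/main'   # conditional logic on branch
    runs-on: ubuntu-latest
    environment: production               # environment protection rules apply
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh                  # placeholder deployment script
        env:
          API_KEY: ${{ secrets.MY_SECRET }}  # secret injected at runtime
```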

---

2.2. GitLab CI Configuration

GitLab CI/CD is tightly integrated with GitLab repositories, providing a powerful and flexible pipeline definition.

*   **File Location**: `.gitlab-ci.yml`


Step 1 of 3: Infrastructure Needs Analysis for DevOps Pipeline Generation

Workflow: DevOps Pipeline Generator

Step: analyze_infrastructure_needs (gemini)

Description: This step identifies the foundational infrastructure requirements and critical decision points necessary to design and implement a robust, efficient, and secure CI/CD pipeline. Without specific project details, this analysis provides a comprehensive framework and outlines the crucial information needed to tailor a pipeline for your unique application.


1. Executive Summary

This deliverable provides a detailed analysis of the infrastructure needs for generating a comprehensive CI/CD pipeline. It outlines the key domains of infrastructure that must be considered, from source code management and CI/CD platforms to build environments, deployment targets, and security. Given the generic request, this analysis focuses on establishing a robust framework for understanding and defining these needs. We highlight industry trends, provide strategic recommendations for optimal pipeline performance and security, and, most importantly, define the critical information required from your team to proceed with concrete pipeline generation. The subsequent steps will leverage this information to configure specific GitHub Actions, GitLab CI, or Jenkins pipelines.

2. Purpose of this Infrastructure Needs Analysis

The primary goal of this initial analysis is to systematically identify all infrastructure components and considerations that impact the design and functionality of a CI/CD pipeline. By understanding these needs upfront, we can ensure the generated pipeline is:

  • Tailored: Perfectly matches your application's technology stack and deployment strategy.
  • Efficient: Utilizes appropriate resources for fast builds, tests, and deployments.
  • Secure: Integrates best practices for secrets management and access control.
  • Scalable: Can grow with your application's demands.
  • Maintainable: Easy to understand, debug, and update.

This step serves as a crucial bridge, translating the high-level goal of "DevOps Pipeline Generation" into actionable requirements for the subsequent configuration steps.

3. Key Infrastructure Domains for CI/CD

Building a resilient CI/CD pipeline requires careful consideration across several infrastructure domains. Each domain presents specific choices and implications for the final pipeline configuration.

3.1. Source Code Management (SCM) & CI/CD Platform

The foundation of any CI/CD pipeline starts with where your code resides and the platform that orchestrates the pipeline.

  • SCM Integration: Seamless connectivity between your repository and the CI/CD platform is paramount. This includes webhooks for triggering pipelines on code changes, status updates, and branch protection rules.
  • CI/CD Runner/Agent Infrastructure:
    * GitHub Actions:
      * GitHub-hosted runners: Fully managed by GitHub, offering various OSes (Ubuntu, Windows, macOS) and pre-installed software. Simpler setup, but less control and potentially higher cost for large-scale usage.
      * Self-hosted runners: User-managed VMs or containers (Docker, Kubernetes) within your own infrastructure. Offers greater control over environment and security, and potentially lower cost for high-volume or specific hardware needs.
    * GitLab CI:
      * GitLab Shared Runners: Managed by GitLab; suitable for public projects and small private projects.
      * GitLab Specific Runners: Self-hosted on your infrastructure (VMs, Docker, Kubernetes executor). Provides dedicated resources, custom environments, and enhanced security.
    * Jenkins:
      * Master-Agent Architecture: Requires a Jenkins master server and one or more agents (nodes) to execute jobs. Agents can be static VMs, Docker containers, or Kubernetes pods. Highly flexible, but requires significant management overhead.

  • Data Insight: GitHub Actions and GitLab CI, with their tight SCM integration and managed runner options, are trending for their ease of use and reduced operational overhead compared to traditional self-managed Jenkins instances, especially for cloud-native projects. However, Jenkins remains powerful for highly customized, complex, or on-premise environments.

3.2. Build & Test Environment

The environment where your application is compiled, packaged, and tested is critical for consistency and reliability.

  • Operating System: Linux (Ubuntu, Alpine), Windows, macOS.
  • Programming Languages & Runtimes: Specific versions of Node.js, Python, Java (JDK), .NET SDK, Go, Ruby, PHP, etc.
  • Build Tools: Maven, Gradle, npm, yarn, pip, Go modules, Bundler, Docker, Webpack, Babel.
  • Testing Frameworks: Jest, Pytest, JUnit, Mocha, RSpec, Selenium, Cypress.
  • Linting/Static Analysis Tools: ESLint, Prettier, Black, Flake8, RuboCop, SonarQube, Bandit, Hadolint.
  • Containerization: The use of Docker (and potentially Kubernetes) for building and testing within isolated, reproducible environments is a de-facto standard. This ensures consistency between local development, CI, and production.
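These environment choices surface directly in pipeline configuration. A hedged GitHub Actions sketch (the OS list and Node.js versions are illustrative, not a recommendation) pinning operating system and runtime versions via a build matrix:

```yaml
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]   # operating systems under test
        node: ['18', '20']                    # runtime versions under test
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}    # reproducible runtime per matrix cell
      - run: npm ci
      - run: npm test
```

Each matrix combination runs as an independent job, so an OS- or version-specific regression is isolated to one cell of the matrix.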

3.3. Artifact Management

Once built, artifacts (e.g., Docker images, JARs, WARs, compiled binaries, npm packages) need to be stored securely and reliably.

  • Container Registries: Docker Hub, AWS ECR, GCP GCR/Artifact Registry, Azure Container Registry, GitLab Container Registry.
  • Package Repositories: Nexus, Artifactory, npm registry, PyPI, Maven Central.
  • Cloud Storage: AWS S3, Azure Blob Storage, GCP Cloud Storage for storing general build outputs or deployment assets.
  • Versioning Strategy: Clear conventions for tagging and retrieving artifacts (e.g., semantic versioning, commit SHAs).
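One common convention, sketched here as a GitLab CI fragment (the registry path `registry.example.com/team/app` is a placeholder), is to tag every image with the commit short SHA for traceability and additionally with the semantic version on tag pushes:

```yaml
build:
  stage: build
  script:
    - docker build -t registry.example.com/team/app:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/team/app:$CI_COMMIT_SHORT_SHA
    - |
      if [ -n "$CI_COMMIT_TAG" ]; then   # e.g. a v1.4.2 release tag
        docker tag  registry.example.com/team/app:$CI_COMMIT_SHORT_SHA \
                    registry.example.com/team/app:$CI_COMMIT_TAG
        docker push registry.example.com/team/app:$CI_COMMIT_TAG
      fi
```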

3.4. Deployment Targets

The destination for your application, post-build and test, dictates many aspects of the deployment stage.

  • Cloud Providers: AWS (EC2, EKS, ECS, Lambda, S3, CloudFront), Azure (VMs, AKS, Azure Functions, App Service), GCP (GCE, GKE, Cloud Run, Cloud Functions).
  • On-Premise: Virtual Machines, bare metal servers.
  • Container Orchestration: Kubernetes (EKS, AKS, GKE, OpenShift) for scalable, resilient container deployments.
  • Serverless Platforms: AWS Lambda, Azure Functions, GCP Cloud Functions, Cloud Run for event-driven architectures.
  • Platform as a Service (PaaS): Heroku, Vercel, Netlify.
  • Infrastructure as Code (IaC): Terraform, CloudFormation, ARM Templates, Pulumi for provisioning and managing deployment environments.
  • Deployment Strategy: Rolling updates, Blue/Green, Canary deployments, immutable infrastructure.
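For Kubernetes targets, the deployment strategy is declared on the Deployment object itself rather than in the pipeline. A rolling-update sketch (replica count, surge limits, app name, and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during rollout
      maxUnavailable: 0    # never drop below desired capacity
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
        - name: your-app
          image: registry.example.com/team/app:1.4.2  # immutable, versioned tag
```

The pipeline's deploy stage then only needs to update the image tag and apply the manifest; Kubernetes handles the incremental rollout.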

3.5. Security & Secrets Management

Security must be baked into the pipeline from the start, especially concerning sensitive credentials.

  • Secrets Storage: CI/CD platform native secrets (GitHub Secrets, GitLab CI/CD Variables), dedicated secrets managers (AWS Secrets Manager, Azure Key Vault, GCP Secret Manager, HashiCorp Vault).
  • Access Control: Least privilege principle for CI/CD runners/agents, IAM roles/service accounts for cloud access.
  • Vulnerability Scanning: Integrating tools for static application security testing (SAST), dynamic application security testing (DAST), software composition analysis (SCA), and container image scanning.
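As one hedged example of least-privilege cloud access from a pipeline, GitHub Actions can exchange a short-lived OIDC token for AWS credentials instead of storing long-lived keys as secrets (the role ARN and bucket name below are placeholders):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # allow the job to request an OIDC token
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy  # placeholder ARN
          aws-region: us-east-1
      - run: aws s3 sync ./dist s3://my-deploy-bucket  # placeholder bucket
```

The IAM role's policy scopes exactly what the pipeline may touch, and the credentials expire when the job ends.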

3.6. Monitoring & Logging Integration

While primarily post-deployment, the pipeline should facilitate the integration of monitoring and logging tools to ensure visibility into application health and performance.

  • Log Aggregation: Centralized logging (ELK stack, Splunk, Datadog, CloudWatch Logs, Azure Monitor, GCP Cloud Logging).
  • Metrics Collection: Prometheus, Grafana, Datadog, New Relic, CloudWatch Metrics.

4. Emerging Trends & Strategic Recommendations

4.1. Trends

  • Containerization Everywhere: Docker and Kubernetes are the backbone of modern CI/CD, providing consistent, isolated, and scalable environments for builds, tests, and deployments.
  • Infrastructure as Code (IaC): Managing infrastructure (including CI/CD runners and deployment targets) via code (Terraform, CloudFormation) ensures repeatability, versioning, and auditability.
  • GitOps: Extending IaC to operations, where Git repositories are the single source of truth for declarative infrastructure and applications.
  • Shift-Left Security: Integrating security scanning (SAST, SCA, DAST, secret scanning) early in the development and CI process to identify vulnerabilities before deployment.
  • Ephemeral Environments: Spinning up temporary, isolated environments for feature branches or pull requests to enable comprehensive pre-merge testing.
  • Cloud-Native CI/CD: Leveraging integrated CI/CD solutions offered by cloud providers (e.g., AWS CodePipeline, Azure DevOps) or SCM vendors (GitHub Actions, GitLab CI) for seamless integration and managed services.

4.2. Strategic Recommendations

  • Prioritize Security: Implement robust secrets management from day one. Use dedicated secrets managers and follow the principle of least privilege for all access credentials.
  • Embrace Containerization: Standardize on Docker for build and test environments. This ensures consistency and reduces "it works on my machine" issues.
  • Automate Everything Possible: Beyond just build and deploy, automate linting, testing, security scanning, and infrastructure provisioning.
  • Start Simple, Iterate: Begin with a basic pipeline (build, test, deploy) and progressively add more advanced stages (security scans, performance tests, advanced deployment strategies).
  • Choose the Right CI/CD Platform: Select a platform that aligns with your team's expertise, existing ecosystem, and future growth plans. GitHub Actions and GitLab CI are excellent for cloud-native projects with integrated SCM. Jenkins offers unparalleled flexibility for complex or legacy systems.
  • Monitor Your Pipeline: Implement logging and metrics for your CI/CD pipeline itself to identify bottlenecks, failures, and optimize performance.

5. Critical Information Required for Next Steps

To generate a precise and effective CI/CD pipeline, we require specific details about your project and existing infrastructure. Please provide the following information:

5.1. Project & Application Details

  • Application Type: (e.g., Web Application, Microservice, Mobile Backend, Serverless Function, Data Processing Job, Static Site)
  • Primary Programming Language(s) & Frameworks: (e.g., Node.js/React, Python/Django, Java/Spring Boot, Go, .NET Core, Ruby on Rails, PHP/Laravel, etc.)
  • Build Tools Used: (e.g., npm, yarn, Maven, Gradle, pip, Go modules, Webpack, Docker)
  • Testing Frameworks Used: (e.g., Jest, Pytest, JUnit, Mocha, RSpec, Cypress, Selenium)
  • Linting/Code Quality Tools Used (if any): (e.g., ESLint, Prettier, Black, Flake8, SonarQube)
  • Database Technology (if applicable): (e.g., PostgreSQL, MySQL, MongoDB, DynamoDB)
  • Any specific OS requirements for build/test: (e.g., Linux, Windows, macOS)

5.2. CI/CD Platform & Runner Preferences

  • Preferred CI/CD Platform (choose one):
    * GitHub Actions
    * GitLab CI
    * Jenkins
  • Runner/Agent Preference (if applicable):
    * For GitHub Actions / GitLab CI:
      * Managed/cloud-hosted runners (default, simpler)
      * Self-hosted runners (requires you to provide VMs/containers; more control)
    * For Jenkins:
      * Existing Jenkins master/agents
      * New Jenkins setup (VMs, Docker, Kubernetes)

5.3. Artifact Management

  • Type of Artifacts to be produced: (e.g., Docker Images, JAR files, npm packages, compiled binaries, static assets)
  • Preferred Container Registry (if using Docker images): (e.g., Docker Hub, AWS ECR, GCP GCR, Azure Container Registry, GitLab Container Registry, Private Registry)
  • Other

# .gitlab-ci.yml

stages:
  - lint
  - test
  - build
  - deploy

variables:
  NODE_VERSION: "18.x"
  DOCKER_IMAGE_NAME: your-org/your-app  # Replace with your Docker image name
  REGISTRY: $CI_REGISTRY  # GitLab's built-in registry, or specify an external one like docker.io

default:
  image: node:${NODE_VERSION}-alpine  # Default image for all jobs
  cache:
    key: ${CI_COMMIT_REF_SLUG}  # Cache per branch/tag
    paths:
      - node_modules/

# --------------------------------------------------------------------------
# Stage 1: Linting
# --------------------------------------------------------------------------
lint-job:
  stage: lint
  script:
    - npm ci
    - npm run lint
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
    - if: $CI_COMMIT_BRANCH =~ /^(develop|feature)\/.*$/

# --------------------------------------------------------------------------
# Stage 2: Testing
# --------------------------------------------------------------------------
test-job:
  stage: test
  script:
    - npm ci
    - npm test
  artifacts:
    when: always
    reports:
      junit: test-results.xml  # Path to JUnit XML test results
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
    - if: $CI_COMMIT_BRANCH =~ /^(develop|feature)\/.*$/

# --------------------------------------------------------------------------
# Stage 3: Building (and Pushing Docker Image)
# --------------------------------------------------------------------------
build-docker-image:
  stage: build
  image: docker:latest  # Use a Docker image with the Docker CLI installed
  services:
    - docker:dind  # Docker-in-Docker service
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - |
      if [ -n "$CI_COMMIT_TAG" ]; then
        IMAGE_TAG="$CI_COMMIT_TAG"
      else
        IMAGE_TAG="$CI_COMMIT_SHORT_SHA"
      fi
    - docker build -t $REGISTRY/$DOCKER_IMAGE_NAME:$IMAGE_TAG -t $REGISTRY/$DOCKER_IMAGE_NAME:latest .
    - docker push $REGISTRY/$DOCKER_IMAGE_NAME:$IMAGE_TAG
    - docker push $REGISTRY/$DOCKER_IMAGE_NAME:latest
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH  # Build on the default branch
    - if: $CI_COMMIT_BRANCH == "develop"  # Build on develop
    - if: $CI_COMMIT_TAG  # Build on tags

# --------------------------------------------------------------------------
# Stage 4: Deployment (conditional based on branch/tag)
# --------------------------------------------------------------------------
deploy-dev:
  stage: deploy
  image: alpine/git:latest  # A lightweight image with git for deployment scripts
  environment:


DevOps Pipeline Generator: Comprehensive CI/CD Pipeline Configurations

This document provides a detailed overview and illustrative examples of CI/CD pipeline configurations generated by the PantheraHive DevOps Pipeline Generator. Our aim is to equip you with robust, production-ready pipelines for GitHub Actions, GitLab CI, or Jenkins, encompassing essential stages like linting, testing, building, and deployment.


1. Introduction to the DevOps Pipeline Generator

The DevOps Pipeline Generator is designed to streamline your CI/CD setup by automatically generating tailored configurations based on your project requirements. It ensures best practices are followed, providing a solid foundation for automated software delivery. This output details the structure, common stages, and how to implement these generated pipelines effectively.


2. Core Pipeline Stages and Functionality

Regardless of the chosen CI/CD platform, a modern DevOps pipeline typically incorporates several critical stages to ensure code quality, functionality, and reliable deployment. The generated pipelines include:

  • Linting: Analyzes source code for programmatic errors, bugs, stylistic errors, and suspicious constructs. This helps enforce coding standards and improve code readability.
  • Testing: Executes various tests (unit, integration, end-to-end) to validate the application's functionality and ensure new changes haven't introduced regressions.
  • Building: Compiles source code, resolves dependencies, and packages the application into a deployable artifact (e.g., Docker image, JAR, WAR, executable, static files).
  • Deployment: Automates the process of releasing the built artifact to various environments (e.g., staging, production). This can involve pushing Docker images to a registry, deploying to cloud platforms (AWS, Azure, GCP), or updating servers.

3. Illustrative Pipeline Configurations

Below are conceptual outlines for GitLab CI and Jenkins, followed by a detailed example configuration for GitHub Actions. These examples demonstrate how the core stages are implemented across different platforms.

3.1. General Pipeline Structure Overview

All generated pipelines follow a similar logical flow, adapted to the specific syntax and capabilities of the chosen platform:

  1. Trigger: Define when the pipeline should run (e.g., on push to main, pull request, scheduled).
  2. Environment Setup: Configure the necessary environment (e.g., Node.js version, Python interpreter, Docker).
  3. Checkout Code: Fetch the repository's code.
  4. Stage: Lint: Run linters.
  5. Stage: Test: Run tests.
  6. Stage: Build: Create artifacts.
  7. Stage: Deploy (Optional/Conditional): Deploy to target environments.

3.2. GitHub Actions - Detailed Example Configuration

This example provides a comprehensive main.yml for a generic web application (e.g., Node.js, Python, or even a static site with build steps) using GitHub Actions. It includes linting, testing, building a Docker image, and deploying it to a container registry.


# .github/workflows/main.yml

name: CI/CD Pipeline for Web Application

on:
  push:
    branches:
      - main
      - develop
    tags:
      - 'v*' # Run on new tags like v1.0.0
  pull_request:
    branches:
      - main
      - develop
  workflow_dispatch: # Allows manual trigger

env:
  DOCKER_REGISTRY: ghcr.io # Or your Docker Hub/AWS ECR/GCR registry
  IMAGE_NAME: ${{ github.repository }} # Uses repo name as image name

jobs:
  lint_and_test:
    name: Lint and Test
    runs-on: ubuntu-latest

    steps:
    - name: Checkout repository
      uses: actions/checkout@v4

    - name: Set up Node.js (Example for JS-based projects)
      uses: actions/setup-node@v4
      with:
        node-version: '20'
        cache: 'npm' # Or yarn, pnpm
        cache-dependency-path: './package-lock.json' # Or yarn.lock, pnpm-lock.yaml

    # You might have similar setup steps for Python, Java, etc.
    # - name: Set up Python
    #   uses: actions/setup-python@v5
    #   with:
    #     python-version: '3.10'
    #     cache: 'pip'
    #     cache-dependency-path: './requirements.txt'

    - name: Install dependencies
      run: npm ci # Or pip install -r requirements.txt, mvn install

    - name: Run Linter (e.g., ESLint, Flake8)
      run: npm run lint # Or flake8 .

    - name: Run Unit and Integration Tests
      run: npm test # Or pytest

    - name: Upload Test Results (e.g., for coverage reports)
      uses: actions/upload-artifact@v4
      if: always()
      with:
        name: test-results
        path: |
          ./coverage/
          ./junit.xml # Example paths for common test output

  build_docker_image:
    name: Build Docker Image
    runs-on: ubuntu-latest
    needs: lint_and_test # Ensures linting and testing pass before building

    steps:
    - name: Checkout repository
      uses: actions/checkout@v4

    - name: Log in to Docker Registry
      uses: docker/login-action@v3
      with:
        registry: ${{ env.DOCKER_REGISTRY }}
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }} # Use GITHUB_TOKEN for GHCR, or specific secret for other registries

    - name: Extract Docker metadata (tags, labels)
      id: meta
      uses: docker/metadata-action@v5
      with:
        images: ${{ env.DOCKER_REGISTRY }}/${{ env.IMAGE_NAME }}
        tags: |
          type=semver,pattern={{version}}
          type=semver,pattern={{major}}.{{minor}}
          type=raw,value=latest,enable=${{ github.ref == format('refs/heads/{0}', github.event.repository.default_branch) }}

    - name: Build and push Docker image
      uses: docker/build-push-action@v5
      with:
        context: .
        push: true
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}
        cache-from: type=gha # Cache layers using GitHub Actions cache
        cache-to: type=gha,mode=max

  deploy_to_staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build_docker_image # Ensures image is built
    environment: staging # Defines an environment for secrets/approvals
    if: github.ref == format('refs/heads/{0}', github.event.repository.default_branch) # Deploy only from main branch

    steps:
    - name: Checkout repository
      uses: actions/checkout@v4

    - name: Download Artifacts (if needed, e.g., K8s manifests)
      uses: actions/download-artifact@v4
      with:
        name: deployment-manifests
        path: ./manifests

    - name: Deploy to Staging (Example: Kubernetes using kubectl)
      uses: azure/k8s-set-context@v3 # Or similar action for AWS/GCP
      with:
        method: kubeconfig
        kubeconfig: ${{ secrets.KUBECONFIG_STAGING }}
        # context: 'your-k8s-context'

    - name: Apply Kubernetes manifests
      run: |
        kubectl apply -f ./manifests/deployment.yaml
        kubectl apply -f ./manifests/service.yaml
        kubectl rollout status deployment/your-app-staging

    - name: Verify Staging Deployment
      run: |
        # Add commands to curl endpoint, check logs, etc.
        echo "Staging deployment successful. Access at: http://your-staging-url.com"

  # deploy_to_production:
  #   name: Deploy to Production
  #   runs-on: ubuntu-latest
  #   needs: deploy_to_staging # Or a manual approval step
  #   environment: production # Requires manual approval in GitHub Environments
  #   if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') # Trigger on tag push
  #   # You might need a manual approval step here via GitHub Environments

  #   steps:
  #   - name: Checkout repository
  #     uses: actions/checkout@v4

  #   - name: Deploy to Production (Example: using Helm)
  #     uses: azure/k8s-set-context@v3
  #     with:
  #       method: kubeconfig
  #       kubeconfig: ${{ secrets.KUBECONFIG_PRODUCTION }}

  #   - name: Upgrade Helm Chart
  #     run: |
  #       helm upgrade --install your-app ./helm-chart \
  #         --namespace production \
  #         --set image.tag=${{ github.ref_name }} \
  #         -f ./helm-chart/values-production.yaml

  #   - name: Verify Production Deployment
  #     run: |
  #       echo "Production deployment successful. Access at: http://your-production-url.com"

Key Features of this GitHub Actions Example:

  • Multi-Stage: Clearly separated lint_and_test, build_docker_image, and deploy_to_staging jobs.
  • Dependency Management: needs keyword ensures jobs run in correct order.
  • Environment Variables: Uses env for reusable values.
  • Secrets Management: References secrets.GITHUB_TOKEN and custom secrets (e.g., secrets.KUBECONFIG_STAGING).
  • Docker Integration: Builds and pushes Docker images to a registry, leveraging docker/metadata-action for intelligent tagging.
  • Conditional Deployment: if statements for deploying only from specific branches or tags.
  • Artifacts: upload-artifact for test results, download-artifact for deployment manifests.
  • Manual Trigger: workflow_dispatch allows for manual runs.
  • Environments: Uses GitHub Environments for better secret management and protection rules (e.g., manual approval for production).

3.3. GitLab CI - Conceptual Outline

For GitLab CI, the pipeline is defined in a .gitlab-ci.yml file at the root of your repository.


# .gitlab-ci.yml

image: docker:latest # Base image for jobs, can be changed per job

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  # Add other global variables here

stages:
  - lint
  - test
  - build
  - deploy_staging
  - deploy_production

lint:
  stage: lint
  image: node:20 # Or python:3.10, etc.
  script:
    - npm ci
    - npm run lint
  # Only run on merge requests, or specific branches
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH == "develop"'

test:
  stage: test
  image: node:20
  script:
    - npm ci
    - npm test
  artifacts:
    paths:
      - coverage/
      - junit.xml
    expire_in: 1 week
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH == "develop"'

build_docker_image:
  stage: build
  image: docker:latest
  services:
    - docker:dind # Docker-in-Docker for building images
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY/$CI_PROJECT_PATH:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY/$CI_PROJECT_PATH:$CI_COMMIT_SHORT_SHA
    - docker tag $CI_REGISTRY/$CI_PROJECT_PATH:$CI_COMMIT_SHORT_SHA $CI_REGISTRY/$CI_PROJECT_PATH:latest # Tag latest
    - docker push $CI_REGISTRY/$CI_PROJECT_PATH:latest
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

deploy_staging:
  stage: deploy_staging
  image: alpine/helm:3.10.0 # Or specific kubectl image
  script:
    # Configure the Kubernetes context using the $KUBE_CONFIG_STAGING secret variable
    - echo "$KUBE_CONFIG_STAGING" | base64 -d > kubeconfig.yaml
    - export KUBECONFIG=$(pwd)/kubeconfig.yaml
    - helm upgrade --install my-app ./helm-chart -n staging --set image.tag=$CI_COMMIT_SHORT_SHA
  environment:
    name: staging
    url: https://staging.example.com
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

deploy_production:
  stage: deploy_production
  image: alpine/helm:3.10.0
  
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n \n
\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 