DevOps Pipeline Generator
Run ID: 69cba06461b1021a29a8ad1f · 2026-03-31
PantheraHive BOS


Step 1 of 3: Infrastructure Needs Analysis for DevOps Pipeline Generation

Workflow: DevOps Pipeline Generator

Step: gemini → analyze_infrastructure_needs

Description: Generate complete CI/CD pipeline configurations for GitHub Actions, GitLab CI, or Jenkins with testing, linting, building, and deployment stages.


1. Introduction: Understanding Your Foundation for CI/CD

This initial phase focuses on a comprehensive analysis of your existing and desired infrastructure to lay a robust foundation for your CI/CD pipeline. A well-designed pipeline is intrinsically linked to the underlying infrastructure it operates on and interacts with. Without a clear understanding of these elements, the generated pipeline configurations may not be optimal, secure, or scalable.

Our goal in this step is to identify key infrastructure components, assess their current state, and define the requirements for a future-proof CI/CD ecosystem. This analysis will directly inform the selection of the appropriate CI/CD platform (GitHub Actions, GitLab CI, Jenkins) and dictate the specific configurations for build environments, testing, artifact storage, and deployment targets.

2. Key Infrastructure Pillars for CI/CD Pipeline Success

To generate an effective CI/CD pipeline, we must analyze the following critical infrastructure pillars:

  • 2.1. Source Code Management (SCM) System:

* Current State: Identify your primary SCM (e.g., GitHub, GitLab, Bitbucket, Azure DevOps Repos).

* Integration Needs: How tightly integrated should the CI/CD pipeline be with your SCM (e.g., webhook triggers, status updates, pull request checks)?

* Data Insight: GitHub and GitLab offer native CI/CD solutions (GitHub Actions, GitLab CI) that provide seamless integration, often simplifying setup and management compared to external tools like Jenkins.
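As a concrete illustration of SCM integration, a GitHub Actions workflow declares its triggers declaratively; the branch names below are placeholders for your own branching model:

```yaml
# Sketch: trigger a workflow on pushes and pull requests (branch names are examples)
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]   # runs checks on PRs targeting main
```

GitLab CI achieves the same effect with `rules:` or `workflow:` conditions on predefined variables such as `$CI_PIPELINE_SOURCE`.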

  • 2.2. Preferred CI/CD Platform & Runner Infrastructure:

* Platform Choice: Do you have a preference for GitHub Actions, GitLab CI, or Jenkins?

* GitHub Actions/GitLab CI: Often preferred for cloud-native projects due to ease of setup, managed runners, and strong SCM integration.

* Jenkins: Highly customizable, suitable for complex on-premise environments, hybrid clouds, or when extensive plugin ecosystems are required. Requires self-managed infrastructure for runners.

* Runner Requirements:

* Managed vs. Self-hosted: Will you leverage managed runners (e.g., GitHub Actions hosted runners, GitLab.com shared runners) or require self-hosted runners/agents (e.g., on VMs, Kubernetes clusters, on-premise servers)?

* Operating System: Linux, Windows, macOS?

* Resource Allocation: CPU, RAM, disk space for builds.

* Scalability: How many concurrent builds are expected? What are the peak load requirements?

* Trend: The adoption of managed CI/CD services (GitHub Actions, GitLab CI) is increasing due to reduced operational overhead and inherent scalability. However, self-hosted runners remain crucial for specific compliance, network, or performance needs.
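To make the managed-versus-self-hosted distinction concrete, here is how a job is pinned to each runner type in GitHub Actions; the self-hosted labels are illustrative and must match labels you assign when registering the runner:

```yaml
jobs:
  build-managed:
    runs-on: ubuntu-latest               # GitHub-hosted (managed) runner
    steps:
      - run: echo "runs on GitHub's infrastructure"
  build-selfhosted:
    runs-on: [self-hosted, linux, x64]   # routed to a self-hosted runner with these labels
    steps:
      - run: echo "runs on your own infrastructure"
```

In GitLab CI the equivalent routing is done with `tags:` on the job, matched against tags on registered runners.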

  • 2.3. Build Environment & Toolchain:

* Application Technology Stack:

* Languages: Java (Maven/Gradle), Node.js (npm/yarn), Python (pip), Go, .NET, Ruby, PHP, etc.

* Frameworks: Spring Boot, React, Angular, Django, etc.

* Build Tools: Specific compilers, package managers, dependency resolution tools.

* Containerization: Will Docker be used for consistent build environments? (Highly recommended.)

* Environment Variables & Configuration: How are build-time configurations managed?
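A containerized build environment can be expressed directly in the pipeline definition; in this GitLab CI sketch, the image tag, variable, and scripts are placeholders for your stack:

```yaml
# Sketch: pin the build toolchain to an exact container image for reproducibility
build:
  image: node:18-alpine      # exact toolchain version, identical on every runner
  variables:
    NODE_ENV: production     # build-time configuration injected as a CI variable
  script:
    - npm ci                 # clean, lockfile-driven dependency install
    - npm run build
```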

  • 2.4. Testing Infrastructure & Strategy:

* Types of Tests: Unit, Integration, End-to-End (E2E), Performance, Security (SAST/DAST).

* Testing Frameworks: JUnit, Jest, Pytest, Cypress, Selenium, JMeter, SonarQube, etc.

* Test Environments: Are dedicated environments needed for integration or E2E tests? (e.g., temporary Docker containers, ephemeral cloud environments).

* Reporting: How should test results be collected and displayed?
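Test results are typically surfaced in the CI UI by publishing a machine-readable report; this GitLab CI sketch assumes your test runner is configured to emit JUnit XML at the given path:

```yaml
test:
  stage: test
  script:
    - npm test            # test runner configured with a JUnit reporter
  artifacts:
    when: always          # upload the report even when tests fail
    reports:
      junit: junit.xml    # GitLab renders this report in merge requests
```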

  • 2.5. Artifact Storage & Management:

* Type of Artifacts: Docker images, compiled binaries, JARs, NuGet packages, npm packages.

* Storage Location:

* Container Registries: Docker Hub, GitHub Container Registry, GitLab Container Registry, AWS ECR, Azure Container Registry, Google Container Registry.

* Binary Repositories: Nexus, Artifactory, AWS S3, Azure Blob Storage.

* Retention Policies: How long should artifacts be stored?
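Retention can be enforced in the pipeline itself; in GitLab CI, for example, `expire_in` bounds how long job artifacts are kept (container images in a registry are governed separately by registry cleanup policies):

```yaml
build:
  stage: build
  script:
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 2 weeks   # artifacts are deleted automatically after this period
```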

  • 2.6. Deployment Targets & Strategy:

* Environment Types: Development, Staging, Production.

* Deployment Platforms:

* Container Orchestration: Kubernetes (EKS, AKS, GKE, OpenShift).

* Virtual Machines: AWS EC2, Azure VMs, Google Compute Engine, on-premise VMs.

* Serverless: AWS Lambda, Azure Functions, Google Cloud Functions.

* Platform as a Service (PaaS): AWS Elastic Beanstalk, Azure App Service, Google App Engine, Heroku.

* Deployment Tools: kubectl, Helm, Terraform, Ansible, AWS CLI, Azure CLI, Serverless Framework.

* Deployment Strategies: Rolling updates, Blue/Green, Canary deployments.
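As one concrete example, a rolling update is the default strategy in Kubernetes and can be tuned on the Deployment itself; the names, counts, and image below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the rollout
      maxSurge: 1         # at most one extra pod above the desired count
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.2.3
```

Blue/Green and Canary strategies usually require additional tooling (e.g. a service mesh or Argo Rollouts) beyond a plain Deployment.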

  • 2.7. Security & Secrets Management:

* Secrets Storage: How are sensitive credentials (API keys, database passwords) managed and injected into the pipeline? (e.g., CI/CD platform built-in secrets, HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, Kubernetes Secrets).

* Access Control: Who has access to pipeline configurations and secrets?

* Security Scanning: Integration of SAST/DAST tools, dependency scanning, container image scanning.

* Compliance: Any specific regulatory compliance requirements (e.g., GDPR, HIPAA, SOC 2)?
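The cardinal rule is that secrets are referenced at runtime, never embedded in the configuration. In GitHub Actions, for instance, a value stored in the repository or environment secrets is injected like this (the secret name and script are assumed examples):

```yaml
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: Deploy
      env:
        API_TOKEN: ${{ secrets.DEPLOY_API_TOKEN }}  # defined in repo/environment settings
      run: ./deploy.sh   # the script reads API_TOKEN from the environment
```

GitLab CI and Jenkins offer the same pattern via masked/protected CI variables and the credentials store, respectively.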

  • 2.8. Monitoring, Logging & Alerting Integration:

* Existing Tools: Do you have existing observability tools (e.g., Prometheus, Grafana, ELK Stack, Splunk, Datadog, New Relic) that the pipeline should integrate with for deployment metrics, logs, or alerts?

* Pipeline Health: How will the health and performance of the CI/CD pipeline itself be monitored?

3. Critical Factors Influencing Infrastructure Choices

Beyond the technical pillars, several organizational and strategic factors will influence the optimal infrastructure choices:

  • 3.1. Scalability Requirements:

* Data Insight: Organizations with frequent commits and multiple development teams will require highly scalable CI/CD runner infrastructure to avoid build queues and ensure rapid feedback cycles. Cloud-native solutions often excel here.

  • 3.2. Cost Considerations:

* Data Insight: Managed cloud CI/CD services offer pay-as-you-go models, reducing upfront capital expenditure. Self-hosted solutions, while requiring initial investment, can offer cost efficiencies at extreme scale or for specific use cases.

  • 3.3. Team Expertise & Existing Tooling:

* Data Insight: Leveraging existing team skills (e.g., if a team is already proficient in Jenkins, migrating to a new platform might introduce a learning curve) and integrating with tools already in use can accelerate adoption and reduce friction.

  • 3.4. Security & Compliance Posture:

* Data Insight: Industries with stringent compliance requirements (e.g., finance, healthcare) may necessitate on-premise or highly controlled self-hosted runner environments and specific secrets management solutions.

  • 3.5. Hybrid Cloud / Multi-Cloud Strategy:

* Data Insight: For organizations operating across multiple cloud providers or a mix of cloud and on-premise, solutions like Jenkins with its distributed architecture or cloud-agnostic tools like Terraform become more relevant.

4. Initial Recommendations & Trends

Based on general industry trends and best practices, we offer the following initial recommendations, which will be refined upon receiving your specific input:

  • Recommendation 1: Embrace Cloud-Native CI/CD (Where Applicable): For most modern applications hosted on cloud platforms, leveraging GitHub Actions (for GitHub users) or GitLab CI (for GitLab users) is highly recommended.

* Trend: These platforms offer excellent developer experience, deep SCM integration, managed runners, and robust security features out-of-the-box, significantly reducing operational overhead.

  • Recommendation 2: Standardize Build Environments with Containers: Use Docker images for all build and test stages.

* Trend: Containerization ensures consistent, isolated, and reproducible build environments across all stages and prevents "it works on my machine" issues. It also simplifies dependency management.

  • Recommendation 3: Prioritize Secrets Management: Implement a robust secrets management solution early in the design phase.

* Trend: Never hardcode credentials. Utilize the CI/CD platform's built-in secrets management or integrate with dedicated tools like HashiCorp Vault or cloud provider secret services.

  • Recommendation 4: Infrastructure as Code (IaC) for Deployment: Define your deployment targets and configurations using IaC tools like Terraform or CloudFormation.

* Trend: IaC ensures idempotent, version-controlled, and auditable infrastructure provisioning, making deployments reliable and repeatable.
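Wired into a pipeline, IaC typically becomes an explicit plan/apply stage. The GitHub Actions sketch below assumes a Terraform configuration lives in `infra/` and that cloud credentials are supplied as CI secrets:

```yaml
infrastructure:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: hashicorp/setup-terraform@v3   # installs the Terraform CLI
    - name: Plan and apply
      working-directory: infra
      run: |
        terraform init
        terraform plan -out=tfplan
        terraform apply -auto-approve tfplan   # gate this behind an approval in practice
```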

5. Information Required from Customer (Next Steps for Analysis)

To proceed with generating detailed and accurate CI/CD pipeline configurations, we require the following specific information from your team. Please provide as much detail as possible for each point:

  1. Application Details:

* Primary Application Language(s) & Framework(s): (e.g., Python/Django, Java/Spring Boot, Node.js/React, Go, .NET Core)

* Database Technologies: (e.g., PostgreSQL, MySQL, MongoDB, DynamoDB)

* Any specific build tools or compilers required: (e.g., Maven, Gradle, npm, yarn, pipenv, dotnet CLI)

  2. Source Code Management (SCM):

* Which SCM platform do you currently use? (e.g., GitHub.com, GitHub Enterprise, GitLab.com, GitLab Self-Managed, Bitbucket Cloud, Azure DevOps Repos)

  3. Preferred CI/CD Platform:

* Do you have a strong preference for GitHub Actions, GitLab CI, or Jenkins?

* If Jenkins, is it an existing instance or a new deployment?

* If no strong preference, what are your primary considerations (e.g., cost, ease of use, compliance, integration with existing tools)?

  4. CI/CD Runner Infrastructure:

* Will you use managed runners (if available for your chosen platform) or self-hosted runners?

* If self-hosted, what is the desired OS (Linux, Windows, macOS) and deployment environment (VMs, Kubernetes)?

* Approximate number of concurrent builds expected during peak times?

  5. Deployment Targets:

* Which cloud provider(s) are you using (AWS, Azure, GCP, On-Premise)?

* What is your primary deployment target? (e.g., Kubernetes, EC2 VMs, Azure App Service, AWS Lambda, on-premise servers)

* Do you have specific deployment strategies in mind? (e.g., Blue/Green, Canary, Rolling Updates)

  6. Artifact Storage:

* Which container registry will you use for Docker images?

* Which binary repository/storage will you use for other artifacts?

  7. Security & Secrets Management:

* How are secrets currently managed? (e.g., environment variables, a dedicated secrets manager like Vault, cloud provider secrets services)

* Are there any specific security scanning tools you wish to integrate? (e.g., SonarQube, Snyk, Trivy)

* Are there any specific compliance requirements?

  8. Testing Strategy:

* Which types of tests are critical for your pipeline? (e.g., unit, integration, E2E, performance)

* Are there specific testing frameworks you currently use or prefer?

  9. Existing Tooling & Integrations:

* Are there any other tools or services that the CI/CD pipeline needs to integrate with? (e.g., Jira, Slack, PagerDuty, specific monitoring tools)

6. Conclusion & Next Steps

This detailed infrastructure analysis forms the bedrock of an efficient and effective CI/CD pipeline. Once we receive your comprehensive responses to the questions above, we will proceed to Step 2: Define Pipeline Stages and Logic. In this subsequent step, we will leverage your infrastructure details to design the specific stages (linting, testing, building, deploying) and the underlying logic that will form your complete CI/CD pipeline configuration.

We look forward to your input to move forward in generating a tailored and robust DevOps pipeline solution for your needs.

Example GitLab CI configuration for a Node.js application (file: .gitlab-ci.yml):

stages:
  - lint
  - test
  - build
  - deploy

variables:
  NODE_VERSION: '18.x' # Specify your desired Node.js version
  DOCKER_IMAGE_NAME: 'my-node-app'
  DOCKER_REGISTRY: '$CI_REGISTRY' # Use GitLab's built-in container registry
  AWS_REGION: 'us-east-1' # Example AWS region

# Base template for Node.js jobs
.node_base:
  image: node:$NODE_VERSION-alpine # Use a lightweight Node.js image
  cache:
    key: ${CI_COMMIT_REF_SLUG} # Cache per branch
    paths:
      - node_modules/
  before_script:
    - npm ci

lint:
  extends: .node_base
  stage: lint
  script:
    - npm run lint # Assumes a 'lint' script in package.json

test:
  extends: .node_base
  stage: test
  script:
    - npm test # Assumes a 'test' script in package.json
  artifacts:
    when: always
    reports:
      junit:
        - junit.xml # If using a JUnit reporter, configure its output path

build:
  stage: build
  # Note: this job deliberately does not extend .node_base. It runs in a Docker
  # image (no Node.js available), so the application build (npm run build) is
  # expected to happen inside the Dockerfile, e.g. via a multi-stage build.
  # If you need a static bundle (e.g. dist/) outside the image, produce it in a
  # separate Node.js job and pass it along as an artifact.
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind # Docker-in-Docker for building images
  variables:
    DOCKER_HOST: tcp://docker:2375 # Required for dind
    DOCKER_TLS_CERTDIR: "" # Required for dind
  script:
    - echo "Building and pushing Docker image..."
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY # Login to GitLab Container Registry
    - export IMAGE_TAG="$CI_REGISTRY/$CI_PROJECT_PATH/$DOCKER_IMAGE_NAME:$CI_COMMIT_SHORT_SHA"
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
    - echo "DOCKER_IMAGE_TAG=$IMAGE_TAG" >> build.env # Store image tag for subsequent stages
  artifacts:
    reports:
      dotenv: build.env # Variables in build.env are injected into later jobs

deploy_staging:
  stage: deploy
  image: alpine/helm:3.12.0 # Example: using Helm for Kubernetes deployments
  # image: python:3.9-slim-buster # Example: using Python for AWS CLI deployments
  environment:
    name: staging
    url: https://staging.your-app.com
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'
  script:
    - echo "Deploying to Staging environment..."
    - echo "Deploying image: $DOCKER_IMAGE_TAG" # Injected from the build job's dotenv report
    # Example: AWS S3 deployment
    # - apk add --no-cache curl python3 py3-pip
    # - pip install awscli
    # - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID_STAGING
    # - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY_STAGING
    # - aws configure set default.region $AWS_REGION
    # - aws s3 sync ./dist s3://your-staging-bucket-name --delete
    # Example: Kubernetes deployment using Helm
    # - helm upgrade --install my-app-staging ./helm-charts/my-app --namespace staging --set image.tag=$DOCKER_IMAGE_TAG --set ingress.host=staging.your-app.com
  variables:
    # Define these as protected variables in GitLab CI/CD settings
    AWS_ACCESS_KEY_ID_STAGING: $AWS_ACCESS_KEY_ID_STAGING
    AWS_SECRET_ACCESS_KEY_STAGING: $AWS_SECRET_ACCESS_KEY_STAGING

deploy_production:
  stage: deploy
  image: alpine/helm:3.12.0 # Example: using Helm for Kubernetes deployments
  environment:
    name: production
    url: https://your-app.com
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual # Manual deployment for production
      allow_failure: false
  script:
    - echo "Deploying to Production environment..."
    - echo "Deploying image: $DOCKER_IMAGE_TAG" # Injected from the build job's dotenv report
    # Example: AWS S3 deployment (mirrors the staging job, with production credentials)
    # - apk add --no-cache curl python3 py3-pip
    # - pip install awscli


DevOps Pipeline Generator: Comprehensive CI/CD Pipeline Configurations

This document provides detailed, production-ready CI/CD pipeline configurations for GitHub Actions, GitLab CI, and Jenkins. These configurations are designed to automate the software development lifecycle, encompassing linting, testing, building, and deployment stages to ensure high-quality, reliable, and efficient software delivery.


1. Introduction

Automating your build, test, and deployment processes is crucial for modern software development. This deliverable provides robust templates for a complete CI/CD pipeline, tailored for three popular platforms: GitHub Actions, GitLab CI, and Jenkins. Each pipeline is designed to be adaptable, covering common development workflows and offering a solid foundation for your project's specific needs.

2. Core Pipeline Stages

All provided pipeline configurations incorporate the following essential stages, ensuring a comprehensive and robust CI/CD process:

  • Linting: Static code analysis to enforce code style, identify potential errors, and maintain code quality.
  • Testing: Execution of unit, integration, and (optionally) end-to-end tests to validate application functionality and prevent regressions.
  • Building: Compiling source code, packaging artifacts (e.g., Docker images, JARs, bundles), and preparing the application for deployment.
  • Deployment: Releasing the application to various environments (e.g., staging, production), typically involving container registry pushes, cloud provider deployments, or server updates.

3. GitHub Actions Pipeline Configuration

This configuration is suitable for projects hosted on GitHub. It uses a Node.js application as an example, building a Docker image and pushing it to a container registry, then simulating a deployment.

File Location: .github/workflows/main.yml


name: CI/CD Pipeline

on:
  push:
    branches:
      - main
      - develop
    tags:
      - 'v*.*.*' # Trigger on version tags
  pull_request:
    branches:
      - main
      - develop

env:
  NODE_VERSION: '18.x' # Specify Node.js version
  DOCKER_IMAGE_NAME: my-app # Your application's Docker image name
  REGISTRY: ghcr.io # Or your Docker Hub username, AWS ECR URL, etc.
  # For GHCR, REGISTRY should be ghcr.io/${{ github.repository_owner }}

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm' # Cache npm dependencies

      - name: Install dependencies
        run: npm ci

      - name: Run ESLint
        run: npm run lint # Assumes 'lint' script in package.json

  test:
    runs-on: ubuntu-latest
    needs: lint # This job depends on 'lint' completing successfully
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm test # Assumes 'test' script in package.json

  build:
    runs-on: ubuntu-latest
    needs: test # This job depends on 'test' completing successfully
    outputs:
      image_tag: ${{ steps.set_image_tag.outputs.tag }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GH_CR_TOKEN }} # For GHCR, the built-in secrets.GITHUB_TOKEN (with packages: write permission) also works; use a registry-specific secret for Docker Hub or ECR

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}
          tags: |
            type=schedule
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=sha,format=long

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Set image tag output
        id: set_image_tag
        run: echo "tag=${{ steps.meta.outputs.version }}" >> "$GITHUB_OUTPUT" # 'tags' is multi-line; 'version' is the single primary tag

  deploy-staging:
    runs-on: ubuntu-latest
    needs: build # This job depends on 'build' completing successfully
    environment:
      name: Staging
      url: https://staging.your-app.com # Optional: URL for the environment
    if: github.ref == 'refs/heads/develop' || github.event_name == 'pull_request' # Deploy to staging on develop branch or PRs
    steps:
      - name: Deploy to Staging Environment
        run: |
          echo "Deploying image ${{ env.REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:${{ needs.build.outputs.image_tag }} to Staging..."
          # Example: Use AWS CLI, kubectl, or SSH to deploy
          # For AWS ECS/EKS:
          # aws ecs update-service --cluster your-cluster --service your-service --force-new-deployment --task-definition your-task-definition:${{ needs.build.outputs.image_tag }}
          # For Kubernetes:
          # kubectl set image deployment/your-deployment your-container=${{ env.REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:${{ needs.build.outputs.image_tag }} --namespace staging
          echo "Deployment to Staging complete!"

  deploy-production:
    runs-on: ubuntu-latest
    needs: build # This job depends on 'build' completing successfully
    environment:
      name: Production
      url: https://your-app.com
    if: github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/v') # Deploy to production on main branch or tags
    # Manual approval for production deployments is configured on the 'Production'
    # environment in the repository settings (required reviewers); the workflow
    # file only references the environment by name.
    steps:
      - name: Deploy to Production Environment
        run: |
          echo "Deploying image ${{ env.REGISTRY }}/${{ env.DOCKER_IMAGE_NAME }}:${{ needs.build.outputs.image_tag }} to Production..."
          # Example: Use AWS CLI, kubectl, or SSH to deploy
          # aws ecs update-service --cluster your-prod-cluster --service your-prod-service --force-new-deployment --task-definition your-prod-task-definition:${{ needs.build.outputs.image_tag }}
          echo "Deployment to Production complete!"

Key Features:

  • Triggering: On push to main, develop, or version tags, and on pull_request to main or develop.
  • Sequential Jobs: lint, test, and build form a dependency chain via needs, so each stage runs only after the previous one succeeds, failing fast at the earliest stage.
  • Dependency Caching: npm dependencies are cached to speed up subsequent runs.
  • Docker Integration: Uses docker/build-push-action for efficient Docker image building and pushing to a registry (e.g., GitHub Container Registry).
  • Environment Management: Uses GitHub Environments for better visibility and optional manual approvals or environment-specific secrets.
  • Secrets Management: Uses GitHub Secrets (secrets.GH_CR_TOKEN) for secure credentials.

4. GitLab CI Pipeline Configuration

This configuration is for projects hosted on GitLab. It also uses a Node.js application example, building a Docker image and pushing it to GitLab's Container Registry, then simulating a Kubernetes deployment.

File Location: .gitlab-ci.yml


stages:
  - lint
  - test
  - build
  - deploy

variables:
  NODE_VERSION: '18.x'
  DOCKER_IMAGE_NAME: $CI_REGISTRY_IMAGE # Predefined GitLab variable for image name
  DOCKER_TAG: $CI_COMMIT_SHA # Use commit SHA as default tag

default:
  image: node:${NODE_VERSION}-alpine # Default image for all jobs
  before_script:
    - npm ci --cache .npm --prefer-offline # Install dependencies once, use cache

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm/
    - node_modules/

lint:
  stage: lint
  script:
    - npm run lint # Assumes 'lint' script in package.json

test:
  stage: test
  script:
    - npm test # Assumes 'test' script in package.json
  artifacts:
    when: always
    reports:
      junit: junit.xml # Example for JUnit test reports

build_image:
  stage: build
  image: docker:24.0.5-alpine3.18 # Use a Docker image for building
  services:
    - docker:24.0.5-dind-alpine3.18 # Docker in Docker for building images
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: "" # Disable TLS for DIND
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $DOCKER_IMAGE_NAME:$DOCKER_TAG .
    - docker push $DOCKER_IMAGE_NAME:$DOCKER_TAG
  rules:
    - if: $CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH == "develop" || $CI_COMMIT_TAG =~ /^v\d+\.\d+\.\d+$/ # Run on main, develop, or tags

deploy_staging:
  stage: deploy
  image: alpine/git # Or kubectl image for Kubernetes
  environment:
    name: staging
    url: https://staging.your-app.com
  script:
    - echo "Deploying $DOCKER_IMAGE_NAME:$DOCKER_TAG to Staging..."
    # Example:
devops_pipeline_generator.txt
Download source file
Copy all content
Full output as text
Download ZIP
IDE-ready project ZIP
Copy share link
Permanent URL for this run
Get Embed Code
Embed this result on any website
Print / Save PDF
Use browser print dialog
\n\n\n"); var hasSrcMain=Object.keys(extracted).some(function(k){return k.indexOf("src/main")>=0;}); if(!hasSrcMain) zip.file(folder+"src/main."+ext,"import React from 'react'\nimport ReactDOM from 'react-dom/client'\nimport App from './App'\nimport './index.css'\n\nReactDOM.createRoot(document.getElementById('root')!).render(\n \n \n \n)\n"); var hasSrcApp=Object.keys(extracted).some(function(k){return k==="src/App."+ext||k==="App."+ext;}); if(!hasSrcApp) zip.file(folder+"src/App."+ext,"import React from 'react'\nimport './App.css'\n\nfunction App(){\n return(\n
\n
\n

"+slugTitle(pn)+"

\n

Built with PantheraHive BOS

\n
\n
\n )\n}\nexport default App\n"); zip.file(folder+"src/index.css","*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#f0f2f5;color:#1a1a2e}\n.app{min-height:100vh;display:flex;flex-direction:column}\n.app-header{flex:1;display:flex;flex-direction:column;align-items:center;justify-content:center;gap:12px;padding:40px}\nh1{font-size:2.5rem;font-weight:700}\n"); zip.file(folder+"src/App.css",""); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/pages/.gitkeep",""); zip.file(folder+"src/hooks/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\n## Open in IDE\nOpen the project folder in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Vue (Vite + Composition API + TypeScript) --- */ function buildVue(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "type": "module",\n "scripts": {\n "dev": "vite",\n "build": "vue-tsc -b && vite build",\n "preview": "vite preview"\n },\n "dependencies": {\n "vue": "^3.5.13",\n "vue-router": "^4.4.5",\n "pinia": "^2.3.0",\n "axios": "^1.7.9"\n },\n "devDependencies": {\n "@vitejs/plugin-vue": "^5.2.1",\n "typescript": "~5.7.3",\n "vite": "^6.0.5",\n "vue-tsc": "^2.2.0"\n }\n}\n'); zip.file(folder+"vite.config.ts","import { defineConfig } from 'vite'\nimport vue from '@vitejs/plugin-vue'\nimport { resolve } from 'path'\n\nexport default defineConfig({\n plugins: [vue()],\n resolve: { alias: { '@': resolve(__dirname,'src') } }\n})\n"); 
zip.file(folder+"tsconfig.json",'{"files":[],"references":[{"path":"./tsconfig.app.json"},{"path":"./tsconfig.node.json"}]}\n'); zip.file(folder+"tsconfig.app.json",'{\n "compilerOptions":{\n "target":"ES2020","useDefineForClassFields":true,"module":"ESNext","lib":["ES2020","DOM","DOM.Iterable"],\n "skipLibCheck":true,"moduleResolution":"bundler","allowImportingTsExtensions":true,\n "isolatedModules":true,"moduleDetection":"force","noEmit":true,"jsxImportSource":"vue",\n "strict":true,"paths":{"@/*":["./src/*"]}\n },\n "include":["src/**/*.ts","src/**/*.d.ts","src/**/*.tsx","src/**/*.vue"]\n}\n'); zip.file(folder+"env.d.ts","/// \n"); zip.file(folder+"index.html","\n\n\n \n \n "+slugTitle(pn)+"\n\n\n
\n \n\n\n"); var hasMain=Object.keys(extracted).some(function(k){return k==="src/main.ts"||k==="main.ts";}); if(!hasMain) zip.file(folder+"src/main.ts","import { createApp } from 'vue'\nimport { createPinia } from 'pinia'\nimport App from './App.vue'\nimport './assets/main.css'\n\nconst app = createApp(App)\napp.use(createPinia())\napp.mount('#app')\n"); var hasApp=Object.keys(extracted).some(function(k){return k.indexOf("App.vue")>=0;}); if(!hasApp) zip.file(folder+"src/App.vue","\n\n\n\n\n"); zip.file(folder+"src/assets/main.css","*{margin:0;padding:0;box-sizing:border-box}body{font-family:system-ui,sans-serif;background:#fff;color:#213547}\n"); zip.file(folder+"src/components/.gitkeep",""); zip.file(folder+"src/views/.gitkeep",""); zip.file(folder+"src/stores/.gitkeep",""); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nnpm run dev\n\`\`\`\n\n## Build\n\`\`\`bash\nnpm run build\n\`\`\`\n\nOpen in VS Code or WebStorm.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n"); } /* --- Angular (v19 standalone) --- */ function buildAngular(zip,folder,app,code,panelTxt){ var pn=pkgName(app); var C=cc(pn); var sel=pn.replace(/_/g,"-"); var extracted=extractCode(panelTxt); zip.file(folder+"package.json",'{\n "name": "'+pn+'",\n "version": "0.0.0",\n "scripts": {\n "ng": "ng",\n "start": "ng serve",\n "build": "ng build",\n "test": "ng test"\n },\n "dependencies": {\n "@angular/animations": "^19.0.0",\n "@angular/common": "^19.0.0",\n "@angular/compiler": "^19.0.0",\n "@angular/core": "^19.0.0",\n "@angular/forms": "^19.0.0",\n "@angular/platform-browser": "^19.0.0",\n "@angular/platform-browser-dynamic": "^19.0.0",\n "@angular/router": "^19.0.0",\n "rxjs": "~7.8.0",\n "tslib": "^2.3.0",\n "zone.js": "~0.15.0"\n },\n "devDependencies": {\n 
"@angular-devkit/build-angular": "^19.0.0",\n "@angular/cli": "^19.0.0",\n "@angular/compiler-cli": "^19.0.0",\n "typescript": "~5.6.0"\n }\n}\n'); zip.file(folder+"angular.json",'{\n "$schema": "./node_modules/@angular/cli/lib/config/schema.json",\n "version": 1,\n "newProjectRoot": "projects",\n "projects": {\n "'+pn+'": {\n "projectType": "application",\n "root": "",\n "sourceRoot": "src",\n "prefix": "app",\n "architect": {\n "build": {\n "builder": "@angular-devkit/build-angular:application",\n "options": {\n "outputPath": "dist/'+pn+'",\n "index": "src/index.html",\n "browser": "src/main.ts",\n "tsConfig": "tsconfig.app.json",\n "styles": ["src/styles.css"],\n "scripts": []\n }\n },\n "serve": {"builder":"@angular-devkit/build-angular:dev-server","configurations":{"production":{"buildTarget":"'+pn+':build:production"},"development":{"buildTarget":"'+pn+':build:development"}},"defaultConfiguration":"development"}\n }\n }\n }\n}\n'); zip.file(folder+"tsconfig.json",'{\n "compileOnSave": false,\n "compilerOptions": {"baseUrl":"./","outDir":"./dist/out-tsc","forceConsistentCasingInFileNames":true,"strict":true,"noImplicitOverride":true,"noPropertyAccessFromIndexSignature":true,"noImplicitReturns":true,"noFallthroughCasesInSwitch":true,"paths":{"@/*":["src/*"]},"skipLibCheck":true,"esModuleInterop":true,"sourceMap":true,"declaration":false,"experimentalDecorators":true,"moduleResolution":"bundler","importHelpers":true,"target":"ES2022","module":"ES2022","useDefineForClassFields":false,"lib":["ES2022","dom"]},\n "references":[{"path":"./tsconfig.app.json"}]\n}\n'); zip.file(folder+"tsconfig.app.json",'{\n "extends":"./tsconfig.json",\n "compilerOptions":{"outDir":"./dist/out-tsc","types":[]},\n "files":["src/main.ts"],\n "include":["src/**/*.d.ts"]\n}\n'); zip.file(folder+"src/index.html","<!doctype html>\n<html lang=\"en\">\n<head>\n <meta charset=\"utf-8\" />\n <title>"+slugTitle(pn)+"</title>\n <base href=\"/\" />\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n</head>\n<body>\n <app-root></app-root>\n</body>\n</html>\n"); zip.file(folder+"src/main.ts","import { bootstrapApplication } from '@angular/platform-browser';\nimport { appConfig } from 
'./app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch(err => console.error(err));\n"); zip.file(folder+"src/styles.css","* { margin: 0; padding: 0; box-sizing: border-box; }\nbody { font-family: system-ui, -apple-system, sans-serif; background: #f9fafb; color: #111827; }\n"); var hasComp=Object.keys(extracted).some(function(k){return k.indexOf("app.component")>=0;}); if(!hasComp){ zip.file(folder+"src/app/app.component.ts","import { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = '"+pn+"';\n}\n"); zip.file(folder+"src/app/app.component.html","<div class=\"app-header\">\n <h1>"+slugTitle(pn)+"</h1>\n <p>Built with PantheraHive BOS</p>\n</div>\n<router-outlet></router-outlet>\n"); zip.file(folder+"src/app/app.component.css",".app-header{display:flex;flex-direction:column;align-items:center;justify-content:center;min-height:60vh;gap:16px}h1{font-size:2.5rem;font-weight:700;color:#6366f1}\n"); } zip.file(folder+"src/app/app.config.ts","import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideZoneChangeDetection({ eventCoalescing: true }),\n provideRouter(routes)\n ]\n};\n"); zip.file(folder+"src/app/app.routes.ts","import { Routes } from '@angular/router';\n\nexport const routes: Routes = [];\n"); Object.keys(extracted).forEach(function(p){ var fp=p.startsWith("src/")?p:"src/"+p; zip.file(folder+fp,extracted[p]); }); zip.file(folder+"README.md","# "+slugTitle(pn)+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\nng serve\n# or: npm start\n\`\`\`\n\n## Build\n\`\`\`bash\nng build\n\`\`\`\n\nOpen in VS Code with Angular Language Service extension.\n"); zip.file(folder+".gitignore","node_modules/\ndist/\n.env\n.DS_Store\n*.local\n.angular/\n"); } /* --- Python --- */ function buildPython(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var reqMap={"numpy":"numpy","pandas":"pandas","sklearn":"scikit-learn","tensorflow":"tensorflow","torch":"torch","flask":"flask","fastapi":"fastapi","uvicorn":"uvicorn","requests":"requests","sqlalchemy":"sqlalchemy","pydantic":"pydantic","dotenv":"python-dotenv","PIL":"Pillow","cv2":"opencv-python","matplotlib":"matplotlib","seaborn":"seaborn","scipy":"scipy"}; var reqs=[]; Object.keys(reqMap).forEach(function(k){if(src.indexOf("import "+k)>=0||src.indexOf("from "+k)>=0)reqs.push(reqMap[k]);}); var reqsTxt=reqs.length?reqs.join("\n"):"# add dependencies here\n"; zip.file(folder+"main.py",src||"# 
"+title+"\n# Generated by PantheraHive BOS\n\nprint(\""+title+" loaded\")\n"); zip.file(folder+"requirements.txt",reqsTxt); zip.file(folder+".env.example","# Environment variables\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\npython3 -m venv .venv\nsource .venv/bin/activate\npip install -r requirements.txt\n\`\`\`\n\n## Run\n\`\`\`bash\npython main.py\n\`\`\`\n"); zip.file(folder+".gitignore",".venv/\n__pycache__/\n*.pyc\n.env\n.DS_Store\n"); } /* --- Node.js --- */ function buildNode(zip,folder,app,code){ var title=slugTitle(app); var pn=pkgName(app); var src=code.replace(/^\`\`\`[\w]*\n?/m,"").replace(/\n?\`\`\`$/m,"").trim(); var depMap={"mongoose":"^8.0.0","dotenv":"^16.4.5","axios":"^1.7.9","cors":"^2.8.5","bcryptjs":"^2.4.3","jsonwebtoken":"^9.0.2","socket.io":"^4.7.4","uuid":"^9.0.1","zod":"^3.22.4","express":"^4.18.2"}; var deps={}; Object.keys(depMap).forEach(function(k){if(src.indexOf(k)>=0)deps[k]=depMap[k];}); if(!deps["express"])deps["express"]="^4.18.2"; var pkgJson=JSON.stringify({"name":pn,"version":"1.0.0","main":"src/index.js","scripts":{"start":"node src/index.js","dev":"nodemon src/index.js"},"dependencies":deps,"devDependencies":{"nodemon":"^3.0.3"}},null,2)+"\n"; zip.file(folder+"package.json",pkgJson); var fallback="const express=require(\"express\");\nconst app=express();\napp.use(express.json());\n\napp.get(\"/\",(req,res)=>{\n res.json({message:\""+title+" API\"});\n});\n\nconst PORT=process.env.PORT||3000;\napp.listen(PORT,()=>console.log(\"Server on port \"+PORT));\n"; zip.file(folder+"src/index.js",src||fallback); zip.file(folder+".env.example","PORT=3000\n"); zip.file(folder+".gitignore","node_modules/\n.env\n.DS_Store\n"); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Setup\n\`\`\`bash\nnpm install\n\`\`\`\n\n## Run\n\`\`\`bash\nnpm run dev\n\`\`\`\n"); } /* --- Vanilla HTML --- */ function buildVanillaHtml(zip,folder,app,code){ var 
title=slugTitle(app); var isFullDoc=code.trim().toLowerCase().indexOf("<!doctype")>=0||code.trim().toLowerCase().indexOf("<html")>=0; var indexHtml=isFullDoc?code:"<!doctype html>\n<html lang=\"en\">\n<head>\n<meta charset=\"UTF-8\" />\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n<title>"+title+"</title>\n<link rel=\"stylesheet\" href=\"style.css\" />\n</head>\n<body>\n"+code+"\n<script src=\"script.js\"><\/script>\n</body>\n</html>\n"; zip.file(folder+"index.html",indexHtml); zip.file(folder+"style.css","/* "+title+" — styles */\n*{margin:0;padding:0;box-sizing:border-box}\nbody{font-family:system-ui,-apple-system,sans-serif;background:#fff;color:#1a1a2e}\n"); zip.file(folder+"script.js","/* "+title+" — scripts */\n"); zip.file(folder+"assets/.gitkeep",""); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\n## Open\nDouble-click \`index.html\` in your browser.\n\nOr serve locally:\n\`\`\`bash\nnpx serve .\n# or\npython3 -m http.server 3000\n\`\`\`\n"); zip.file(folder+".gitignore",".DS_Store\nnode_modules/\n.env\n"); } /* ===== MAIN ===== */ var sc=document.createElement("script"); sc.src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js"; sc.onerror=function(){ if(lbl)lbl.textContent="Download ZIP"; alert("JSZip load failed — check connection."); }; sc.onload=function(){ var zip=new JSZip(); var base=(_phFname||"output").replace(/\.[^.]+$/,""); var app=base.toLowerCase().replace(/[^a-z0-9]+/g,"_").replace(/^_+|_+$/g,"")||"my_app"; var folder=app+"/"; var vc=document.getElementById("panel-content"); var panelTxt=vc?(vc.innerText||vc.textContent||""):""; var lang=detectLang(_phCode,panelTxt); if(_phIsHtml){ buildVanillaHtml(zip,folder,app,_phCode); } else if(lang==="flutter"){ buildFlutter(zip,folder,app,_phCode,panelTxt); } else if(lang==="react-native"){ buildReactNative(zip,folder,app,_phCode,panelTxt); } else if(lang==="swift"){ buildSwift(zip,folder,app,_phCode,panelTxt); } else if(lang==="kotlin"){ buildKotlin(zip,folder,app,_phCode,panelTxt); } else if(lang==="react"){ buildReact(zip,folder,app,_phCode,panelTxt); } else if(lang==="vue"){ buildVue(zip,folder,app,_phCode,panelTxt); } else if(lang==="angular"){ buildAngular(zip,folder,app,_phCode,panelTxt); } else 
if(lang==="python"){ buildPython(zip,folder,app,_phCode); } else if(lang==="node"){ buildNode(zip,folder,app,_phCode); } else { /* Document/content workflow */ var title=app.replace(/_/g," "); var md=_phAll||_phCode||panelTxt||"No content"; zip.file(folder+app+".md",md); var h="<!doctype html><html lang=\"en\"><head><meta charset=\"utf-8\"><title>"+title+"</title><style>body{font-family:system-ui,sans-serif;max-width:760px;margin:40px auto;padding:0 16px;color:#1a1a2e;line-height:1.6}footer{margin-top:40px;color:#888;font-size:.85em}</style></head><body>"; h+="<header><h1>"+title+"</h1></header>"; var hc=md.replace(/&/g,"&amp;").replace(/</g,"&lt;").replace(/>/g,"&gt;"); hc=hc.replace(/^### (.+)$/gm,"<h3>$1</h3>"); hc=hc.replace(/^## (.+)$/gm,"<h2>$1</h2>"); hc=hc.replace(/^# (.+)$/gm,"<h1>$1</h1>"); hc=hc.replace(/\*\*(.+?)\*\*/g,"<strong>$1</strong>"); hc=hc.replace(/\n{2,}/g,"</p><p>"); h+="<p>"+hc+"</p><footer>Generated by PantheraHive BOS</footer></body></html>"; zip.file(folder+app+".html",h); zip.file(folder+"README.md","# "+title+"\n\nGenerated by PantheraHive BOS.\n\nFiles:\n- "+app+".md (Markdown)\n- "+app+".html (styled HTML)\n"); } zip.generateAsync({type:"blob"}).then(function(blob){ var a=document.createElement("a"); a.href=URL.createObjectURL(blob); a.download=app+".zip"; a.click(); URL.revokeObjectURL(a.href); if(lbl)lbl.textContent="Download ZIP"; }); }; document.head.appendChild(sc); } function phShare(){navigator.clipboard.writeText(window.location.href).then(function(){var el=document.getElementById("ph-share-lbl");if(el){el.textContent="Link copied!";setTimeout(function(){el.textContent="Copy share link";},2500);}});}function phEmbed(){var runId=window.location.pathname.split("/").pop().replace(".html","");var embedUrl="https://pantherahive.com/embed/"+runId;var code='<iframe src="'+embedUrl+'" width="100%" height="600" style="border:0"><\/iframe>';navigator.clipboard.writeText(code).then(function(){var el=document.getElementById("ph-embed-lbl");if(el){el.textContent="Embed code copied!";setTimeout(function(){el.textContent="Get Embed Code";},2500);}});}