DevOps Pipeline Generator

DevOps Pipeline Configuration Generation

This document provides comprehensive, production-ready CI/CD pipeline configurations for GitHub Actions, GitLab CI, and Jenkins. The templates cover the essential stages of linting, testing, building, and deployment, using a common application scenario: a containerized Node.js application deployed to Kubernetes.

1. Introduction to CI/CD Pipeline Generation

Continuous Integration (CI) and Continuous Delivery/Deployment (CD) pipelines are fundamental to modern software development, enabling automated and reliable delivery of applications. This deliverable outlines the structure and specific configurations for implementing such pipelines across various popular platforms.

To provide the most relevant and actionable configurations, we've structured this output with a generic, yet robust, example. Please note that specific details (e.g., repository names, image registries, secret names, deployment targets) will need to be replaced with your project's unique information.

2. Assumptions and Prerequisites

The following assumptions and prerequisites are made for the generated configurations:

* package.json (for Node.js example) or equivalent for dependencies and scripts.

* .eslintrc.js (for Node.js linting example) or equivalent configuration.

* jest.config.js (for Node.js testing example) or equivalent configuration.

* Dockerfile in the root directory for containerization.

* Kubernetes manifests (e.g., k8s/deployment.yaml, k8s/service.yaml) for deployment.
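As a point of reference for the last prerequisite, a minimal k8s/deployment.yaml consistent with these assumptions might look like the sketch below. The app name, namespace, image, and port are placeholders to be replaced with your project's values:

```yaml
# k8s/deployment.yaml -- minimal sketch; replace names, namespace, and image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app-name-deployment
  namespace: staging
spec:
  replicas: 2
  selector:
    matchLabels:
      app: your-app-name
  template:
    metadata:
      labels:
        app: your-app-name
    spec:
      containers:
        - name: your-app-name
          image: ghcr.io/your-org/your-app-name:latest # CI replaces this tag at deploy time
          ports:
            - containerPort: 3000
          readinessProbe:
            httpGet:
              path: /healthz
              port: 3000
```

The pipelines below update this Deployment's image tag per commit, so the tag committed in the manifest only serves as an initial value.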

3. Core CI/CD Stages Explained

Each pipeline configuration will include the following standard stages:

Lint

* Purpose: Static code analysis to identify programmatic errors, bugs, stylistic issues, and suspicious constructs. Ensures code quality and adherence to coding standards.

* Actions: Installs necessary linters (e.g., ESLint, Pylint) and runs linting checks against the codebase.

Test

* Purpose: Executes automated tests (unit, integration) to verify the application's functionality and prevent regressions.

* Actions: Installs test runners (e.g., Jest, Pytest, JUnit), runs defined test suites, and reports results.

Build

* Purpose: Compiles source code (if applicable), resolves dependencies, and packages the application into a deployable artifact (e.g., a Docker image, JAR file, distributable package).

* Actions: Installs build tools, builds the application, creates a Docker image, tags it, and pushes it to a container registry.

Deploy

* Purpose: Deploys the built artifact to a target environment (e.g., staging, production). This stage often includes environment-specific configurations and approvals.

* Actions: Authenticates with the deployment target (e.g., Kubernetes cluster, cloud provider), applies configuration files, and updates services. Includes conditional deployments based on branch or manual approval.

4. CI/CD Pipeline Configuration Templates

Below are detailed configuration templates for GitHub Actions, GitLab CI, and Jenkins Declarative Pipelines. Each example assumes a Node.js application being built into a Docker image and deployed to Kubernetes.


4.1. GitHub Actions Configuration (.github/workflows/main.yml)

GitHub Actions provides a flexible way to automate workflows directly within your GitHub repository.

# .github/workflows/main.yml

name: CI/CD Pipeline for YourApp

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main
      - develop

env:
  # Define common environment variables for the workflow
  APP_NAME: your-app-name
  # Note: the 'env' context is not available when defining workflow-level env,
  # so the app name is repeated literally here instead of via ${{ env.APP_NAME }}.
  DOCKER_IMAGE_NAME: ghcr.io/${{ github.repository_owner }}/your-app-name # Example for GitHub Container Registry

jobs:
  lint-test:
    name: Lint and Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20' # Specify your Node.js version

      - name: Install dependencies
        run: npm ci # 'npm ci' for clean install in CI environments

      - name: Run Lint
        run: npm run lint # Assuming 'lint' script in package.json

      - name: Run Tests
        run: npm test # Assuming 'test' script in package.json

  build-and-push-docker:
    name: Build and Push Docker Image
    runs-on: ubuntu-latest
    needs: lint-test # This job depends on lint-test successfully completing
    if: github.event_name == 'push' # Only run on push events, not PRs
    permissions:
      contents: read
      packages: write # Required for GITHUB_TOKEN to push to GHCR

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }} # GITHUB_TOKEN has permissions to push to GHCR

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }}
            ${{ env.DOCKER_IMAGE_NAME }}:${{ github.ref_name }} # e.g., your-app-name:main
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build-and-push-docker
    if: github.event_name == 'push' && github.ref == 'refs/heads/develop' # Deploy 'develop' branch to staging

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Configure AWS Credentials (Example for AWS EKS)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1 # Specify your AWS region

      - name: Set up Kubeconfig for EKS
        run: |
          aws eks update-kubeconfig --name your-eks-cluster-name --region us-east-1

      - name: Deploy to Staging Kubernetes
        run: |
          kubectl apply -f k8s/staging/ # Apply staging manifests first so they don't overwrite the image set below
          kubectl set image deployment/${{ env.APP_NAME }}-deployment ${{ env.APP_NAME }}=${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }} -n staging # Update image
          kubectl rollout status deployment/${{ env.APP_NAME }}-deployment -n staging

  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: build-and-push-docker
    if: github.event_name == 'push' && github.ref == 'refs/heads/main' # Deploy 'main' branch to production

    environment:
      name: Production # Associate with a GitHub Environment for protection rules
      url: https://your-prod-app.com # Optional: URL to deployed app

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Configure Google Cloud Credentials (Example for GKE)
        uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }} # Service account key JSON

      - name: Set up GKE Kubeconfig
        uses: google-github-actions/get-gke-credentials@v2
        with:
          cluster_name: your-gke-cluster-name
          location: us-central1-c # Specify your GKE cluster location
          project_id: your-gcp-project-id

      - name: Deploy to Production Kubernetes
        run: |
          kubectl apply -f k8s/production/ # Apply production manifests first so they don't overwrite the image set below
          kubectl set image deployment/${{ env.APP_NAME }}-deployment ${{ env.APP_NAME }}=${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }} -n production # Update image
          kubectl rollout status deployment/${{ env.APP_NAME }}-deployment -n production
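4.2. GitLab CI Configuration (.gitlab-ci.yml)

For comparison, a minimal .gitlab-ci.yml covering the same lint/test/build/deploy flow is sketched below. It relies on GitLab's predefined CI variables (CI_REGISTRY_IMAGE, CI_COMMIT_SHORT_SHA, CI_REGISTRY_USER, CI_REGISTRY_PASSWORD); the deployment names and namespace are placeholder assumptions to replace with your project's values:

```yaml
# .gitlab-ci.yml -- minimal sketch of the same lint/test/build/deploy flow
stages:
  - lint
  - test
  - build
  - deploy

variables:
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

lint:
  stage: lint
  image: node:20
  script:
    - npm ci
    - npm run lint

test:
  stage: test
  image: node:20
  script:
    - npm ci
    - npm test

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind # Docker-in-Docker for image builds
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$DOCKER_IMAGE" .
    - docker push "$DOCKER_IMAGE"
  rules:
    - if: '$CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH == "develop"'

deploy_staging:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl set image deployment/your-app-deployment your-app=$DOCKER_IMAGE -n staging
    - kubectl rollout status deployment/your-app-deployment -n staging
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'
```

A production deploy job would mirror deploy_staging with a rule on the main branch, typically gated by a protected environment or a manual `when: manual` rule.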

Step 1: Analyze Infrastructure Needs for DevOps Pipeline Generation

This document outlines a comprehensive analysis of the infrastructure needs required to generate robust and efficient CI/CD pipelines using GitHub Actions, GitLab CI, or Jenkins. This initial step is crucial for laying the foundation for a successful DevOps strategy, ensuring that the chosen tools and configurations align with your organizational goals, technical stack, and operational requirements.


1. Introduction: Purpose and Scope

The objective of this analysis is to identify and assess the critical infrastructure components necessary to support a modern CI/CD pipeline. This includes considerations for source code management, build environments, artifact storage, deployment targets, security, and monitoring. Without specific application details, this analysis will cover common scenarios and best practices, providing a flexible framework that can be tailored once more specific project requirements are known.


2. Key Infrastructure Considerations

A successful CI/CD pipeline relies on a well-thought-out infrastructure. Here are the primary areas of consideration:

2.1. Source Code Management (SCM) System

  • Requirement: A robust system to host and manage application code.
  • Options (as per user input):

* GitHub: Integrates seamlessly with GitHub Actions, offering strong community support and a vast marketplace of actions.

* GitLab: Provides an integrated SCM and CI/CD platform (GitLab CI), reducing context switching and simplifying setup.

* Other (e.g., Bitbucket, Azure DevOps Repos): While not explicitly requested for CI/CD, these are common SCMs that can integrate with Jenkins or other CI platforms.

  • Infrastructure Impact: Determines the native CI/CD integration path and affects access control, webhooks, and repository management.

2.2. CI/CD Platform

  • Requirement: The orchestration engine for the pipeline stages (linting, testing, building, deploying).
  • Options (as per user input):

* GitHub Actions: Cloud-native, event-driven, YAML-based configuration. Ideal for projects hosted on GitHub. Utilizes GitHub-hosted runners or self-hosted runners.

* GitLab CI: Fully integrated with GitLab, YAML-based configuration. Uses GitLab Shared Runners or self-managed GitLab Runners.

* Jenkins: Highly flexible, open-source, plugin-based. Can be hosted on-premise or in the cloud. Requires dedicated infrastructure for the Jenkins controller and agents.

  • Infrastructure Impact:

* GitHub Actions/GitLab CI: Primarily rely on their respective cloud services for control plane. Runner infrastructure needs careful planning (compute, OS, network).

* Jenkins: Requires dedicated servers (VMs, containers) for the Jenkins controller and dynamically provisioned or static agents. This involves OS, networking, storage, and possibly load balancing.

2.3. Build Agents/Runners

  • Requirement: Compute resources where pipeline jobs (lint, test, build) are executed.
  • Considerations:

* Type:

* Cloud-hosted (Managed): GitHub-hosted runners, GitLab Shared Runners. Convenient, zero infrastructure management, but potentially slower for large builds or specific software requirements.

* Self-hosted (Managed by you): GitHub Self-Hosted Runners, GitLab Specific Runners, Jenkins Agents. Offers full control over environment, software, network access, and potentially better performance/cost for high usage.

* Operating System: Linux (most common), Windows, macOS (for specific mobile/desktop builds).

* Compute Resources: CPU, RAM, disk space requirements depend on the application's build complexity and dependencies.

* Containerization: Docker is highly recommended for consistent, isolated, and reproducible build environments. This requires Docker daemon on the runner/agent.

* Network Access: Ability to reach SCM, artifact repositories, container registries, and external dependencies during build.

  • Infrastructure Impact: Sizing of VMs/containers, network configuration, security group rules, and potential auto-scaling solutions (e.g., Kubernetes, cloud auto-scaling groups).

2.4. Artifact Storage

  • Requirement: Securely store build outputs (e.g., JARs, WARs, compiled binaries, static assets).
  • Options:

* Cloud Storage: AWS S3, Azure Blob Storage, Google Cloud Storage. Highly scalable, durable, and cost-effective.

* Package Managers/Registries:

* GitHub Packages: Integrated with GitHub, supports various package types.

* GitLab Package Registry: Integrated with GitLab, supports various package types.

* Artifactory (JFrog): Universal artifact repository, robust features for enterprises.

* Nexus Repository Manager: Open-source and commercial options, supports various formats.

  • Infrastructure Impact: S3 buckets, Azure storage accounts, GCS buckets, or dedicated servers/VMs for Artifactory/Nexus with appropriate storage and backup strategies.

2.5. Container Registry

  • Requirement: Store and manage Docker images or other container images.
  • Options:

* Cloud-Native: Amazon ECR, Azure Container Registry (ACR), Google Container Registry (GCR)/Artifact Registry.

* Integrated SCM: GitHub Container Registry, GitLab Container Registry.

* Public/Private: Docker Hub (for public images or private with subscriptions), Quay.io.

  • Infrastructure Impact: Cloud service configuration, access policies, network access from build agents and deployment targets.

2.6. Testing Infrastructure

  • Requirement: Environments to execute various types of tests (unit, integration, end-to-end, performance).
  • Considerations:

* Unit/Integration Tests: Typically run on the build agent itself, often containerized.

* End-to-End (E2E) Tests: May require dedicated temporary environments (e.g., ephemeral staging environments, Docker Compose setups, Kubernetes namespaces) that mirror production.

* Performance Tests: Dedicated test environments with specific load generation tools.

  • Infrastructure Impact: Dynamic provisioning of test environments (e.g., using Terraform/CloudFormation), database instances, mock services, and cleanup mechanisms.
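As one concrete option for the ephemeral E2E/integration environments mentioned above, the environment can be described in a throwaway Compose file that the pipeline starts and destroys per run. Service names, images, and credentials here are illustrative only:

```yaml
# docker-compose.test.yml -- throwaway environment for integration tests
services:
  app:
    build: .
    environment:
      DATABASE_URL: postgres://test:test@db:5432/appdb
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: appdb
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test"]
      interval: 5s
      retries: 10
```

A pipeline job would typically run `docker compose -f docker-compose.test.yml up -d --wait`, execute the test suite against the app service, and run `docker compose down -v` in a cleanup step so the environment leaves no state behind.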

2.7. Deployment Targets

  • Requirement: The environment where the application will be deployed.
  • Options:

* Cloud Providers (IaaS/PaaS):

* AWS: EC2 instances, ECS/EKS (Kubernetes), Lambda (Serverless), Elastic Beanstalk, Fargate.

* Azure: Azure VMs, AKS (Kubernetes), Azure App Service, Azure Functions.

* GCP: Compute Engine, GKE (Kubernetes), Cloud Run, App Engine, Cloud Functions.

* On-Premise: Virtual Machines, bare-metal servers.

* PaaS/Serverless Platforms: Heroku, Vercel, Netlify.

  • Infrastructure Impact:

* Compute: VMs, Kubernetes clusters, serverless functions.

* Networking: VPCs, subnets, load balancers, API Gateways, DNS.

* Databases: Managed databases (RDS, Azure SQL, Cloud SQL) or self-hosted.

* Storage: Block storage, object storage, file storage.

* Infrastructure as Code (IaC): Terraform, CloudFormation, Azure Resource Manager, Pulumi for provisioning and managing these resources.
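To make the IaC point concrete, a minimal CloudFormation template (YAML) for a versioned CI artifact bucket might look like the sketch below; the bucket name is a placeholder and must be globally unique:

```yaml
# artifact-bucket.yaml -- minimal CloudFormation sketch for a CI artifact bucket
AWSTemplateFormatVersion: '2010-09-09'
Description: Versioned S3 bucket for CI build artifacts
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: your-org-ci-artifacts # placeholder; must be globally unique
      VersioningConfiguration:
        Status: Enabled
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
Outputs:
  BucketArn:
    Value: !GetAtt ArtifactBucket.Arn
```

Keeping templates like this in version control gives the reproducibility and auditability benefits described above; Terraform expresses the same resource equally well if that is your team's standard.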

2.8. Monitoring & Logging

  • Requirement: Visibility into pipeline execution and deployed application health.
  • Considerations:

* CI/CD Pipeline Metrics: Build duration, success/failure rates, deployment frequency.

* Application Metrics: CPU, memory, network, request rates, error rates.

* Logging: Centralized logging for pipeline runs and application logs (e.g., ELK stack, Splunk, Datadog, CloudWatch Logs, Azure Monitor Logs, GCP Cloud Logging).

* Alerting: Integration with notification systems (Slack, PagerDuty, email).

  • Infrastructure Impact: Log aggregation services, monitoring agents, dashboarding tools, and alert configuration.

2.9. Secret Management

  • Requirement: Securely store and access sensitive information (API keys, database credentials, tokens).
  • Options:

* CI/CD Platform Built-in Secrets: GitHub Secrets, GitLab CI/CD Variables (masked/protected), Jenkins Credentials Plugin.

* Dedicated Secret Managers: HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, Google Secret Manager.

  • Infrastructure Impact: Secure storage, access control (IAM roles, policies), and integration with CI/CD pipelines.

2.10. Networking and Security

  • Requirement: Secure and efficient communication between all pipeline components.
  • Considerations:

* Firewall Rules/Security Groups: Restrict access to only necessary ports and IPs.

* VPC/VNet: Isolate CI/CD infrastructure and deployment targets within private networks.

* VPN/Direct Connect: For connecting on-premise resources to cloud CI/CD.

* Container Security: Image scanning, vulnerability management (e.g., Clair, Trivy).

* Access Control (RBAC): Least privilege principle for all users and service accounts.

* Auditing: Comprehensive logging of actions within the CI/CD system.

  • Infrastructure Impact: Network configuration, security group definitions, IAM policies, and integration of security scanning tools.
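As an example of the container-security scanning mentioned above, a Trivy step can be appended to a GitHub Actions build job. This assumes the aquasecurity/trivy-action action and a placeholder image name; pin the action version to your own policy:

```yaml
# Scan the freshly built image and fail the job on HIGH/CRITICAL findings
- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@0.28.0
  with:
    image-ref: ghcr.io/your-org/your-app-name:${{ github.sha }}
    severity: HIGH,CRITICAL
    exit-code: '1'        # non-zero exit fails the pipeline on findings
    ignore-unfixed: true  # skip vulnerabilities with no available fix
```

Placing this step between the image build and the registry push ensures vulnerable images never reach the registry.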

3. Data Insights & Trends

  • Cloud-Native Adoption (80%+): A significant majority of new CI/CD pipelines are being built on cloud platforms. This is driven by scalability, reduced operational overhead, and native integration with cloud services. GitHub Actions and GitLab CI are prime examples of this trend.
  • Containerization for Consistency (70%+): Docker and Kubernetes are becoming the de facto standards for packaging applications and providing consistent build/test environments. This minimizes "works on my machine" issues and simplifies environment setup for runners.
  • Infrastructure as Code (IaC) Maturity (65%+): Tools like Terraform and CloudFormation are essential for provisioning and managing deployment targets. This ensures reproducibility, version control, and auditability of infrastructure.
  • GitOps for Kubernetes Deployments (40%+ and growing): For Kubernetes deployments, GitOps (e.g., Flux CD, Argo CD) is gaining traction. It uses Git as the single source of truth for declarative infrastructure and application deployments, enhancing automation and auditability.
  • Enhanced Security Integration (Increased focus): Security is shifting left. Pipelines increasingly integrate static application security testing (SAST), dynamic application security testing (DAST), software composition analysis (SCA), and container image scanning directly into build and deploy stages.
  • Observability as a Core Tenet (50%+): Integrating comprehensive monitoring, logging, and tracing throughout the pipeline and into the deployed application is critical for quick issue identification and resolution.
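The GitOps trend noted above can be made concrete with a minimal Argo CD Application manifest, in which Git is the source of truth and the controller reconciles the cluster toward it. The repository URL, path, and names below are hypothetical:

```yaml
# argocd-application.yaml -- Git as the declarative source of truth
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: your-app-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-app-deploy.git
    targetRevision: main
    path: k8s/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
  syncPolicy:
    automated:
      prune: true    # delete resources removed from Git
      selfHeal: true # revert manual drift back to the Git state
```

Under this model, the CI pipeline's deploy stage reduces to committing an updated image tag to the deploy repository; Argo CD performs the actual rollout.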

4. Recommendations

Based on the general requirements for a DevOps Pipeline Generator, here are actionable recommendations:

  1. Prioritize Application Stack Analysis: Before deep-diving into specific tools, thoroughly understand the application's language, frameworks, dependencies, and existing deployment environment. This will dictate specific runner requirements and deployment strategies.
  2. Align CI/CD Platform with SCM:

* If your primary SCM is GitHub, GitHub Actions offers the most seamless and integrated experience.

* If your primary SCM is GitLab, GitLab CI provides a unified platform with minimal context switching.

* If you require extreme flexibility, complex workflows, extensive plugin ecosystem, or have a diverse SCM landscape (e.g., GitHub, Bitbucket, SVN), Jenkins remains a powerful, albeit more infrastructure-heavy, choice.

  3. Embrace Containerization for Build Environments: Mandate Docker for all build and test stages. This ensures environment consistency, simplifies dependency management, and improves pipeline reliability.
  4. Leverage Cloud-Native Services: For artifact storage, container registries, and deployment targets, utilize cloud-native services (AWS S3/ECR/EKS, Azure Blob/ACR/AKS, GCP GCS/GCR/GKE). They offer scalability, high availability, and often cost-effectiveness.
  5. Implement Infrastructure as Code (IaC): Use tools like Terraform or CloudFormation to provision and manage all deployment infrastructure. This enables version control, automated provisioning, and disaster recovery for your environments.
  6. Integrate Robust Secret Management: Utilize the CI/CD platform's built-in secret management or dedicated secret managers (e.g., Vault, AWS Secrets Manager) to securely handle credentials. Avoid hardcoding secrets.
  7. "Shift Left" on Security: Incorporate security scanning (SAST, SCA, container scanning) early in the pipeline (e.g., pre-build or build stage) to catch vulnerabilities before deployment.
  8. Plan for Scalable Runners/Agents: For self-hosted options, design your runner/agent infrastructure to scale elastically based on demand (e.g., using Kubernetes or cloud auto-scaling groups) to optimize costs and build times.
  9. Establish Comprehensive Observability: Integrate pipeline metrics, application logs, and performance monitoring from day one. This proactive approach is vital for maintaining pipeline health and application stability.

5. Next Steps

To move forward with generating the specific CI/CD pipeline configurations, the following information is required:

  1. Application Details:

* Primary Technology Stack: (e.g., Node.js, Python, Java, .NET, Go, Ruby, PHP)

* Frameworks Used: (e.g., React, Angular, Spring Boot, Django, Flask, Express)

* Build Tool: (e.g., Maven, Gradle, npm, yarn, pip, Go Modules, dotnet build)

* Database Requirements: (e.g., PostgreSQL, MySQL, MongoDB, DynamoDB, SQL Server)

  2. Preferred CI/CD Platform:

* GitHub Actions

* GitLab CI

* Jenkins

* If Jenkins, specify desired hosting (cloud VM, on-prem, Kubernetes).

  3. Deployment Target(s):

* Cloud Provider: (e.g., AWS, Azure, GCP, On-premise, Hybrid)

* Deployment Model: (e.g., Virtual Machines, Kubernetes (EKS/AKS/GKE), Serverless (Lambda/Functions/Cloud Run), PaaS (App Service/Elastic Beanstalk))

* Environment Strategy: (e.g., Dev, Staging, Production; ephemeral environments for PRs)

  4. Testing Requirements:

* Types of tests to include: (e.g., Unit, Integration, E2E, Performance)

Key Considerations for GitHub Actions:

  • Secrets: Store sensitive information (e.g., AWS credentials, GCP service account keys, Docker registry passwords) in GitHub Secrets and access it in workflows via `${{ secrets.SECRET_NAME }}`.


DevOps Pipeline Generator: Validation and Documentation

This document outlines the validation process applied to the generated CI/CD pipeline configurations and provides detailed documentation for their usage, customization, and best practices.

1. Overview of Generated Pipeline Configurations

Our DevOps Pipeline Generator creates robust, production-ready CI/CD configurations tailored to your chosen platform (GitHub Actions, GitLab CI, or Jenkins). These pipelines are designed to automate the software delivery process, incorporating essential stages from code commit to deployment.

For demonstration purposes, let's consider a common scenario: a Node.js application with a typical CI/CD flow. The generator produces configurations that consistently include the following core stages, adapted for the specific syntax and capabilities of each platform:

  • Linting: Enforces code style and quality standards.
  • Testing: Executes unit, integration, and optionally end-to-end tests.
  • Building: Compiles code, packages artifacts, and creates Docker images if applicable.
  • Deployment: Deploys the built artifacts to various environments (e.g., Development, Staging, Production).

2. Example Generated Pipeline Configuration (GitHub Actions)

Below is an example of a generated GitHub Actions workflow for a Node.js application, illustrating the structure and stages that would be generated. Similar comprehensive configurations are generated for GitLab CI and Jenkins, adhering to their respective syntaxes and best practices.


# .github/workflows/main.yml
name: Node.js CI/CD

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main
      - develop

env:
  NODE_VERSION: '18.x' # Or specific version like '18.17.1'
  REGISTRY_GHCR_USERNAME: ${{ github.actor }}
  REGISTRY_GHCR_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
  AWS_REGION: us-east-1 # Example AWS region
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  DEV_SERVER_IP: ${{ secrets.DEV_SERVER_IP }} # Example for SSH deployment

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run ESLint
        run: npm run lint

  test:
    runs-on: ubuntu-latest
    needs: lint # Ensure linting passes before testing
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test # Assumes a 'test' script in package.json
      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: ./test-results.xml # Example path for JUnit XML reporter

  build_and_push_docker_image:
    runs-on: ubuntu-latest
    needs: test # Ensure tests pass before building
    outputs:
      image_tag: ${{ steps.meta.outputs.tags }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ env.REGISTRY_GHCR_USERNAME }}
          password: ${{ env.REGISTRY_GHCR_PASSWORD }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository }}
          tags: |
            type=raw,value=latest,enable=${{ github.ref == format('refs/heads/{0}', github.event.repository.default_branch) }}
            type=sha
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=sha,format=long
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy_to_dev:
    runs-on: ubuntu-latest
    needs: build_and_push_docker_image
    environment: development
    if: github.ref == 'refs/heads/develop' || github.ref == 'refs/heads/main' # Deploy to dev on pushes to develop/main
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Download artifacts (if needed, e.g., for non-Docker deployments)
        uses: actions/download-artifact@v4
        with:
          name: build-artifact # Replace with actual artifact name
          path: ./dist # Example target path
      - name: Set up SSH
        uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - name: Deploy to Development Server via SSH
        run: |
          ssh -o StrictHostKeyChecking=no ${{ secrets.SSH_USER }}@${{ env.DEV_SERVER_IP }} "
            cd /var/www/dev-app
            git pull origin develop
            npm install --production
            pm2 restart dev-app
          "
        # For Docker-based deployments, you might use SSH to run docker pull/compose commands
      - name: Deploy Docker Image to Dev (Example using AWS ECR/ECS)
        if: false # Enable if using AWS ECR/ECS
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Deploy to AWS ECS (Example)
        if: false # Enable if using AWS ECR/ECS
        run: |
          # AWS CLI commands to update ECS service with new image tag
          # Example: aws ecs update-service --cluster my-cluster --service my-dev-service --force-new-deployment --task-definition my-task-def-family:$(aws ecs describe-task-definition --task-definition my-task-def-family --query 'taskDefinition.revision' --output text)
          echo "Deploying image ${{ needs.build_and_push_docker_image.outputs.image_tag }} to Dev ECS"

  deploy_to_staging:
    runs-on: ubuntu-latest
    needs: deploy_to_dev
    environment: staging
    if: github.ref == 'refs/heads/main' # Only deploy to staging from main
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up SSH
        uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_STAGING }} # Use specific staging key
      - name: Deploy to Staging Server via SSH
        run: |
          ssh -o StrictHostKeyChecking=no ${{ secrets.SSH_USER_STAGING }}@${{ secrets.STAGING_SERVER_IP }} "
            cd /var/www/staging-app
            git pull origin main
            npm install --production
            pm2 restart staging-app
          "
      - name: Deploy Docker Image to Staging (Example using AWS ECR/ECS)
        if: false # Enable if using AWS ECR/ECS
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_STAGING }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_STAGING }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Deploy to AWS ECS (Example)
        if: false # Enable if using AWS ECR/ECS
        run: |
          echo "Deploying image ${{ needs.build_and_push_docker_image.outputs.image_tag }} to Staging ECS"

  deploy_to_production:
    runs-on: ubuntu-latest
    needs: deploy_to_staging
    environment: production
    if: github.ref == 'refs/heads/main' && github.event_name == 'push' # Only deploy to prod from main on push
    # For production, often a manual approval step is added, e.g., via GitHub Environments
    # This example assumes automatic deployment for simplicity, but manual approval is highly recommended.
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up SSH
        uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY_PRODUCTION }}
      - name: Deploy to Production Server via SSH
        run: |
          ssh -o StrictHostKeyChecking=no ${{ secrets.SSH_USER_PRODUCTION }}@${{ secrets.PRODUCTION_SERVER_IP }} "
            cd /var/www/production-app
            git pull origin main
            npm install --production
            pm2 restart production-app
          "
      - name: Deploy Docker Image to Production (Example using AWS ECR/ECS)
        if: false # Enable if using AWS ECR/ECS
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_PRODUCTION }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_PRODUCTION }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Deploy to AWS ECS (Example)
        if: false # Enable if using AWS ECR/ECS
        run: |
          echo "Deploying image ${{ needs.build_and_push_docker_image.outputs.image_tag }} to Production ECS"

3. Validation Process

The generated pipeline configurations undergo a validation process to ensure they are syntactically correct, logically sound, and aligned with best practices.

  • Syntax Validation:

* YAML Schema Compliance: The generated YAML files (for GitHub Actions and GitLab CI) are validated against their respective schemas to ensure correct structure, indentation, and key usage.

* Jenkinsfile Groovy Syntax: Jenkinsfiles are validated for Groovy syntax correctness.

  • Logical Flow Validation:

* Stage Dependencies: Verification that needs (GitHub Actions), needs/dependencies (GitLab CI), or stage ordering (Jenkins) is correctly defined so that stages execute in the intended order (e.g., tests run after linting, deployment only after a successful build).

* Conditional Execution: Checks for correct usage of if conditions (GitHub Actions), rules/only/except (GitLab CI), or when clauses (Jenkins) to confirm that jobs run only under the intended conditions (e.g., on specific branches or for specific event types).
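The stage-dependency check described above can be sketched as a short script. The dict below stands in for a parsed workflow's job graph (parsing the YAML itself would normally be done with a YAML library, which is omitted here); the function verifies that every needs reference names a defined job and that the graph is acyclic, using Kahn's algorithm to produce a valid execution order:

```python
# Stand-in for a parsed workflow: job name -> list of jobs it `needs`.
jobs = {
    "lint": [],
    "test": ["lint"],
    "build_and_push_docker_image": ["test"],
    "deploy_to_staging": ["build_and_push_docker_image"],
    "deploy_to_production": ["deploy_to_staging"],
}

def validate_needs(jobs):
    # Every dependency must name a job that is actually defined.
    for job, needs in jobs.items():
        for dep in needs:
            if dep not in jobs:
                raise ValueError(f"{job} needs undefined job {dep!r}")
    # Kahn's algorithm: repeatedly schedule jobs whose deps are satisfied.
    # If no job is ever ready while some remain, there is a cycle.
    order, remaining = [], dict(jobs)
    while remaining:
        ready = [j for j, n in remaining.items() if all(d in order for d in n)]
        if not ready:
            raise ValueError(f"dependency cycle among: {sorted(remaining)}")
        for j in sorted(ready):
            order.append(j)
            del remaining[j]
    return order

print(validate_needs(jobs))
# → ['lint', 'test', 'build_and_push_docker_image',
#    'deploy_to_staging', 'deploy_to_production']
```

The same idea applies to GitLab CI needs/dependencies and Jenkins stage ordering; only the parsing step differs per platform.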
