# Requirements for Auto DevOps (FREE)

Before enabling Auto DevOps, we recommend you prepare it for deployment. If you don't, you can still use Auto DevOps to build and test your apps, and configure the deployment later.

To prepare the deployment:

  1. Define the deployment strategy.

  2. Prepare the base domain.

  3. Define where you want to deploy it:

    1. Kubernetes.
    2. Amazon Elastic Container Service (ECS).
    3. Amazon Elastic Kubernetes Service (EKS).
    4. Amazon EC2.
    5. Google Kubernetes Engine.
    6. Bare metal.
  4. Enable Auto DevOps.

## Auto DevOps deployment strategy

Introduced in GitLab 11.0.

When using Auto DevOps to deploy your applications, choose the continuous deployment strategy that works best for your needs:

| Deployment strategy | Setup | Methodology |
|---------------------|-------|-------------|
| Continuous deployment to production | Enables Auto Deploy with the default branch continuously deployed to production. | Continuous deployment to production. |
| Continuous deployment to production using timed incremental rollout | Sets the `INCREMENTAL_ROLLOUT_MODE` variable to `timed`. | Continuously deploy to production, with a 5-minute delay between rollouts. |
| Automatic deployment to staging, manual deployment to production | Sets `STAGING_ENABLED` to `1` and `INCREMENTAL_ROLLOUT_MODE` to `manual`. | The default branch is continuously deployed to staging and continuously delivered to production. |
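The strategies in the table map to CI/CD variables, so you can also select one in `.gitlab-ci.yml` instead of the UI. A minimal sketch for the staging-plus-manual-production strategy (the `include` line pulls in the standard Auto DevOps template; the variable names come from the table above):

```yaml
# .gitlab-ci.yml sketch: enable Auto DevOps jobs and choose the
# "automatic staging, manual production" deployment strategy.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  STAGING_ENABLED: "1"                  # deploy the default branch to staging automatically
  INCREMENTAL_ROLLOUT_MODE: "manual"    # production rollout waits for a manual action
```

Setting the variables in the project's CI/CD settings instead of `.gitlab-ci.yml` has the same effect.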

You can choose the deployment method when enabling Auto DevOps or later:

  1. In GitLab, on the top bar, select Main menu > Projects and find your project.
  2. On the left sidebar, select Settings > CI/CD.
  3. Expand Auto DevOps.
  4. Choose the deployment strategy.
  5. Select Save changes.

NOTE: Use the blue-green deployment technique to minimize downtime and risk.

## Auto DevOps base domain

The Auto DevOps base domain is required to use Auto Review Apps, Auto Deploy, and Auto Monitoring.

To define the base domain, either:

  • At the project, group, or instance level: go to your cluster settings and add it there.
  • At the project or group level: add it as an environment variable: `KUBE_INGRESS_BASE_DOMAIN`.
  • At the instance level: go to Main menu > Admin > Settings > CI/CD > Continuous Integration and Delivery and add it there.

The base domain variable `KUBE_INGRESS_BASE_DOMAIN` follows the same order of precedence as other environment variables.

If you don't specify the base domain in your projects and groups, Auto DevOps uses the instance-wide Auto DevOps domain.
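For a project-level override without touching cluster or instance settings, the variable can also be set directly in `.gitlab-ci.yml`. A sketch, with `example.com` as a placeholder base domain:

```yaml
# .gitlab-ci.yml sketch: set the Auto DevOps base domain for this project.
# example.com is a placeholder; use your own wildcard domain.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  KUBE_INGRESS_BASE_DOMAIN: "example.com"
```

Because it follows the normal variable precedence, this project-level value overrides any group- or instance-level base domain.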

Auto DevOps requires a wildcard DNS A record that matches the base domain. For a base domain of `example.com`, you'd need a DNS entry like:

```plaintext
*.example.com   3600     A     1.2.3.4
```

In this case, the deployed applications are served from `example.com`, and `1.2.3.4` is the IP address of your load balancer, generally NGINX (see requirements). Setting up the DNS record is beyond the scope of this document; check with your DNS provider for information.

Alternatively, you can use free public services like nip.io, which provide automatic wildcard DNS without any configuration. For nip.io, set the Auto DevOps base domain to `1.2.3.4.nip.io`.

After completing setup, all requests hit the load balancer, which routes requests to the Kubernetes pods running your application.

## Auto DevOps requirements for Kubernetes

To make full use of Auto DevOps with Kubernetes, you need a Kubernetes cluster connected to your project and Prometheus configured for it.

If you don't have Kubernetes or Prometheus configured, then Auto Review Apps, Auto Deploy, and Auto Monitoring are skipped.

After all requirements are met, you can enable Auto DevOps.

## Auto DevOps requirements for bare metal

According to the Kubernetes Ingress-NGINX docs:

> In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers.

The docs linked above explain the issue and present possible solutions, for example: