---
stage: Configure
group: Configure
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# GitLab-managed clusters (deprecated) **(FREE)**
- Introduced in GitLab 11.5.
- Became optional in GitLab 11.11.
- Deprecated in GitLab 14.5.
- Disabled on self-managed in GitLab 15.0.
WARNING: This feature was deprecated in GitLab 14.5. To connect your cluster to GitLab, use the GitLab agent. To manage applications, use the Cluster Project Management Template.
FLAG:
On self-managed GitLab, by default this feature is not available. To make it available, ask an administrator to enable the feature flag named `certificate_based_clusters`.
You can choose to allow GitLab to manage your cluster for you. If your cluster is managed by GitLab, resources for your projects are automatically created. See the Access controls section for details about the created resources.
If you choose to manage your own cluster, project-specific resources aren't created automatically. If you are using Auto DevOps, you must explicitly provide the `KUBE_NAMESPACE` deployment variable for your deployment jobs to use. Otherwise, a namespace is created for you.
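If you manage your own cluster, a minimal `.gitlab-ci.yml` sketch for passing a namespace to your deployment jobs might look like the following. The namespace name `my-project-staging` is a hypothetical example; `KUBE_NAMESPACE` is the variable Auto DevOps expects.

```yaml
# Sketch only: tell deployment jobs which namespace to deploy into.
# Replace my-project-staging with the namespace you created in your cluster.
variables:
  KUBE_NAMESPACE: my-project-staging
```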
WARNING: Be aware that manually managing resources that have been created by GitLab, like namespaces and service accounts, can cause unexpected errors. If this occurs, try clearing the cluster cache.
## Clearing the cluster cache
Introduced in GitLab 12.6.
If you allow GitLab to manage your cluster, GitLab stores a cached version of the namespaces and service accounts it creates for your projects. If you modify these resources in your cluster manually, this cache can fall out of sync with your cluster. This can cause deployment jobs to fail.
To clear the cache:
1. Navigate to your project's **Infrastructure > Kubernetes clusters** page, and select your cluster.
1. Expand the **Advanced settings** section.
1. Select **Clear cluster cache**.
## Base domain
Introduced in GitLab 11.8.
Specifying a base domain automatically sets `KUBE_INGRESS_BASE_DOMAIN` as a deployment variable.
If you are using Auto DevOps, this domain is used by the different stages, such as Auto Review Apps and Auto Deploy.
The domain should have a wildcard DNS configured to the Ingress IP address. You can either:
- Create an `A` record that points to the Ingress IP address with your domain provider.
- Enter a wildcard DNS address using a service such as `nip.io` or `xip.io`. For example, `192.168.1.1.xip.io`.
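To illustrate the second option, here's a minimal shell sketch, assuming a hypothetical Ingress IP of `192.168.1.1`: with a wildcard DNS service, any hostname under the base domain resolves to that IP, so a single base domain covers every environment Auto DevOps deploys.

```shell
# Hypothetical Ingress IP, used for illustration only.
INGRESS_IP="192.168.1.1"

# With a service like nip.io, this base domain needs no DNS configuration:
# any hostname ending in the IP resolves to the IP itself.
BASE_DOMAIN="${INGRESS_IP}.nip.io"

# Auto DevOps derives per-environment hosts under the base domain:
echo "review-my-branch.${BASE_DOMAIN}"
```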
To determine the external Ingress IP address, or external Ingress hostname:
- If the cluster is on GKE:
  1. Select the **Google Kubernetes Engine** link in the **Advanced settings**, or go directly to the Google Kubernetes Engine dashboard.
  1. Select the proper project and cluster.
  1. Select **Connect**.
  1. Execute the `gcloud` command in a local terminal or using the Cloud Shell.
- If the cluster is not on GKE, follow the specific instructions for your Kubernetes provider to configure `kubectl` with the right credentials. The output of the following examples shows the external endpoint of your cluster. This information can then be used to set up DNS entries and forwarding rules that allow external access to your deployed applications.
Depending on your Ingress, the external IP address can be retrieved in various ways. This list provides a generic solution, and some GitLab-specific approaches:
- In general, you can list the IP addresses of all load balancers by running:

  ```shell
  kubectl get svc --all-namespaces -o jsonpath='{range .items[?(@.status.loadBalancer.ingress)]}{.status.loadBalancer.ingress[*].ip} '
  ```
- If you installed Ingress using the **Applications**, run:

  ```shell
  kubectl get service --namespace=gitlab-managed-apps ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  ```
- Some Kubernetes clusters return a hostname instead, like Amazon EKS. For these platforms, run:

  ```shell
  kubectl get service --namespace=gitlab-managed-apps ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
  ```

  If you use EKS, an Elastic Load Balancer is also created, which incurs additional AWS costs.
- Istio/Knative uses a different command. Run:

  ```shell
  kubectl get svc --namespace=istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip} '
  ```

  If you see a trailing `%` on some Kubernetes versions, do not include it.
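Because some of the jsonpath templates above end with a literal space, the captured value may carry trailing whitespace; it can help to trim it before using the address in DNS records. A minimal sketch, with a sample value standing in for the real `kubectl` output:

```shell
# Sample value standing in for the kubectl jsonpath output
# (the template ends with a literal space).
RAW_OUTPUT="203.0.113.10 "

# Strip surrounding whitespace so the IP is safe to reuse,
# for example in DNS entries or forwarding rules.
EXTERNAL_IP="$(printf '%s' "${RAW_OUTPUT}" | tr -d '[:space:]')"
echo "${EXTERNAL_IP}"
```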