Merge pull request #1358 from OwenTuz/issue-1132-initial-kubernetes-documentation-improvements

Kubernetes docs: clarify steps around use/creation of TLS assets.
Stephan Renatus 2018-11-26 13:54:44 +01:00 committed by GitHub
commit 007e4dae3c
4 changed files with 101 additions and 17 deletions


@ -264,7 +264,7 @@ connectors:
    # freeIPA server's CA
    rootCA: ca.crt
    userSearch:
      # Would translate to the query "(&(objectClass=person)(uid=<username>))".
      # Would translate to the query "(&(objectClass=posixAccount)(uid=<username>))".
      baseDN: cn=users,dc=freeipa,dc=example,dc=com
      filter: "(objectClass=posixAccount)"
      username: uid


@ -3,6 +3,7 @@
## Overview
This document covers setting up the [Kubernetes OpenID Connect token authenticator plugin][k8s-oidc] with dex.
It also contains a worked example showing how the Dex server can be deployed within Kubernetes.
Token responses from OpenID Connect providers include a signed JWT called an ID Token. ID Tokens contain names, emails, unique identifiers, and in dex's case, a set of groups that can be used to identify the user. OpenID Connect providers, like dex, publish public keys; the Kubernetes API server understands how to use these to verify ID Tokens.
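For illustration only, the decoded claims of such an ID Token might look roughly like the following (all values here are made up, using the issuer and client ID from the example flags below):
```
{
  "iss": "https://dex.example.com:32000",
  "sub": "Cg0wLTM4NS0yODA4OS0wEgRtb2Nr",
  "aud": "example-app",
  "exp": 1543500000,
  "iat": 1543413600,
  "email": "jane@example.com",
  "email_verified": true,
  "groups": ["admins", "developers"],
  "name": "Jane Doe"
}
```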
@ -13,7 +14,7 @@ The authentication flow looks like:
3. Kubernetes uses dex's public keys to verify the ID Token.
4. A claim designated as the username (and optionally group information) will be associated with that request.
Username and group information can be combined with Kubernetes [authorization plugins][k8s-authz], such as roles based access control (RBAC), to enforce policy.
Username and group information can be combined with Kubernetes [authorization plugins][k8s-authz], such as role-based access control (RBAC), to enforce policy.
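As a purely illustrative sketch, an RBAC binding that grants a group from the ID Token read-only access might look like this (the group name `developers` and the binding name are made up; `view` is one of Kubernetes' default cluster roles):
```
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: oidc-developers-view   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: Group
  name: developers             # matches a value in the ID Token's "groups" claim
  apiGroup: rbac.authorization.k8s.io
```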
## Configuring the OpenID Connect plugin
@ -21,7 +22,7 @@ Configuring the API server to use the OpenID Connect [authentication plugin][k8s
* Deploying an API server with specific flags.
* Dex is running on HTTPS.
* Custom CA files must be accessible by the API server (likely through volume mounts).
* Custom CA files must be accessible by the API server.
* Dex is accessible to both your browser and the Kubernetes API server.
Use the following flags to point your API server(s) at dex. `dex.example.com` should be replaced by whatever DNS name or IP address dex is running under.
@ -29,7 +30,7 @@ Use the following flags to point your API server(s) at dex. `dex.example.com` sh
```
--oidc-issuer-url=https://dex.example.com:32000
--oidc-client-id=example-app
--oidc-ca-file=/etc/kubernetes/ssl/openid-ca.pem
--oidc-ca-file=/etc/ssl/certs/openid-ca.pem
--oidc-username-claim=email
--oidc-groups-claim=groups
```
@ -47,7 +48,7 @@ Additional notes:
The dex repo contains scripts for running dex on a Kubernetes cluster with authentication through GitHub. The dex service is exposed using a [node port][node-port] on port 32000. This likely requires a custom `/etc/hosts` entry pointed at one of the cluster's workers.
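For example, such an `/etc/hosts` entry might look like the following (the IP address is made up; substitute one of your worker nodes):
```
192.0.2.10 dex.example.com
```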
Because dex uses `ThirdPartyResources` to store state, no external database is needed. For more details see the [storage documentation](storage.md#kubernetes-third-party-resources).
Because dex uses [CRDs](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) to store state, no external database is needed. For more details see the [storage documentation](storage.md#kubernetes-third-party-resources).
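For reference, the corresponding storage section of the dex configuration is small; a minimal sketch, assuming dex runs in-cluster with a service account:
```
storage:
  type: kubernetes
  config:
    inCluster: true
```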
There are many different ways to spin up a Kubernetes development cluster, each with different host requirements and support for API server reconfiguration. At this time, this guide does not have copy-pastable examples, but can recommend the following methods for spinning up a cluster:
@ -58,18 +59,64 @@ To run dex on Kubernetes perform the following steps:
1. Generate TLS assets for dex.
2. Spin up a Kubernetes cluster with the appropriate flags and CA volume mount.
3. Create a secret containing your [GitHub OAuth2 client credentials][github-oauth2].
3. Create secrets for TLS and for your [GitHub OAuth2 client credentials][github-oauth2].
4. Deploy dex.
5. Create and assign the 'dex' cluster role to the dex service account ([to enable dex to manage its CRDs, if RBAC authorization is used](https://github.com/dexidp/dex/blob/master/Documentation/storage.md#kubernetes-custom-resource-definitions-crds)).
The TLS assets can be created using the following command:
### Generate TLS assets
Running Dex with HTTPS enabled requires a valid SSL certificate, and the API server needs to trust the certificate of the signing CA using the `--oidc-ca-file` flag.
For our example use case, the TLS assets can be created using the following command:
```
$ cd examples/k8s
$ ./gencert.sh
```
The created `ssl/ca.pem` must then be mounted into your API server deployment. Once the cluster is up and correctly configured, use kubectl to add the serving certs as secrets.
This will generate several files under the `ssl` directory, the important ones being `cert.pem`, `key.pem`, and `ca.pem`. The generated SSL certificate is for `dex.example.com`, although you could change this by editing `gencert.sh` if required.
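If you prefer not to use the helper script, roughly equivalent assets can be generated by hand; the sketch below uses openssl and picks filenames to match the script's output (adjust the DNS name as needed):
```
$ mkdir -p ssl
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout ssl/ca-key.pem -out ssl/ca.pem -subj "/CN=dex-ca"
$ openssl req -newkey rsa:2048 -nodes \
    -keyout ssl/key.pem -out ssl/csr.pem -subj "/CN=dex.example.com"
$ openssl x509 -req -in ssl/csr.pem -CA ssl/ca.pem -CAkey ssl/ca-key.pem \
    -CAcreateserial -days 365 -out ssl/cert.pem \
    -extfile <(echo "subjectAltName=DNS:dex.example.com")
```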
### Configure the API server
#### Ensure the CA certificate is available to the API server
The CA file that was used to sign the SSL certificates for Dex needs to be copied to a location where the API server can read it, and the API server must be configured to point at it with the `--oidc-ca-file` flag.
There are several options here, but if you run your API server as a container, the easiest method is probably to use a [hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) volume to mount the CA file directly from the host.
The example pod manifest below assumes that you copied the CA file into `/etc/ssl/certs`. Adjust as necessary:
```
spec:
  containers:
  [...]
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs      # must match the volume name below
      readOnly: true
  [...]
  volumes:
  - name: ca-certs
    hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
```
Depending on your installation, you may also find that certain folders are already mounted in this way, in which case you can simply copy the CA file into an existing folder for the same effect.
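For example, if the certificates were generated as above, copying the CA onto each master node could look like this (the destination filename matches the `--oidc-ca-file` flag used earlier):
```
$ sudo cp ssl/ca.pem /etc/ssl/certs/openid-ca.pem
```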
#### Configure API server flags
Configure the API server as in [Configuring the OpenID Connect Plugin](#configuring-the-openid-connect-plugin) above.
Note that the `ca.pem` from above has been renamed to `openid-ca.pem` in this example; this is just to separate it from any other CA certificates that may be in use.
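As a sketch only, if your API server runs as a static pod, the flags from [Configuring the OpenID Connect Plugin](#configuring-the-openid-connect-plugin) would be appended to its command; the exact manifest layout depends on your installer:
```
spec:
  containers:
  - command:
    - kube-apiserver
    [...]
    - --oidc-issuer-url=https://dex.example.com:32000
    - --oidc-client-id=example-app
    - --oidc-ca-file=/etc/ssl/certs/openid-ca.pem
    - --oidc-username-claim=email
    - --oidc-groups-claim=groups
    [...]
```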
### Create cluster secrets
Once the cluster is up and correctly configured, use kubectl to add the serving certs as secrets.
```
$ kubectl create secret tls dex.example.com.tls --cert=ssl/cert.pem --key=ssl/key.pem
@ -84,14 +131,14 @@ $ kubectl create secret \
--from-literal=client-secret=$GITHUB_CLIENT_SECRET
```
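For reference, the deployment then consumes these credentials as environment variables. A rough sketch, assuming the secret above is named `github-client` and also carries a `client-id` key:
```
env:
- name: GITHUB_CLIENT_ID
  valueFrom:
    secretKeyRef:
      name: github-client
      key: client-id
- name: GITHUB_CLIENT_SECRET
  valueFrom:
    secretKeyRef:
      name: github-client
      key: client-secret
```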
Create the dex deployment, configmap, and node port service.
### Deploy the Dex server
Create the dex deployment, configmap, and node port service. This will also create RBAC bindings allowing the Dex pod to manage [Custom Resource Definitions](storage.md#kubernetes-custom-resource-definitions-crds) within Kubernetes.
```
$ kubectl create -f dex.yaml
```
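To check that the pods came up, something along these lines can be used (it relies on the `app: dex` label from the example manifest):
```
$ kubectl get pods -l app=dex
```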
Assign cluster role to dex service account so it can create third party resources [Kubernetes third party resources](storage.md).
__Caveats:__ No health checking is configured because dex does its own TLS termination, which complicates the setup. This is a known issue and can be tracked [here][dex-healthz].
## Logging into the cluster


@ -38,9 +38,9 @@ Etcd storage can be customized further using the following options:
## Kubernetes custom resource definitions (CRDs)
__NOTE:__ CRDs are only supported by Kubernetes version 1.7+.
Kubernetes [custom resource definitions](crd) are a way for applications to create new resources types in the Kubernetes API.
Kubernetes [custom resource definitions](crd) are a way for applications to create new resources types in the Kubernetes API. The Custom Resource Definition (CRD) API object was introduced in Kubernetes version 1.7 to replace the Third Party Resource (TPR) extension. CRDs allows dex to run on top of an existing Kubernetes cluster without the need for an external database. While this storage may not be appropriate for a large number of users, it's extremely effective for many Kubernetes use cases.
The Custom Resource Definition (CRD) API object was introduced in Kubernetes version 1.7 to replace the Third Party Resource (TPR) extension. CRDs allow dex to run on top of an existing Kubernetes cluster without the need for an external database. While this storage may not be appropriate for a large number of users, it's extremely effective for many Kubernetes use cases.
The rest of this section will explore internal details of how dex uses CRDs. __Admins should not interact with these resources directly__, except while debugging. These resources are only designed to store state and aren't meant to be consumed by end users. For modifying dex's state dynamically see the [API documentation](api.md).
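When debugging, the definitions dex has registered can be listed by filtering on its API group, for example:
```
$ kubectl get crds -o name | grep dex.coreos.com
```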
@ -115,11 +115,15 @@ subjects:
```
## Kubernetes third party resources(TPRs)
## DEPRECATED: Kubernetes third party resources (TPRs)
__NOTE:__ TPRs will be deprecated by Kubernetes version 1.8.
__NOTE:__ TPRs are deprecated as of Kubernetes version 1.8.
The default behavior of dex from release v2.7.0 onwards is to utitlize CRDs to manage its custom resources. If users would like to use dex with a Kubernetes version lower than 1.7, they will have to force dex to use TPRs instead of CRDs by setting the `UseTPR` flag in the storage configuration as shown below:
The default behavior of dex from release v2.7.0 onwards is to utilize CRDs to manage its custom resources. If users would like to use dex with a Kubernetes version lower than 1.7, they will have to force dex to use TPRs instead of CRDs.
These instructions have been preserved for anybody who needs to use an older version of Dex and/or Kubernetes, but this is not the recommended approach. See [Migrating from TPRs to CRDs](#migrating-from-tprs-to-crds) below for information on migrating an existing installation to the new approach.
If you do wish to use TPRs, you may do so by setting the `UseTPR` flag in the storage configuration as shown below:
```
storage:


@ -11,6 +11,7 @@ spec:
      labels:
        app: dex
    spec:
      serviceAccountName: dex # This is created below
      containers:
      - image: quay.io/dexidp/dex:v2.10.0
        name: dex
@ -104,3 +105,35 @@ spec:
    nodePort: 32000
  selector:
    app: dex
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: dex
  name: dex
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: dex
rules:
- apiGroups: ["dex.coreos.com"] # API group created by dex
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["create"] # To manage its own resources, dex must be able to create customresourcedefinitions
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dex
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dex
subjects:
- kind: ServiceAccount
  name: dex           # Service account assigned to the dex pod, created above
  namespace: default  # The namespace dex is running in
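One way to sanity-check this binding, assuming dex runs in the `default` namespace as above:
```
$ kubectl auth can-i create customresourcedefinitions \
    --as=system:serviceaccount:default:dex
```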