Merge pull request #684 from ericchiang/examples-k8s-fixup

examples/k8s: update kubernetes examples
Eric Chiang 2016-11-17 15:28:00 -08:00 committed by GitHub
commit e6b54250db
8 changed files with 222 additions and 313 deletions

Documentation/kubernetes.md (new file, 116 lines)

@@ -0,0 +1,116 @@
# Kubernetes authentication through dex
## Overview
This document covers setting up the [Kubernetes OpenID Connect token authenticator plugin][k8s-oidc] with dex.
Token responses from OpenID Connect providers include a signed JWT called an ID Token. ID Tokens contain names, emails, unique identifiers, and in dex's case, a set of groups that can be used to identify the user. OpenID Connect providers, like dex, publish public keys; the Kubernetes API server understands how to use these to verify ID Tokens.
The authentication flow looks like:
1. OAuth2 client logs a user in through dex.
2. That client uses the returned ID Token as a bearer token when talking to the Kubernetes API.
3. Kubernetes uses dex's public keys to verify the ID Token.
4. A claim designated as the username (and optionally group information) will be associated with that request.
Username and group information can be combined with Kubernetes [authorization plugins][k8s-authz], such as role-based access control (RBAC), to enforce policy.
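For illustration, the decoded payload of an ID Token issued by dex carries claims along these lines (the values below are made up; `email` and `groups` are the claims the API server flags in the next section refer to):
```
{
  "iss": "https://dex.example.com:32000",
  "sub": "CgR1c2VyEgZnaXRodWI",
  "aud": "example-app",
  "exp": 1479432000,
  "iat": 1479345600,
  "email": "jane@example.com",
  "email_verified": true,
  "groups": ["team-a", "team-b"]
}
```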
## Configuring the OpenID Connect plugin
Configuring the API server to use the OpenID Connect [authentication plugin][k8s-oidc] requires:
* Deploying an API server with specific flags.
* Dex is running on HTTPS.
  * Custom CA files must be accessible by the API server (likely through volume mounts).
* Dex is accessible to both your browser and the Kubernetes API server.
Use the following flags to point your API server(s) at dex. `dex.example.com` should be replaced by whatever DNS name or IP address dex is running under.
```
--oidc-issuer-url=https://dex.example.com:32000
--oidc-client-id=example-app
--oidc-ca-file=/etc/kubernetes/ssl/openid-ca.pem
--oidc-username-claim=email
--oidc-groups-claim=groups
```
Additional notes:
* The API server configured with OpenID Connect flags doesn't require dex to be available upfront.
  * Other authenticators, such as client certs, can still be used.
  * Dex doesn't need to be running when you start your API server.
* Kubernetes only trusts ID Tokens issued to a single client.
  * As a workaround, dex allows clients to [trust other clients][trusted-peers] to mint tokens on their behalf.
* If a claim other than "email" is used for the username, for example "sub", it will be prefixed by `"(value of --oidc-issuer-url)#"`. This namespaces user-controlled claims, which could otherwise be used for privilege escalation. See the example below.
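For example (illustrative values only), running the API server with `--oidc-username-claim=sub` would produce usernames of the form:
```
https://dex.example.com:32000#CgR1c2VyEgZnaXRodWI
```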
## Deploying dex on Kubernetes
The dex repo contains scripts for running dex on a Kubernetes cluster with authentication through GitHub. The dex service is exposed using a [node port][node-port] on port 32000. This likely requires a custom `/etc/hosts` entry pointed at one of the cluster's workers.
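For example, if one of the worker nodes is reachable at `172.17.4.99` (an illustrative address, substitute your own node's IP), the entry would be:
```
172.17.4.99 dex.example.com
```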
There are many different ways to spin up a Kubernetes development cluster, each with different host requirements and support for API server reconfiguration. At this time, this guide does not have copy-pastable examples, but can recommend the following methods for spinning up a cluster:
* [coreos-kubernetes][coreos-kubernetes] repo for vagrant and VirtualBox users.
* [coreos-baremetal][coreos-baremetal] repo for Linux QEMU/KVM users.
To run dex on Kubernetes, perform the following steps:
1. Generate TLS assets for dex.
2. Spin up a Kubernetes cluster with the appropriate flags and CA volume mount.
3. Create a secret containing your [GitHub OAuth2 client credentials][github-oauth2].
4. Deploy dex.
The TLS assets can be created using the following command:
```
$ cd examples/k8s
$ ./gencert.sh
```
The created `ssl/ca.pem` must then be mounted into your API server deployment. Once the cluster is up and correctly configured, use kubectl to add the serving certs as secrets.
```
$ kubectl create secret tls dex.example.com.tls --cert=ssl/cert.pem --key=ssl/key.pem
```
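How `ssl/ca.pem` ends up at `/etc/kubernetes/ssl/openid-ca.pem` depends on how your API server is deployed. As a rough, hypothetical sketch for an API server run as a static pod, the file can be copied onto the master and exposed through a hostPath volume (the volume name and paths below are illustrative):
```
# Hypothetical fragment of the kube-apiserver pod spec (sketch only).
    volumeMounts:
    - name: ssl-certs-kubernetes
      mountPath: /etc/kubernetes/ssl
      readOnly: true
  volumes:
  - name: ssl-certs-kubernetes
    hostPath:
      path: /etc/kubernetes/ssl
```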
Then create a secret for the GitHub OAuth2 client.
```
$ kubectl create secret \
generic github-client \
--from-literal=client-id=$GITHUB_CLIENT_ID \
--from-literal=client-secret=$GITHUB_CLIENT_SECRET
```
Finally, create the dex deployment, configmap, and node port service.
```
$ kubectl create -f dex.yaml
```
__Caveats:__ No health checking is configured because dex does its own TLS termination, which complicates the setup. This is a known issue and can be tracked [here][dex-healthz].
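Before logging in, it can be worth verifying the pods came up; the label selector below matches the `app: dex` label set in `dex.yaml`:
```
$ kubectl get pods -l app=dex
```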
## Logging into the cluster
The example app can be used to log into the cluster. Choose the GitHub option and grant dex access to view your profile.
```
$ ./bin/example-app --issuer https://dex.example.com:32000 --issuer-root-ca examples/k8s/ssl/ca.pem
```
The printed ID Token can then be used as a bearer token to authenticate against the API server.
```
$ token='(id token)'
$ curl -H "Authorization: Bearer $token" -k https://( API server host ):443/api/v1/nodes
```
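Alternatively, the token can be wired into a kubeconfig user so `kubectl` sends it as a bearer token. This is a sketch: the user and context names are illustrative, and the cluster entry is assumed to already exist in your kubeconfig.
```
$ kubectl config set-credentials oidc-user --token="$token"
$ kubectl config set-context oidc --cluster=<your cluster> --user=oidc-user
$ kubectl --context=oidc get nodes
```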
[k8s-authz]: http://kubernetes.io/docs/admin/authorization/
[k8s-oidc]: http://kubernetes.io/docs/admin/authentication/#openid-connect-tokens
[trusted-peers]: https://godoc.org/github.com/coreos/dex/storage#Client
[coreos-kubernetes]: https://github.com/coreos/coreos-kubernetes/
[coreos-baremetal]: https://github.com/coreos/coreos-baremetal/
[dex-healthz]: https://github.com/coreos/dex/issues/682
[github-oauth2]: https://github.com/settings/applications/new
[node-port]: http://kubernetes.io/docs/user-guide/services/#type-nodeport


@@ -1,124 +0,0 @@
# Running dex as the Kubernetes authenticator
Running dex as the Kubernetes authenticator requires:
* dex is running on HTTPS.
* Your browser can navigate to dex at the same address Kubernetes refers to it as.
To accomplish this locally, these scripts assume you're using the single-host
vagrant setup provided by the [coreos-kubernetes](https://github.com/coreos/coreos-kubernetes)
repo with a couple of changes (a complete diff is provided at the bottom of
this document), namely:
* The API server isn't running on host port 443.
* The virtual machine has a populated `/etc/hosts`
The following entry must be added to your host's `/etc/hosts` file as well as
the VM's.
```
172.17.4.99 dex.example.com
```
In the future this document will provide instructions for a more general
Kubernetes installation.
Once you have Kubernetes configured, create a ConfigMap for dex to use and
deploy dex itself. This runs dex as a deployment, with its configuration coming
from the ConfigMap and its storage backed by Kubernetes ThirdPartyResources,
allowing it to get started.
```
kubectl create configmap dex-config --from-file=config.yaml=config-k8s.yaml
kubectl create -f deployment.yaml
```
To get dex running at an HTTPS endpoint, create an ingress controller, some
self-signed TLS assets, and an ingress rule for dex. These TLS assets would
normally be provided by an actual CA (public or internal).
```
kubectl create -f https://raw.githubusercontent.com/kubernetes/contrib/master/ingress/controllers/nginx/rc.yaml
./gencert.sh
kubectl create secret tls dex.example.com.tls --cert=ssl/cert.pem --key=ssl/key.pem
kubectl create -f dex-ingress.yaml
```
To test that everything has been installed correctly, configure a client
with some credentials and run the `example-app` (run `make` at the top level
of this repo if you haven't already). The second command will error out if the
example-app can't find dex.
```
kubectl create -f client.yaml
../../bin/example-app --issuer https://dex.example.com --issuer-root-ca ssl/ca.pem
```
Navigate to `127.0.0.1:5555` and try to log in. You should be redirected to
`dex.example.com` with lots of TLS errors. Proceed past them, authorize the
`example-app`'s OAuth2 client, and you should be redirected back to the
`example-app` with valid OpenID Connect credentials.
Finally, to configure Kubernetes to use dex as its authenticator, copy
`ssl/ca.pem` to `/etc/kubernetes/ssl/openid-ca.pem` on the VM and update the
API server's manifest at `/etc/kubernetes/manifests/kube-apiserver.yaml` to add
the following flags.
```
--oidc-issuer-url=https://dex.example.com
--oidc-client-id=example-app
--oidc-ca-file=/etc/kubernetes/ssl/openid-ca.pem
--oidc-username-claim=email
--oidc-groups-claim=groups
```
Kick the API server by killing its Docker container; when it comes up again
it should be using dex. Log in again through the `example-app` and you should be
able to use the provided token as a bearer token to hit the Kubernetes API.
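For example (a sketch assuming the vagrant setup described below, where the API
server listens on `172.17.4.99:8443` with a self-signed certificate):
```
token='(id token)'
curl -H "Authorization: Bearer $token" -k https://172.17.4.99:8443/api/v1/nodes
```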
## Changes to coreos-kubernetes
The following is a diff to the [coreos-kubernetes](https://github.com/coreos/coreos-kubernetes)
repo that accomplishes the required changes.
```diff
diff --git a/single-node/user-data b/single-node/user-data
index f419f09..ed42055 100644
--- a/single-node/user-data
+++ b/single-node/user-data
@@ -80,6 +80,15 @@ function init_flannel {
}
function init_templates {
+ local TEMPLATE=/etc/hosts
+ if [ ! -f $TEMPLATE ]; then
+ echo "TEMPLATE: $TEMPLATE"
+ mkdir -p $(dirname $TEMPLATE)
+ cat << EOF > $TEMPLATE
+172.17.4.99 dex.example.com
+EOF
+ fi
+
local TEMPLATE=/etc/systemd/system/kubelet.service
if [ ! -f $TEMPLATE ]; then
echo "TEMPLATE: $TEMPLATE"
@@ -195,7 +204,7 @@ spec:
- --etcd-servers=${ETCD_ENDPOINTS}
- --allow-privileged=true
- --service-cluster-ip-range=${SERVICE_IP_RANGE}
- - --secure-port=443
+ - --secure-port=8443
- --advertise-address=${ADVERTISE_IP}
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
- --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
@@ -211,8 +220,8 @@ spec:
initialDelaySeconds: 15
timeoutSeconds: 15
ports:
- - containerPort: 443
- hostPort: 443
+ - containerPort: 8443
+ hostPort: 8443
name: https
- containerPort: 8080
hostPort: 8080
```


@@ -1,10 +0,0 @@
kind: OAuth2Client
apiVersion: oauth2clients.oidc.coreos.com/v1
metadata:
  name: example-app
  namespace: default
secret: ZXhhbXBsZS1hcHAtc2VjcmV0
redirectURIs:
- http://127.0.0.1:5555/callback
name: Example App


@@ -1,13 +0,0 @@
issuer: https://dex.example.com
storage:
  type: kubernetes
  config:
    inCluster: true
web:
  http: 0.0.0.0:5556
connectors:
- type: mock
  id: mock
  name: Mock


@@ -1,38 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: dex
  name: dex
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: dex
    spec:
      containers:
      - image: quay.io/ericchiang/dex
        name: dex
        command:
        - "/dex"
        - "serve"
        - "/dex/config.yaml"
        env:
        # A value required for dex's Kubernetes client.
        - name: KUBERNETES_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 5556
          name: worker-port
        volumeMounts:
        - name: config-volume
          mountPath: /dex
      volumes:
      - name: config-volume
        configMap:
          name: dex-config


@@ -1,28 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: dex
spec:
  ports:
  - name: dex
    port: 5556
  selector:
    app: dex
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dex
spec:
  tls:
  - secretName: dex.example.com.tls
    hosts:
    - dex.example.com
  rules:
  - host: dex.example.com
    http:
      paths:
      - backend:
          serviceName: dex
          servicePort: 5556
        path: /

examples/k8s/dex.yaml (new file, 106 lines)

@@ -0,0 +1,106 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: dex
  name: dex
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: dex
    spec:
      containers:
      - image: quay.io/coreos/dex:v2.0.0-beta.1
        name: dex
        command: ["/usr/local/bin/dex", "serve", "/etc/dex/cfg/config.yaml"]
        ports:
        - name: https
          containerPort: 5556
        volumeMounts:
        - name: config
          mountPath: /etc/dex/cfg
        - name: tls
          mountPath: /etc/dex/tls
        env:
        - name: GITHUB_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: github-client
              key: client-id
        - name: GITHUB_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: github-client
              key: client-secret
      volumes:
      - name: config
        configMap:
          name: dex
          items:
          - key: config.yaml
            path: config.yaml
      - name: tls
        secret:
          secretName: dex.example.com.tls
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: dex
data:
  config.yaml: |
    issuer: https://dex.example.com:32000
    storage:
      type: kubernetes
      config:
        inCluster: true
    web:
      https: 0.0.0.0:5556
      tlsCert: /etc/dex/tls/tls.crt
      tlsKey: /etc/dex/tls/tls.key
    connectors:
    - type: github
      id: github
      name: GitHub
      config:
        clientID: $GITHUB_CLIENT_ID
        clientSecret: $GITHUB_CLIENT_SECRET
        redirectURI: https://dex.example.com:32000/callback
        org: kubernetes
    oauth2:
      skipApprovalScreen: true
    staticClients:
    - id: example-app
      redirectURIs:
      - 'http://127.0.0.1:5555/callback'
      name: 'Example App'
      secret: ZXhhbXBsZS1hcHAtc2VjcmV0
    enablePasswordDB: true
    staticPasswords:
    - email: "admin@example.com"
      # bcrypt hash of the string "password"
      hash: "$2a$10$33EMT0cVYVlPy6WAMCLsceLYjWhuHpbz5yuZxu/GAFj03J9Lytjuy"
      username: "admin"
      userID: "08a8684b-db88-4b73-90a9-3cd1661f5466"
---
apiVersion: v1
kind: Service
metadata:
  name: dex
spec:
  type: NodePort
  ports:
  - name: dex
    port: 5556
    protocol: TCP
    targetPort: 5556
    nodePort: 32000
  selector:
    app: dex


@@ -1,100 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    web-frontend
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
        name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.2
        name: nginx-ingress-lb
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10249
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 443
          hostPort: 443
        # we expose 18080 to access nginx stats in url /nginx-status
        # this is optional
        - containerPort: 18080
          hostPort: 18080
        args:
        - /nginx-ingress-controller
        - --default-backend-service=default/default-http-backend