initial commit

Tobias Brunner 2021-11-03 15:28:07 +01:00
commit 6ea0ea8dc6
7 changed files with 647 additions and 0 deletions

.gitignore (vendored, new file, 1 line)

@@ -0,0 +1 @@
kubeconfig.yaml

README.md (new file, 132 lines)

@@ -0,0 +1,132 @@
# vcluster on OpenShift
See also https://github.com/loft-sh/vcluster/issues/171
## Notes
* The images do not run properly as non-root; as a workaround, the affected paths are made writable via several `emptyDir` mounts (see `values.yaml`).
* The default K3s CoreDNS deployment doesn't work on the OpenShift host cluster, so a custom deployment (`in-cluster/coredns.yaml`) is used instead.
## Installation
- Create the OpenShift project: `oc new-project tobru-vcluster-poc`
- Create the vcluster: `vcluster create vcluster-1 -n tobru-vcluster-poc -f values.yaml`
- Get the kubeconfig: `oc get secret vc-vcluster-1 --template='{{.data.config}}' | base64 -d > kubeconfig.yaml`
- Get the CA for re-encryption of the Route: `kubectl --kubeconfig=$(pwd)/kubeconfig.yaml config view -o jsonpath='{.clusters[].cluster.certificate-authority-data}' --flatten | base64 -d`
- Edit `host-cluster/route.yaml` to include the retrieved CA, then create the Route: `oc apply -f host-cluster/route.yaml`
- Remove the CA from the kubeconfig (the Route presents a Let's Encrypt certificate): `kubectl --kubeconfig=$(pwd)/kubeconfig.yaml config unset clusters.local.certificate-authority-data`
- Set the kubeconfig: `export KUBECONFIG=$(pwd)/kubeconfig.yaml`
- Install the custom CoreDNS in the vcluster: `kubectl apply -f in-cluster/coredns.yaml`
- Configure OIDC: `kubectl config set-credentials ...` (see below)
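The `base64 -d` in the kubeconfig step simply undoes the base64 encoding Kubernetes applies to all Secret `data` values. A minimal Python sketch of that decode (the `kubeconfig_from_secret` helper and the inline example secret are hypothetical, for illustration only):

```python
import base64
import json

def kubeconfig_from_secret(secret_json: str) -> str:
    """Extract and decode the vcluster kubeconfig from a Secret's JSON.

    Secret .data values are base64-encoded; this is what the
    `--template='{{.data.config}}' | base64 -d` pipeline undoes.
    """
    secret = json.loads(secret_json)
    return base64.b64decode(secret["data"]["config"]).decode()

# Hypothetical Secret payload, shaped like `oc get secret ... -o json`:
example = json.dumps({
    "data": {"config": base64.b64encode(b"apiVersion: v1\nkind: Config\n").decode()}
})
print(kubeconfig_from_secret(example))  # apiVersion: v1 / kind: Config
```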
## OIDC Authentication
* Blog: https://aaron-pejakovic.medium.com/kubernetes-authenticating-to-your-cluster-using-keycloak-eba81710f49b
* K8s Docs: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens
* kubectl plugin: https://github.com/int128/kubelogin
### vcluster config
```yaml
vcluster:
  baseArgs:
  [...]
  - --kube-controller-manager-arg=controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle
  - --kube-apiserver-arg=oidc-client-id=tobru-vcluster-test
  - --kube-apiserver-arg=oidc-groups-claim=groups
  - --kube-apiserver-arg=oidc-issuer-url=https://id.dev.appuio.cloud/auth/realms/appuio-cloud-dev
  - --kube-apiserver-arg=oidc-username-claim=email
```
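K3s passes each `--kube-apiserver-arg=key=value` entry through to its embedded kube-apiserver as `--key=value`, which is how the OIDC flags above reach the API server. A small illustrative sketch of that mapping (`apiserver_flags` is a hypothetical helper, not actual K3s code):

```python
def apiserver_flags(base_args):
    """Collect kube-apiserver flags from K3s-style baseArgs entries."""
    prefix = "--kube-apiserver-arg="
    flags = {}
    for arg in base_args:
        if arg.startswith(prefix):
            # Split only at the first '=' so values may contain '=' themselves.
            key, _, value = arg[len(prefix):].partition("=")
            flags[key] = value
    return flags

args = [
    "--kube-apiserver-arg=oidc-client-id=tobru-vcluster-test",
    "--kube-apiserver-arg=oidc-groups-claim=groups",
    "--kube-apiserver-arg=oidc-username-claim=email",
]
print(apiserver_flags(args)["oidc-groups-claim"])  # groups
```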
### kubectl plugin
```
# kubectl oidc-login setup --oidc-issuer-url=https://id.dev.appuio.cloud/auth/realms/appuio-cloud-dev --oidc-client-id=tobru-vcluster-test --oidc-client-secret=63410f24-0721-447b-a290-4b0169c414e0
authentication in progress...

## 2. Verify authentication
You got a token with the following claims:

{
  "exp": 000,
  "iat": 000,
  "auth_time": 000,
  "jti": "XXX",
  "iss": "https://id.dev.appuio.cloud/auth/realms/appuio-cloud-dev",
  "aud": "tobru-vcluster-test",
  "sub": "UUID",
  "typ": "ID",
  "azp": "tobru-vcluster-test",
  "nonce": "XXX",
  "session_state": "XXX",
  "at_hash": "XXX",
  "acr": "1",
  "sid": "XXX",
  "email_verified": true,
  "name": "Tobias Brunner",
  "groups": [
    "admin"
  ],
  "preferred_username": "tobias.brunner",
  "given_name": "Tobias",
  "family_name": "Brunner",
  "email": "tobias.brunner@vshn.net"
}

## 3. Bind a cluster role
Run the following command:

    kubectl create clusterrolebinding oidc-cluster-admin --clusterrole=cluster-admin --user='XXX'

## 4. Set up the Kubernetes API server
Add the following options to the kube-apiserver:

    --oidc-issuer-url=https://id.dev.appuio.cloud/auth/realms/appuio-cloud-dev
    --oidc-client-id=tobru-vcluster-test

## 5. Set up the kubeconfig
Run the following command:

    kubectl config set-credentials oidc \
      --exec-api-version=client.authentication.k8s.io/v1beta1 \
      --exec-command=kubectl \
      --exec-arg=oidc-login \
      --exec-arg=get-token \
      --exec-arg=--oidc-issuer-url=https://id.dev.appuio.cloud/auth/realms/appuio-cloud-dev \
      --exec-arg=--oidc-client-id=tobru-vcluster-test \
      --exec-arg=--oidc-client-secret=63410f24-0721-447b-a290-4b0169c414e0

## 6. Verify cluster access
Make sure you can access the Kubernetes cluster.

    kubectl --user=oidc get nodes

You can switch the default context to oidc.

    kubectl config set-context --current --user=oidc

You can share the kubeconfig to your team members for on-boarding.

# kubectl --kubeconfig ./kubeconfig.yaml config set-credentials oidc \
    --exec-api-version=client.authentication.k8s.io/v1beta1 \
    --exec-command=kubectl \
    --exec-arg=oidc-login \
    --exec-arg=get-token \
    --exec-arg=--oidc-issuer-url=https://id.dev.appuio.cloud/auth/realms/appuio-cloud-dev \
    --exec-arg=--oidc-client-id=tobru-vcluster-test \
    --exec-arg=--oidc-client-secret=63410f24-0721-447b-a290-4b0169c414e0
# kubectl --kubeconfig ./kubeconfig.yaml --user=oidc get po -A
```
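The claims printed in step 2 are simply the base64url-decoded middle segment of the OIDC ID token; the apiserver then takes the username from the configured claim and group membership from `groups`. A minimal sketch of that decode, using a hand-built unsigned token for illustration (real ID tokens are signed; never skip signature verification outside of debugging):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT WITHOUT verifying its signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Hypothetical unsigned token (alg "none"), built only to show the decode:
claims = {"email": "tobias.brunner@vshn.net", "groups": ["admin"]}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"eyJhbGciOiJub25lIn0.{body}."
print(jwt_claims(token)["groups"])  # ['admin']
```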
## Keycloak client
* "Access Type": confidential
* Valid Redirect URIs (used by the kubelogin plugin's local callback server):
  * `http://localhost:8000`
  * `http://localhost:18000`

host-cluster/route.yaml (new file, 28 lines)

@@ -0,0 +1,28 @@
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: vcluster
  namespace: tobru-vcluster-poc
spec:
  host: vcluster-poc.apps.cloudscale-lpg-1.appuio.cloud
  port:
    targetPort: https
  tls:
    destinationCACertificate: |-
      -----BEGIN CERTIFICATE-----
      MIIBdzCCAR2gAwIBAgIBADAKBggqhkjOPQQDAjAjMSEwHwYDVQQDDBhrM3Mtc2Vy
      dmVyLWNhQDE2MzU5NDkzNDQwHhcNMjExMTAzMTQyMjI0WhcNMzExMTAxMTQyMjI0
      WjAjMSEwHwYDVQQDDBhrM3Mtc2VydmVyLWNhQDE2MzU5NDkzNDQwWTATBgcqhkjO
      PQIBBggqhkjOPQMBBwNCAASvwosJwebm6BfvLa5SmRljewWxmtxrEVqiLxxylpi0
      HnRD9Mf+V51woXJnLD67ZudhtNi9Yo5aMUJRCCUmKgNTo0IwQDAOBgNVHQ8BAf8E
      BAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQURg8+LoXFHvDbK4p7cA3m
      6jhd0gAwCgYIKoZIzj0EAwIDSAAwRQIhAK51H0HiF+MmKDpHxZa4QsmaKhJmibZx
      Y3ulMnr5JBnaAiBfVaJANaLLYex+HHncQf/O1BG8+ksezljAQYTyVCEFiw==
      -----END CERTIFICATE-----
    insecureEdgeTerminationPolicy: None
    termination: reencrypt
  to:
    kind: Service
    name: vcluster-1
    weight: 100
  wildcardPolicy: None
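`destinationCACertificate` must be the exact PEM CA retrieved from the vcluster kubeconfig, or the router cannot verify the re-encrypted backend connection. The stdlib `ssl` module can sanity-check that a pasted block is well-formed PEM (the certificate below is the one from the Route above):

```python
import ssl

# The k3s server CA copied from host-cluster/route.yaml.
PEM = """-----BEGIN CERTIFICATE-----
MIIBdzCCAR2gAwIBAgIBADAKBggqhkjOPQQDAjAjMSEwHwYDVQQDDBhrM3Mtc2Vy
dmVyLWNhQDE2MzU5NDkzNDQwHhcNMjExMTAzMTQyMjI0WhcNMzExMTAxMTQyMjI0
WjAjMSEwHwYDVQQDDBhrM3Mtc2VydmVyLWNhQDE2MzU5NDkzNDQwWTATBgcqhkjO
PQIBBggqhkjOPQMBBwNCAASvwosJwebm6BfvLa5SmRljewWxmtxrEVqiLxxylpi0
HnRD9Mf+V51woXJnLD67ZudhtNi9Yo5aMUJRCCUmKgNTo0IwQDAOBgNVHQ8BAf8E
BAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQURg8+LoXFHvDbK4p7cA3m
6jhd0gAwCgYIKoZIzj0EAwIDSAAwRQIhAK51H0HiF+MmKDpHxZa4QsmaKhJmibZx
Y3ulMnr5JBnaAiBfVaJANaLLYex+HHncQf/O1BG8+ksezljAQYTyVCEFiw==
-----END CERTIFICATE-----"""

# PEM_cert_to_DER_cert base64-decodes the body; it raises on a
# malformed header/footer or corrupt base64 content.
der = ssl.PEM_cert_to_DER_cert(PEM)
assert der[0] == 0x30  # every DER certificate starts with a SEQUENCE tag
print(len(der), "DER bytes")  # 379 DER bytes
```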

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-role-binding
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: admin
  apiGroup: rbac.authorization.k8s.io

@@ -0,0 +1,219 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: CoreDNS
  name: coredns
  namespace: kube-system
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
    spec:
      containers:
      - args:
        - -conf
        - /etc/coredns/Corefile
        image: rancher/mirrored-coredns-coredns:1.8.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/coredns
          name: config-volume
          readOnly: true
      dnsPolicy: Default
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: coredns
      serviceAccountName: coredns
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          - key: NodeHosts
            path: NodeHosts
          name: coredns
        name: config-volume
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
  name: kube-dns
  namespace: kube-system
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  - name: metrics
    port: 9153
    protocol: TCP
    targetPort: 9153
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP

in-cluster/coredns.yaml (new file, 207 lines)

@@ -0,0 +1,207 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
data:
  Corefile: |
    .:8053 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: CoreDNS
  name: coredns
  namespace: kube-system
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
    spec:
      containers:
      - args:
        - -conf
        - /etc/coredns/Corefile
        image: rancher/mirrored-coredns-coredns:1.8.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: coredns
        ports:
        - containerPort: 8053
          name: dns
          protocol: UDP
        - containerPort: 8053
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/coredns
          name: config-volume
          readOnly: true
      dnsPolicy: Default
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: coredns
      serviceAccountName: coredns
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          name: coredns
        name: config-volume
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
  name: kube-dns
  namespace: kube-system
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 8053
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 8053
  - name: metrics
    port: 9153
    protocol: TCP
    targetPort: 9153
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
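Unlike the host-cluster variant, this in-cluster CoreDNS binds the unprivileged port 8053 and does not request the `NET_BIND_SERVICE` capability, presumably because the pods run as non-root under OpenShift's restricted defaults; the `kube-dns` Service still exposes port 53 and forwards it to `targetPort: 8053`. A quick consistency check over the three places that must agree (values inlined as plain Python data rather than parsing the YAML):

```python
# Port wiring copied from the in-cluster CoreDNS manifests above.
corefile_listen = 8053  # Corefile server block: ".:8053 { ... }"
container_ports = {"dns": 8053, "dns-tcp": 8053, "metrics": 9153}
service_ports = [
    # (port name, Service port, targetPort)
    ("dns", 53, 8053),
    ("dns-tcp", 53, 8053),
    ("metrics", 9153, 9153),
]

# Each Service targetPort must match the container port of the same name,
# and the DNS target must be the port the Corefile actually binds.
for name, _port, target in service_ports:
    assert container_ports[name] == target, f"mismatch for {name}"
assert container_ports["dns"] == corefile_listen
print("port wiring consistent")
```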

values.yaml (new file, 48 lines)

@@ -0,0 +1,48 @@
vcluster:
  image: rancher/k3s:v1.22.2-k3s1
  baseArgs:
  - server
  - --write-kubeconfig=/k3s-config/kube-config.yaml
  - --data-dir=/data
  - --disable=traefik,servicelb,metrics-server,local-storage,coredns
  - --disable-network-policy
  - --disable-agent
  - --disable-scheduler
  - --disable-cloud-controller
  - --flannel-backend=none
  - --kube-controller-manager-arg=controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle
  - --kube-apiserver-arg=oidc-client-id=tobru-vcluster-test
  - --kube-apiserver-arg=oidc-groups-claim=groups
  - --kube-apiserver-arg=oidc-issuer-url=https://id.dev.appuio.cloud/auth/realms/appuio-cloud-dev
  - --kube-apiserver-arg=oidc-username-claim=preferred_username
  volumeMounts:
  - mountPath: /data
    name: data
  - mountPath: /k3s-config
    name: k3s-config
  - mountPath: /.kube
    name: kubeconfig
syncer:
  extraArgs:
  - --tls-san=vcluster-poc.apps.cloudscale-lpg-1.appuio.cloud
  - --out-kube-config-server=https://vcluster-poc.apps.cloudscale-lpg-1.appuio.cloud
  volumeMounts:
  - mountPath: /data
    name: data
  - mountPath: /.kube
    name: kubeconfig
  - mountPath: /root
    name: roothome
  - mountPath: /var/lib/vcluster
    name: vclusterdata
volumes:
- name: data
  emptyDir: {}
- name: k3s-config
  emptyDir: {}
- name: kubeconfig
  emptyDir: {}
- name: roothome
  emptyDir: {}
- name: vclusterdata
  emptyDir: {}