I wrote about Rook storage a few weeks ago, but maybe you already have a Ceph cluster running in your datacenter. Or maybe you prefer to run Ceph on separate nodes, without Kubernetes. Also, Rook is currently alpha software and not ready for production use. I would assume that this large Ceph cluster, if you have one, is also used for other services outside Kubernetes. Whatever the case, it is simple to connect Ceph and Kubernetes to provision persistent volumes on Kubernetes.
Previous blog post
The RBD client is used for the interaction between Kubernetes and Ceph. Unfortunately, it is not available in the official kube-controller-manager
container. You could change the kube-controller-manager image to include RBD, but that is not recommended. Instead, I will use the external storage plugin for Ceph. This creates a separate rbd-provisioner pod which has rbd
installed. My Kubernetes test cluster is RBAC enabled. If yours is not, you can create only the Deployment resource and skip the rest; in that case, don't forget to delete the service account from the deployment definition. Let's create all resources for rbd-provisioner with RBAC in the kube-system
namespace:
⚡ cat <<EOF | kubectl create -n kube-system -f -
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner
EOF
Please check that the quay.io/external_storage/rbd-provisioner:latest
image has the same Ceph version installed as your Ceph cluster. You can check it like this on any machine running Docker:
⚡ docker pull quay.io/external_storage/rbd-provisioner:latest
⚡ docker history quay.io/external_storage/rbd-provisioner:latest | grep CEPH_VERSION
<missing>   15 hours ago   /bin/sh -c #(nop) ENV CEPH_VERSION=luminous   0B
Wait a few minutes for the RBD volume provisioner to be up and running:
⚡ kubectl get pods -l app=rbd-provisioner -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
rbd-provisioner-77d75fdc5b-mpbpn   1/1     Running   1          1m
The RBD volume provisioner needs the admin key from Ceph to provision storage. To get the admin key from the Ceph cluster, use this command:
sudo ceph --cluster ceph auth get-key client.admin
NOTE: Run all commands that start with sudo
on the Ceph MON node. Also, I'm using the Jewel release of Ceph, and rbd-provisioner
is based on Jewel as well.
Then add this key to Kubernetes secrets:
⚡ kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" --from-literal=key='AQBwruNY/lEmCxAAKS7tzZHSforkUE85htnA/g==' --namespace=kube-system
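Kubernetes stores the key base64-encoded in the Secret's data field, so don't be surprised when kubectl get secret shows a different string. A quick local round-trip check, using the example key from above (not a real credential):

```shell
# Base64-encode the key the same way Kubernetes stores it in the Secret,
# then decode it back to verify the round trip. Example key from this post.
KEY='AQBwruNY/lEmCxAAKS7tzZHSforkUE85htnA/g=='
ENCODED=$(printf '%s' "$KEY" | base64 | tr -d '\n')
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$ENCODED"
```

Decoding the value from the Secret with base64 -d should always give back the exact key, including the trailing ==.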
I will also create a separate Ceph pool for Kubernetes and a new client key, as this Ceph cluster has cephx
authentication enabled:
sudo ceph --cluster ceph osd pool create kube 1024 1024
sudo ceph --cluster ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
sudo ceph --cluster ceph auth get-key client.kube
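The placement group count of 1024 is not arbitrary. A common rule of thumb is (number of OSDs × 100) / pool replica count, rounded up to the next power of two; the OSD and replica counts below are assumed values for illustration only:

```shell
# Rule of thumb: pg_num = (OSDs * 100) / replicas, rounded up to a power of two.
# 24 OSDs with 3 replicas are assumptions for this sketch, not values from the post.
OSDS=24
REPLICAS=3
RAW=$(( OSDS * 100 / REPLICAS ))              # 800 raw PGs for this example
PG=1
while [ "$PG" -lt "$RAW" ]; do PG=$(( PG * 2 )); done
echo "$PG"                                    # rounds up to 1024 here
```

Undersized pg_num hurts data distribution and oversized pg_num wastes OSD resources, so it's worth doing this arithmetic for your own cluster rather than copying the 1024 above.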
Add the new client secret for kube pool into Kubernetes secrets:
⚡ kubectl create secret generic ceph-secret-kube --type="kubernetes.io/rbd" --from-literal=key='AQC/c+dYsXNUNBAAMTEW1/WnzXdmDZIBhcw6ug==' --namespace=kube-system
When both secrets are present, create the new storage class. Let's call it fast-rbd:
⚡ cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: <monitor-1-ip>:6789, <monitor-2-ip>:6789, <monitor-3-ip>:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-kube
  userSecretNamespace: kube-system
  imageFormat: "2"
  imageFeatures: layering
EOF
And the last step is to create a simple PVC to test the RBD volume provisioner:
⚡ cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: fast-rbd
EOF
That's it, the new volume has been created on the Ceph cluster:
⚡ kubectl get pvc myclaim
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myclaim   Bound    pvc-11559e19-2541-11e8-94dc-525400474652   8Gi        RWO            fast-rbd       1h
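To actually consume the bound volume, reference the claim from a pod. A minimal sketch, assuming a busybox image and the pod name rbd-test-pod (both are illustrative choices, not part of the original setup):

```shell
# Minimal pod mounting the PVC created above; pod name and image are assumptions.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myclaim
EOF
```

Once the pod is running, anything written under /data lands on the RBD image in the kube pool.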
For any troubleshooting, run kubectl describe pvc
on the particular PVC.
This was a simple how-to guide to help you connect Ceph and Kubernetes. The RBD volume provisioner is simple to deploy, but I might create a Helm chart for it later. Stay tuned for the next one.