Using Existing Ceph Cluster for Kubernetes Persistent Storage
I wrote about Rook storage a few weeks ago, but maybe you already have a Ceph cluster running in your datacenter. Or you may prefer to run Ceph on separate nodes, without Kubernetes. Also, Rook is currently alpha software and not ready for production use. I would assume that this large Ceph cluster, if you have one, is also used for other services outside Kubernetes. Whatever the case, it is simple to connect Ceph and Kubernetes to provision persistent volumes on Kubernetes.

Previous blog post

Connect Ceph and Kubernetes

The RBD client is used for the interaction between Kubernetes and Ceph. Unfortunately, it is not available in the official kube-controller-manager container. You could change the kube-controller-manager image to include RBD, but that is not recommended. Instead, I will use the external storage plugin for Ceph. This creates a separate rbd-provisioner pod which has rbd installed. My Kubernetes test cluster is RBAC enabled. If yours is not, you can create only the Deployment resource and skip the rest; in that case, don't forget to delete the service account from the deployment definition. Let's create all resources for rbd-provisioner with RBAC in the kube-system namespace:

⚡ cat <<EOF | kubectl create -n kube-system -f -
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner
EOF

Please check that the quay.io/external_storage/rbd-provisioner:latest image has the same Ceph version installed as your Ceph cluster. You can check it like this on any machine running Docker:

⚡ docker pull quay.io/external_storage/rbd-provisioner:latest
⚡ docker history quay.io/external_storage/rbd-provisioner:latest | grep CEPH_VERSION
<missing>           15 hours ago        /bin/sh -c #(nop)  ENV CEPH_VERSION=luminous    0B
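
To compare with the cluster side, printing the version of the Ceph packages installed on a MON node is a quick sanity check:

sudo ceph --version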

Wait a few minutes for RBD volume provisioner to be up and running:

⚡ kubectl get pods -l app=rbd-provisioner -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
rbd-provisioner-77d75fdc5b-mpbpn   1/1       Running   1          1m

The RBD volume provisioner needs the admin key from Ceph to provision storage. To get the admin key from the Ceph cluster, use this command:

sudo ceph --cluster ceph auth get-key client.admin

NOTE: Run all commands that start with sudo on a Ceph MON node. Also, I'm using the Jewel version of Ceph, and rbd-provisioner is based on Jewel as well.

Then add this key to Kubernetes secrets:

⚡ kubectl create secret generic ceph-secret \
    --type="kubernetes.io/rbd" \
    --from-literal=key='AQBwruNY/lEmCxAAKS7tzZHSforkUE85htnA/g==' \
    --namespace=kube-system
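
To verify that the key landed in the secret intact, you can decode it back out (assuming GNU coreutils; on macOS the base64 flag is -D instead of --decode):

⚡ kubectl get secret ceph-secret -n kube-system -o jsonpath='{.data.key}' | base64 --decode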

I will also create a separate Ceph pool for Kubernetes and a new client key, as this Ceph cluster has cephx authentication enabled:

sudo ceph --cluster ceph osd pool create kube 1024 1024
sudo ceph --cluster ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
sudo ceph --cluster ceph auth get-key client.kube
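
Optionally, confirm on the MON node that the pool exists and that the new client has the expected capabilities:

sudo ceph --cluster ceph osd lspools
sudo ceph --cluster ceph auth get client.kube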

Add the new client secret for the kube pool into Kubernetes secrets:

⚡ kubectl create secret generic ceph-secret-kube \
    --type="kubernetes.io/rbd" \
    --from-literal=key='AQC/c+dYsXNUNBAAMTEW1/WnzXdmDZIBhcw6ug==' \
    --namespace=kube-system

When both secrets are present, create the new storage class. Let's call it fast-rbd:

⚡ cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: <monitor-1-ip>:6789, <monitor-2-ip>:6789, <monitor-3-ip>:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-kube
  userSecretNamespace: kube-system
  imageFormat: "2"
  imageFeatures: layering
EOF
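
If you want PVCs that don't specify a storageClassName to use this class, you can mark it as the default (note: on older clusters you may need the beta annotation storageclass.beta.kubernetes.io/is-default-class instead):

⚡ kubectl patch storageclass fast-rbd -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'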

And the last step is to create a simple PVC to test the RBD volume provisioner:

⚡ cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: fast-rbd
EOF

That's it, the new volume has been created on the Ceph cluster:

⚡ kubectl get pvc myclaim
NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myclaim   Bound     pvc-11559e19-2541-11e8-94dc-525400474652   8Gi        RWO            fast-rbd       1h
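
To confirm that a pod can actually attach and mount the volume, a minimal test pod along these lines should work (the pod name and busybox image are just illustrative):

⚡ cat <<EOF | kubectl create -f -
kind: Pod
apiVersion: v1
metadata:
  name: rbd-test-pod        # illustrative name
spec:
  containers:
  - name: app
    image: busybox          # any small image will do
    command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data      # the RBD volume is mounted here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myclaim    # the PVC created above
EOF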

For any troubleshooting, run kubectl describe pvc on the particular PVC and check the provisioner logs.
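
For example, for the PVC from this guide (the label selector matches the deployment above):

⚡ kubectl describe pvc myclaim
⚡ kubectl logs -n kube-system -l app=rbd-provisioner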

Summary

This was a simple how-to guide to help you connect Ceph and Kubernetes. The RBD volume provisioner is simple to deploy, but I might create a Helm chart for it later anyway. Stay tuned for the next one.
