Install ceph-common on every k8s node so that the rbd command is available.
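A minimal install/verify sketch, assuming a CentOS node with a Ceph yum repository already configured (not stated in the original notes):
    # install the Ceph client tools that ship the rbd binary
    yum install -y ceph-common
    # confirm the rbd command is on the PATH
    rbd --version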
Install the Secret (cluster-level)
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: kubernetes.io/rbd  # very important: without this the StorageClass will not recognize the secret; the docs' sample omits it, but the example includes it
data:
  key: QVFDZ2ZOOVkza3VyR3hBQXNYYmx6Mi9xVlBZNzN0VWZvMUlFRlE9PQ==  # ceph auth get-key client.admin | base64
Install the StorageClass (cluster-level)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.1.4.208:6789
  adminId: admin
  adminSecretName: ceph-secret
  pool: rbd
  userId: admin
  userSecretName: ceph-secret
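A hedged apply/verify sketch; the file names ceph-secret.yaml and storageclass-fast.yaml are assumptions, not from the original notes:
    kubectl create -f ceph-secret.yaml        # the Secret above
    kubectl create -f storageclass-fast.yaml  # the StorageClass above
    kubectl get storageclass fast             # should list the kubernetes.io/rbd provisioner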
Install the PVC (namespace-level)
- Requests a PV from the configured storageClass
- Once bound, the PV acts as the volume; what the pod mounts is a reference to the PVC
- Currently only the size can be requested; with code changes things such as fsType could also be made configurable, but that is not supported today
- The mount is done through docker HostConfig.Binds (the binds can be inspected as shown right after this list), so:
  - on the node, mount shows: /dev/rbd0 on /var/lib/kubelet/plugins/kubernetes.io/rbd/rbd/rbd-image-kubernetes-dynamic-pvc-59474c44-1a77-11e7-8b1a-fa163e0dfa6d type ext4 (rw,relatime,stripe=4096,data=ordered)
  - HostConfig.Binds: "/var/lib/kubelet/pods/b84a50db-1aa7-11e7-a0c8-fa163e0dfa6d/volumes/kubernetes.io~rbd/pvc-3c34a5f2-1a95-11e7-a0c8-fa163e0dfa6d:/mnt/rbd-rox"
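A small sketch of how those binds can be inspected on the node; the rbd-rw container name filter is an assumption based on the pod spec below:
    docker ps | grep rbd-rw                                      # find the pod's container
    docker inspect --format '{{json .HostConfig.Binds}}' <container-id>   # dump its bind mounts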
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-rox
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 100Mi
  storageClassName: fast
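A hedged verification sketch for the dynamic provisioning, assuming the PVC manifest above is saved as pvc-rox.yaml (file name is an assumption):
    kubectl create -f pvc-rox.yaml
    kubectl get pvc pvc-rox   # should reach Bound once the provisioner has created the rbd image
    kubectl get pv            # shows the dynamically created PV backed by the rbd pool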
Install the pod:
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test5
spec:
  containers:
    - name: rbd-rw
      image: docker.cloudin.com/cloudin/alpine:latest
      command: ["/bin/sleep", "10000"]
      volumeMounts:
        - mountPath: "/mnt/rbd"
          name: rbd-rox  # subPath can also be configured to use a subdirectory of this PVC
  volumes:
    - name: rbd-rox
      persistentVolumeClaim:
        claimName: pvc-rox
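A hedged check that the RBD volume is really mounted inside the pod (assumes the alpine image provides busybox df/mount):
    kubectl exec rbd-test5 -- df -h /mnt/rbd       # should show a /dev/rbdX device
    kubectl exec rbd-test5 -- mount | grep rbd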
Troubleshooting:
- Check images that hold locks: rbd lock list aaa.img --pool kube
- Check the rbd map state on the current node: rbd showmapped
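If an image is stuck because of a stale lock (e.g. after a node crash), the lock can be released; a hedged sketch, where the lock id and locker values come from the lock list output:
    rbd lock list aaa.img --pool kube
    # remove the stale lock using the lock id and locker shown by the previous command
    rbd lock remove aaa.img <lock-id> <locker> --pool kube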
- An OSD node can be backed either by a directory (on the host's native fs) or by a block device; if the native fs is ext4, you may run into: File name too long
- Fix by adjusting ceph.conf:
- osd max object name len = 256
- osd max object namespace len = 64
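A minimal sketch of where these lines could live; placing them in the [global] section of /etc/ceph/ceph.conf on every OSD node is an assumption of this note:
    [global]
    osd max object name len = 256
    osd max object namespace len = 64
The OSD daemons need to be restarted afterwards for the change to take effect.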
- rbd
  - stability depends on ceph
  - only ROX / RWO are supported
  - if RWX is needed, use CephFS instead (filesystem-level mutual exclusion locks)
- ceph: an rbd image is created with the features layering, exclusive-lock, object-map, fast-diff and deep-flatten by default, but current kernels only support layering, so:
  - Change the default: add the line rbd_default_features = 1 to /etc/ceph/ceph.conf on every ceph node; images created afterwards will carry only that one feature
  - Verify with: ceph --show-config | grep rbd | grep features
    - rbd_default_features = 1
- ceph osd crush show-tunables -f json-pretty
- ceph osd crush tunables legacy
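For images that were already created with the extra features before the config change, the commonly cited fix is to disable the unsupported features per image; a hedged sketch reusing the kube pool and aaa.img image from the lock example above:
    # drop the features the kernel rbd client cannot handle, leaving only layering
    rbd feature disable kube/aaa.img exclusive-lock object-map fast-diff deep-flatten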
- k8s < 1.6 does not support storageClassName in a PVC
- For k8s 1.6, docker is best kept at 1.12, which is available in the default yum repositories on CentOS 7
- If k8s was deployed with kubeadm, the controller-manager runs inside a container
- failed to create rbd image: executable file not found in $PATH
- rbd needs to be installed inside that container, or the controller-manager needs to run directly on the host
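To confirm this is the failure being hit, the provisioning error surfaces in the PVC events; a small sketch reusing the pvc-rox claim from above:
    kubectl describe pvc pvc-rox                                  # events include the "failed to create rbd image" message
    kubectl -n kube-system get pods | grep controller-manager     # shows the containerized controller-manager on kubeadm clusters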
- Follow the logs: journalctl -fu kubelet / journalctl -f
- download.sh (pulls images from a mirror and retags them as gcr.io/google_containers):
    docker pull zeewell/$1
    docker tag zeewell/$1 gcr.io/google_containers/$1
- Images that need to be downloaded:
    bash download.sh etcd-amd64:3.0.17
    bash download.sh kube-controller-manager-amd64:v1.6.0
    bash download.sh kube-proxy-amd64:v1.6.0
    bash download.sh k8s-dns-sidecar-amd64:1.14.1
    bash download.sh k8s-dns-kube-dns-amd64:1.14.1
    bash download.sh k8s-dns-dnsmasq-nanny-amd64:1.14.1
- vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
- Comment out: #Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
- Add --cgroup-driver=systemd to KUBELET_KUBECONFIG_ARGS
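After editing the drop-in, systemd has to pick up the change; a minimal sketch assuming the standard kubelet unit name:
    systemctl daemon-reload
    systemctl restart kubelet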
refs:
- http://foxhound.blog.51cto.com/1167932/1899545
- http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
- http://docs.ceph.com/docs/master/install/manual-deployment/
- https://github.com/kubernetes/kubernetes/tree/master/examples/persistent-volume-provisioning/rbd
- http://www.tuicool.com/articles/vQr6zaV
- http://www.tuicool.com/articles/feyiMr6
- http://tonybai.com/2016/11/07/integrate-kubernetes-with-ceph-rbd/
- http://webcache.googleusercontent.com/search?q=cache:XESMwMuMZTEJ:lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/002815.html+&cd=1&hl=en&ct=clnk
- https://kubernetes.io/docs/concepts/storage/persistent-volumes/