Storage concepts
PV (PersistentVolume): a persistent volume, backed by storage such as Ceph or NFS
PVC (PersistentVolumeClaim): a claim for persistent storage, used to request and bind a PV
StorageClass: defines a class of storage and enables dynamic provisioning of storage resources
Environment setup
First set up an NFS server (not covered in detail here), and install the NFS services on the Kubernetes nodes as well:

```shell
systemctl enable --now rpcbind
systemctl enable --now nfs
```
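On the NFS server side, the share directory must also be exported before the cluster can mount it. A minimal sketch of an `/etc/exports` entry, assuming the `/data/k8s` path used below and a `192.168.1.0/24` node subnet (both are assumptions — adjust to your environment):

```
# /etc/exports on the NFS server (hypothetical subnet)
/data/k8s  192.168.1.0/24(rw,sync,no_root_squash)
```

After editing the file, `exportfs -arv` reloads the export table.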
PV attributes:
A PV's main attributes are Capacity (storage capacity), AccessModes (access modes), and ReclaimPolicy (reclaim policy).
The basic capacity field is storage: "<size>", e.g. storage: 1Gi.
There are three access modes:
ReadWriteOnce (RWO): read-write, but mountable by only a single node
ReadOnlyMany (ROX): read-only, mountable by multiple nodes
ReadWriteMany (RWX): read-write, mountable by multiple nodes
Note: different storage backends support different access modes; consult the relevant documentation.
Reclaim policies:
Retain - keep the data; an administrator must clean it up manually
Recycle - scrub the data in the PV, equivalent to running rm -rf /yourdir/*
Delete - the backing storage deletes the volume along with the PV
PV states
A PV is normally in one of the following states:
Available: the PV is free and not yet bound to any PVC
Bound: the PV has been bound to a PVC
Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
Failed: automatic reclamation of the PV has failed
PV in practice
Next, create a PV object (pv1.yaml):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/k8s
    server: 192.168.1.42
```
Apply it:

```shell
kubectl create -f pv1.yaml
persistentvolume "pv1" created
```
Create a PVC

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
Note: the PVC automatically finds a PV in the Available state; no extra declaration is needed. If the PV is 2Gi and the PVC requests only 1Gi, they will still bind, and the PVC's reported capacity becomes 2Gi.
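If you want to skip the automatic matching and bind a claim to one specific PV, the PVC spec also accepts a volumeName field. A minimal sketch, assuming the pv1 volume created above (the claim name here is hypothetical):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs-pinned   # hypothetical name
spec:
  volumeName: pv1        # bind explicitly to this PV instead of auto-matching
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```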
Using the PVC as storage for a service

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          subPath: nginx-test
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-nfs
```
nginx-service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: web
  selector:
    app: nginx
```
nginx-ingress.yaml

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: nginx-route
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`nginx.k8s.local`)
    kind: Rule
    services:
    - name: nginx
      port: 80
```
StorageClass in practice
Create the nfs-client provisioner Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.1.42
        - name: NFS_PATH
          value: /data/k8s
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.42
          path: /data/k8s
```
Create the nfs-client ServiceAccount and RBAC rules:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```
This creates a ServiceAccount and binds it to a ClusterRole that declares the permissions the provisioner needs.
Create the StorageClass object; note that provisioner must match the PROVISIONER_NAME environment variable set in the provisioner Deployment above:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: fuseim.pri/ifs
```
```shell
kubectl create -f nfs-client.yaml
kubectl create -f nfs-client-sa.yaml
kubectl create -f nfs-client-class.yaml
```
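Once the class exists, any PVC can request dynamic provisioning simply by naming it in storageClassName. A minimal sketch (the claim name and size are hypothetical):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim   # hypothetical name
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

The provisioner watches for such claims and creates a matching PV (backed by a subdirectory under the NFS export) automatically, so no PV object has to be written by hand.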
Create a service using the StorageClass

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nfs-web
  template:
    metadata:
      labels:
        app: nfs-web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: nfs-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```
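The serviceName: "nginx" field refers to the governing Service that gives each StatefulSet pod a stable DNS name, and Kubernetes expects this to be a headless Service (clusterIP: None). The NodePort Service defined earlier is not headless, so a minimal sketch of a dedicated one; if a Service named nginx already exists in the namespace, pick a different name and update serviceName to match:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None   # headless: no virtual IP, DNS resolves directly to pod IPs
  ports:
  - port: 80
    name: web
  selector:
    app: nfs-web
```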
```shell
kubectl create -f statefulset-nfs.yaml
```