Set up time synchronization on every node: with Internet access, sync directly against the Aliyun NTP server; without it, install ntpd and sync against a local source (the cron entry below resyncs every 3 minutes)

yum install ntpdate -y
ntpdate ntp.aliyun.com
echo '*/3 * * * * /usr/bin/ntpdate ntp.aliyun.com' >> /var/spool/cron/root
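For nodes without Internet access, the section above suggests ntpd; a minimal /etc/ntp.conf fragment pointing at a reachable LAN time source might look like this (the server IP is a placeholder):

```
# /etc/ntp.conf on offline nodes: sync against a LAN NTP server (placeholder IP)
server 192.168.2.1 iburst
driftfile /var/lib/ntp/drift
```

After editing, enable the daemon with systemctl enable ntpd --now.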

Disable SELinux (run on every node)

setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable the firewall (run on every node)

systemctl stop firewalld && systemctl disable firewalld

Set the hostname and hosts file (on every node; run only the hostnamectl line that matches the node, then append the hosts entries on all three)

hostnamectl set-hostname gluster01
hostnamectl set-hostname gluster02
hostnamectl set-hostname gluster03
cat >>/etc/hosts <<EOF
192.168.2.233 gluster01
192.168.2.234 gluster02
192.168.2.235 gluster03
EOF
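Since each node needs a different hostname, the right one can be derived from the node's own IP; a small sketch (the helper name is made up, the mapping matches the hosts entries above):

```shell
#!/bin/sh
# Map a node IP from the hosts table above to its GlusterFS hostname.
hostname_for_ip() {
  case "$1" in
    192.168.2.233) echo gluster01 ;;
    192.168.2.234) echo gluster02 ;;
    192.168.2.235) echo gluster03 ;;
    *) echo unknown; return 1 ;;
  esac
}

# On a real node one would run something like:
#   hostnamectl set-hostname "$(hostname_for_ip "$(hostname -I | awk '{print $1}')")"
hostname_for_ip 192.168.2.234   # → gluster02
```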

GlusterFS server + client installation (official GlusterFS repo), run on every node (this local-RPM step applies to GlusterFS 9; skip it otherwise)

yum install fuse rpcbind -y
yum install ./*.rpm    # the RPMs are kept on the 123 cloud drive, in the glusterfs directory
systemctl enable glusterd --now
systemctl status glusterd

The k8s nodes need glusterfs-fuse installed

yum install glusterfs-fuse -y

Form the trusted pool: from any one of the three servers, probe the other two. Run on gluster01:

gluster peer probe gluster02
gluster peer probe gluster03
Verify:
gluster peer status
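A healthy pool shows both probed peers in Connected state; the check can be scripted by counting them in the command's output. A sketch, fed here with sample output since the exact wording can vary by GlusterFS version:

```shell
#!/bin/sh
# Count peers reported as connected in `gluster peer status` output.
count_connected_peers() {
  grep -c 'State: Peer in Cluster (Connected)'
}

# Sample output for illustration; on gluster01 pipe the real command:
#   gluster peer status | count_connected_peers
sample='Number of Peers: 2

Hostname: gluster02
State: Peer in Cluster (Connected)

Hostname: gluster03
State: Peer in Cluster (Connected)'
printf '%s\n' "$sample" | count_connected_peers   # → 2
```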

Create a replica-3 volume (force is required here because the bricks live on the root filesystem)

gluster volume create mydata replica 3 gluster01:/glusterdb gluster02:/glusterdb gluster03:/glusterdb force
gluster volume start mydata
gluster volume info
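Clients (including the k8s nodes with glusterfs-fuse) can then mount the mydata volume; a possible /etc/fstab entry, with the equivalent one-off mount shown as a comment:

```
# one-off: mkdir -p /mnt/mydata && mount -t glusterfs gluster01:/mydata /mnt/mydata
gluster01:/mydata  /mnt/mydata  glusterfs  defaults,_netdev  0 0
```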

Installing Heketi

There are two ways to deploy it

1. Bare-metal deployment

1.1. Bare-metal install (run on one server)

Configure the Heketi YUM repo:
yum -y install centos-release-gluster
Install Heketi and its client on the k8s cluster master node:
yum -y install heketi heketi-client

1.2. Edit the Heketi configuration file on the Heketi server node

Note that Heketi has three executors: mock, ssh, and kubernetes. mock is recommended for test environments, ssh for production, and kubernetes only when GlusterFS itself runs as containers on Kubernetes. Since GlusterFS and Heketi are deployed independently here, ssh is used.

cp /etc/heketi/heketi.json{,.bak}
cat /etc/heketi/heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  "port": "18080", 修改为18080,防止与其它端口冲突

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true, 开启用户认证

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "adminkey" 用户认证的key
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "My Secret"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "ssh", 访问glusterfs集群的方法

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key", 
      访问glusterfs集群使用的私钥,需要提前在k8s集群master节点生成
      并copy到glusterfs集群所有节点,需要从/root/.ssh/id_rsa复制到此处才可以使用。
      "user": "root", 认证使用的用户
      "port": "22", ssh连接使用的端口
      "fstab": "/etc/fstab" 挂载的文件系统
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db", 数据库位置

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "warning" 修改日志级别
  }
}

1.3. Set up passwordless SSH login (run on one node; here the k8s master)

ssh-keygen
for i in 192.168.2.{233..235};do echo ">>> $i";ssh-copy-id $i;done
for i in gluster{01..03};do echo ">>> $i";ssh-copy-id $i;done

Copy the private key to /etc/heketi:
cp .ssh/id_rsa /etc/heketi/heketi_key

1.4. Start Heketi (here on the k8s master)

chown -R heketi:heketi /etc/heketi /var/lib/heketi
systemctl enable heketi
systemctl start heketi
systemctl status heketi

1.5. Create the topology file for Heketi, describing the clusters, nodes, and disks to add

vi /etc/heketi/topology.json

Example file:
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.0.2"],
              "storage": ["192.168.0.2"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdd"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.0.3"],
              "storage": ["192.168.0.3"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdd"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.0.4"],
              "storage": ["192.168.0.4"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdd"]
        }
      ]
    }
  ]
}
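The repetitive example file above can also be generated from a node list; a sketch (the function name is made up; the IPs and device match the example):

```shell
#!/bin/sh
# Emit a single-cluster Heketi topology.json where every node contributes
# the same raw device.
emit_topology() {
  device="$1"; shift
  printf '{ "clusters": [ { "nodes": ['
  sep=''
  for ip in "$@"; do
    printf '%s { "node": { "hostnames": { "manage": ["%s"], "storage": ["%s"] }, "zone": 1 }, "devices": ["%s"] }' \
      "$sep" "$ip" "$ip" "$device"
    sep=','
  done
  printf ' ] } ] }\n'
}

emit_topology /dev/vdd 192.168.0.2 192.168.0.3 192.168.0.4 > /tmp/topology.json
```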

1.6. Load topology.json

export HEKETI_CLI_SERVER=http://192.168.2.230:18080
heketi-cli --user admin --secret 'adminkey' topology load --json=/etc/heketi/topology.json
View the cluster topology:
heketi-cli --user admin --secret 'adminkey' topology info
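The cluster ID used in the next step can be scraped from heketi-cli cluster list; the output format shown is an assumption based on common heketi-cli versions, so treat the parsing as a sketch:

```shell
#!/bin/sh
# Pull the first cluster ID out of `heketi-cli cluster list` output.
first_cluster_id() {
  sed -n 's/^Id:\([0-9a-f]*\).*/\1/p' | head -n1
}

# Sample output for illustration; on the Heketi host pipe the real command:
#   heketi-cli --user admin --secret 'adminkey' cluster list | first_cluster_id
sample='Clusters:
Id:2d9e11adede04fe6d07cb81c5a1a7ea4 [file][block]'
printf '%s\n' "$sample" | first_cluster_id
```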

1.7. View cluster information

heketi-cli --user admin --secret 'adminkey' cluster info 2d9e11adede04fe6d07cb81c5a1a7ea4

1.8. Consuming GlusterFS from k8s

vi glusterfs-sc.yaml

apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: kube-system
type: kubernetes.io/glusterfs
data:
  key: "MTIzNDU2"    #请替换为您自己的密钥。Base64 编码。
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]'
  name: glusterfs
parameters:
  clusterid: "21240a91145aee4d801661689383dcd1"    #请替换为您自己的 GlusterFS 集群 ID。
  gidMax: "50000"
  gidMin: "40000"
  restauthenabled: "true"
  resturl: "http://192.168.0.2:8080"    #Gluster REST 服务/Heketi 服务 URL 可按需供应 gluster 存储卷。请替换为您自己的 URL。
  restuser: admin
  secretName: heketi-secret
  secretNamespace: kube-system
  volumetype: "replicate:2"    #请替换为您自己的存储卷类型。
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
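The Secret's key field must be the Base64 encoding of the Heketi admin key (the MTIzNDU2 sample above decodes to 123456); it can be generated like this:

```shell
# Base64-encode the Heketi admin key for the Kubernetes Secret;
# printf avoids encoding a trailing newline.
printf '%s' adminkey | base64   # → YWRtaW5rZXk=
```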

2.1. Deploying in k8s

2.1.1. Heketi configuration file

As before, Heketi has three executors: mock, ssh, and kubernetes. mock is recommended for test environments, ssh for production, and kubernetes only when GlusterFS itself runs as containers on Kubernetes. Since GlusterFS and Heketi are deployed independently here, ssh is used. vim heketi.json

{
  "_port_comment": "Heketi Server Port Number",
  "port": "18080", changed to 18080 to avoid conflicts with other services

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true, enable user authentication

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "adminkey" the admin authentication key
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "userkey"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/usr/share/keys/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "/etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "debug"
  }
}

kubectl create namespace heketi
kubectl create configmap heketi-config --from-file=/root/heketi.json -n heketi

2.1.2. Set up passwordless SSH login (run on one node; here the k8s master)

ssh-keygen
for i in 192.168.2.{233..235};do echo ">>> $i";ssh-copy-id $i;done
for i in gluster{01..03};do echo ">>> $i";ssh-copy-id $i;done
cp .ssh/id_rsa .ssh/heketi_key
kubectl create secret generic ssh-key-secret --from-file=.ssh/heketi_key -n heketi

2.1.3. Heketi Deployment YAML. Mount the database via hostPath first; once the Gluster disks are initialized and the Heketi PVC has been created, switch to dynamic storage. vim heketi-deployment.yaml

---
kind: Service
apiVersion: v1
metadata:
  name: heketi
  namespace: heketi
  labels:
    glusterfs: heketi-service
    heketi: service
spec:
  selector:
    app: heketi
  ports:
  - name: heketi
    port: 18080
    targetPort: 18080
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: heketi-deployment
  namespace: heketi
  labels:
    app: heketi
  annotations:
    description: Defines how to deploy Heketi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: heketi
  template:
    metadata:
      name: heketi
      labels:
        app: heketi
    spec:
      containers:
      - name: heketi
        image: heketi/heketi:dev
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: keys
          mountPath: /usr/share/keys/heketi_key
          subPath: heketi_key 
        - name: config
          mountPath: /etc/heketi
        - name: db
          mountPath: /var/lib/heketi
      volumes:
        - name: keys
          secret:
            secretName: ssh-key-secret
            defaultMode: 0600
        - name: config
          configMap:
            name: heketi-config
        - name: db
          hostPath:
            path: /root/heketi

2.1.4. Create the topology file for Heketi, describing the clusters, nodes, and disks to add

vi topology.json

Example file:
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.0.2"],
              "storage": ["192.168.0.2"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdd"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.0.3"],
              "storage": ["192.168.0.3"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdd"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.0.4"],
              "storage": ["192.168.0.4"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdd"]
        }
      ]
    }
  ]
}

2.1.5. Load topology.json

kubectl exec -it heketi-deployment-6fd99bdcf6-drcd8 -n heketi -- bash
export HEKETI_CLI_SERVER=http://192.168.2.230:18080
heketi-cli --user admin --secret 'adminkey' topology load --json=/etc/heketi/topology.json
View the cluster topology:
heketi-cli --user admin --secret 'adminkey' topology info

2.1.6. View cluster information

heketi-cli --user admin --secret 'adminkey' cluster info 2d9e11adede04fe6d07cb81c5a1a7ea4

2.1.7. Consuming GlusterFS from k8s

vi glusterfs-sc.yaml

apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: kube-system
type: kubernetes.io/glusterfs
data:
  key: "MTIzNDU2"    #请替换为您自己的密钥。Base64 编码。
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]'
  name: glusterfs
parameters:
  clusterid: "21240a91145aee4d801661689383dcd1"    #Replace with your own GlusterFS cluster ID.
  gidMax: "50000"
  gidMin: "40000"
  restauthenabled: "true"
  resturl: "http://192.168.0.2:8080"    #Gluster REST service / Heketi service URL that provisions gluster volumes on demand. Replace with your own URL (for this setup, http://192.168.2.230:18080).
  restuser: admin
  secretName: heketi-secret
  secretNamespace: kube-system
  volumetype: "replicate:2"    #Replace with your own volume type (e.g. replicate:3 to match the three-node cluster).
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true

2.1.8. Create the Heketi PVC

vim heketi-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: heketi-glusterfs-pvc
  namespace: heketi
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 40Gi
  storageClassName: glusterfs

2.1.9. Switch Heketi's storage backend to the PVC and copy the data over

kubectl delete -f heketi-deployment.yaml

In heketi-deployment.yaml, change

- name: db
  hostPath:
    path: /root/heketi

to

- name: db
  persistentVolumeClaim:
    claimName: heketi-glusterfs-pvc

Then use df -h to find which server's brick the Heketi PVC is mounted from, copy /root/heketi/heketi.db into that brick, and recreate the Heketi pod:

kubectl create -f heketi-deployment.yaml

Deleting volumes with heketi-cli

export HEKETI_CLI_SERVER=http://192.168.2.230:18080
heketi-cli volume list --user admin --secret 'adminkey'
heketi-cli volume delete 67b94a72c0fd55533d242b48080e03e6 --user admin --secret 'adminkey'
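To clean up many volumes at once, the IDs can be scraped from heketi-cli volume list and fed back into volume delete. The list output format here is an assumption, so treat the parsing as a sketch:

```shell
#!/bin/sh
# Extract volume IDs from `heketi-cli volume list` output.
volume_ids() {
  sed -n 's/^Id:\([0-9a-f]*\).*/\1/p'
}

# Sample output for illustration; against a real server one might run:
#   heketi-cli volume list --user admin --secret 'adminkey' | volume_ids \
#     | xargs -r -n1 heketi-cli volume delete --user admin --secret 'adminkey'
sample='Id:67b94a72c0fd55533d242b48080e03e6 Cluster:2d9e11adede04fe6d07cb81c5a1a7ea4 Name:vol_67b94a72'
printf '%s\n' "$sample" | volume_ids
```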