Building a KubeSphere Platform on K8S


Table of Contents
  • I. Deploying K8S
    • 1. Environment preparation
    • 2. Installing Docker
    • 3. Configuring the Alibaba Cloud K8S repo (all three nodes)
    • 4. Installing kubeadm, kubelet, and kubectl (all three nodes)
    • 5. Deploying the Kubernetes master node (run on the master)
    • 6. Joining the k8s nodes to the master (run on both nodes)
    • 7. Installing the Pod network plugin (CNI plugin, master node)
  • II. Deploying KubeSphere
    • 1. KubeSphere overview
    • 2. Environment requirements
    • 3. Installing Helm and Tiller
      • 3.1 Installing Helm
      • 3.2 Creating the RBAC manifest
      • 3.3 Installing Tiller
      • 3.4 Verification
    • 4. Installing OpenEBS
      • 4.1 Removing the taint on the master
      • 4.2 Installing OpenEBS
    • 5. Deploying KubeSphere
      • 5.1 Run the following commands to start the installation
      • 5.2 Check the installation logs
      • 5.3 Check the running status
      • 5.4 Access via browser
      • 5.5 View the cluster management UI
      • 5.6 Re-apply the master taint
    • 6. Pluggable components


I. Deploying K8S

1. Environment preparation

Three servers:
192.168.8.12 master
192.168.8.13 node01
192.168.8.14 node02

# Disable the firewall, SELinux, and swap
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
swapoff -a
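
The commands above only affect the running system; to keep the settings across a reboot, a small sketch (assumes the default CentOS 7 file locations):

# Disable SELinux permanently (takes effect on the next boot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Comment out the swap entry so swap stays off after a reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab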

# Set the hostnames and write them into /etc/hosts on all three servers
hostnamectl set-hostname master && su
hostnamectl set-hostname node01 && su
hostnamectl set-hostname node02 && su

cat >> /etc/hosts << EOF
192.168.8.12 master
192.168.8.13 node01
192.168.8.14 node02
EOF

# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
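
If sysctl --system reports that the bridge keys do not exist, the br_netfilter kernel module is probably not loaded yet; a sketch to load it now and on every boot:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf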

# Time synchronization
yum -y install ntpdate
ntpdate time.windows.com
2. Installing Docker

Run on all three servers.

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum install -y docker-ce docker-ce-cli containerd.io

yum -y install epel-release && yum clean all && yum makecache		# only run this if Docker cannot be installed with the commands above

systemctl start docker && systemctl enable docker

cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

systemctl restart docker 
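
kubeadm also recommends running Docker with the systemd cgroup driver so that kubelet and Docker agree; an optional daemon.json sketch that keeps the registry mirror above and adds the driver:

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i cgroup		# verify the active cgroup driver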
3. Configuring the Alibaba Cloud K8S repo (all three nodes)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


yum clean all && yum makecache
4. Installing kubeadm, kubelet, and kubectl (all three nodes)
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3 --nogpgcheck
systemctl enable kubelet && systemctl start kubelet
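
A quick sanity check that the pinned versions were actually installed:

kubeadm version -o short		# expect v1.17.3
kubelet --version			# expect Kubernetes v1.17.3
kubectl version --client --short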
5. Deploying the Kubernetes master node (run on the master)

Initialize kubeadm:

kubeadm init \
  --apiserver-advertise-address=192.168.8.12 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.3 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16

Parameter notes:

kubeadm init 
--apiserver-advertise-address=10.0.0.116                   # address the master (apiserver) advertises; change to your own master IP
--image-repository registry.aliyuncs.com/google_containers    # use the Aliyun image registry; a domestic mirror is preferable
--kubernetes-version v1.17.3
--service-cidr=10.96.0.0/12                   		# internal cluster (Service) network
--pod-network-cidr=10.244.0.0/16                	# Pod network
# service-cidr and pod-network-cidr are best left at these values; otherwise the kube-flannel.yaml file used later must be edited to match

Seeing "Your Kubernetes control-plane has initialized successfully!" means the initialization succeeded; if it fails, troubleshoot the cause before continuing.

# If you have lost the discovery-token-ca-cert-hash, it can be recomputed from the CA certificate with the command below
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

A node needs a token to join the cluster. Tokens are valid for 24 hours; once expired, create a new one with kubeadm token create --print-join-command.
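
For reference, the token-related commands on the master:

kubeadm token list							# list existing tokens and their expiry
kubeadm token create --print-join-command	# create a new token and print a ready-to-run kubeadm join command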

Run the following commands to be able to use the kubectl management tool:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]#kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   4m18s   v1.17.3


[root@master ~]#docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.17.3   ae853e93800d   22 months ago   116MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.17.3   b0f1517c1f4b   22 months ago   161MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.17.3   90d27391b780   22 months ago   171MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.17.3   d109c0821a2b   22 months ago   94.4MB
registry.aliyuncs.com/google_containers/coredns                   1.6.5     70f311871ae1   2 years ago     41.6MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0   303ce5db0e90   2 years ago     288MB
registry.aliyuncs.com/google_containers/pause                     3.1       da86e6ba6ca1   4 years ago     742kB

6. Joining the k8s nodes to the master (run on both nodes)

node01

[root@node01 ~]# kubeadm join 192.168.8.12:6443 --token l4kasn.nsm4gmn7yyvfzj1s \
    --discovery-token-ca-cert-hash sha256:59e3b2efafd295b638844633e8e18ab294bb853fca0712e6795cd8f6a5f3f635 

node02

[root@node02 ~]# kubeadm join 192.168.8.12:6443 --token l4kasn.nsm4gmn7yyvfzj1s \
    --discovery-token-ca-cert-hash sha256:59e3b2efafd295b638844633e8e18ab294bb853fca0712e6795cd8f6a5f3f635 


Check on the master:

[root@master ~]#kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   10m   v1.17.3
node01   NotReady   <none>   48s   v1.17.3
node02   NotReady   <none>   27s   v1.17.3

The nodes are shown as NotReady; a network plugin needs to be installed.

7. Installing the Pod network plugin (CNI plugin, master node)

Download the plugin YAML file:

# Upstream (overseas) source; note it saves as kube-flannel.yml, while the commands below use the kube-flannel.yaml name from the domestic mirror
wget https://raw.githubusercontent.com/coreos/flannel/master/documentation/kube-flannel.yml
# Domestic mirror
wget http://120.78.77.38/file/kube-flannel.yaml

Run the installation:

# the installation takes some time
kubectl apply -f kube-flannel.yaml

If kubectl get pods -n kube-system shows errors, it is because quay.io is currently unreachable from mainland China and the image pulls fail. For other solutions see: https://blog.csdn.net/K_520_W/article/details/116566733

A convenient workaround is provided here:

# Point the flannel manifest at a mirrored image
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yaml
kubectl apply -f kube-flannel.yaml
kubectl get pods -n kube-system
kubectl get node  									# once the network plugin is deployed, the nodes become Ready
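
The flannel Pods need a little time to pull images and start; a sketch to watch or wait for them (app=flannel is the label used by the upstream manifest):

kubectl get pods -n kube-system -w		# watch until the flannel Pods are Running (Ctrl-C to stop)
kubectl -n kube-system wait --for=condition=Ready pod -l app=flannel --timeout=300s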

Check:

[root@master ~]#kubectl get pods -n kube-system
NAME                             READY   STATUS     RESTARTS   AGE
coredns-7ff77c879f-j52kj         0/1     Pending    0          6m40s
coredns-7ff77c879f-sm7t4         0/1     Pending    0          6m40s
etcd-master                      1/1     Running    0          6m50s
kube-apiserver-master            1/1     Running    0          6m50s
kube-controller-manager-master   1/1     Running    0          6m50s
kube-flannel-ds-amd64-2z7sl      0/1     Init:0/1   0          16s
kube-flannel-ds-amd64-65n5x      0/1     Init:0/1   0          16s
kube-flannel-ds-amd64-s5kf5      0/1     Init:0/1   0          16s
kube-proxy-dvkff                 1/1     Running    0          3m20s
kube-proxy-jl84d                 1/1     Running    0          4m22s
kube-proxy-kvktr                 1/1     Running    0          6m40s
kube-scheduler-master            1/1     Running    0          6m50s

[root@master ~]#kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   8m21s   v1.17.3
node01   Ready    <none>   5m44s   v1.17.3
node02   Ready    <none>   4m42s   v1.17.3

Test the k8s cluster by creating a Pod and verifying that it runs properly.

[root@master ~]# kubectl get pod	# the default namespace has no Pods yet
No resources found in default namespace.

[root@master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort	# expose the port for external access
service/nginx exposed
[root@master ~]#kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-g76rt   1/1     Running   0          6m8s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        15m
service/nginx        NodePort    10.104.42.45   <none>        80:30547/TCP   6m


# nginx is exposed on NodePort 30547; access it at http://<nodeIP>:<port>, any node's IP works
[root@master ~]# curl 192.168.8.12:30547
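
The nginx deployment is only a smoke test; it can be removed once verified:

kubectl delete service nginx
kubectl delete deployment nginx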

II. Deploying KubeSphere

1. KubeSphere overview

KubeSphere is an open-source project designed for cloud native: a distributed, multi-tenant container management platform built on top of Kubernetes, the mainstream container orchestration platform. It provides a simple, easy-to-use UI and wizard-style workflows, lowering the learning curve of the orchestration platform while greatly reducing the day-to-day complexity of development, testing, and operations.

Official website

Installation documentation for version 3.0 and for version 2.1

2. Environment requirements

3. Installing Helm and Tiller

Helm is the package manager for Kubernetes. Like apt on Ubuntu, yum on CentOS, or pip for Python, a package manager lets you quickly find, download, and install packages.
Helm (v2) consists of the helm client and the Tiller server-side component. It packages a group of K8S resources so they can be managed as a unit, and it is the standard way to find, share, and use software built for Kubernetes.
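
For context, typical Helm v2 client usage looks like the sketch below (chart and release names are illustrative; installs only work once Tiller is running, see 3.3):

helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts	# add a chart repository (the Aliyun mirror used later in this guide)
helm repo update								# refresh the local chart index
helm search nginx								# search for charts (v2 syntax)
helm install --name my-nginx stable/nginx-ingress	# install a chart as a release
helm ls											# list releases
helm delete --purge my-nginx					# remove a release completely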

3.1 Installing Helm
# Upload the release tarball to the working directory
[root@master ~]#ll helm-v2.17.0-linux-amd64.tar.gz 
-rw-r--r-- 1 root root 25097357 12月 31 10:39 helm-v2.17.0-linux-amd64.tar.gz
[root@master ~]#tar zxvf helm-v2.17.0-linux-amd64.tar.gz 
[root@master ~]#ls
helm-v2.17.0-linux-amd64.tar.gz  kube-flannel.yaml  linux-amd64
[root@master ~]#cp linux-amd64/helm /usr/local/bin/
[root@master ~]#cp linux-amd64/tiller /usr/local/bin/
[root@master ~]#helm version
Client: &version.Version{SemVer:"v2.17.0", GitCommit:"a690bad98af45b015bd3da1a41f6218b1a451dbe", GitTreeState:"clean"}
Error: could not find tiller
3.2 Creating the RBAC manifest
cat > helm-rbac.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF
[root@master ~]#kubectl apply -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
3.3 Installing Tiller
helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.17.0  --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

[root@master ~]#kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
default       nginx-86c57db685-gxnmf           1/1     Running   0          87m
kube-system   coredns-9d85f5447-s8f77          1/1     Running   0          109m
kube-system   coredns-9d85f5447-t6fd4          1/1     Running   0          109m
kube-system   etcd-master                      1/1     Running   0          109m
kube-system   kube-apiserver-master            1/1     Running   0          109m
kube-system   kube-controller-manager-master   1/1     Running   0          109m
kube-system   kube-flannel-ds-amd64-c9mst      1/1     Running   0          89m
kube-system   kube-flannel-ds-amd64-hfmgw      1/1     Running   0          97m
kube-system   kube-flannel-ds-amd64-nsnc7      1/1     Running   0          97m
kube-system   kube-proxy-kcpcj                 1/1     Running   0          99m
kube-system   kube-proxy-kjsg8                 1/1     Running   0          99m
kube-system   kube-proxy-ms6rg                 1/1     Running   0          109m
kube-system   kube-scheduler-master            1/1     Running   0          109m
kube-system   tiller-deploy-59665c97b6-h9bd8   1/1     Running   0          2m53s
3.4 Verification
[root@master ~]#tiller 
[main] 2021/12/31 21:57:16 Starting Tiller v2.17.0 (tls=false)
[main] 2021/12/31 21:57:16 GRPC listening on :44134
[main] 2021/12/31 21:57:16 Probes listening on :44135
[main] 2021/12/31 21:57:16 Storage driver is ConfigMap
[main] 2021/12/31 21:57:16 Max history per release is 0
[root@master ~]#helm version
Client: &version.Version{SemVer:"v2.17.0", GitCommit:"a690bad98af45b015bd3da1a41f6218b1a451dbe", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.17.0", GitCommit:"a690bad98af45b015bd3da1a41f6218b1a451dbe", GitTreeState:"clean"}
4. Installing OpenEBS

The official documentation is available for reference.

4.1 Removing the taint on the master
[root@master ~]#kubectl get node -o wide
NAME     STATUS   ROLES    AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
master   Ready    master   114m   v1.17.3   192.168.8.12   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
node01   Ready    <none>   103m   v1.17.3   192.168.8.13   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
node02   Ready    <none>   103m   v1.17.3   192.168.8.14   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12

[root@master ~]#kubectl describe node master | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

[root@master ~]#kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-
node/master untainted

[root@master ~]#kubectl describe node master | grep Taint
Taints:             <none>
4.2 Installing OpenEBS

Using the Aliyun chart repository causes the subsequent OpenEBS installation to fail, because that repository has no 1.5 chart; the error looks like this:

[root@k8s-node1 ~]# helm install --namespace openebs --name openebs stable/openebs --version 1.5.0
Error: failed to download "stable/openebs" (hint: running `helm repo update` may help)

The workaround is as follows:

  • First pull the Docker images that will be needed, on every node (a loop sketch follows the list of pulls below):
docker pull openebs/m-apiserver:1.5.0
docker pull openebs/openebs-k8s-provisioner:1.5.0
docker pull openebs/snapshot-controller:1.5.0
docker pull openebs/snapshot-provisioner:1.5.0
docker pull openebs/node-disk-manager-amd64:v0.4.5
docker pull openebs/node-disk-operator-amd64:v0.4.5
docker pull openebs/admission-server:1.5.0
docker pull openebs/provisioner-localpv:1.5.0
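
These images must be present on every node where the OpenEBS Pods can be scheduled, so run the pulls on all three machines; an equivalent loop sketch (same image list as above):

for img in \
  openebs/m-apiserver:1.5.0 \
  openebs/openebs-k8s-provisioner:1.5.0 \
  openebs/snapshot-controller:1.5.0 \
  openebs/snapshot-provisioner:1.5.0 \
  openebs/node-disk-manager-amd64:v0.4.5 \
  openebs/node-disk-operator-amd64:v0.4.5 \
  openebs/admission-server:1.5.0 \
  openebs/provisioner-localpv:1.5.0; do
  docker pull "$img"
done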
  • Use the modified YAML file below (save it as openebs_demo.yaml):
# This manifest deploys the OpenEBS control plane components, with associated CRs & RBAC rules
# NOTE: On GKE, deploy the openebs-operator.yaml in admin context
 
# Create the OpenEBS namespace
apiVersion: v1
kind: Namespace
metadata:
  name: openebs
---
# Create Maya Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: openebs-maya-operator
  namespace: openebs
---
# Define Role that allows operations on K8s pods/deployments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: openebs-maya-operator
rules:
- apiGroups: ["*"]
  resources: ["nodes", "nodes/proxy"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["statefulsets", "daemonsets"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["resourcequotas", "limitranges"]
  verbs: ["list", "watch"]
- apiGroups: ["*"]
  resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "poddisruptionbudgets", "certificatesigningrequests"]
  verbs: ["list", "watch"]
- apiGroups: ["*"]
  resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]
  verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
  resources: ["volumesnapshots", "volumesnapshotdatas"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: [ "get", "list", "create", "update", "delete", "patch"]
- apiGroups: ["*"]
  resources: [ "disks", "blockdevices", "blockdeviceclaims"]
  verbs: ["*" ]
- apiGroups: ["*"]
  resources: [ "cstorpoolclusters", "storagepoolclaims", "storagepoolclaims/finalizers", "cstorpoolclusters/finalizers", "storagepools"]
  verbs: ["*" ]
- apiGroups: ["*"]
  resources: [ "castemplates", "runtasks"]
  verbs: ["*" ]
- apiGroups: ["*"]
  resources: [ "cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes", "cstorvolumeclaims"]
  verbs: ["*" ]
- apiGroups: ["*"]
  resources: [ "cstorpoolinstances", "cstorpoolinstances/finalizers"]
  verbs: ["*" ]
- apiGroups: ["*"]
  resources: [ "cstorbackups", "cstorrestores", "cstorcompletedbackups"]
  verbs: ["*" ]
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: ["admissionregistration.k8s.io"]
  resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  verbs: ["get", "create", "list", "delete", "update", "patch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
- apiGroups: ["*"]
  resources: [ "upgradetasks"]
  verbs: ["*" ]
---
# Bind the Service Account with the Role Privileges.
# TODO: Check if default account also needs to be there
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: openebs-maya-operator
subjects:
- kind: ServiceAccount
  name: openebs-maya-operator
  namespace: openebs
roleRef:
  kind: ClusterRole
  name: openebs-maya-operator
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: maya-apiserver
  namespace: openebs
  labels:
    name: maya-apiserver
    openebs.io/component-name: maya-apiserver
    openebs.io/version: 1.5.0
spec:
  selector:
    matchLabels:
      name: maya-apiserver
      openebs.io/component-name: maya-apiserver
  replicas: 1
  strategy:
    type: Recreate
    rollingUpdate: null
  template:
    metadata:
      labels:
        name: maya-apiserver
        openebs.io/component-name: maya-apiserver
        openebs.io/version: 1.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
      - name: maya-apiserver
        imagePullPolicy: IfNotPresent
        image: openebs/m-apiserver:1.5.0
        ports:
        - containerPort: 5656
        env:
        # OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s
        # based on this config. This is ignored if empty.
        # This is supported for maya api server version 0.5.2 onwards
        #- name: OPENEBS_IO_KUBE_CONFIG
        #  value: "/home/ubuntu/.kube/config"
        # OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s
        # based on this address. This is ignored if empty.
        # This is supported for maya api server version 0.5.2 onwards
        #- name: OPENEBS_IO_K8S_MASTER
        #  value: "http://172.28.128.3:8080"
        # OPENEBS_NAMESPACE provides the namespace of this deployment as an
        # environment variable
        - name: OPENEBS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as
        # environment variable
        - name: OPENEBS_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        # OPENEBS_MAYA_POD_NAME provides the name of this pod as
        # environment variable
        - name: OPENEBS_MAYA_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # If OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG is false then OpenEBS default
        # storageclass and storagepool will not be created.
        - name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
          value: "true"
        # OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be
        # configured as a part of openebs installation.
        # If "true" a default cstor sparse pool will be configured, if "false" it will not be configured.
        # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
        # is set to true
        - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
          value: "false"
        # OPENEBS_IO_CSTOR_TARGET_DIR can be used to specify the hostpath
        # to be used for saving the shared content between the side cars
        # of cstor volume pod.
        # The default path used is /var/openebs/sparse
        #- name: OPENEBS_IO_CSTOR_TARGET_DIR
        #  value: "/var/openebs/sparse"
        # OPENEBS_IO_CSTOR_POOL_SPARSE_DIR can be used to specify the hostpath
        # to be used for saving the shared content between the side cars
        # of cstor pool pod. This ENV is also used to indicate the location
        # of the sparse devices.
        # The default path used is /var/openebs/sparse
        #- name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR
        #  value: "/var/openebs/sparse"
        # OPENEBS_IO_JIVA_POOL_DIR can be used to specify the hostpath
        # to be used for default Jiva StoragePool loaded by OpenEBS
        # The default path used is /var/openebs
        # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
        # is set to true
        #- name: OPENEBS_IO_JIVA_POOL_DIR
        #  value: "/var/openebs"
        # OPENEBS_IO_LOCALPV_HOSTPATH_DIR can be used to specify the hostpath
        # to be used for default openebs-hostpath storageclass loaded by OpenEBS
        # The default path used is /var/openebs/local
        # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
        # is set to true
        #- name: OPENEBS_IO_LOCALPV_HOSTPATH_DIR
        #  value: "/var/openebs/local"
        - name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE
          value: "openebs/jiva:1.5.0"
        - name: OPENEBS_IO_JIVA_REPLICA_IMAGE
          value: "openebs/jiva:1.5.0"
        - name: OPENEBS_IO_JIVA_REPLICA_COUNT
          value: "3"
        - name: OPENEBS_IO_CSTOR_TARGET_IMAGE
          value: "openebs/cstor-istgt:1.5.0"
        - name: OPENEBS_IO_CSTOR_POOL_IMAGE
          value: "openebs/cstor-pool:1.5.0"
        - name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE
          value: "openebs/cstor-pool-mgmt:1.5.0"
        - name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE
          value: "openebs/cstor-volume-mgmt:1.5.0"
        - name: OPENEBS_IO_VOLUME_MONITOR_IMAGE
          value: "openebs/m-exporter:1.5.0"
        - name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE
          value: "openebs/m-exporter:1.5.0"
        - name: OPENEBS_IO_HELPER_IMAGE
          value: "openebs/linux-utils:1.5.0"
        # OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage
        # events to Google Analytics
        - name: OPENEBS_IO_ENABLE_ANALYTICS
          value: "true"
        - name: OPENEBS_IO_INSTALLER_TYPE
          value: "openebs-operator"
        # OPENEBS_IO_ANALYTICS_PING_INTERVAL can be used to specify the duration (in hours)
        # for periodic ping events sent to Google Analytics.
        # Default is 24h.
        # Minimum is 1h. You can convert this to weekly by setting 168h
        #- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL
        #  value: "24h"
        livenessProbe:
          exec:
            command:
            - /usr/local/bin/mayactl
            - version
          initialDelaySeconds: 30
          periodSeconds: 60
        readinessProbe:
          exec:
            command:
            - /usr/local/bin/mayactl
            - version
          initialDelaySeconds: 30
          periodSeconds: 60
---
apiVersion: v1
kind: Service
metadata:
  name: maya-apiserver-service
  namespace: openebs
  labels:
    openebs.io/component-name: maya-apiserver-svc
spec:
  ports:
  - name: api
    port: 5656
    protocol: TCP
    targetPort: 5656
  selector:
    name: maya-apiserver
  sessionAffinity: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-provisioner
  namespace: openebs
  labels:
    name: openebs-provisioner
    openebs.io/component-name: openebs-provisioner
    openebs.io/version: 1.5.0
spec:
  selector:
    matchLabels:
      name: openebs-provisioner
      openebs.io/component-name: openebs-provisioner
  replicas: 1
  strategy:
    type: Recreate
    rollingUpdate: null
  template:
    metadata:
      labels:
        name: openebs-provisioner
        openebs.io/component-name: openebs-provisioner
        openebs.io/version: 1.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
      - name: openebs-provisioner
        imagePullPolicy: IfNotPresent
        image: openebs/openebs-k8s-provisioner:1.5.0
        env:
        # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
        # based on this address. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_K8S_MASTER
        #  value: "http://10.128.0.12:8080"
        # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
        # based on this config. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_KUBE_CONFIG
        #  value: "/home/ubuntu/.kube/config"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: OPENEBS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
        # that provisioner should forward the volume create/delete requests.
        # If not present, "maya-apiserver-service" will be used for lookup.
        # This is supported for openebs provisioner version 0.5.3-RC1 onwards
        #- name: OPENEBS_MAYA_SERVICE_NAME
        #  value: "maya-apiserver-apiservice"
        livenessProbe:
          exec:
            command:
            - pgrep
            - ".*openebs"
          initialDelaySeconds: 30
          periodSeconds: 60
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-snapshot-operator
  namespace: openebs
  labels:
    name: openebs-snapshot-operator
    openebs.io/component-name: openebs-snapshot-operator
    openebs.io/version: 1.5.0
spec:
  selector:
    matchLabels:
      name: openebs-snapshot-operator
      openebs.io/component-name: openebs-snapshot-operator
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: openebs-snapshot-operator
        openebs.io/component-name: openebs-snapshot-operator
        openebs.io/version: 1.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
        - name: snapshot-controller
          image: openebs/snapshot-controller:1.5.0
          imagePullPolicy: IfNotPresent
          env:
          - name: OPENEBS_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          livenessProbe:
            exec:
              command:
              - pgrep
              - ".*controller"
            initialDelaySeconds: 30
            periodSeconds: 60
        # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
        # that snapshot controller should forward the snapshot create/delete requests.
        # If not present, "maya-apiserver-service" will be used for lookup.
        # This is supported for openebs provisioner version 0.5.3-RC1 onwards
        #- name: OPENEBS_MAYA_SERVICE_NAME
        #  value: "maya-apiserver-apiservice"
        - name: snapshot-provisioner
          image: openebs/snapshot-provisioner:1.5.0
          imagePullPolicy: IfNotPresent
          env:
          - name: OPENEBS_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
        # that snapshot provisioner  should forward the clone create/delete requests.
        # If not present, "maya-apiserver-service" will be used for lookup.
        # This is supported for openebs provisioner version 0.5.3-RC1 onwards
        #- name: OPENEBS_MAYA_SERVICE_NAME
        #  value: "maya-apiserver-apiservice"
          livenessProbe:
            exec:
              command:
              - pgrep
              - ".*provisioner"
            initialDelaySeconds: 30
            periodSeconds: 60
---
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
  labels:
    openebs.io/component-name: ndm-config
data:
  # udev-probe is default or primary probe which should be enabled to run ndm
  # filterconfigs contains configs of filters - in their form of include
  # and exclude comma separated strings
  node-disk-manager.config: |
    probeconfigs:
      - key: udev-probe
        name: udev probe
        state: true
      - key: seachest-probe
        name: seachest probe
        state: false
      - key: smart-probe
        name: smart probe
        state: true
    filterconfigs:
      - key: os-disk-exclude-filter
        name: os disk exclude filter
        state: true
        exclude: "/,/etc/hosts,/boot"
      - key: vendor-filter
        name: vendor filter
        state: true
        include: ""
        exclude: "CLOUDBYT,OpenEBS"
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openebs-ndm
  namespace: openebs
  labels:
    name: openebs-ndm
    openebs.io/component-name: ndm
    openebs.io/version: 1.5.0
spec:
  selector:
    matchLabels:
      name: openebs-ndm
      openebs.io/component-name: ndm
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: openebs-ndm
        openebs.io/component-name: ndm
        openebs.io/version: 1.5.0
    spec:
      # By default the node-disk-manager will be run on all kubernetes nodes
      # If you would like to limit this to only some nodes, say the nodes
      # that have storage attached, you could label those node and use
      # nodeSelector.
      #
      # e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node"
      # kubectl label node  "openebs.io/nodegroup"="storage-node"
      #nodeSelector:
      #  "openebs.io/nodegroup": "storage-node"
      serviceAccountName: openebs-maya-operator
      hostNetwork: true
      containers:
      - name: node-disk-manager
        image: openebs/node-disk-manager-amd64:v0.4.5
        imagePullPolicy: Always
        securityContext:
          privileged: true
        volumeMounts:
        - name: config
          mountPath: /host/node-disk-manager.config
          subPath: node-disk-manager.config
          readOnly: true
        - name: udev
          mountPath: /run/udev
        - name: procmount
          mountPath: /host/proc
          readOnly: true
        - name: sparsepath
          mountPath: /var/openebs/sparse
        env:
        # namespace in which NDM is installed will be passed to NDM Daemonset
        # as environment variable
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # pass hostname as env variable using downward API to the NDM container
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        # specify the directory where the sparse files need to be created.
        # if not specified, then sparse files will not be created.
        - name: SPARSE_FILE_DIR
          value: "/var/openebs/sparse"
        # Size(bytes) of the sparse file to be created.
        - name: SPARSE_FILE_SIZE
          value: "10737418240"
        # Specify the number of sparse files to be created
        - name: SPARSE_FILE_COUNT
          value: "0"
        livenessProbe:
          exec:
            command:
            - pgrep
            - ".*ndm"
          initialDelaySeconds: 30
          periodSeconds: 60
      volumes:
      - name: config
        configMap:
          name: openebs-ndm-config
      - name: udev
        hostPath:
          path: /run/udev
          type: Directory
      # mount /proc (to access mount file of process 1 of host) inside container
      # to read mount-point of disks and partitions
      - name: procmount
        hostPath:
          path: /proc
          type: Directory
      - name: sparsepath
        hostPath:
          path: /var/openebs/sparse
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-ndm-operator
  namespace: openebs
  labels:
    name: openebs-ndm-operator
    openebs.io/component-name: ndm-operator
    openebs.io/version: 1.5.0
spec:
  selector:
    matchLabels:
      name: openebs-ndm-operator
      openebs.io/component-name: ndm-operator
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: openebs-ndm-operator
        openebs.io/component-name: ndm-operator
        openebs.io/version: 1.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
        - name: node-disk-operator
          image: openebs/node-disk-operator-amd64:v0.4.5
          imagePullPolicy: Always
          readinessProbe:
            exec:
              command:
                - stat
                - /tmp/operator-sdk-ready
            initialDelaySeconds: 4
            periodSeconds: 10
            failureThreshold: 1
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # the service account of the ndm-operator pod
            - name: SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  fieldPath: spec.serviceAccountName
            - name: OPERATOR_NAME
              value: "node-disk-operator"
            - name: CLEANUP_JOB_IMAGE
              value: "openebs/linux-utils:1.5.0"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-admission-server
  namespace: openebs
  labels:
    app: admission-webhook
    openebs.io/component-name: admission-webhook
    openebs.io/version: 1.5.0
spec:
  replicas: 1
  strategy:
    type: Recreate
    rollingUpdate: null
  selector:
    matchLabels:
      app: admission-webhook
  template:
    metadata:
      labels:
        app: admission-webhook
        openebs.io/component-name: admission-webhook
        openebs.io/version: 1.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
        - name: admission-webhook
          image: openebs/admission-server:1.5.0
          imagePullPolicy: IfNotPresent
          args:
            - -alsologtostderr
            - -v=2
            - 2>&1
          env:
            - name: OPENEBS_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: ADMISSION_WEBHOOK_NAME
              value: "openebs-admission-server"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-localpv-provisioner
  namespace: openebs
  labels:
    name: openebs-localpv-provisioner
    openebs.io/component-name: openebs-localpv-provisioner
    openebs.io/version: 1.5.0
spec:
  selector:
    matchLabels:
      name: openebs-localpv-provisioner
      openebs.io/component-name: openebs-localpv-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: openebs-localpv-provisioner
        openebs.io/component-name: openebs-localpv-provisioner
        openebs.io/version: 1.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
      - name: openebs-provisioner-hostpath
        imagePullPolicy: Always
        image: openebs/provisioner-localpv:1.5.0
        env:
        # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
        # based on this address. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_K8S_MASTER
        #  value: "http://10.128.0.12:8080"
        # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
        # based on this config. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_KUBE_CONFIG
        #  value: "/home/ubuntu/.kube/config"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: OPENEBS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as
        # environment variable
        - name: OPENEBS_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: OPENEBS_IO_ENABLE_ANALYTICS
          value: "true"
        - name: OPENEBS_IO_INSTALLER_TYPE
          value: "openebs-operator"
        - name: OPENEBS_IO_HELPER_IMAGE
          value: "openebs/linux-utils:1.5.0"
        livenessProbe:
          exec:
            command:
            - pgrep
            - ".*localpv"
          initialDelaySeconds: 30
          periodSeconds: 60
---
  • Finally, apply this YAML:
kubectl apply -f openebs_demo.yaml
  • Check the created StorageClasses:
[root@master ~]#kubectl get sc
NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  16m
openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  16m
openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  16m
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  16m

  • Set the default StorageClass:
[root@master ~]#kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
[root@master ~]#kubectl get pod -n openebs
NAME                                          READY   STATUS    RESTARTS   AGE
maya-apiserver-85c5fdcd96-gqsg5               1/1     Running   0          15m
openebs-admission-server-84fbd958f7-b2frv     1/1     Running   0          15m
openebs-localpv-provisioner-bf57b7fb8-c9h2c   1/1     Running   0          15m
openebs-ndm-4xhkp                             1/1     Running   0          15m
openebs-ndm-9g7cv                             1/1     Running   0          15m
openebs-ndm-operator-6bc7d8f85b-dm7hg         1/1     Running   0          15m
openebs-ndm-rgbv5                             1/1     Running   0          15m
openebs-provisioner-776bd55fdf-snmkf          1/1     Running   0          15m
openebs-snapshot-operator-6f57874cf7-ntg7b    2/2     Running   0          15m
[root@master ~]#kubectl get sc
NAME                         PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device               openebs.io/local                                           Delete          WaitForFirstConsumer   false                  33m
openebs-hostpath (default)   openebs.io/local                                           Delete          WaitForFirstConsumer   false                  33m
openebs-jiva-default         openebs.io/provisioner-iscsi                               Delete          Immediate              false                  33m
openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  33m
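
To confirm that the default StorageClass can provision volumes, a minimal test sketch (the PVC name test-pvc is illustrative; because openebs-hostpath uses WaitForFirstConsumer, the PVC stays Pending until a Pod actually mounts it):

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-pvc
kubectl delete pvc test-pvc		# clean up after checking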

Note: do not re-apply the taint to the master at this point, otherwise some of the later Pods (openldap, redis) will fail to install; add the taint back only after the KubeSphere installation has completed.

5. Deploying KubeSphere

Official documentation

5.1 Run the following commands to start the installation
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
   
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
5.2 Check the installation logs
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

5.3 Check the running status

Use kubectl get pod --all-namespaces to check whether all Pods in the KubeSphere-related namespaces are running normally. If they are, check the console port (30880 by default) with the following command:

[root@master ~]#kubectl get pod --all-namespaces
NAMESPACE                      NAME                                               READY   STATUS    RESTARTS   AGE
default                        nginx-86c57db685-gxnmf                             1/1     Running   0          3h11m
kube-system                    coredns-9d85f5447-s8f77                            1/1     Running   0          3h33m
kube-system                    coredns-9d85f5447-t6fd4                            1/1     Running   0          3h33m
kube-system                    etcd-master                                        1/1     Running   0          3h33m
kube-system                    kube-apiserver-master                              1/1     Running   0          3h33m
kube-system                    kube-controller-manager-master                     1/1     Running   0          3h33m
kube-system                    kube-flannel-ds-amd64-c9mst                        1/1     Running   0          3h13m
kube-system                    kube-flannel-ds-amd64-hfmgw                        1/1     Running   0          3h21m
kube-system                    kube-flannel-ds-amd64-nsnc7                        1/1     Running   0          3h21m
kube-system                    kube-proxy-kcpcj                                   1/1     Running   0          3h23m
kube-system                    kube-proxy-kjsg8                                   1/1     Running   0          3h23m
kube-system                    kube-proxy-ms6rg                                   1/1     Running   0          3h33m
kube-system                    kube-scheduler-master                              1/1     Running   0          3h33m
kube-system                    snapshot-controller-0                              1/1     Running   0          4m58s
kube-system                    tiller-deploy-59665c97b6-h9bd8                     1/1     Running   0          106m
kubesphere-controls-system     default-http-backend-5d464dd566-r495w              1/1     Running   0          4m43s
kubesphere-controls-system     kubectl-admin-6c9bd5b454-zjns4                     1/1     Running   0          87s
kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running   0          3m47s
kubesphere-monitoring-system   alertmanager-main-1                                2/2     Running   0          3m47s
kubesphere-monitoring-system   alertmanager-main-2                                2/2     Running   0          3m47s
kubesphere-monitoring-system   kube-state-metrics-5c466fc7b6-jw9dn                3/3     Running   0          4m8s
kubesphere-monitoring-system   node-exporter-7tgmg                                2/2     Running   0          4m9s
kubesphere-monitoring-system   node-exporter-9vfgz                                2/2     Running   0          4m9s
kubesphere-monitoring-system   node-exporter-nblhm                                2/2     Running   0          4m9s
kubesphere-monitoring-system   notification-manager-deployment-7ff95b7544-nggdq   1/1     Running   0          2m4s
kubesphere-monitoring-system   notification-manager-deployment-7ff95b7544-x4n9h   1/1     Running   0          2m4s
kubesphere-monitoring-system   notification-manager-operator-5cbb58b756-hfhpc     2/2     Running   0          4m
kubesphere-monitoring-system   prometheus-k8s-0                                   3/3     Running   1          3m37s
kubesphere-monitoring-system   prometheus-k8s-1                                   3/3     Running   1          3m37s
kubesphere-monitoring-system   prometheus-operator-78c5cdbc8f-jmv6p               2/2     Running   0          4m10s
kubesphere-system              ks-apiserver-56d87c5f65-4h694                      1/1     Running   0          3m45s
kubesphere-system              ks-console-fb4c655cf-7f6r5                         1/1     Running   0          4m36s
kubesphere-system              ks-controller-manager-7547c5df5d-dm4dz             1/1     Running   0          3m44s
kubesphere-system              ks-installer-85854b8c8-jp5xg                       1/1     Running   0          7m9s
kubesphere-system              openldap-0                                         1/1     Running   0          4m51s
kubesphere-system              redis-6fd6c6d6f9-mjc7b                             1/1     Running   0          4m54s
openebs                        maya-apiserver-85c5fdcd96-gqsg5                    1/1     Running   0          28m
openebs                        openebs-admission-server-84fbd958f7-b2frv          1/1     Running   0          28m
openebs                        openebs-localpv-provisioner-bf57b7fb8-c9h2c        1/1     Running   0          28m
openebs                        openebs-ndm-4xhkp                                  1/1     Running   0          28m
openebs                        openebs-ndm-9g7cv                                  1/1     Running   0          28m
openebs                        openebs-ndm-operator-6bc7d8f85b-dm7hg              1/1     Running   0          28m
openebs                        openebs-ndm-rgbv5                                  1/1     Running   0          28m
openebs                        openebs-provisioner-776bd55fdf-snmkf               1/1     Running   0          28m
openebs                        openebs-snapshot-operator-6f57874cf7-ntg7b         2/2     Running   0          28m
[root@master ~]#kubectl get svc/ks-console -n kubesphere-system
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
ks-console   NodePort   10.101.95.222   <none>        80:30880/TCP   5m29s
5.4 Access via browser

Visit http://192.168.8.12:30880

  • Change the password:

5.5 View the cluster management UI

5.6 Re-apply the master taint
[root@master ~]#kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule
node/master tainted
[root@master ~]#kubectl describe node master | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
6. Pluggable components

Official documentation

Pluggable components can be enabled before or after KubeSphere is installed.
See the sample cluster-configuration.yaml file for more details.
Make sure the cluster has enough CPU and memory.
Installing these pluggable components is strongly recommended in order to experience the full range of features KubeSphere provides.
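
After installation, a component can be enabled by editing the ks-installer ClusterConfiguration; a sketch that turns on the DevOps component (field names follow the sample cluster-configuration.yaml; verify them against your version):

# Open the cluster configuration for editing
kubectl -n kubesphere-system edit clusterconfiguration ks-installer
# In the editor, set the desired component, e.g.:
#   devops:
#     enabled: true        # was false
# Then watch the installer reconcile the change
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f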
