Official site: https://kubernetes.io
Preparation

- Prepare five CentOS 7 virtual machines (2 CPU cores and 2 GB RAM each, with a 20 GB disk; 2 cores are mandatory, otherwise kubeadm reports an error). The machines are allocated as follows: three masters, k8s-master21/22/23 (192.168.2.21-23), and two workers, k8s-node51/52 (192.168.2.51/52), with 192.168.2.236 reserved as the Keepalived VIP.
- Set each machine's hostname to the corresponding name above; for example, on 192.168.2.21:
[root@localhost tools]# vim /etc/hostname
k8s-master21
- Edit /etc/hosts on all 5 machines, add the following host entries, then save and exit:

[root@localhost tools]# vim /etc/hosts
192.168.2.21 k8s-master21
192.168.2.22 k8s-master22
192.168.2.23 k8s-master23
192.168.2.51 k8s-node51
192.168.2.52 k8s-node52
- Set up passwordless SSH from the k8s-master21 node to every other node.
Generate an RSA key pair with the command below, pressing Enter three times; the keys are written to ~/.ssh/:

[root@k8s-master21 ~]# ssh-keygen -t rsa

Copy the public key to the other 4 machines; you will need to type yes and each machine's password:

[root@k8s-master21 ~]# for i in k8s-master22 k8s-master23 k8s-node51 k8s-node52;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

Verify passwordless login to one of the machines, e.g. node 51:

[root@k8s-master21 .ssh]# ssh k8s-node51
Last login: Mon Dec 27 09:34:34 2021 from 192.168.2.101
[root@k8s-node51 ~]#
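To confirm every node is reachable without a password in one pass, a loop like the sketch below can be used (it assumes the hostnames above already resolve via /etc/hosts; BatchMode makes ssh fail instead of prompting, so any error means the key copy failed for that node):

# print each node's hostname over SSH; a failure here means keys were not copied correctly
for i in k8s-master22 k8s-master23 k8s-node51 k8s-node52; do ssh -o BatchMode=yes $i hostname; done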
- On all nodes, disable the firewall, NetworkManager, and SELinux, with the following commands:
[root@k8s-master21 ~]# systemctl disable --now firewalld
[root@k8s-master21 ~]# systemctl disable --now NetworkManager
- Disable SELinux by setting SELINUX=disabled:
[root@k8s-master21 ~]# vim /etc/sysconfig/selinux
SELINUX=disabled

Turn SELinux off for the current boot, no restart needed:

[root@k8s-master21 ~]# setenforce 0

Check the SELinux status (Disabled appears after the config change takes effect on reboot):

[root@k8s-master21 ~]# getenforce
Disabled
- Because swap hurts Docker performance, it is normally disabled:
First check the swap usage with free -g or cat /proc/swaps:

[root@k8s-master21 ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:              1           0           1           0           0           1
Swap:             1           0           1
[root@k8s-master21 ~]# cat /proc/swaps
Filename                                Type            Size     Used    Priority
/dev/dm-1                               partition       1576956  0       -2

Disable swap temporarily:

[root@k8s-master21 ~]# swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0

Check again with free -g or cat /proc/swaps; the output below shows swap is now temporarily off:

[root@k8s-master21 ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:              1           0           1           0           0           1
Swap:             0           0           0
[root@k8s-master21 ~]# cat /proc/swaps
Filename                                Type            Size     Used    Priority
To disable swap permanently, comment out the swap entry in /etc/fstab (the line whose type field is swap).
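A non-interactive way to comment that entry out, as a sketch (it assumes the swap line is the only uncommented fstab line containing a swap field; double-check the file afterwards):

# prefix every uncommented fstab line with a swap field with '#', then verify
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
grep swap /etc/fstab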
After a reboot, check swap again with free -g or cat /proc/swaps; the output below means swap is permanently off:
[root@k8s-master21 ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:              1           0           1           0           0           1
Swap:             0           0           0
[root@k8s-master21 ~]# cat /proc/swaps
Filename                                Type            Size     Used    Priority
- Set ulimit on all nodes:
First check the current ulimit value:

[root@k8s-master21 ~]# ulimit -n
1024

Apply a higher limit for the current session:

[root@k8s-master21 ~]# ulimit -SHn 65535

To make it permanent, add the following two lines:

[root@k8s-master21 ~]# vim /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
- Set up time synchronization with ntpdate:
First check whether ntp is installed:

[root@k8s-master21 ~]# rpm -qa ntp

If it is not, install it:

[root@k8s-master21 ~]# yum install ntp -y

Set the time zone first:

[root@k8s-master21 ~]# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@k8s-master21 ~]# echo "Asia/Shanghai" > /etc/timezone

Then sync against Aliyun's time server:

[root@k8s-master21 ~]# ntpdate time2.aliyun.com
26 Dec 15:35:57 ntpdate[7043]: step time server 203.107.6.88 offset -8.227227 sec

Add the sync to the system cron (run crontab -e, add the line below, save and exit):

[root@k8s-master21 ~]# crontab -e
*/5 * * * * ntpdate time2.aliyun.com

Finally, run a sync at boot by adding ntpdate time2.aliyun.com to /etc/rc.local, then save and exit:

[root@k8s-master21 ~]# vim /etc/rc.local
ntpdate time2.aliyun.com
- Some Kubernetes packages live in yum repositories hosted abroad, which are slow or unreachable from China, so first point CentOS 7 at a domestic mirror; here we use Aliyun's repositories (reference: https://blog.csdn.net/lizz2276/article/details/110533287).
First list the current yum repositories: yum repolist
Back up the default repo file:
[root@k8s-master21 ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
Then download Centos-7.repo and make it the default repo file:
[root@k8s-master21 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
Run yum makecache to rebuild the cache; the repository listing now shows the Aliyun mirror as the default yum source.
Then remove the Aliyun-intranet mirror entries (mirrors.cloud.aliyuncs.com and mirrors.aliyuncs.com) from CentOS-Base.repo so that only the public mirrors.aliyun.com remains:
[root@k8s-master21 ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
- Install the required tools:
[root@k8s-master21 ~]# yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
- Add the Docker repository to yum:
[root@k8s-master21 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
- Create a custom Kubernetes repo file, kubernetes.repo:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- Kernel upgrade
First check the kernel version; the default is 3.10:
Check with uname:

[root@k8s-master21 ~]# uname -a
Linux k8s-master21 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Or check with grubby:

[root@k8s-master21 ~]# grubby --default-kernel
/boot/vmlinuz-3.10.0-957.el7.x86_64
In the home directory /root, download the 4.19 kernel RPMs (on the k8s-master21 node only):
[root@k8s-master21 ~]# cd /root
[root@k8s-master21 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
[root@k8s-master21 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
Copy them from k8s-master21 to the other nodes:
[root@k8s-master21 ~]# for i in k8s-master22 k8s-master23 k8s-node51 k8s-node52;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
Install the kernel on all nodes:
[root@k8s-master21 ~]# cd /root && yum localinstall -y kernel-ml*
Change the kernel boot order on all nodes (the default is still 3.10) with these two commands:
[root@k8s-master21 ~]# grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
[root@k8s-master21 ~]# grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
Check that the default kernel is now 4.19:
[root@k8s-master21 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
The default kernel is now 4.19, but uname -a still reports 3.10; a reboot is required for it to take effect:
Reboot all nodes:

[root@k8s-master21 ~]# reboot
[root@k8s-master21 ~]# uname -a
- Install ipvsadm on all nodes:
yum install ipvsadm ipset sysstat conntrack libseccomp -y
Configure the IPVS modules on all nodes. On kernel 4.19+ the module nf_conntrack_ipv4 has been renamed nf_conntrack; on kernels below 4.19, keep using nf_conntrack_ipv4:
Create an ipvs.conf file with the following content:

vim /etc/modules-load.d/ipvs.conf
# add the following modules
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
Load the IPVS module configuration:
systemctl enable --now systemd-modules-load.service
Check that the modules are loaded:
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
Enable the kernel parameters a Kubernetes cluster needs; configure the following on all nodes:
Create a k8s.conf file and load it with sysctl --system:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
Then verify the IPVS and conntrack modules are still loaded:
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
All of the preparation steps above are now complete; next we install the Docker container runtime.
Installation

Install Docker

- List all available Docker versions:
yum list docker-ce.x86_64 --showduplicates | sort -r
- Check whether a Docker version is already installed:
[root@k8s-master21 ~]# yum list installed | grep docker
containerd.io.x86_64          1.4.3-3.1.el7      @docker-ce-stable
docker-ce.x86_64              3:20.10.12-3.el7   @docker-ce-stable
docker-ce-cli.x86_64          1:20.10.12-3.el7   @docker-ce-stable
docker-scan-plugin.x86_64     0.12.0-3.el7       @docker-ce-stable
- Remove the preinstalled Docker version:
yum -y remove docker-ce.x86_64 docker-ce-cli.x86_64 docker-scan-plugin.x86_64
- Install Docker CE 19.03 on all nodes; nothing newer is needed, as this is the version validated upstream:
yum install -y docker-ce-19.03.*
- Check the installed Docker version:
[root@k8s-master21 ~]# yum list installed | grep docker
containerd.io.x86_64          1.4.3-3.1.el7      @docker-ce-stable
docker-ce.x86_64              3:19.03.15-3.el7   @docker-ce-stable
docker-ce-cli.x86_64          1:20.10.12-3.el7   @docker-ce-stable
docker-scan-plugin.x86_64     0.12.0-3.el7       @docker-ce-stable
- Newer kubelet releases recommend the systemd cgroup driver, so switch Docker's CgroupDriver to systemd:
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
- Verify the Docker configuration (Cgroup Driver should now show systemd):
docker info
- Enable Docker at boot on all nodes:
systemctl daemon-reload && systemctl enable --now docker

Install the kubeadm components
- Check the latest Kubernetes version on all nodes:
yum list kubeadm.x86_64 --showduplicates | sort -r
- Install the 1.20.x versions of kubeadm, kubelet, and kubectl on all nodes:
yum install kubeadm-1.20* kubelet-1.20* kubectl-1.20* -y
- The default pause image comes from the gcr.io registry, which may be unreachable from China, so configure kubelet to use Aliyun's pause image instead:
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF
- Enable kubelet at boot:
systemctl daemon-reload && systemctl enable --now kubelet

Install the kubeadm HA components HAProxy and Keepalived
- Install HAProxy and Keepalived via yum on all master nodes:
yum install keepalived haproxy -y
- Configure HAProxy on all master nodes (see the HAProxy documentation for details; the HAProxy configuration is identical on every master):
vim /etc/haproxy/haproxy.cfg
- Delete the whole file (quick vim command: ggdG), then add the following, making sure the first line global was copied completely:
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master21 192.168.2.21:6443 check
  server k8s-master22 192.168.2.22:6443 check
  server k8s-master23 192.168.2.23:6443 check
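Before starting the service, the file can be validated in HAProxy's check mode; this catches copy-paste damage early and prints "Configuration file is valid" on success:

# parse the config without actually starting the proxy
haproxy -c -f /etc/haproxy/haproxy.cfg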
- Configure Keepalived on all master nodes. The configuration differs per node; mind each node's IP and NIC name (the interface parameter). Find the NIC name (ens33 here) with:
ip a
- On all master nodes, create the /etc/keepalived directory and a keepalived.conf configuration file:
mkdir /etc/keepalived
vim /etc/keepalived/keepalived.conf
- Configuration for the master21 node. Delete the whole file (quick vim command: ggdG), then add the following, making sure the first line was copied completely:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.2.21
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.236
    }
    track_script {
        chk_apiserver
    }
}

Configuration for the master22 node; delete the whole file (ggdG), then add the following:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.2.22
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.236
    }
    track_script {
        chk_apiserver
    }
}

Configuration for the master23 node; delete the whole file (ggdG), then add the following:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.2.23
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.236
    }
    track_script {
        chk_apiserver
    }
}
- Configure the Keepalived health-check script on all master nodes:
vim /etc/keepalived/check_apiserver.sh

Add the following, making sure the first line was copied completely:
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Note: Keepalived provides a virtual IP (VIP) that is assigned to one master node. HAProxy, listening on port 16443, reverse-proxies to the three master nodes, so the API server is reachable at the VIP address on port 16443.
The health check monitors HAProxy's status; after three consecutive failures it stops Keepalived, and the VIP fails over to another node. Grant the script execute permission:
chmod +x /etc/keepalived/check_apiserver.sh
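The script can also be exercised by hand before relying on it; once haproxy is running it should print 0, and with haproxy stopped it should print 1 (and would stop keepalived, so run this test before starting keepalived):

# run the health check manually and show its exit code
bash /etc/keepalived/check_apiserver.sh; echo $?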
- Start HAProxy:
systemctl daemon-reload && systemctl enable --now haproxy

Check that port 16443 is listening:
netstat -lntp
- Start Keepalived:
systemctl enable --now keepalived

Check the system log for lines like "Sending gratuitous ARP on ens33 for 192.168.2.236":
tail -f /var/log/messages
cat /var/log/messages | grep 'ens33' -5
- Check the IP addresses:
ip a

You can see 192.168.2.236 bound to master21; the other two masters do not hold it.
- Test the VIP:
ping 192.168.2.236 -c 4
telnet 192.168.2.236 16443

If the VIP does not answer ping, or telnet does not show "Escape character is '^]'", the VIP is not usable; do not continue, and troubleshoot Keepalived first.
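Failover can also be rehearsed at this point; the sketch below assumes you run the first command on master21 while watching another master, and that you restart both services afterwards:

# on master21: stop haproxy so check_apiserver.sh fails three times and keepalived releases the VIP
systemctl stop haproxy
# on master22 or master23: after roughly 15 seconds the VIP should appear here
ip a | grep 192.168.2.236
# back on master21: restore both services (the health check stopped keepalived)
systemctl start haproxy && systemctl start keepalived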
Kubeadm cluster initialization

- Before initializing, re-check the basics: firewall and SELinux state, HAProxy and Keepalived status, listening ports, and so on:
- On all nodes the firewall must be disabled and inactive: systemctl status firewalld
- On all nodes SELinux must be disabled: getenforce
- On master nodes, check HAProxy and Keepalived status: systemctl status keepalived haproxy
- On master nodes, check the listening ports: netstat -lntp
Official initialization docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
In production some settings must be changed, because the defaults can cause subnet conflicts, so we initialize from a configuration file.
- Create kubeadm-config.yaml on all master nodes (it is mainly used on master21; the other two masters only use master21's configuration to pre-pull images):
vim /root/kubeadm-config.yaml

Note: for a non-HA cluster, change 192.168.2.236:16443 to master21's address and 16443 to the apiserver port (default 6443). Also change v1.20.14 to your own server's kubeadm version, shown by: kubeadm version
In the file below, the host subnet, podSubnet, and serviceSubnet must not overlap; see the HA Kubernetes cluster plan earlier.
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.21
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master21
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.2.236
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.2.236:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.14
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/12
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Note: if this kubeadm file needs updating to a newer schema, use: kubeadm config migrate --old-config /root/kubeadm-config.yaml --new-config /root/new.yaml. Check your installed kubeadm version with kubeadm version (GitVersion: "v1.20.x"), and change kubernetesVersion: v1.20.14 in the file to match your own version.
- Pre-pull the images on all master nodes to save time during initialization:
kubeadm config images pull --config /root/kubeadm-config.yaml

Because the Aliyun mirror is configured (imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers), the pull is much faster than the default gcr.io registry, which is unreachable from China.
Because a token lifetime is configured (ttl: 24h0m0s), a token generated today may no longer allow joining the cluster tomorrow.
The master nodes are also given a taint (taints), which keeps regular workloads off the masters (see the sketch after these notes for removing it in a test environment).
criSocket is the socket used to talk to Docker. dockershim was deprecated in Kubernetes 1.20 and is no longer maintained upstream (someone may pick it up later); it can also be swapped for another CRI runtime.
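If, in a test environment, you later want pods scheduled on the masters, the taint from the config above can be removed with the standard kubectl command for this taint key:

# remove the NoSchedule master taint from every node that has it (test environments only)
kubectl taint nodes --all node-role.kubernetes.io/master-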
- Initialize the master21 node. Initialization generates the certificates and configuration files under /etc/kubernetes; afterwards the other masters join master21:
kubeadm init --config /root/kubeadm-config.yaml --upload-certs

kubeadm manages its configuration through pods: every component runs as a container, started from the YAML files under /etc/kubernetes/manifests. That directory is managed by the kubelet; drop a pod YAML file in there and the kubelet manages that pod's lifecycle.
Enter the directory to see the files:

cd /etc/kubernetes/manifests
ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

Unlike a binary installation, kubeadm keeps all component configuration in these YAML files, which can be opened and inspected; a binary install uses standalone service files instead. After changing a manifest, do not try to apply it by hand: the kubelet automatically reloads the configuration and restarts the container.
Note: if initialization fails, reset and initialize again: kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube
- A successful init prints token values that other nodes need when joining, so record them:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.2.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:f35df68d5af85ae073b62ca668b0c8cd8b43fbf85a2be223cf41ac8f60772c17 \
    --control-plane --certificate-key fd756a1fa6ef431057721cf86f5a8e42a089002f0399d1e3cece67e4b9d9a142

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:f35df68d5af85ae073b62ca668b0c8cd8b43fbf85a2be223cf41ac8f60772c17
- Configure environment variables on master21 for accessing the Kubernetes cluster:
cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc

The cluster-management command kubectl is only needed on one machine, which may or may not be a cluster node; it talks to Kubernetes through the admin.conf file. The KUBECONFIG variable points at that file, and with it set we can operate the cluster.
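As an aside, the same admin.conf lets kubectl run from any machine; a minimal sketch, assuming kubectl is installed there and the (hypothetical) workstation can reach the VIP:

# copy the kubeconfig off the master and point kubectl at it explicitly
scp root@192.168.2.21:/etc/kubernetes/admin.conf ./admin.conf
kubectl --kubeconfig ./admin.conf get nodes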
- Check the node status:
kubectl get nodes

The node has been given the control-plane role. It shows NotReady because no CNI network plugin is running yet (Calico is installed later):
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master21   NotReady   control-plane,master   5m23s   v1.20.14
- Check the services:
kubectl get svc

The following service exists:
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7m52s

With this initialization-based install, all system components run as containers in the kube-system namespace. For production workloads, creating a dedicated namespace is recommended.
- Now check the pod status:
kubectl get pods -n kube-system -o wide

The following pods exist:
NAME                                   READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
coredns-54d67798b7-pk9d8               0/1     Pending   0          8m50s   <none>         <none>         <none>           <none>
coredns-54d67798b7-w7ddj               0/1     Pending   0          8m50s   <none>         <none>         <none>           <none>
etcd-k8s-master21                      1/1     Running   0          8m44s   192.168.2.21   k8s-master21   <none>           <none>
kube-apiserver-k8s-master21            1/1     Running   0          8m44s   192.168.2.21   k8s-master21   <none>           <none>
kube-controller-manager-k8s-master21   1/1     Running   0          8m44s   192.168.2.21   k8s-master21   <none>           <none>
kube-proxy-fts8c                       1/1     Running   0          8m49s   192.168.2.21   k8s-master21   <none>           <none>
kube-scheduler-k8s-master21            1/1     Running   0          8m44s   192.168.2.21   k8s-master21   <none>           <none>
- On master22, join it to the cluster:
kubeadm join 192.168.2.236:16443 --token 7t2weq.bjbawausm0jaxury --discovery-token-ca-cert-hash sha256:f35df68d5af85ae073b62ca668b0c8cd8b43fbf85a2be223cf41ac8f60772c17 --control-plane --certificate-key fd756a1fa6ef431057721cf86f5a8e42a089002f0399d1e3cece67e4b9d9a142

Note: if the token has expired, a new one must be generated:
The following steps are needed only if the token produced by the init command above has expired; skip them otherwise.
Generate a new token after expiry:

kubeadm token create --print-join-command

Masters additionally need a new --certificate-key:
kubeadm init phase upload-certs --upload-certs
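Putting the two outputs together, a fresh control-plane join command looks like the sketch below; the token, hash, and key are placeholders for the values the two commands above just printed:

# worker join command printed by 'kubeadm token create --print-join-command', plus the new certificate key
kubeadm join 192.168.2.236:16443 --token <new-token> \
  --discovery-token-ca-cert-hash sha256:<new-hash> \
  --control-plane --certificate-key <new-certificate-key>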
Check the nodes from master21:

kubectl get nodes

master22 is now listed:
NAME           STATUS     ROLES                  AGE   VERSION
k8s-master21   NotReady   control-plane,master   17m   v1.20.14
k8s-master22   NotReady   control-plane,master   61s   v1.20.14
- Likewise, on master23, join it to the cluster:
kubeadm join 192.168.2.236:16443 --token 7t2weq.bjbawausm0jaxury --discovery-token-ca-cert-hash sha256:f35df68d5af85ae073b62ca668b0c8cd8b43fbf85a2be223cf41ac8f60772c17 --control-plane --certificate-key fd756a1fa6ef431057721cf86f5a8e42a089002f0399d1e3cece67e4b9d9a142

Note: again, if the token has expired, regenerate it first.
Check the nodes from master21:
kubectl get nodes

master23 is now listed:
k8s-master21   NotReady   control-plane,master   22m     v1.20.14
k8s-master22   NotReady   control-plane,master   5m42s   v1.20.14
k8s-master23   NotReady   control-plane,master   62s     v1.20.14

Joining worker nodes with kubeadm

Worker nodes are where the business applications run. In production, master nodes should run nothing but system components; in a test environment, scheduling pods on masters is acceptable to save resources.
- Join the worker nodes k8s-node51 and k8s-node52 to the cluster (the same command as for masters, minus --control-plane):
kubeadm join 192.168.2.236:16443 --token 7t2weq.bjbawausm0jaxury --discovery-token-ca-cert-hash sha256:f35df68d5af85ae073b62ca668b0c8cd8b43fbf85a2be223cf41ac8f60772c17

After all nodes have joined, check the cluster state:
kubectl get nodes

All nodes are now listed:
NAME           STATUS     ROLES                  AGE   VERSION
k8s-master21   NotReady   control-plane,master   37m   v1.20.14
k8s-master22   NotReady   control-plane,master   21m   v1.20.14
k8s-master23   NotReady   control-plane,master   16m   v1.20.14
k8s-node51     NotReady   <none>                 38s   v1.20.14
k8s-node52     NotReady   <none>                 5s    v1.20.14

Calico configuration

The following steps are performed on master21 only.
- Download the installation source files:
cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git

If the clone fails on the server, fetch the zip of the manual-installation-v1.20.x branch from https://github.com/dotbalo/k8s-ha-install.git on a local machine, upload it to the server, unpack it, and rename the directory to k8s-ha-install:
unzip k8s-ha-install-manual-installation-v1.20.x.zip
mv k8s-ha-install-manual-installation-v1.20.x k8s-ha-install

Enter the calico directory:
cd /root/k8s-ha-install/calico/
- Modify calico-etcd.yaml in the following places:
Set the etcd endpoints:

sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.2.21:2379,https://192.168.2.22:2379,https://192.168.2.23:2379"#g' calico-etcd.yaml

Update the default certificate settings:
ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

Set the certificate paths. The etcd key goes into a Secret, which is mounted into the calico pod at /calico-secrets; with these paths calico can find the certificates, connect to etcd, and store pod information there:
sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml修改 pod 网段
POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`

Note: the next step changes the subnet under CALICO_IPV4POOL_CIDR in calico-etcd.yaml to your own pod subnet, i.e. it replaces 192.168.x.x/16 with the cluster's subnet and uncomments the lines. Make sure this particular 192.168.0.0/16 was not clobbered by some earlier global replace; if it was, restore it first:
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

Check the file:
cat calico-etcd.yaml

The etcd-key field is now populated: the certificate files under /etc/kubernetes/pki/etcd/ were read out, base64-encoded, and inserted at that position.
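Two targeted checks, instead of eyeballing the whole file, might look like the sketch below (base64 -d and the grep patterns are the only assumptions; the field names come from the sed commands above):

# confirm the pod subnet landed under CALICO_IPV4POOL_CIDR
grep -A1 'name: CALICO_IPV4POOL_CIDR' calico-etcd.yaml
# decode the first line of the embedded CA to confirm it is a PEM certificate
grep 'etcd-ca:' calico-etcd.yaml | awk '{print $2}' | base64 -d | head -1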
- Install Calico:
kubectl apply -f calico-etcd.yaml

Check the container status:
kubectl get po -n kube-system

Everything is running:
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5f6d4b864b-6fg6x   1/1     Running   0          118s
calico-node-8g8qm                          1/1     Running   0          118s
calico-node-ftcf9                          1/1     Running   0          118s
calico-node-g2w62                          1/1     Running   0          118s
calico-node-lfzxn                          1/1     Running   0          118s
calico-node-tm72q                          1/1     Running   0          118s

The images currently come from the Aliyun mirror; in production, using your own image registry is recommended for better speed.
Metrics Server & Dashboard installation

Metrics Server installation

In newer Kubernetes versions, system resource metrics are collected by metrics-server, which can report node and pod memory, disk, CPU, and network usage.
GitHub: https://github.com/kubernetes-sigs/metrics-server
The following steps are all performed on master21.
- Review the manifest comp.yaml:
cd /root/k8s-ha-install/metrics-server-0.4.x-kubeadm
cat comp.yaml

Check that the certificate is configured, otherwise metrics may not be collected; the correct setting looks like:
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt # change to front-proxy-ca.crt for kubeadm

Also check that the image address has been switched to the Aliyun mirror; the correct setting is:
image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:v0.4.1
- Copy front-proxy-ca.crt from master21 to all worker nodes:
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node51:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node52:/etc/kubernetes/pki/front-proxy-ca.crt
- Install metrics-server:
cd /root/k8s-ha-install/metrics-server-0.4.x-kubeadm/
kubectl create -f comp.yaml
- Check that it works:
kubectl top node

CPU and memory usage are displayed:
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master21   91m          4%     1139Mi          60%
k8s-master22   109m         5%     1188Mi          63%
k8s-master23   86m          4%     1085Mi          57%
k8s-node51     41m          2%     682Mi           36%
k8s-node52     43m          2%     649Mi           34%

Dashboard installation

The Dashboard displays the cluster's resources; it can also tail pod logs and run commands inside containers in real time.
GitHub: https://github.com/kubernetes/dashboard
The following steps are all performed on master21.
- Install the bundled Dashboard version:
cd /root/k8s-ha-install/dashboard/
grep "image" dashboard.yaml

Only the image addresses have been changed:
image: registry.cn-beijing.aliyuncs.com/dotbalo/dashboard:v2.0.4
imagePullPolicy: Always
image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-scraper:v1.0.4

Install the Dashboard:
kubectl create -f .

Note: the latest version is available via the official GitHub releases, but installing the newest is not necessary:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

Check that all pods have started:
kubectl get po --all-namespaces

The pod list looks like this:
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-5f6d4b864b-6fg6x     1/1     Running   0          177m
kube-system            calico-node-8g8qm                            1/1     Running   0          177m
kube-system            calico-node-ftcf9                            1/1     Running   0          177m
kube-system            calico-node-g2w62                            1/1     Running   0          177m
kube-system            calico-node-lfzxn                            1/1     Running   0          177m
kube-system            calico-node-tm72q                            1/1     Running   0          177m
kube-system            coredns-54d67798b7-pk9d8                     1/1     Running   0          4h29m
kube-system            coredns-54d67798b7-w7ddj                     1/1     Running   0          4h29m
kube-system            etcd-k8s-master21                            1/1     Running   0          4h28m
kube-system            etcd-k8s-master22                            1/1     Running   0          4h12m
kube-system            etcd-k8s-master23                            1/1     Running   0          4h7m
kube-system            kube-apiserver-k8s-master21                  1/1     Running   0          4h28m
kube-system            kube-apiserver-k8s-master22                  1/1     Running   0          4h12m
kube-system            kube-apiserver-k8s-master23                  1/1     Running   0          4h7m
kube-system            kube-controller-manager-k8s-master21         1/1     Running   1          4h28m
kube-system            kube-controller-manager-k8s-master22         1/1     Running   0          4h12m
kube-system            kube-controller-manager-k8s-master23         1/1     Running   0          4h7m
kube-system            kube-proxy-9s5dm                             1/1     Running   0          3h51m
kube-system            kube-proxy-fts8c                             1/1     Running   0          4h28m
kube-system            kube-proxy-g4jbb                             1/1     Running   0          4h12m
kube-system            kube-proxy-mb77q                             1/1     Running   0          3h51m
kube-system            kube-proxy-xqnbj                             1/1     Running   0          4h7m
kube-system            kube-scheduler-k8s-master21                  1/1     Running   1          4h28m
kube-system            kube-scheduler-k8s-master22                  1/1     Running   0          4h12m
kube-system            kube-scheduler-k8s-master23                  1/1     Running   0          4h7m
kube-system            metrics-server-545b8b99c6-zp7sq              1/1     Running   0          117m
kubernetes-dashboard   dashboard-metrics-scraper-7645f69d8c-8l9g5   1/1     Running   0          2m26s
kubernetes-dashboard   kubernetes-dashboard-78cb679857-scxd7        1/1     Running   0          2m26s
- Verify that the whole cluster network can communicate.
The kubernetes cluster IP:

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4h39m

The kube-system cluster IPs:
kubectl get svc -n kube-system
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   4h40m
metrics-server   ClusterIP   10.110.41.23   <none>        443/TCP                  137m

Use telnet to verify connectivity; the output below shows the kubernetes cluster IP 10.96.0.1 is reachable:
[root@k8s-master21 dashboard]# telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

The output below shows the kube-dns cluster IP 10.96.0.10 is reachable:
[root@k8s-master21 dashboard]# telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.
Connection closed by foreign host.

The output below shows the metrics-server cluster IP 10.110.41.23 is reachable:
[root@k8s-master21 dashboard]# telnet 10.110.41.23 443
Trying 10.110.41.23...
Connected to 10.110.41.23.
Escape character is '^]'.

Verify pod-to-pod communication; first list all pods' network details:
[root@k8s-master21 dashboard]# kubectl get po --all-namespaces -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE     IP                NODE           NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-5f6d4b864b-6fg6x     1/1     Running   0          3h15m   192.168.2.22      k8s-master22   <none>           <none>
kube-system            calico-node-8g8qm                            1/1     Running   0          3h15m   192.168.2.22      k8s-master22   <none>           <none>
kube-system            calico-node-ftcf9                            1/1     Running   0          3h15m   192.168.2.23      k8s-master23   <none>           <none>
kube-system            calico-node-g2w62                            1/1     Running   0          3h15m   192.168.2.21      k8s-master21   <none>           <none>
kube-system            calico-node-lfzxn                            1/1     Running   0          3h15m   192.168.2.51      k8s-node51     <none>           <none>
kube-system            calico-node-tm72q                            1/1     Running   0          3h15m   192.168.2.52      k8s-node52     <none>           <none>
kube-system            coredns-54d67798b7-pk9d8                     1/1     Running   0          4h47m   172.171.81.65     k8s-master22   <none>           <none>
kube-system            coredns-54d67798b7-w7ddj                     1/1     Running   0          4h47m   172.166.236.129   k8s-master21   <none>           <none>
kube-system            etcd-k8s-master21                            1/1     Running   0          4h47m   192.168.2.21      k8s-master21   <none>           <none>
kube-system            etcd-k8s-master22                            1/1     Running   0          4h30m   192.168.2.22      k8s-master22   <none>           <none>
kube-system            etcd-k8s-master23                            1/1     Running   0          4h26m   192.168.2.23      k8s-master23   <none>           <none>
kube-system            kube-apiserver-k8s-master21                  1/1     Running   0          4h47m   192.168.2.21      k8s-master21   <none>           <none>
kube-system            kube-apiserver-k8s-master22                  1/1     Running   0          4h30m   192.168.2.22      k8s-master22   <none>           <none>
kube-system            kube-apiserver-k8s-master23                  1/1     Running   0          4h26m   192.168.2.23      k8s-master23   <none>           <none>
kube-system            kube-controller-manager-k8s-master21         1/1     Running   1          4h47m   192.168.2.21      k8s-master21   <none>           <none>
kube-system            kube-controller-manager-k8s-master22         1/1     Running   0          4h30m   192.168.2.22      k8s-master22   <none>           <none>
kube-system            kube-controller-manager-k8s-master23         1/1     Running   0          4h26m   192.168.2.23      k8s-master23   <none>           <none>
kube-system            kube-proxy-9s5dm                             1/1     Running   0          4h10m   192.168.2.51      k8s-node51     <none>           <none>
kube-system            kube-proxy-fts8c                             1/1     Running   0          4h47m   192.168.2.21      k8s-master21   <none>           <none>
kube-system            kube-proxy-g4jbb                             1/1     Running   0          4h30m   192.168.2.22      k8s-master22   <none>           <none>
kube-system            kube-proxy-mb77q                             1/1     Running   0          4h9m    192.168.2.52      k8s-node52     <none>           <none>
kube-system            kube-proxy-xqnbj                             1/1     Running   0          4h26m   192.168.2.23      k8s-master23   <none>           <none>
kube-system            kube-scheduler-k8s-master21                  1/1     Running   1          4h47m   192.168.2.21      k8s-master21   <none>           <none>
kube-system            kube-scheduler-k8s-master22                  1/1     Running   0          4h30m   192.168.2.22      k8s-master22   <none>           <none>
kube-system            kube-scheduler-k8s-master23                  1/1     Running   0          4h26m   192.168.2.23      k8s-master23   <none>           <none>
kube-system            metrics-server-545b8b99c6-zp7sq              1/1     Running   0          136m    172.171.55.65     k8s-node52     <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-7645f69d8c-8l9g5   1/1     Running   0          20m     172.171.55.66     k8s-node52     <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-78cb679857-scxd7        1/1     Running   0          20m     172.175.67.194    k8s-node51     <none>           <none>

From every node, ping 172.171.81.65 (the IP of coredns-54d67798b7-pk9d8); the output below means all nodes can reach it:
[root@k8s-master21 dashboard]# ping 172.171.81.65
PING 172.171.81.65 (172.171.81.65) 56(84) bytes of data.
64 bytes from 172.171.81.65: icmp_seq=1 ttl=63 time=2.61 ms
64 bytes from 172.171.81.65: icmp_seq=2 ttl=63 time=0.170 ms
64 bytes from 172.171.81.65: icmp_seq=3 ttl=63 time=0.205 ms

Also verify that one pod can reach another from inside. From master21, exec into calico-node-ftcf9, the pod running on master23 (192.168.2.23):
kubectl exec -it calico-node-ftcf9 -n kube-system -- /bin/bash

Inside calico-node-ftcf9, run ping 172.175.67.194; the output below shows pod-to-pod traffic works as well:
[root@k8s-master23 /]# ping 172.175.67.194
PING 172.175.67.194 (172.175.67.194) 56(84) bytes of data.
64 bytes from 172.175.67.194: icmp_seq=1 ttl=63 time=0.494 ms
64 bytes from 172.175.67.194: icmp_seq=2 ttl=63 time=0.126 ms
64 bytes from 172.175.67.194: icmp_seq=3 ttl=63 time=0.287 ms
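Cluster DNS can be spot-checked the same way; a sketch, assuming a pod image that ships nslookup (busybox:1.28 here is a hypothetical choice, not part of the original setup):

# resolve the kubernetes service through kube-dns (10.96.0.10) from a throwaway pod
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default.svc.cluster.local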
- Change the Dashboard service type to NodePort:
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change ClusterIP to NodePort and save (skip this step if it is already NodePort), then look up the port number:
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard

The service on 10.108.188.240 is exposed on port 30122:
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.108.188.240   <none>        443:30122/TCP   52m

Check that the containers have started:
kubectl get po -A

Using your own instance's port number, the Dashboard can be reached at IP:port via any host running kube-proxy, or via the VIP:
Open the Dashboard at https://192.168.2.21:30122 (replace 30122 with your own port) and choose the token login method; any node's IP works.
Check the port usage:

[root@k8s-master21 dashboard]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:30122           0.0.0.0:*               LISTEN      8773/kube-proxy

What NodePort does is open port 30122 on the host and map it to the Dashboard; every server opens this port, and every one of them can serve the Dashboard.
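This is easy to confirm from the command line; a sketch that assumes curl is available on the nodes (-k skips the self-signed certificate check):

# any node, master or worker, should answer on the NodePort
curl -k https://192.168.2.52:30122
curl -k https://192.168.2.236:30122   # the VIP works too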
- Create an administrator user:

vim /root/admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Apply the file to create the user:
kubectl apply -f admin.yaml -n kube-system
- Log in to the Dashboard
In a browser, any node's IP in the cluster works, e.g. machine 21: https://192.168.2.21:30122/
The self-signed certificate triggers a security warning, which can simply be clicked through. Alternatively, add startup flags to the Chrome shortcut (right-click > Properties > Shortcut > Target) to suppress the error:

"C:\Program Files\Google\Chrome\Application\chrome.exe" --test-type --ignore-certificate-errors

Look up the token value:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
The token value:

token: eyJhbGciOiJSUzI1NiIsImtpZCI6InpXR0Q2TXdnYlhFRnZxWWw5QmwxM3d1V284cHNHUm5OME5pR3JoS0lxZTQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTU1cDRqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3NTliNzY5NC04ODIwLTRmYTItOTk2OC05NTE5Y2RmNGM4YzYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.eOwNypS-yTPAQcJTxeoLamzDYdvCWyAATa6ySgwvlZBfWxwUhrFbV1sTNRp1ToIAvBUTjSiDfIsP9-VVhkxt_eFKbDLsWCavHw4BMkQMmZwg9f2jR04AE9Q9LQRXrkXgUnvqLDYjFVqR-H0Jn6K8i91oUVjuINYc5mvNeG-nenNV4sQ0ASU6BGpbOcQaPzjv7L62iRNDqn-qFJXMokSWpBPKLr-NOKPHEkdZaA4TDhggoHmPPS-0xe5sDx0gqnzcSKYxwvMzDCwAXVMXBHZPRuVCcd1S__c_JxSaInGfsS5y_LW7IojvYB4Twn1N1Toi1A-wCRC6wZrmuyAlAxKOvw

Paste the token into the token field and click Sign in to access the Dashboard.
- Check that all pods are up:
kubectl get po --all-namespaces

The output below shows all pods running normally:
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-5f6d4b864b-6fg6x     1/1     Running   0          4h15m
kube-system            calico-node-8g8qm                            1/1     Running   0          4h15m
kube-system            calico-node-ftcf9                            1/1     Running   0          4h15m
kube-system            calico-node-g2w62                            1/1     Running   0          4h15m
kube-system            calico-node-lfzxn                            1/1     Running   0          4h15m
kube-system            calico-node-tm72q                            1/1     Running   0          4h15m
kube-system            coredns-54d67798b7-pk9d8                     1/1     Running   0          5h46m
kube-system            coredns-54d67798b7-w7ddj                     1/1     Running   0          5h46m
kube-system            etcd-k8s-master21                            1/1     Running   0          5h46m
kube-system            etcd-k8s-master22                            1/1     Running   0          5h30m
kube-system            etcd-k8s-master23                            1/1     Running   0          5h25m
kube-system            kube-apiserver-k8s-master21                  1/1     Running   0          5h46m
kube-system            kube-apiserver-k8s-master22                  1/1     Running   0          5h30m
kube-system            kube-apiserver-k8s-master23                  1/1     Running   0          5h25m
kube-system            kube-controller-manager-k8s-master21         1/1     Running   1          5h46m
kube-system            kube-controller-manager-k8s-master22         1/1     Running   0          5h30m
kube-system            kube-controller-manager-k8s-master23         1/1     Running   0          5h25m
kube-system            kube-proxy-9s5dm                             1/1     Running   0          5h9m
kube-system            kube-proxy-fts8c                             1/1     Running   0          5h46m
kube-system            kube-proxy-g4jbb                             1/1     Running   0          5h30m
kube-system            kube-proxy-mb77q                             1/1     Running   0          5h9m
kube-system            kube-proxy-xqnbj                             1/1     Running   0          5h25m
kube-system            kube-scheduler-k8s-master21                  1/1     Running   1          5h46m
kube-system            kube-scheduler-k8s-master22                  1/1     Running   0          5h30m
kube-system            kube-scheduler-k8s-master23                  1/1     Running   0          5h25m
kube-system            metrics-server-545b8b99c6-zp7sq              1/1     Running   0          3h15m
kubernetes-dashboard   dashboard-metrics-scraper-7645f69d8c-8l9g5   1/1     Running   0          80m
kubernetes-dashboard   kubernetes-dashboard-78cb679857-scxd7        1/1     Running   0          80m

The entire cluster is now deployed.