Notes on replacing a control-plane node.
Procedure overview:
1. Remove the control-plane node from the cluster (including its record in etcd; otherwise joining either a worker node or a control-plane node later will fail)
2. Clean up the old control-plane node's local state
3. Update the cluster configuration, replacing the old control-plane node's IP and related entries with the new node's
4. Join the new node as a control-plane node
1. Remove the control-plane node from the cluster

# Drain and delete the node
kubectl drain master1 --delete-emptydir-data --force --ignore-daemonsets
kubectl delete node master1

# Inspect the etcd member list
kubectl -n kube-system exec etcd-master2 -it -- sh -c "ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key member list"

# Remove the old node's record from etcd (a9sd9asdf stands for the member ID reported by the previous command)
kubectl -n kube-system exec etcd-master2 -it -- sh -c "ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key member remove a9sd9asdf"
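Before moving on, it is worth confirming that the member is gone and the remaining etcd cluster is healthy. A minimal sketch, reusing the etcd-master2 pod and certificate paths from the commands above (`endpoint health` is a standard etcdctl subcommand):

```shell
# Verify the removed member no longer appears and the endpoint reports healthy
kubectl -n kube-system exec etcd-master2 -it -- sh -c \
  "ETCDCTL_API=3 etcdctl \
     --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
     --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
     member list && \
   ETCDCTL_API=3 etcdctl \
     --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
     --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
     endpoint health"
```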
2. Clean up the old control-plane node's local state

sudo kubeadm reset
sudo rm -rf $HOME/.kube/config
sudo ipvsadm -C
# Set default policies to ACCEPT first so a remote session is not cut off by the flush
sudo iptables -P INPUT ACCEPT && sudo iptables -P FORWARD ACCEPT
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
sudo rm /etc/cni/net.d/*
sudo ifconfig cni0 down && sudo ip link delete cni0
sudo ifconfig flannel.1 down && sudo ip link delete flannel.1
3. Update the cluster configuration, replacing the old control-plane node's IP and related entries with the new node's

kubectl -n kube-system edit cm kubeadm-config

data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - api.k8s.local
      - master1
      - master2
      - master3
      - 192.168.1.66
      - 192.168.1.67
      - 192.168.1.68
      - 192.168.1.69
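Note that editing the kubeadm-config ConfigMap alone does not change the certificate the apiserver is already serving; for new certSANs to take effect, the apiserver certificate generally has to be regenerated. A sketch of that step, assuming the default kubeadm PKI layout and run on an existing control-plane node (the pod name kube-apiserver-master2 follows the node names used above):

```shell
# Move the current apiserver cert aside, then let kubeadm regenerate it
# from the updated ClusterConfiguration (including the new certSANs).
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /tmp/
sudo kubeadm init phase certs apiserver

# Restart the apiserver so it serves the new certificate, e.g. by deleting
# its static pod and letting kubelet recreate it.
kubectl -n kube-system delete pod kube-apiserver-master2
```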
4. Join the new node as a control-plane node

# --token
kubeadm token list
kubeadm token create

# --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# --certificate-key
sudo kubeadm init phase upload-certs --upload-certs

# Join the cluster as a control-plane node
sudo kubeadm join api.k8s.local:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key 1aa87ef98b8727d886563af6dc44db17c75abf8a9922a81e5009d4fe55b4a680
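The --discovery-token-ca-cert-hash pipeline above can be exercised without a cluster by pointing it at any X.509 certificate. A minimal sketch using a throwaway self-signed certificate as a stand-in for /etc/kubernetes/pki/ca.crt (all paths here are temporary and hypothetical):

```shell
set -e
tmpdir=$(mktemp -d)

# Generate a throwaway self-signed CA certificate (stand-in for the real cluster CA)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" \
  -subj "/CN=kubernetes" -days 1 2>/dev/null

# Same pipeline as in the procedure: extract the public key, convert to DER,
# hash with SHA-256, and strip the "(stdin)= " prefix from the openssl output
hash=$(openssl x509 -pubkey -in "$tmpdir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

# The value passed to kubeadm join is this hex digest prefixed with "sha256:"
echo "sha256:$hash"

rm -rf "$tmpdir"
```

The resulting digest is always 64 lowercase hex characters; kubeadm compares it against the hash of the CA public key it receives during discovery, which is what protects the join against a man-in-the-middle API server.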