Part 1: Preface (with a bit of a rant)
1. When I first set out to install and deploy CRI-O, my plan was to do it via yum. But after adding the officially recommended yum repo config, the install simply would not succeed. Later, by chance, I saw a colleague deploy it via those same yum repos. Walking through his steps, I suddenly realized that when configuring the official repos I had left out two variables, which is why my repo config never took effect. So I'm writing this article as a record and to share it (material on this topic is really scarce, and it's all pitfalls).
2. Link to the official installation instructions:
https://cri-o.io/
Part 2: Installing and deploying CRI-O
1. Configure the CRI-O yum repos:
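A minimal sketch following the official instructions linked above. The key detail is the two shell variables, OS and VERSION: these are exactly the two variables I had left out, and without them the repo URLs are incomplete, so the repos silently never work. The values below are assumptions matching this article's environment (CRI-O 1.21 on CentOS 7); the final install/enable lines are how I'd finish the step:

```bash
# The two variables the official repo URLs depend on -- omitting them
# is what broke my original install.
export OS=CentOS_7
export VERSION=1.21

# Official devel:kubic:libcontainers repos, as given on cri-o.io
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo \
  https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo \
  https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo

# Install and start CRI-O
yum install -y cri-o
systemctl enable --now crio
```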
2. The usual routine: the special host config every k8s environment needs (run on all machines):

```bash
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Set the hostname (k8s-node1 on the worker)
hostnamectl set-hostname k8s-master

# Load the kernel modules CRI-O and the k8s networking rely on
modprobe overlay
modprobe br_netfilter

# Kernel parameters for bridged traffic and IP forwarding
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
```
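Before moving on, it's worth a quick sanity check that the modules and sysctls actually took effect; nothing here is CRI-O specific:

```bash
lsmod | grep -E 'overlay|br_netfilter'    # both modules should be listed
sysctl net.bridge.bridge-nf-call-iptables # should print: ... = 1
sysctl net.ipv4.ip_forward                # should print: ... = 1
```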
Part 3: Installing the kubelet, kubectl, cri-tools, and kubeadm components

1. The usual routine again: configure the k8s yum repo first (the Aliyun mirror, so no access to Google's repos is needed), then install the components:

```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet kubectl kubeadm cri-tools
systemctl enable kubelet
```

Part 4: Deploying the K8S cluster
- Configure the pause image source in the crio.conf file; otherwise pod deployment will fail, because the node cannot reach the registries hosted abroad. Steps below. (Even though the yum install of cri-o already generated this file, I still recommend regenerating it.)
```bash
crio config --default > /etc/crio/crio.conf
```
Modify/add the following entries (then restart crio with systemctl restart crio so they take effect):
```
registries = ['4v2510z7.mirror.aliyuncs.com:443/library']
pause_image = "registry.aliyuncs.com/google_containers/pause:3.2"
```
- Generate kubeadm's init file. Deploying without a config file seems to cause problems, because k8s defaults to docker as the CRI, so you need to generate the file and then edit it. The podSubnet field must also be set; otherwise the subnet division will go wrong when flannel is deployed, IPs cannot be assigned successfully, and pods cannot be created normally.
```bash
kubeadm config print init-defaults > kubeadm-config.yaml
```
The content after my edits:
```yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.2.93
  bindPort: 6443
nodeRegistration:
  # criSocket: /var/run/dockershim.sock
  criSocket: /var/run/crio/crio.sock
  # name: node
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.85.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
```
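Since the whole point of the criSocket change is to make kubeadm talk to CRI-O instead of dockershim, it doesn't hurt to confirm the socket actually answers before running init. A quick check with crictl (installed as part of cri-tools above):

```bash
crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
# Expect output naming the runtime as cri-o; "connection refused" here
# means crio is not running, and kubeadm init would fail the same way.
```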
- Pull the k8s component images in advance to speed up init, specifying the Aliyun mirror as the download source:
```bash
kubeadm config images pull --config kubeadm-config.yaml --image-repository registry.aliyuncs.com/google_containers
```
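To see up front exactly which images init needs (and therefore what the pull above fetches), kubeadm can list them from the same config:

```bash
kubeadm config images list --config kubeadm-config.yaml
# Prints one image per line, e.g.
# registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0,
# all rewritten to the imageRepository set in the config file.
```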
- Pre-download the flannel component image too (run this on every machine; in hindsight it isn't strictly necessary, since this image seems downloadable even without a proxy, so treat this step as optional).
```bash
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
crictl pull quay.io/coreos/flannel:v0.14.0
```
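If you do pre-pull, the tag has to match what kube-flannel.yml actually references, or the pre-pull buys you nothing. A quick way to check, plus confirming the image landed in CRI-O's store:

```bash
grep 'image:' kube-flannel.yml   # the tag here should match the one pulled above
crictl images | grep flannel     # confirms CRI-O now has the image cached
```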
- Remove all the config files under /etc/cni/net.d/ on every machine (see the sketch after this paragraph). By default the kubelet reads the config files in this directory to determine the CNI plugin type, then calls that CNI plugin to create networks and assign IP addresses. Since we plan to use flannel as the network plugin, leaving these files in place means that when coredns is created during init, crio's bridge will be used to create the network and hand out IPs, and the flannel deployed afterwards will never take effect. My inferred explanation: after k8s init finishes, the kubelet service starts automatically and reads the configs under /etc/cni/net.d/, so every pod created from then on uses crio's bridge. The pods on each host all sit in the same per-host subnet and simply cannot cross hosts, which is not at all why we deploy a k8s cluster. With the files removed, coredns will sit in a not-created state after kubeadm init completes, because the kubelet doesn't know which CNI plugin should allocate IP addresses; but as soon as the flannel component is created, the problem resolves itself. Pods scheduled onto different hosts are assigned IPs from different subnets, each host gets its own flannel interface, its own address range, and the related routes, and pod-to-pod communication across hosts inside the cluster works.
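A minimal sketch of that cleanup; moving instead of deleting is my own precaution, and the backup path is arbitrary:

```bash
# Run on every machine. CRI-O ships default bridge CNI configs here
# (e.g. 100-crio-bridge.conf); park them somewhere out of kubelet's sight.
mkdir -p /root/cni-net.d-backup
mv /etc/cni/net.d/* /root/cni-net.d-backup/
ls /etc/cni/net.d/   # should now be empty
```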
- Enough rambling; let's actually do it.
1) On the master, run:
```bash
kubeadm init --config kubeadm-config.yaml
```
2) The output printed on success:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.2.120:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3db371d75d6029e5527233b9ec8400cdc6826a4cb88d626216432f0943232eba
```
3) On the master, run:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
4) On the node, run kubeadm join:
```bash
kubeadm join 10.0.2.93:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:3db371d75d6029e5527233b9ec8400cdc6826a4cb88d626216432f0943232eba
```
5) On the master, deploy the flannel plugin:
```bash
kubectl apply -f kube-flannel.yml
```
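One practical note: the bootstrap token above expires after the 24h ttl set in kubeadm-config.yaml. If a node joins later than that, a fresh join command can be generated on the master:

```bash
kubeadm token create --print-join-command
# Prints a complete "kubeadm join ..." line with a new token and the
# current discovery-token-ca-cert-hash.
```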
- Let's check the results of the deployment (truly pitfalls everywhere):
1) A small issue to fix before looking around: the kubectl get cs status (I get the feeling k8s just always has this problem; edit the config files and restart the kubelet to recover):
```
[root@cri-2 crio-v1.19.0]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
```
The cause is that kube-controller-manager.yaml and kube-scheduler.yaml set the default port to 0; just comment that line out in both files. Run on every master node:
```bash
vim /etc/kubernetes/manifests/kube-scheduler.yaml
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# and then comment this line
#    - --port=0
```
The result after restarting the kubelet:
```
[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
```
2) If you too are building a test environment with only two machines, but want pods to be able to start on both the master and the node to make full use of the hardware, you can remove the taint so pods can also land on the master. Not recommended in production, because the core k8s components run on the master, and business pods deployed there may squeeze its resources and interfere with it.
```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
```
3) OK, OK, let's look at the final deployment result:
```
[root@k8s-master ~]# kubectl get nodes -o wide
NAME         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   34h   v1.21.2   10.0.2.93     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   cri-o://1.21.4
k8s-node1    Ready    <none>                 34h   v1.21.2   10.0.2.94     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   cri-o://1.21.4
[root@k8s-master ~]# kubectl get pods -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP          NODE         NOMINATED NODE   READINESS GATES
coredns-59d64cd4d4-5llk7             1/1     Running   0          34h   10.85.0.2   k8s-master   <none>           <none>
coredns-59d64cd4d4-5vmk6             1/1     Running   0          34h   10.85.0.3   k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   0          34h   10.0.2.93   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          34h   10.0.2.93   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0          34h   10.0.2.93   k8s-master   <none>           <none>
kube-flannel-ds-f7k6k                1/1     Running   0          34h   10.0.2.94   k8s-node1    <none>           <none>
kube-flannel-ds-pqwsg                1/1     Running   0          34h   10.0.2.93   k8s-master   <none>           <none>
kube-proxy-6jz7q                     1/1     Running   0          34h   10.0.2.94   k8s-node1    <none>           <none>
kube-proxy-j6hhl                     1/1     Running   0          34h   10.0.2.93   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0          34h   10.0.2.93   k8s-master   <none>           <none>
[root@k8s-master ~]# kubectl get pods -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP          NODE         NOMINATED NODE   READINESS GATES
mysql-4pgv4    1/1     Running   0          4h51m   10.85.0.8   k8s-master   <none>           <none>
mysql-jptwn    1/1     Running   0          4h51m   10.85.0.9   k8s-master   <none>           <none>
myweb1-ddttd   1/1     Running   0          4h50m   10.85.1.4   k8s-node1    <none>           <none>
myweb1-ngk9r   1/1     Running   0          4h50m   10.85.1.5   k8s-node1    <none>           <none>
[root@k8s-master ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE     SELECTOR
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          34h     <none>
mysql        NodePort    10.107.163.221   <none>        3306:30060/TCP   4h45m   app=mysql-crio
myweb        NodePort    10.100.252.36    <none>        8080:30001/TCP   4h45m   app=myweb-crio
[root@k8s-master ~]# kubectl get svc -o wide -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   34h   k8s-app=kube-dns
```
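Since cross-host pod-to-pod traffic was the whole point of the /etc/cni/net.d cleanup and the flannel deployment, one last sanity check I'd suggest: reach a pod on the other host directly by its pod IP. The names and IPs are taken from the output above, and this assumes the myweb image ships a ping binary, which not every image does:

```bash
# From the master: exec into a pod on k8s-node1 and ping a pod on k8s-master
kubectl exec myweb1-ddttd -- ping -c 2 10.85.0.8
# Replies prove 10.85.1.x (node1) reaches 10.85.0.x (master) via flannel.
```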
- Last of all: done, throw the confetti. But special thanks are owed to the blogger behind the article below; many of these pitfalls were walked through by them first, and 80% of the knowledge in this post comes from that article. Go give them a like and a bookmark. Questions and discussion are welcome; my own knowledge is still far too thin:
https://blog.csdn.net/weixin_42072280/article/details/120088219