Kubernetes Basics -- 1. Installing Kubernetes with kubeadm

Kubernetes Installation
  • Kubernetes Installation
    • Basic system configuration
    • Kubernetes component installation
    • Installing the cluster with kubeadm
      • master node
      • worker node
      • Network plugin
    • Troubleshooting

Kubernetes Installation

Installation method: kubeadm
Operating system: CentOS 7.7
Kubernetes version: 1.22.3

Number of nodes: 2 (1 master, 1 worker)
IPs: master 1.2.3.215, worker 1.2.3.213

Basic system configuration

Disable the firewall

$ systemctl disable firewalld
$ systemctl stop firewalld

Disable SELinux

Set SELINUX=disabled in /etc/selinux/config, then disable it for the current boot:
$ setenforce 0
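The config-file change above can also be scripted instead of edited by hand (a sketch, assuming the stock CentOS layout of /etc/selinux/config):

```shell
# Persistently disable SELinux by rewriting the SELINUX= line in place.
# Assumes the standard CentOS /etc/selinux/config format.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```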

Enable IP forwarding
Add the following to /etc/sysctl.conf:

net.ipv4.ip_forward=1
$ sysctl -p

Configure the bridge netfilter
This makes traffic passing through Linux bridges also go through the iptables/netfilter framework.
Create /etc/sysctl.d/k8s.conf with:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
$ sysctl --system

Write the hosts file
Edit /etc/hosts.
On the master node:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
1.2.3.215   master
1.2.3.213   worker

On the worker node:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
1.2.3.215   master
1.2.3.213   worker

Kubernetes component installation

Add yum repositories
The docker-ce repository:

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

The Kubernetes component repository: create the file /etc/yum.repos.d/kubernetes.repo:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Install the components
Install kubeadm, kubectl, kubelet, and docker:

$ yum install -y docker-ce kubeadm-1.22.3 kubectl-1.22.3 kubelet-1.22.3
$ systemctl enable kubelet

Start docker:

$ systemctl start docker
$ systemctl enable docker
Installing the cluster with kubeadm

master node
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

By default the installation pulls images from k8s.gcr.io, which is hard to reach from networks inside mainland China; we can work around this as follows.
First, list the images that are needed:

$ kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.22.3
k8s.gcr.io/kube-controller-manager:v1.22.3
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

Use the Aliyun mirror images instead. Generate a default config and edit it:

$ kubeadm config print init-defaults > kubeadm.conf

vi kubeadm.conf

kubernetesVersion: 1.22.3
imageRepository: registry.aliyuncs.com/google_containers

Pull the images manually:

$ kubeadm config images pull --config kubeadm.conf
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.22.3
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.5
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.0-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.4

Then use docker tag to rename them to the names kubeadm expects:

$ docker tag  registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.3 k8s.gcr.io/kube-apiserver:v1.22.3
$ docker tag  registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.3 k8s.gcr.io/kube-scheduler:v1.22.3
$ docker tag  registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3 k8s.gcr.io/kube-controller-manager:v1.22.3
$ docker tag  registry.aliyuncs.com/google_containers/kube-proxy:v1.22.3 k8s.gcr.io/kube-proxy:v1.22.3
$ docker tag  registry.aliyuncs.com/google_containers/etcd:3.5.0-0 k8s.gcr.io/etcd:3.5.0-0
$ docker tag  registry.aliyuncs.com/google_containers/pause:3.5 k8s.gcr.io/pause:3.5
$ docker tag  registry.aliyuncs.com/google_containers/coredns:v1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4

Then run kubeadm init.
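As an alternative to the retagging step, kubeadm's init command also accepts an --image-repository flag, so the mirror can be used directly (a sketch; this replaces the docker tag commands above rather than supplementing them):

```shell
# Pull the control-plane images straight from the Aliyun mirror during init,
# skipping the manual docker tag step entirely.
sudo kubeadm init \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16
```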

Verify
Add the kubeconfig setting to the environment:

$ echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
$ source ~/.bash_profile
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcd69978-c4jm2         0/1     Pending   0          5m3s
kube-system   coredns-78fcd69978-f2w8l         0/1     Pending   0          5m3s
kube-system   etcd-master                      1/1     Running   1          5m16s
kube-system   kube-apiserver-master            1/1     Running   1          5m18s
kube-system   kube-controller-manager-master   1/1     Running   1          5m16s
kube-system   kube-proxy-ll8mp                 1/1     Running   0          5m3s
kube-system   kube-scheduler-master            1/1     Running   1          5m16s
worker node

The worker node needs all of the same steps as the master, except kubeadm init.
Join the master
On the master node, generate a join command with its token, then copy it and run it on the worker:

$ kubeadm token create --print-join-command

On the worker node:

$ kubeadm join 1.2.3.215:6443 --token xel0co.dmq6uwmcq5z1yuac --discovery-token-ca-cert-hash sha256:86748f6c80d64da6b399cdc292ef16216578d287b4b58e95ff37423d645d48d8

Verify
On the master node:

$ kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   18m   v1.22.3
worker   NotReady   <none>
Network plugin

We will study the individual network plugins later; for now, use flannel to finish bringing up the cluster.
flannel
On the master node:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcd69978-c4jm2         1/1     Running   0          23h
kube-system   coredns-78fcd69978-qq5pd         1/1     Running   0          6h15m
kube-system   etcd-master                      1/1     Running   1          23h
kube-system   kube-apiserver-master            1/1     Running   1          23h
kube-system   kube-controller-manager-master   1/1     Running   1          23h
kube-system   kube-flannel-ds-n6gnt            1/1     Running   0          6h23m
kube-system   kube-flannel-ds-t4r2x            1/1     Running   0          6h23m
kube-system   kube-proxy-ll8mp                 1/1     Running   0          23h
kube-system   kube-proxy-rp2d8                 1/1     Running   0          23h
kube-system   kube-scheduler-master            1/1     Running   1          23h
$ kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   23h   v1.22.3
worker   Ready    <none>                 23h   v1.22.3

At this point, the deployment is complete.

Troubleshooting

kubeadm init fails; check the kubelet error log:

master kubelet[17630]: E1110 21:37:00.553914   17630 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs""

Add this line to /etc/docker/daemon.json:

"exec-opts": ["native.cgroupdriver=systemd"],
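For reference, if /etc/docker/daemon.json does not exist yet, the complete file can be written in one step (a minimal sketch; if the file already exists, merge the key into it instead):

```shell
# Write a minimal daemon.json that switches docker to the systemd
# cgroup driver, matching the kubelet's driver.
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```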

Then restart docker:

systemctl daemon-reload
systemctl restart docker

Reset, then run init again:

$ kubeadm reset
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Original source: https://outofmemory.cn/zaji/5443444.html
