Deploying a Single-Node Kubernetes Cluster from Binary Packages


A single-node cluster is useful for test environments, studying the source code, and similar purposes.

1. Version Notes

VMware: VMware Workstation 16 Player
Linux: CentOS 8 (kernel 4.18.0-80.el8.x86_64)
Container runtime: docker-20.10.9
etcd: v3.5.1
Kubernetes: v1.23.1
Certificate tools:

cfssl, cfssljson, cfssl-certinfo

You can download the required packages in advance, or fetch them later with the commands shown below.


Note: the $ sign before each command is only a prompt marker; omit it when copying commands.


2. System Initialization

After installing the virtual machine, adjust the following system settings:

#1. Disable the firewall
$systemctl stop firewalld  && systemctl disable firewalld

#2. Disable SELinux
$setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

#3. Disable the swap partition
$swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab

#4. Set the hostname (this guide uses "alone" throughout)
$hostnamectl set-hostname alone

#5. Map the host IP to the hostname
$cat >> /etc/hosts << EOF
192.168.91.132 alone
EOF

#6. Pass bridged IPv4 traffic to the iptables chains (the standard Kubernetes sysctl snippet)
$modprobe br_netfilter
$cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
#Apply the settings
$sysctl --system

3. Deploying the Binary Packages
3.1. Deploying Docker
3.1.1. Remove old Docker versions
#Remove old versions
$yum remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine \
docker-ce*

$rm -rf /var/lib/docker
3.1.2. Download the binary package
#Create a directory to hold all the package files
$mkdir /usr/local/k8s -p

$cd /usr/local/k8s
#Download Docker. If you already downloaded all the packages locally, upload them to this directory with an FTP tool and skip this step
$wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz
#Unpack the archive
$tar zxvf docker-20.10.9.tgz
#Copy the Docker binaries to the runtime directory
$cp docker/* /usr/bin
3.1.3. Manage Docker with systemd
#Quote the heredoc delimiter so that $MAINPID is written literally instead of being expanded by the shell
$cat > /usr/lib/systemd/system/docker.service << 'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
3.1.4. Create the Docker configuration file: Aliyun registry mirror and container cgroup driver
#Create the default Docker configuration directory
$mkdir /etc/docker
#Create the Docker configuration. Other usable registry mirrors can be added here. Docker's default cgroup driver is cgroupfs; setting it to systemd avoids a lot of unnecessary trouble
$cat > /etc/docker/daemon.json<< EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
3.1.5. Start Docker and enable it at boot
$systemctl daemon-reload && systemctl start docker && systemctl enable docker
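
As an optional sanity check, confirm that Docker picked up the systemd cgroup driver and the registry mirror from daemon.json:

#Should print: Cgroup Driver: systemd
$docker info | grep -i 'cgroup driver'
#Should list the Aliyun mirror
$docker info | grep -A 1 'Registry Mirrors'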
3.2. Deploying etcd
3.2.1. Generate certificates

There are many certificate tools. Here we use cfssl, which generates certificates from JSON files and is more convenient than openssl; the certificates can be generated on any node.

3.2.1.1. Download cfssl
$cd /usr/local/k8s

$wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

#Make the binaries executable
$chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

$cp cfssl_linux-amd64 /usr/local/bin/cfssl
$cp cfssljson_linux-amd64 /usr/local/bin/cfssljson
$cp cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
3.2.1.2. Generate the etcd certificates
3.2.1.2.1. Create the self-signed certificate authority (CA) configuration files
#Create a working directory for certificate generation (all later certificates are generated under this directory)
$mkdir -p ~/TLS/{etcd,k8s}

$cd ~/TLS/etcd
#CA configuration file; the "www" profile name is referenced by the signing command further below
$cat > ca-config.json << EOF
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "www": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF

#CA certificate signing request
$cat > ca-csr.json << EOF
{
	"CN":"etcd CA",
	"key":{
		"algo": "rsa",
		"size": 2048
	},
	"names":[
		{
			"C":"CN",
			"L":"BeiJing",
			"ST":"BeiJing"
		}
	]
}
EOF
3.2.1.2.2. Generate the CA certificate
$cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

[root@alone etcd]#  cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2022/01/03 16:51:30 [INFO] generating a new CA key and certificate from CSR
2022/01/03 16:51:30 [INFO] generate received request
2022/01/03 16:51:30 [INFO] received CSR
2022/01/03 16:51:30 [INFO] generating key: rsa-2048
2022/01/03 16:51:31 [INFO] encoded CSR
2022/01/03 16:51:31 [INFO] signed certificate with serial number 107961181349217207358501084957161849161003339379
[root@alone etcd]# 

#List the generated certificates
[root@alone etcd]# ls *pem
ca-key.pem  ca.pem
[root@alone etcd]# 
3.2.1.2.3. Issue the etcd HTTPS certificate with the self-signed CA

Create the certificate signing request file; the hosts list must contain the node IP:

$cat > server-csr.json<< EOF
{
	"CN": "etcd",
	"hosts": [
		"192.168.91.132"
	],
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [
		{
			"C": "CN",
			"L": "BeiJing",
			"ST": "BeiJing"
		}
	]
}
EOF

Generate the HTTPS certificate:

$cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

[root@alone etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2022/01/03 17:07:05 [INFO] generate received request
2022/01/03 17:07:05 [INFO] received CSR
2022/01/03 17:07:05 [INFO] generating key: rsa-2048
2022/01/03 17:07:05 [INFO] encoded CSR
2022/01/03 17:07:05 [INFO] signed certificate with serial number 28596260620932286485100787888546272944026219205
2022/01/03 17:07:05 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@alone etcd]# 

#List the HTTPS certificates
[root@alone etcd]# ls server*pem
server-key.pem  server.pem
[root@alone etcd]#
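
Optionally, cfssl-certinfo (installed earlier) can decode the issued certificate so you can confirm the SAN list contains 192.168.91.132:

$cfssl-certinfo -cert server.pem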
3.2.1.3. Deploying etcd
3.2.1.3.1. Download the etcd binary package
$cd /usr/local/k8s

$wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz

#Create the etcd working directory
$mkdir /opt/etcd/{bin,cfg,ssl} -p
#Unpack
$tar zxvf etcd-v3.5.1-linux-amd64.tar.gz
#Copy the binaries into the working directory
$cp etcd-v3.5.1-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
3.2.1.3.2. Create the etcd configuration file
$cat > /opt/etcd/cfg/etcd.conf<< EOF
#[Member]
ETCD_NAME="etcd"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.91.132:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.91.132:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.91.132:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.91.132:2379"
ETCD_INITIAL_CLUSTER="etcd=https://192.168.91.132:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_ENABLE_V2="true"  
EOF

ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listen address for cluster communication
ETCD_LISTEN_CLIENT_URLS: listen address for client access
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: cluster node addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" creates a new cluster, "existing" joins an existing one
ETCD_ENABLE_V2="true": enables the v2 API, which flannel requires (ignore this if you do not plan to use flannel as the network plugin)

etcd reads these ETCD_* environment variables natively, so the systemd unit below only needs to load the file via EnvironmentFile.

3.2.1.3.3. Manage etcd with systemd
#Copy the previously generated certificates into the etcd working directory
$cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl

$cat > /usr/lib/systemd/system/etcd.service << 'EOF'
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
3.2.1.3.4. Start etcd and enable it at boot
$systemctl daemon-reload && systemctl start etcd && systemctl enable etcd
3.2.1.3.5. Check etcd health
$ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.91.132:2379" endpoint health


If the service fails to start, check the logs:

/var/log/messages or journalctl -u etcd
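
Another optional check is listing the cluster members (just one member on this single-node setup):

$ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.91.132:2379" member list -w table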
3.3. Deploying the Master Node

In a single-machine deployment the master and worker roles live on the same host; the split below only separates the deployment steps of the individual components.

3.3.1. Deploying kube-apiserver
3.3.1.1. Generate the kube-apiserver certificates

1. Create the self-signed certificate authority (CA) configuration

$cd /root/TLS/k8s
$cat > ca-config.json<< EOF
{
    "signing":{
        "default":{
            "expiry":"87600h"
        },
        "profiles":{
            "kubernetes":{
                "expiry":"87600h",
                "usages":[
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF


$cat > ca-csr.json<< EOF
{
    "CN":"kubernetes",
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"Beijing",
            "ST":"Beijing",
            "O":"system:masters",
            "OU":"System"
        }
    ]
}
EOF

2. Generate the CA certificate

$cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

$ls *pem

[root@alone k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2022/01/03 19:11:54 [INFO] generating a new CA key and certificate from CSR
2022/01/03 19:11:54 [INFO] generate received request
2022/01/03 19:11:54 [INFO] received CSR
2022/01/03 19:11:54 [INFO] generating key: rsa-2048
2022/01/03 19:11:55 [INFO] encoded CSR
2022/01/03 19:11:55 [INFO] signed certificate with serial number 15425745809170517626505445891964674368336024021
[root@bin-master k8s]# ls *pem
ca-key.pem  ca.pem
[root@alone k8s]# 

3. Issue the kube-apiserver HTTPS certificate with the self-signed CA

Create the certificate signing request file, listing the relevant node IPs in the hosts field:

$cd /root/TLS/k8s
$cat > server-csr.json<< EOF
{
    "CN":"kubernetes",
    "hosts":[
        "10.0.0.1",
        "127.0.0.1",
        "192.168.91.132",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"BeiJing",
            "ST":"BeiJing",
            "O":"system:masters",
            "OU":"System"
        }
    ]
}
EOF

Generate the HTTPS certificate:

$cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

$ls server*pem

[root@alone k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2022/01/03 19:17:46 [INFO] generate received request
2022/01/03 19:17:46 [INFO] received CSR
2022/01/03 19:17:46 [INFO] generating key: rsa-2048
2022/01/03 19:17:46 [INFO] encoded CSR
2022/01/03 19:17:46 [INFO] signed certificate with serial number 259446606240024715961485465717458568968042823581
2022/01/03 19:17:46 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@alone k8s]# ls server*pem
server-key.pem  server.pem
[root@alone k8s]# 
 
3.3.1.2. Download the binaries

Packages needed on the master node: kube-apiserver, kube-controller-manager, kube-scheduler, docker, etcd, kubectl

Packages needed on worker nodes: kubelet, kube-proxy, docker, etcd

The server package contains the binaries for both the master and the worker node.

$cd /usr/local/k8s

$wget https://dl.k8s.io/v1.23.1/kubernetes-server-linux-amd64.tar.gz
#Unpack the binaries
$tar zxvf kubernetes-server-linux-amd64.tar.gz
#Create the Kubernetes working directory
$mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

$cd /usr/local/k8s/kubernetes/server/bin
$cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
$cp kubectl /usr/bin/
3.3.1.3. Generate the token file
$echo $(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap" > /opt/kubernetes/cfg/token.csv
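
token.csv is a single line in the format token,user,uid,group. Note the token value; it is needed again when generating the kubelet bootstrap kubeconfig in section 3.4.1.2:

$cat /opt/kubernetes/cfg/token.csv
#e.g. fa752d92837853b0dd11889d7eafbf9f,kubelet-bootstrap,10001,"system:kubelet-bootstrap"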
3.3.1.4. Configure kube-apiserver

1. Create the configuration file

#Quote the heredoc delimiter so the backslash line continuations are written literally (systemd EnvironmentFile understands them)
$cat > /opt/kubernetes/cfg/kube-apiserver.conf << 'EOF'
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.91.132:2379 \
--bind-address=192.168.91.132 \
--secure-port=6443 \
--advertise-address=192.168.91.132 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

--logtostderr: log to standard error (false writes logs to files instead)
--v: log verbosity level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC authorization and node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificates for apiserver access to the kubelet
--tls-xxx-file: apiserver HTTPS certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
--service-account-issuer: identifier of the service account token issuer; the issuer puts this identifier into the iss claim of the tokens it issues; the value is a string or URL
--service-account-signing-key-file: path to the private key of the service account token issuer; the issuer signs issued ID tokens with this key (requires the TokenRequest feature)

2. Copy the previously generated Kubernetes certificates

$cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

3. Manage with systemd

#Quote the heredoc delimiter so that $KUBE_APISERVER_OPTS is written literally
$cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

4. Start kube-apiserver and enable it at boot

$systemctl daemon-reload && systemctl start kube-apiserver && systemctl enable kube-apiserver
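
As an optional check, the API server should now answer on port 6443. The server certificate generated earlier also carries client-auth usage and O=system:masters, so it can double as a client certificate for a quick version query:

$curl --cacert /opt/kubernetes/ssl/ca.pem --cert /opt/kubernetes/ssl/server.pem --key /opt/kubernetes/ssl/server-key.pem https://192.168.91.132:6443/version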
3.3.2. Configuring kubectl

Create an authentication file for kubectl to make later command-line work easier.

3.3.2.1. Generate the kubectl certificate
$cd /root/TLS/k8s

#CSR for the admin user; CN is the user name kubectl authenticates as, and O=system:masters grants cluster-admin rights (standard admin CSR layout)
$cat > kubectl-csr.json << EOF
{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF

#Generate the certificate
$cfssl gencert -ca=ca.pem -ca-key=ca-key.pem  -config=ca-config.json -profile=kubernetes kubectl-csr.json | cfssljson -bare kubectl

$cp ~/TLS/k8s/kubectl*.pem /opt/kubernetes/ssl
3.3.2.2. Generate the kubectl.kubeconfig file
$cd /opt/kubernetes/ssl

#Generate the kubectl.kubeconfig file
$kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.91.132:6443 \
--kubeconfig=kubectl.kubeconfig

$kubectl config set-credentials admin \
--client-certificate=/opt/kubernetes/ssl/kubectl.pem \
--embed-certs=true \
--client-key=/opt/kubernetes/ssl/kubectl-key.pem \
--kubeconfig=kubectl.kubeconfig

$kubectl config set-context default \
--cluster=kubernetes \
--user=admin \
--kubeconfig=kubectl.kubeconfig

$kubectl config use-context default --kubeconfig=kubectl.kubeconfig

$mkdir ~/.kube -p

$cp kubectl.kubeconfig ~/.kube/config

#Check cluster status
$kubectl cluster-info

3.3.3. Deploying kube-controller-manager
3.3.3.1. Generate the certificate
$cd /root/TLS/k8s
#CSR for kube-controller-manager, following the same layout as the other CSRs in this guide; O=system:masters gives it full cluster rights
$cat > kube-controller-csr.json << EOF
{
    "CN": "kube-controller-manager",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF

#Generate the certificate
$cfssl gencert -ca=ca.pem -ca-key=ca-key.pem  -config=ca-config.json -profile=kubernetes kube-controller-csr.json | cfssljson -bare kube-controller-manager

$cp ~/TLS/k8s/kube-controller-manager*.pem /opt/kubernetes/ssl
3.3.3.2. Generate the kube-controller-manager.kubeconfig file
$cd /opt/kubernetes/ssl

#Generate the kube-controller-manager.kubeconfig file
$kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.91.132:6443 \
--kubeconfig=kube-controller-manager.kubeconfig

$kubectl config set-credentials kube-controller-manager \
--client-certificate=/opt/kubernetes/ssl/kube-controller-manager.pem \
--embed-certs=true \
--client-key=/opt/kubernetes/ssl/kube-controller-manager-key.pem \
--kubeconfig=kube-controller-manager.kubeconfig

$kubectl config set-context default \
--cluster=kubernetes \
--user=kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

$kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

3.3.3.3. Manage kube-controller-manager with systemd

1. Configuration file

$cat > /opt/kubernetes/cfg/kube-controller-manager.conf << 'EOF'
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--cluster-name=kubernetes \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--controllers=*,bootstrapsigner,tokencleaner \
--kubeconfig=/opt/kubernetes/ssl/kube-controller-manager.kubeconfig \
--tls-cert-file=/opt/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/opt/kubernetes/ssl/kube-controller-manager-key.pem \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--use-service-account-credentials=true \
--cluster-signing-duration=87600h0m0s"
EOF

2. Unit file

#Quote the heredoc delimiter so that $KUBE_CONTROLLER_MANAGER_OPTS is written literally
$cat > /usr/lib/systemd/system/kube-controller-manager.service << 'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

3. Start kube-controller-manager and enable it at boot

$systemctl daemon-reload && systemctl start kube-controller-manager && systemctl enable kube-controller-manager

#Check status: controller-manager should now report healthy
$kubectl get componentstatuses

3.3.4. Deploying kube-scheduler
3.3.4.1. Generate the kube-scheduler certificate
$cd /root/TLS/k8s

#CSR for kube-scheduler, following the same layout as the other CSRs in this guide
$cat > kube-scheduler-csr.json << EOF
{
    "CN": "kube-scheduler",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF

#Generate the certificate
$cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

$cp ~/TLS/k8s/kube-scheduler*.pem /opt/kubernetes/ssl

3.3.4.2. Generate the kube-scheduler.kubeconfig file
$cd /opt/kubernetes/ssl

#Generate the kube-scheduler.kubeconfig file
$kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.91.132:6443 \
--kubeconfig=kube-scheduler.kubeconfig

$kubectl config set-credentials kube-scheduler \
--client-certificate=/opt/kubernetes/ssl/kube-scheduler.pem \
--embed-certs=true \
--client-key=/opt/kubernetes/ssl/kube-scheduler-key.pem \
--kubeconfig=kube-scheduler.kubeconfig

$kubectl config set-context default \
--cluster=kubernetes \
--user=kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig

$kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
3.3.4.3. Manage kube-scheduler with systemd

1. Configuration file

$cat > /opt/kubernetes/cfg/kube-scheduler.conf << 'EOF'
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--kubeconfig=/opt/kubernetes/ssl/kube-scheduler.kubeconfig \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--tls-cert-file=/opt/kubernetes/ssl/kube-scheduler.pem \
--tls-private-key-file=/opt/kubernetes/ssl/kube-scheduler-key.pem \
--bind-address=127.0.0.1"
EOF

2. Unit file

#Quote the heredoc delimiter so that $KUBE_SCHEDULER_OPTS is written literally
$cat > /usr/lib/systemd/system/kube-scheduler.service << 'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

3. Start kube-scheduler and enable it at boot

$systemctl daemon-reload && systemctl start kube-scheduler && systemctl enable kube-scheduler

#Verify kube-scheduler status
$kubectl get cs

3.4. Deploying the Worker Node
3.4.1. Deploying kubelet
3.4.1.1. Copy the binary
#Copy the file
$ cp /usr/local/k8s/kubernetes/server/bin/kubelet /opt/kubernetes/bin
3.4.1.2. Generate the kubelet bootstrap kubeconfig
$cd /opt/kubernetes/ssl

#Allow the bootstrap user to request certificates
$kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

#Generate the kubelet-bootstrap.kubeconfig file
$export KUBE_APISERVER="https://192.168.91.132:6443" # apiserver IP:PORT
$export TOKEN="fa752d92837853b0dd11889d7eafbf9f" # must match the token in token.csv

$kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kubelet-bootstrap.kubeconfig

$kubectl config set-credentials kubelet-bootstrap \
--token=${TOKEN} \
--kubeconfig=kubelet-bootstrap.kubeconfig

$kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=kubelet-bootstrap.kubeconfig

$kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
3.4.1.3. Create the kubelet.conf configuration file
$cat > /opt/kubernetes/cfg/kubelet.conf << 'EOF'
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=alone \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/ssl/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/ssl/kubelet-bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2"
EOF

$cat > /opt/kubernetes/cfg/kubelet-config.yml<< EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
   enabled: false
  webhook:
   cacheTTL: 2m0s
   enabled: true
  x509:
   clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
   cacheAuthorizedTTL: 5m0s
   cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
3.4.1.4. Manage kubelet with systemd
#Quote the heredoc delimiter so that $KUBELET_OPTS is written literally
$cat > /usr/lib/systemd/system/kubelet.service << 'EOF'
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

$systemctl daemon-reload && systemctl start kubelet && systemctl enable kubelet
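
If the kubelet fails to start or never registers with the API server, check its logs first:

$journalctl -u kubelet -f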
3.4.1.5. Approve the certificate request

The worker node sends a certificate signing request to the master node; list the pending requests and approve them:

$kubectl get csr

$kubectl certificate approve <csr-name>
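
If several requests are pending, everything listed by kubectl get csr can be approved in one go (only sensible on a lab machine where every pending request is your own):

$kubectl get csr -o name | xargs kubectl certificate approve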

At this point the node status is still NotReady, because the pod network is not yet connected.

3.4.2. Deploying kube-proxy
3.4.2.1. Generate the kube-proxy certificate
#Copy the file
$ cp /usr/local/k8s/kubernetes/server/bin/kube-proxy /opt/kubernetes/bin


$cd /root/TLS/k8s
#CSR for kube-proxy; CN=system:kube-proxy matches the built-in system:node-proxier RBAC binding (standard kube-proxy CSR layout)
$cat > kube-proxy-csr.json << EOF
{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

#Generate the certificate
$cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

$cp ~/TLS/k8s/kube-proxy*.pem /opt/kubernetes/ssl

3.4.2.2. Generate the kube-proxy.kubeconfig file
$cd /opt/kubernetes/ssl

#Generate the kube-proxy.kubeconfig file
$kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.91.132:6443 \
--kubeconfig=kube-proxy.kubeconfig

$kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--embed-certs=true \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--kubeconfig=kube-proxy.kubeconfig

$kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

$kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
3.4.2.3. Manage kube-proxy with systemd

1. Create the configuration files

$cat > /opt/kubernetes/cfg/kube-proxy.conf << 'EOF'
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml \
--hostname-override=alone"
EOF

$cat > /opt/kubernetes/cfg/kube-proxy-config.yml<< EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
healthzBindAddress: 0.0.0.0:10256
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/ssl/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16
EOF

2. Create the unit file

#Quote the heredoc delimiter so that $KUBE_PROXY_OPTS is written literally
$cat > /usr/lib/systemd/system/kube-proxy.service << 'EOF'
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

3. Start kube-proxy and enable it at boot

$systemctl daemon-reload && systemctl start kube-proxy && systemctl enable kube-proxy
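
Optionally, kube-proxy exposes a health endpoint on the address set by healthzBindAddress in kube-proxy-config.yml (port 10256 above), so a simple probe is:

$curl http://127.0.0.1:10256/healthz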

3.4.3. Deploying Calico
#Create the plugin directory
$mkdir /opt/plugins/calico -p
$cd /opt/plugins/calico
#Download the YAML manifest
$wget https://docs.projectcalico.org/v3.20/manifests/calico.yaml

#Edit the CALICO_IPV4POOL_CIDR setting in calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
#It must match --cluster-cidr=10.244.0.0/16 in kube-controller-manager.conf
#and clusterCIDR: 10.244.0.0/16 in kube-proxy-config.yml
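
In the stock manifest this variable ships commented out. A sed one-liner along the following lines uncomments and sets it; verify the result afterwards, since the manifest layout can change between Calico releases:

$sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml
#Confirm the change
$grep -A 1 'CALICO_IPV4POOL_CIDR' calico.yaml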


#Apply the manifest
$kubectl apply -f calico.yaml


#List the images referenced in the manifest
$cat calico.yaml | grep image
#If apply/create cannot download the images, pull them manually with docker pull
#Verify
$kubectl get pods -n kube-system

#-w watches the output in real time
$kubectl get pods -n kube-system -w

$kubectl get node


#Clean up the network environment
$kubectl delete -f calico.yaml
$rm -rf /run/calico \
/sys/fs/bpf/calico \
/var/lib/calico \
/var/log/calico \
/opt/plugins/calico \
/opt/cni/bin/calico

#Check whether any Calico pods are left over
$kubectl get pods -n kube-system

#Force-delete a pod
$kubectl delete pod <pod-name> -n kube-system --force --grace-period=0

The node status should now be Ready.
