Windows 10 Meltdown/Spectre patches impact performance


IT之家 reported on October 20 that earlier this year two major vulnerabilities affecting the core workings of computers were discovered. These "speculative execution" flaws, known as Spectre and Meltdown, mean a hacker could steal data simply by getting a victim to visit a website.

Although no known attacks have exploited these vulnerabilities, the processor microcode patches can degrade the performance of patched PCs by as much as 30%.

Tech companies have been working to mitigate the problem; in its latest move, Microsoft is working to ship the Retpoline mitigation in the Windows 10 19H1 release arriving early next year.

Create a replicated pool for RBD (pg_num 32, pgp_num 32):

ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create rbd1-data 32 32

pool 'rbd1-data' created

ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool ls

device_health_metrics

mypool

.rgw.root

default.rgw.log

default.rgw.control

default.rgw.meta

myrbd1

cephfs-metadata

cephfs-data

rbd1-data

Enable the rbd application on the pool:

ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool application enable rbd1-data rbd

enabled application 'rbd' on pool 'rbd1-data'

Initialize the pool:

ceph@ceph-deploy:~/ceph-cluster$ rbd pool init -p rbd1-data

Create image files in the pool:

Images are managed with the rbd command: it can create, list, and delete images, and also create snapshots, clone images, delete snapshots, list snapshots, roll back to snapshots, and perform other management operations.
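For reference, a minimal sketch of the snapshot operations mentioned above, run against the rbd1-data/data-img1 image created below (the snapshot name snap1 is hypothetical):

rbd snap create rbd1-data/data-img1@snap1    # create a snapshot

rbd snap ls rbd1-data/data-img1              # list snapshots

rbd snap rollback rbd1-data/data-img1@snap1  # roll the image back to the snapshot

rbd snap rm rbd1-data/data-img1@snap1        # delete the snapshot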

ceph@ceph-deploy:~/ceph-cluster$ rbd create data-img1 --size 3G --pool rbd1-data --image-format 2 --image-feature layering

ceph@ceph-deploy:~/ceph-cluster$ rbd create data-img2 --size 5G --pool rbd1-data --image-format 2 --image-feature layering

List the images in the pool:

ceph@ceph-deploy:~/ceph-cluster$ rbd list --pool rbd1-data

data-img1

data-img2

List more details about the images:

ceph@ceph-deploy:~/ceph-cluster$ rbd list --pool rbd1-data -l

NAME      SIZE  PARENT  FMT  PROT  LOCK

data-img1  3 GiB            2           

data-img2  5 GiB            2

ceph@ceph-deploy:~/ceph-cluster$ rbd --image data-img1 --pool rbd1-data info

rbd image 'data-img1':

size 3 GiB in 768 objects

order 22 (4 MiB objects)

snapshot_count: 0

id: 3ab91c6a62f5

block_name_prefix: rbd_data.3ab91c6a62f5

format: 2

features: layering

op_features:

flags:

create_timestamp: Thu Sep  2 06:48:11 2021

access_timestamp: Thu Sep  2 06:48:11 2021

modify_timestamp: Thu Sep  2 06:48:11 2021

ceph@ceph-deploy:~/ceph-cluster$ rbd --image data-img1 --pool rbd1-data info --format json --pretty-format

{

    "name": "data-img1",

    "id": "3ab91c6a62f5",

    "size": 3221225472,

    "objects": 768,

    "order": 22,

    "object_size": 4194304,

    "snapshot_count": 0,

    "block_name_prefix": "rbd_data.3ab91c6a62f5",

    "format": 2,

    "features": [

        "layering"

    ],

    "op_features": [],

    "flags": [],

    "create_timestamp": "Thu Sep  2 06:48:11 2021",

    "access_timestamp": "Thu Sep  2 06:48:11 2021",

    "modify_timestamp": "Thu Sep  2 06:48:11 2021"

}

Enabling and disabling image features

The features include:

layering: layered (copy-on-write) snapshot support; enabled by default

striping: stripes data across multiple objects

exclusive-lock: exclusive lock support; enabled by default

object-map: object map support, which speeds up data import/export and used-space accounting; enabled by default

fast-diff: fast computation of differences between objects and snapshots; enabled by default

deep-flatten: support for flattening snapshots; enabled by default

journaling: whether to journal modifications
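To check which features an image currently has, reuse the info command shown earlier:

rbd --pool rbd1-data --image data-img1 info | grep features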

Enable:

ceph@ceph-deploy:~/ceph-cluster$ rbd feature enable object-map --pool rbd1-data --image data-img1

ceph@ceph-deploy:~/ceph-cluster$ rbd feature enable fast-diff --pool rbd1-data --image data-img1

ceph@ceph-deploy:~/ceph-cluster$ rbd feature enable exclusive-lock --pool rbd1-data --image data-img1

Disable:

ceph@ceph-deploy:~/ceph-cluster$ rbd feature disable object-map --pool rbd1-data --image data-img1

ceph@ceph-deploy:~/ceph-cluster$ rbd feature disable fast-diff --pool rbd1-data --image data-img1

ceph@ceph-deploy:~/ceph-cluster$ rbd feature disable exclusive-lock --pool rbd1-data --image data-img1

Using the block device from a client:

First install ceph-common and configure authorization.

[root@ceph-client1 ceph_data]# yum install -y http://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm

[root@ceph-client1 ceph_data]# yum install ceph-common -y 

Authorization:

ceph@ceph-deploy:/etc/ceph$ sudo -i

root@ceph-deploy:~# cd /etc/ceph/           

root@ceph-deploy:/etc/ceph# scp ceph.conf ceph.client.admin.keyring root@192.168.241.21:/etc/ceph

On an Ubuntu system:

root@ceph-client2:/var/lib/ceph# apt install -y ceph-common

root@ceph-deploy:/etc/ceph# sudo scp ceph.conf ceph.client.admin.keyring ceph@192.168.241.22:/tmp

ceph@192.168.241.22's password:

ceph.conf                                                                                                                  100%  270  117.7KB/s  00:00   

ceph.client.admin.keyring

root@ceph-client2:/var/lib/ceph# cd /etc/ceph/

root@ceph-client2:/etc/ceph# cp /tmp/ceph.c* /etc/ceph/

root@ceph-client2:/etc/ceph# ll /etc/ceph/

total 20

drwxr-xr-x  2 root root 4096 Aug 26 07:58 ./

drwxr-xr-x 84 root root 4096 Aug 26 07:49 ../

-rw-------  1 root root  151 Sep  2 07:24 ceph.client.admin.keyring

-rw-r--r--  1 root root  270 Sep  2 07:24 ceph.conf

-rw-r--r--  1 root root  92 Jul  8 07:17 rbdmap

-rw-------  1 root root    0 Aug 26 07:58 tmpmhFvZ7

Map the image on the client:

root@ceph-client2:/etc/ceph# rbd -p rbd1-data map data-img1

rbd: sysfs write failed

RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd1-data/data-img1 object-map fast-diff".

In some cases useful info is found in syslog - try "dmesg | tail".

rbd: map failed: (6) No such device or address

root@ceph-client2:/etc/ceph# rbd feature disable rbd1-data/data-img1 object-map fast-diff

root@ceph-client2:/etc/ceph# rbd -p rbd1-data map data-img1

/dev/rbd0

root@ceph-client2:/etc/ceph# rbd -p rbd1-data map data-img2

Format the block device (the image mapped with the admin credentials).

Check the block devices:

root@ceph-client2:/etc/ceph# lsblk

NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sda      8:0    0  20G  0 disk

└─sda1  8:1    0  20G  0 part /

sr0    11:0    1 1024M  0 rom 

rbd0  252:0    0    3G  0 disk

rbd1  252:16  0    5G  0 disk

root@ceph-client2:/etc/ceph# mkfs.ext4 /dev/rbd1

mke2fs 1.44.1 (24-Mar-2018)

Discarding device blocks: done                           

Creating filesystem with 1310720 4k blocks and 327680 inodes

Filesystem UUID: 168b99e6-a3d7-4dc6-9c69-76ce8b42f636

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                           

Writing inode tables: done                           

Creating journal (16384 blocks): done

Writing superblocks and filesystem accounting information: done

Mount the device:

root@ceph-client2:/etc/ceph# mkdir /data/data1 -p

root@ceph-client2:/etc/ceph# mount /dev/rbd1 /data/data1/

Verify by writing data:

root@ceph-client2:/etc/ceph# cd /data/data1/

root@ceph-client2:/data/data1# cp /var/log/ . -r

root@ceph-client2:/data/data1# ceph df

--- RAW STORAGE ---

CLASS    SIZE    AVAIL    USED  RAW USED  %RAW USED

hdd    220 GiB  213 GiB  7.4 GiB  7.4 GiB      3.37

TOTAL  220 GiB  213 GiB  7.4 GiB  7.4 GiB      3.37

--- POOLS ---

POOL                  ID  PGS  STORED  OBJECTS    USED  %USED  MAX AVAIL

device_health_metrics  1    1      0 B        0      0 B      0    66 GiB

mypool                  2  32  1.2 MiB        1  3.5 MiB      0    66 GiB

.rgw.root              3  32  1.3 KiB        4  48 KiB      0    66 GiB

default.rgw.log        4  32  3.6 KiB      209  408 KiB      0    66 GiB

default.rgw.control    5  32      0 B        8      0 B      0    66 GiB

default.rgw.meta        6    8      0 B        0      0 B      0    66 GiB

myrbd1                  7  64  829 MiB      223  2.4 GiB  1.20    66 GiB

cephfs-metadata        8  32  563 KiB      23  1.7 MiB      0    66 GiB

cephfs-data            9  64  455 MiB      129  1.3 GiB  0.66    66 GiB

rbd1-data              10  32  124 MiB      51  373 MiB  0.18    66 GiB

Create a regular user and grant permissions:

root@ceph-deploy:/etc/ceph# ceph auth add client.huahualin mon "allow rw"  osd "allow rwx pool=rbd1-data"

added key for client.huahualin

root@ceph-deploy:/etc/ceph# ceph-authtool --create-keyring ceph.client.huahualin.keyring

creating ceph.client.huahualin.keyring

root@ceph-deploy:/etc/ceph# ceph auth  get client.huahualin -o ceph.client.huahualin.keyring

exported keyring for client.huahualin

Use the regular user with RBD; first copy the config and keyring to the client:

root@ceph-deploy:/etc/ceph# scp ceph.conf ceph.client.huahualin.keyring  root@192.168.241.21:/etc/ceph/

Map the image as the regular user:

[root@ceph-client1 ~]# rbd --user huahualin --pool rbd1-data map data-img2

/dev/rbd0

Mount the RBD as the regular user:

[root@ceph-client1 ~]# mkfs.ext4 /dev/rbd0

[root@ceph-client1 ~]# fdisk -l /dev/rbd0

[root@ceph-client1 ~]# mkdir /data

[root@ceph-client1 ~]# mount  /dev/rbd0 /data

[root@ceph-client1 ~]# df -Th

Filesystem              Type      Size  Used Avail Use% Mounted on

devtmpfs                devtmpfs  475M    0  475M  0% /dev

tmpfs                  tmpfs    487M    0  487M  0% /dev/shm

tmpfs                  tmpfs    487M  7.7M  479M  2% /run

tmpfs                  tmpfs    487M    0  487M  0% /sys/fs/cgroup

/dev/mapper/centos-root xfs        37G  1.7G  36G  5% /

/dev/sda1              xfs      1014M  138M  877M  14% /boot

tmpfs                  tmpfs      98M    0  98M  0% /run/user/0

192.168.241.12:6789:/  ceph      67G  456M  67G  1% /ceph_data

/dev/rbd0              ext4      4.8G  20M  4.6G  1% /data

After an RBD is mapped, the libceph.ko module is loaded automatically:

[root@ceph-client1 ~]# lsmod |grep ceph

ceph                  363016  1

libceph              306750  2 rbd,ceph

dns_resolver          13140  1 libceph

libcrc32c              12644  4 xfs,libceph,nf_nat,nf_conntrack

[root@ceph-client1 ~]# modinfo libceph

filename:      /lib/modules/3.10.0-1160.el7.x86_64/kernel/net/ceph/libceph.ko.xz

license:        GPL

description:    Ceph core library

author:        Patience Warnick <patience@newdream.net>

author:        Yehuda Sadeh <yehuda@hq.newdream.net>

author:        Sage Weil <sage@newdream.net>

retpoline:      Y

rhelversion:    7.9

srcversion:    D4ABB648AE8130ECF90AA3F

depends:        libcrc32c,dns_resolver

intree:        Y

vermagic:      3.10.0-1160.el7.x86_64 SMP mod_unload modversions

signer:        CentOS Linux kernel signing key

sig_key:        E1:FD:B0:E2:A7:E8:61:A1:D1:CA:80:A2:3D:CF:0D:BA:3A:A4:AD:F5

sig_hashalgo:  sha256

If an image runs out of space, we can expand it; shrinking is generally not recommended.

List the images in the rbd1-data pool:

[root@ceph-client1 ~]# rbd ls -p rbd1-data -l

NAME      SIZE  PARENT  FMT  PROT  LOCK

data-img1  3 GiB            2           

data-img2  5 GiB            2 

For example, if data-img2 is out of space and needs to grow, expand it to 8G:

[root@ceph-client1 ~]# rbd resize --pool rbd1-data --image data-img2 --size  8G

Resizing image: 100% complete...done.

The new size is visible via fdisk -l (and lsblk), but df -h still shows the old size, because the filesystem itself has not been grown yet; see the sketch after the fdisk output below.

[root@ceph-client1 ~]# lsblk

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sda              8:0    0  40G  0 disk

├─sda1            8:1    0    1G  0 part /boot

└─sda2            8:2    0  39G  0 part

  ├─centos-root 253:0    0  37G  0 lvm  /

  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]

sr0              11:0    1 1024M  0 rom 

rbd0            252:0    0    8G  0 disk /data

[root@ceph-client1 ~]# fdisk -l /dev/rbd0

Disk /dev/rbd0: 8589 MB, 8589934592 bytes, 16777216 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
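Growing the image does not grow the filesystem on it. A minimal sketch, assuming the ext4 filesystem created earlier is still mounted from /dev/rbd0 (an XFS filesystem would use xfs_growfs instead):

resize2fs /dev/rbd0

After this, df -h reports the new 8G size.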

Configure the mapping and mount to run at boot:

[root@ceph-client1 ~]# vi /etc/rc.d/rc.local

rbd --user huahualin --pool rbd1-data map data-img2

mount /dev/rbd0 /data

[root@ceph-client1 ~]# chmod a+x  /etc/rc.d/rc.local

[root@ceph-client1 ~]# reboot
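After the reboot, a quick check (standard commands, not part of the original transcript) that the image was mapped and mounted:

lsblk /dev/rbd0

df -h /data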

# Check that the CPU supports hardware virtualization

grep -E '(vmx|svm)' /proc/cpuinfo

yum install qemu virt kvm -y

Question: the yum install fails on packages that are already installed.

Solution: skip the ones already installed:

yum install qemu virt kvm -y --skip-broken

systemctl start libvirtd

systemctl enable libvirtd

virsh list

yum install -y bridge-utils

# Configure bridged networking

cd /etc/sysconfig/network-scripts

cp ifcfg-em2 ifcfg-br0

[root@localhost network-scripts]# vim ifcfg-em2

TYPE=Ethernet

BRIDGE=br0

NAME=em2

UUID=74c8085f-4c0d-4743-b0a0-70e51e3eb877

DEVICE=em2

ONBOOT=yes

# Note: change IPADDR to your own address

[root@localhost network-scripts]# vim ifcfg-br0

TYPE=Bridge

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=none

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=yes

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_FAILURE_FATAL=no

IPV6_ADDR_GEN_MODE=stable-privacy

NAME=br0

DEVICE=br0

ONBOOT=yes

IPADDR=172.16.10.3

PREFIX=24

GATEWAY=172.16.10.254

DNS1=114.114.114.114

systemctl restart network

# Verify

brctl show

cd /home/kvm

# Create the master VM's storage disk (10.4)

qemu-img create -f qcow2 -o cluster_size=2M k8s-master01.qcow2 200G

virt-install --virt-type kvm --os-type=linux --os-variant rhel7 --name k8s-master01.qcow2 --memory 8192 --vcpus 4 --disk /home/kvm/k8s-master01.qcow2,format=qcow2 --cdrom /home/kvm/CentOS-7-x86_64-DVD-2009.iso --network bridge=br0 --graphics vnc,listen=0.0.0.0 --noautoconsole

# Create the worker VM's storage disk (10.5)

qemu-img create -f qcow2 -o cluster_size=2M k8s-worker01.qcow2 200G

virt-install --virt-type kvm --os-type=linux --os-variant rhel7 --name k8s-worker01.qcow2 --memory 8192 --vcpus 4 --disk /home/kvm/k8s-worker01.qcow2,format=qcow2 --cdrom /home/kvm/CentOS-7-x86_64-DVD-2009.iso --network bridge=br0 --graphics vnc,listen=0.0.0.0 --noautoconsole

# Create the worker VM's storage disk (10.3)

qemu-img create -f qcow2 -o cluster_size=2M k8s-worker02.qcow2 200G

virt-install --virt-type kvm --os-type=linux --os-variant rhel7 --name k8s-worker02.qcow2 --memory 32768 --vcpus 32 --disk /home/kvm/k8s-worker02.qcow2,format=qcow2 --cdrom /home/kvm/CentOS-7-x86_64-DVD-2009.iso --network bridge=br0 --graphics vnc,listen=0.0.0.0 --noautoconsole

netstat -ntlp | grep 5900

virsh list --all

virsh shutdown k8s-master01.qcow2

virsh start k8s-master01.qcow2

ssh root@172.16.10.50  # password: starQuest2022

Question: the system hangs during boot

Solution:

virsh destroy k8s-master01.qcow2

virsh undefine k8s-master01.qcow2

Question: problems caused by a failed switch to bridged networking

Solution:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=static

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=yes

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_FAILURE_FATAL=no

IPV6_ADDR_GEN_MODE=stable-privacy

NAME=eth0

UUID=c510f2f9-9820-45e8-9c70-65674bd35258

DEVICE=eth0

ONBOOT=yes

IPADDR=172.16.10.50

PREFIX=24

GATEWAY=172.16.10.254

DNS1=114.114.114.114

systemctl restart network

Question: SSH refuses to connect because the host key recorded in known_hosts no longer matches.

Solution:

vi /root/.ssh/known_hosts and delete the line for the offending IP
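Equivalently, OpenSSH can remove the stale entry directly; the IP below is illustrative:

ssh-keygen -R 172.16.10.50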

# Set the hostnames

hostnamectl set-hostname k8s-master01

hostnamectl set-hostname k8s-worker01

hostnamectl set-hostname k8s-worker02

yum update

yum install wget

yum install vim

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

yum --enablerepo=elrepo-kernel install kernel-lt -y

vi /etc/default/grub

GRUB_TIMEOUT=5

GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"

GRUB_DEFAULT=0

GRUB_DISABLE_SUBMENU=true

GRUB_TERMINAL_OUTPUT="console"

GRUB_CMDLINE_LINUX="crashkernel=auto spectre_v2=retpoline rhgb quiet"

GRUB_DISABLE_RECOVERY="true"

grub2-mkconfig -o /boot/grub2/grub.cfg

reboot

uname -a

Option 1: configure the yum repo

scp -r docker-ce.repo 172.16.10.51:/etc/yum.repos.d/

yum install docker-ce

Option 2: install the rpm package with yum

yum install -y docker-ce-18.03.1.ce-1.el7.centos.x86_64.rpm

# Start Docker now and enable it at boot

systemctl start docker

systemctl enable docker

vi /etc/docker/daemon.json

{

"exec-opts": [

],

"log-driver": "json-file",

"log-level": "warn",

"log-opts": {

},

"registry-mirrors": [

],

"insecure-registries": ["harbor.bicisims.com"],

"selinux-enabled": false

}

systemctl restart docker

docker ps
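To confirm the daemon picked up daemon.json, docker info lists the active logging driver and insecure registries (a quick check):

docker info | grep -i -e 'logging driver' -e 'insecure'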

yum install ntpdate -y

ntpdate time2.aliyun.com

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

echo 'Asia/Shanghai' >/etc/timezone

crontab -e

0 12 * * * /usr/sbin/ntpdate time2.aliyun.com

systemctl stop firewalld

systemctl disable firewalld

systemctl status firewalld

sed -i 's/enforcing/disabled/' /etc/selinux/config

setenforce 0

swapoff -a

sed -ri 's/.*swap.*/#&/' /etc/fstab

cat >/etc/sysctl.d/k8s_better.conf <<EOF

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

net.ipv4.ip_forward=1

net.ipv4.tcp_tw_recycle=0

vm.swappiness=0

vm.overcommit_memory=1

vm.panic_on_oom=0

fs.inotify.max_user_instances=8192

fs.inotify.max_user_watches=1048576

fs.file-max=52706963

fs.nr_open=52706963

net.ipv6.conf.all.disable_ipv6=1

net.netfilter.nf_conntrack_max=2310720

EOF

cat /etc/sysctl.d/k8s_better.conf

sysctl -p /etc/sysctl.d/k8s_better.conf

The two errors sysctl reports above can be ignored (keys the running kernel does not support simply fail to apply).

cat >/etc/sysconfig/modules/ipvs.modules <<EOF

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

modprobe -- nf_conntrack

EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
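To confirm the modules loaded (a quick check):

lsmod | grep -e ip_vs -e nf_conntrack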

vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

# Make sure the hostname has been changed first

hostnamectl set-hostname k8s-master01

yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5

systemctl enable kubelet

kubeadm init --apiserver-advertise-address=172.16.10.50 --kubernetes-version=1.23.5 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16

[

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.10.50:6443 --token dak7im.w25v1sjl0kcm4y3c \

--discovery-token-ca-cert-hash sha256:afb2a0b22a3e563671103f93965f71a915f65054db74b7ffa97a84932a098f42

]

vi /etc/hosts

172.16.10.50 k8s-master01

172.16.10.51 k8s-worker01

172.16.10.52 k8s-worker02

kubectl get nodes

kubeadm token create --print-join-command

kubectl version

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

scp -r conf/ 172.16.10.50:/home/software/

wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

kubectl apply -f kube-flannel.yml

kubectl get pod -A

Question:

Solution:

kubectl explain DaemonSet

Question: a k8s Node stays in Pending

Solution:

$ vim /etc/kubernetes/manifests/kube-apiserver.yaml

spec:

containers:

$ kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml


kubectl get pod

yum install -y nfs-utils

systemctl enable nfs

systemctl start nfs

1) Install NFS

$ yum install -y nfs-utils rpcbind    # server side

$ yum install -y nfs-utils            # client side

2) Start the services

Server side:

systemctl enable rpcbind

systemctl start rpcbind

systemctl restart rpcbind

Client side:

systemctl enable nfs

systemctl start nfs

systemctl restart nfs

3) Create the shared directory (server side)

mkdir -p /home/data

vi /etc/exports

Write the NFS export configuration:

/home/data *(rw,sync,no_root_squash)
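After editing /etc/exports, reload the export table so the change takes effect (restarting the nfs service as above also works):

exportfs -r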

4) Check the NFS exports (server side)

showmount -e 172.16.10.5

5) To bring disks from other servers into the pool, install the NFS server on those machines as well and create shared directories there.

kubectl get pod -A

cd /root/tools/storageclass/

vim nfs-provisioner.yaml

kubectl apply -f rbac.yaml

kubectl apply -f nfs-provisioner.yaml

kubectl apply -f nfs-StorageClass.yaml

kubectl patch storageclass huaweinfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
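To confirm that huaweinfs is now marked as the default StorageClass (a quick check):

kubectl get storageclass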

wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml

wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml

kubectl apply -f kubesphere-installer.yaml

kubectl apply -f cluster-configuration.yaml

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Console: http://172.16.10.50:30880

Account: admin

Password: P@88w0rd starQuest2022

