What is Linux multipath storage?


Viewing HDS storage multipaths on Linux

First, identify on Red Hat the storage space to be allocated. In this example, the space was carved out of an HDS AMS2000 array and presented to the servers over multiple paths: sddlmad is the device to be allocated on ycdb1, and sddlmah is the device to be allocated on ycdb2. Details follow:

Check the environment

# rpm -qa|grep device-mapper

device-mapper-event-1.02.32-1.el5

device-mapper-multipath-0.4.7-30.el5

device-mapper-1.02.32-1.el5

# rpm -qa|grep lvm2

lvm2-2.02.46-8.el5

Check the space

#fdisk -l

Disk /dev/sddlmad: 184.2 GB, 184236900352 bytes

255 heads, 63 sectors/track, 22398 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sddlmah: 184.2 GB, 184236900352 bytes

255 heads, 63 sectors/track, 22398 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Check the storage

#cd /opt/DynamicLinkManager/bin/

#./dlnkmgr view -lu

Product : AMS

SerialNumber : 83041424

LUs : 8

iLU HDevName Device PathID Status

0000 sddlmaa /dev/sdb 000000 Online

/dev/sdj 000008 Online

/dev/sdr 000016 Online

/dev/sdz 000017 Online

0001 sddlmab /dev/sdc 000001 Online

/dev/sdk 000009 Online

/dev/sds 000018 Online

/dev/sdaa 000019 Online

0002 sddlmac /dev/sdd 000002 Online

/dev/sdl 000010 Online

/dev/sdt 000020 Online

/dev/sdab 000021 Online

0003 sddlmad /dev/sde 000003 Online

/dev/sdm 000011 Online

/dev/sdu 000022 Online

/dev/sdac 000023 Online

0004 sddlmae /dev/sdf 000004 Online

/dev/sdn 000012 Online

/dev/sdv 000024 Online

/dev/sdad 000025 Online

0005 sddlmaf /dev/sdg 000005 Online

/dev/sdo 000013 Online

/dev/sdw 000026 Online

/dev/sdae 000027 Online

0006 sddlmag /dev/sdh 000006 Online

/dev/sdp 000014 Online

/dev/sdx 000028 Online

/dev/sdaf 000029 Online

0007 sddlmah /dev/sdi 000007 Online

/dev/sdq 000015 Online

/dev/sdy 000030 Online

/dev/sdag 000031 Online

##############################################################

4. Modifying lvm.conf

For LVM to use the HDLM devices correctly, its device filter must be modified:

#cd /etc/lvm

#vi lvm.conf

# By default we accept every block device

# filter = [ "a/.*/" ]

filter = [ "a|sddlm[a-p][a-p].*|", "r|/dev/sd|" ]

(Accept the HDLM sddlm* devices and reject the raw /dev/sd* paths; the first pattern that matches a device wins.)
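After changing the filter, regenerate the LVM cache and confirm that only the intended devices are scanned. A minimal check (these two commands are my addition, not part of the original transcript):

#vgscan

#pvs

pvs should now report PVs only on sddlm* devices (plus any whitelisted local disks).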

Example:

[root@bsrunbak etc]# ls -l lvm*

[root@bsrunbak etc]# cd lvm

[root@bsrunbak lvm]# ls

archive backup cache lvm.conf

[root@bsrunbak lvm]# more lvm.conf

[root@bsrunbak lvm]# pvs

Last login: Fri Jul 10 11:17:21 2015 from 172.17.99.198

[root@bsrunserver1 ~]#

[root@bsrunserver1 ~]#

[root@bsrunserver1 ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/sda4 30G 8.8G 20G 32% /

tmpfs 95G 606M 94G 1% /dev/shm

/dev/sda2 194M 33M 151M 18% /boot

/dev/sda1 200M 260K 200M 1% /boot/efi

/dev/mapper/datavg-oraclelv

50G 31G 17G 65% /oracle

172.16.110.25:/Tbackup

690G 553G 102G 85% /Tbackup

/dev/mapper/tmpvg-oradatalv

345G 254G 74G 78% /oradata

/dev/mapper/datavg-lvodc

5.0G 665M 4.1G 14% /odc

[root@bsrunserver1 ~]# pvs

PV VG Fmt Attr PSize PFree

/dev/sda5 datavg lvm2 a-- 208.06g 153.06g

/dev/sddlmba tmpvg lvm2 a-- 200.00g 49.99g

/dev/sddlmbb tmpvg lvm2 a-- 200.00g 0

[root@bsrunserver1 ~]# cd /etc/lvm

[root@bsrunserver1 lvm]# more lvm.conf

# Don't have more than one filter line active at once: only one gets used.

# Run vgscan after you change this parameter to ensure that

# the cache file gets regenerated (see below).

# If it doesn't do what you expect, check the output of 'vgscan -vvvv'.

# By default we accept every block device:

# filter = [ "a/.*/" ]

# Exclude the cdrom drive

# filter = [ "r|/dev/cdrom|" ]

# When testing I like to work with just loopback devices:

# filter = [ "a/loop/", "r/.*/" ]

# Or maybe all loops and ide drives except hdc:

# filter =[ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]

# Use anchors if you want to be really specific

# filter = [ "a|^/dev/hda8$|", "r/.*/" ]

filter = [ "a|/dev/sddlm.*|", "a|^/dev/sda5$|", "r|.*|" ]

[root@bsrunserver1 lvm]# df

Filesystem 1K-blocks Used Available Use% Mounted on

/dev/sda4 30963708 9178396 20212448 32% /

tmpfs 99105596 620228 98485368 1% /dev/shm

/dev/sda2 198337 33546 154551 18% /boot

/dev/sda1 204580 260 204320 1% /boot/efi

/dev/mapper/datavg-oraclelv

51606140 31486984 17497716 65% /oracle

172.16.110.25:/Tbackup

722486368 579049760 106736448 85% /Tbackup

/dev/mapper/tmpvg-oradatalv

361243236 266027580 76865576 78% /oradata

/dev/mapper/datavg-lvodc

5160576 680684 4217748 14% /odc

[root@bsrunserver1 lvm]#

You have new mail in /var/spool/mail/root

[root@bsrunserver1 lvm]#

[root@bsrunserver1 lvm]# pvs

PV VG Fmt Attr PSize PFree

/dev/sda5 datavg lvm2 a-- 208.06g 153.06g

/dev/sddlmba tmpvg lvm2 a-- 200.00g 49.99g

/dev/sddlmbb tmpvg lvm2 a-- 200.00g 0

[root@bsrunserver1 lvm]#

Change into the HDLM tool directory:

[root@bsrunbak lvm]# cd /opt/D*/bin

[root@bsrunbak bin]# pwd

/opt/DynamicLinkManager/bin

Display the HDS storage LUs:

[root@bsrunbak lvm]# ./dlnkmgr view -lu

Linux multipathing means that a host reaches the same storage device over more than one physical path (multiple HBAs, SAN switch ports, or array controllers), rather than over a single host-to-disk link. The multipath layer aggregates these one-to-many path relationships into a single virtual disk device, providing path failover and load balancing.
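One way to see this in practice: every sd path to the same LU reports the same SCSI WWID. A quick check on RHEL 5 (my illustration, assuming sdb and sdj are two paths to iLU 0000 as in the dlnkmgr listing above):

# /sbin/scsi_id -g -u -s /block/sdb

# /sbin/scsi_id -g -u -s /block/sdj

If both commands print the same ID, the two sd devices are paths to the same LUN.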

1. Install the multipath packages:

device-mapper-1.02.67-2.el5

device-mapper-event-1.02.67-2.el5

device-mapper-multipath-0.4.7-48.el5

[root@RKDB01 Server]# rpm -ivh device-mapper-1.02.67-2.el5.x86_64.rpm

warning: device-mapper-1.02.67-2.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing... ########################################### [100%]

package device-mapper-1.02.67-2.el5.x86_64 is already installed

[root@RKDB01 Server]# rpm -ivh device-mapper-event-1.02.67-2.el5.x86_64.rpm

warning: device-mapper-event-1.02.67-2.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing... ########################################### [100%]

package device-mapper-event-1.02.67-2.el5.x86_64 is already installed

[root@RKDB01 Server]# rpm -ivh device-mapper-multipath-0.4.7-48.el5.x86_64.rpm

warning: device-mapper-multipath-0.4.7-48.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing... ########################################### [100%]

package device-mapper-multipath-0.4.7-48.el5.x86_64 is already installed

2. Enable the service at boot, and check that the packages and kernel module are in place:

chkconfig --level 345 multipathd on

lsmod |grep dm_multipath

[root@RKDB01 Server]# chkconfig --level 345 multipathd on

[root@RKDB01 Server]# lsmod |grep dm_multipath

dm_multipath 58969 0

scsi_dh 42561 1 dm_multipath

dm_mod 102417 4 dm_mirror,dm_multipath,dm_raid45,dm_log

[root@RKDB01 Server]#

3. Configure multipathd so it works properly. Edit /etc/multipath.conf and enable the following content:

defaults {

udev_dir /dev

polling_interval 10

selector "round-robin 0"

path_grouping_policy multibus

getuid_callout "/sbin/scsi_id -g -u -s /block/%n"

prio_callout none

path_checker readsector0

rr_min_io 100

max_fds 8192

rr_weight priorities

failback immediate

no_path_retry fail

user_friendly_names yes

}

blacklist {

wwid 26353900f02796769

devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"

devnode "^hd[a-z]"

}
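The wwid entry in the blacklist above is the local system disk, which must never be multipathed. A typical way to obtain that value (my assumption; the wwid 26353900f02796769 shown above is specific to this host):

[root@RKDB01 Server]# /sbin/scsi_id -g -u -s /block/sda

Paste the printed wwid into the blacklist section.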

4. And comment out the following:

#blacklist {

# devnode “*”

#}

#defaults {

# user_friendly_names yes

#}
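After editing, the configuration can be sanity-checked with a dry run; -d makes multipath print the maps it would create without committing them (my addition, not part of the original transcript):

[root@RKDB01 Server]# multipath -v2 -d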

5. When that is done, run the following commands to discover the multipaths:

[root@RKDB01 Server]# modprobe dm-multipath

[root@RKDB01 Server]# multipath -F

[root@RKDB01 Server]# modprobe dm-round-robin

[root@RKDB01 Server]# service multipathd restart

Stopping multipathd daemon: [ OK ]

Starting multipathd daemon: [ OK ]

[root@RKDB01 Server]# multipath -v2

[root@RKDB01 Server]# multipath -v2

[root@RKDB01 Server]# multipath -ll

mpath1 (3600d02310000011b16a5d57c6a1bd99a) dm-0 TOYOU,NetStor_iSUM510

[size=3.3T][features=0][hwhandler=0][rw]

\_ round-robin 0 [prio=2][enabled]

\_ 1:0:0:0 sdb 8:16 [failed][ready]

\_ 1:0:1:0 sdc 8:32 [failed][ready]

[root@RKDB01 Server]#

6. After rebooting the server, the multipath information is visible:

[root@RKDB01 ~]# ll /dev/mapper/

total 0

crw------- 1 root root 10, 60 11-05 22:35 control

brw-rw---- 1 root disk 253, 0 11-05 22:35 mpath1

brw-rw---- 1 root disk 253, 1 11-05 22:35 mpath2

[root@RKDB01 ~]# multipath -ll

mpath2 (3600d02310000011b76128b9c63138cf4) dm-1 TOYOU,NetStor_iSUM510

[size=3.2T][features=0][hwhandler=0][rw]

\_ round-robin 0 [prio=2][active]

\_ 1:0:0:1 sdc 8:32 [active][ready]

\_ 1:0:1:1 sde 8:64 [active][ready]

mpath1 (3600d02310000011b16a5d57c6a1bd99a) dm-0 TOYOU,NetStor_iSUM510

[size=20G][features=0][hwhandler=0][rw]

\_ round-robin 0 [prio=2][active]

\_ 1:0:0:0 sdb 8:16 [active][ready]

\_ 1:0:1:0 sdd 8:48 [active][ready]

7. fdisk now shows two new disks, dm-0 and dm-1, which are exactly the multipath aggregates of sdb/sdd and sdc/sde:

[root@RKDB01 ~]# fdisk -l

Disk /dev/sda: 299.4 GB, 299439751168 bytes

255 heads, 63 sectors/track, 36404 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sda1 * 1 38 305203+ 83 Linux

/dev/sda2 39 13092 104856255 83 Linux

/dev/sda3 13093 19619 52428127+ 83 Linux

/dev/sda4 19620 36404 134825512+ 5 Extended

/dev/sda5 19620 26146 52428096 83 Linux

/dev/sda6 26147 28757 20972826 83 Linux

/dev/sda7 28758 30324 12586896 82 Linux swap / Solaris

/dev/sda8 30325 36404 48837568+ 83 Linux

Disk /dev/sdb: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 3568.4 GB, 3568429957120 bytes

255 heads, 63 sectors/track, 433836 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 3568.4 GB, 3568429957120 bytes

255 heads, 63 sectors/track, 433836 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/dm-0: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 3568.4 GB, 3568429957120 bytes

255 heads, 63 sectors/track, 433836 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/sdf: 4009 MB, 4009754624 bytes

255 heads, 63 sectors/track, 487 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sdf4 * 1 488 3915744+ b W95 FAT32

Partition 4 has different physical/logical endings:

phys=(486, 254, 63) logical=(487, 125, 22)

[root@RKDB01 ~]#
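The data-data001 device in the next listing is an LVM logical volume layered on top of a multipath device. A sketch of how such a volume could have been created (the VG/LV names data/data001 come from the listing below; the size and filesystem are illustrative assumptions):

[root@RKDB01 ~]# pvcreate /dev/mapper/mpath2

[root@RKDB01 ~]# vgcreate data /dev/mapper/mpath2

[root@RKDB01 ~]# lvcreate -L 100G -n data001 data

[root@RKDB01 ~]# mkfs.ext3 /dev/mapper/data-data001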

8. The multipath mappings can also be seen in the /dev/mapper directory:

[root@RKDB01 ~]# ll /dev/mapper/

total 0

crw------- 1 root root 10, 60 11-06 00:49 control

brw-rw---- 1 root disk 253, 2 11-06 00:49 data-data001

brw-rw---- 1 root disk 253, 0 11-06 00:49 mpath1

brw-rw---- 1 root disk 253, 1 11-06 00:49 mpath2

When a Linux server is connected to shared storage through multipath, there are several ways to grow a filesystem once it runs short of space:

1. Keep the existing PV: grow the underlying LUN on the array, then grow the LV and the filesystem.

2. Create a new PV, add it to the VG, then grow the LV and the filesystem.

The following shows how to proceed in scenario 1 (though personally I would recommend approach 2, creating a new PV):

If you have this specific scenario, you can use the following steps.

Note: if these LVs are part of a clustered VG, steps 1 and 2 need to be performed on all nodes.

1) Update the block devices

Note: this step needs to be run against every sd device mapping to that LUN; with multipathing there will be more than one. Use multipath -ll to see which sd paths make up each aggregated device.
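A sketch of this rescan, assuming sdb and sdd are the two paths to the LUN that was grown (the device names are illustrative):

# echo 1 > /sys/block/sdb/device/rescan

# echo 1 > /sys/block/sdd/device/rescan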

2) Update multipath device

For example:
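A sketch (my assumptions: the map being grown is mpath1, and this form of resize requires a reasonably recent device-mapper-multipath):

# multipathd -k"resize map mpath1"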

3) Resize the physical volume, which will also resize the volume group
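For example, assuming the PV sits directly on the mpath1 map:

# pvresize /dev/mapper/mpath1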

4) Resize your logical volume (the command below takes all available free space in the VG)
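For example (testvg/testlv are hypothetical names, matching the demo headings at the end of this article):

# lvextend -l +100%FREE /dev/testvg/testlv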

5) Resize your filesystem
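For an ext3/ext4 filesystem this can be done online; other filesystems have their own grow tools:

# resize2fs /dev/testvg/testlv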

6) Verify that the VG, LV and filesystem were extended as expected
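For example:

# pvs

# vgs

# lvs

# df -h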

In summary, the demo walked through:

Simulate a storage-side expansion so testlv can be grown

Check the multipath state on the client

Rescan the storage on the client

Update the aggregated multipath device

Extend the PV

Extend the LV

Extend the filesystem

