multipath: when the underlying storage has been grown, how do you extend the filesystem on Linux?


When a Linux server is connected to shared storage through multipath and a filesystem runs out of space, there are two ways to extend it:

1. Keep the existing PV: grow the original LUN on the array, then extend the LV and the filesystem.

2. Create a new PV: present a new LUN, add the PV to the VG, then extend the LV and the filesystem.

The steps below cover scenario 1 (though I would personally recommend approach 2, creating a new PV; a sketch of approach 2 follows):
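For reference, approach 2 is a short sequence. A minimal sketch, assuming hypothetical names: the new LUN appears as the multipath device mpathc, the VG is datavg, and the LV datalv carries an ext4 filesystem:

# pvcreate /dev/mapper/mpathc                 # initialize the new LUN as a PV
# vgextend datavg /dev/mapper/mpathc          # add the PV to the existing VG
# lvextend -l +100%FREE /dev/datavg/datalv    # grow the LV into the new space
# resize2fs /dev/datavg/datalv                # grow ext4; use xfs_growfs for XFS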

Environment

If you have this specific scenario, you can use the following steps:

Note: if these LVs are part of a clustered VG, steps 1 and 2 need to be performed on all nodes.

1) Update block devices

Note: This step needs to be run against every sd device mapping to that LUN. When using multipath, there will be more than one; use multipath -ll to see the paths behind each multipath device.
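For example, a sketch assuming multipath -ll shows the LUN behind the paths /dev/sdc and /dev/sdf (substitute your own):

# multipath -ll                            # find the sd paths behind the map
# echo 1 > /sys/block/sdc/device/rescan    # make the kernel re-read the LUN size
# echo 1 > /sys/block/sdf/device/rescan    # repeat for every path to that LUN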

2) Update multipath device

Example:
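Assuming the multipath map is named mpathb (a hypothetical name; use the one multipath -ll reports):

# multipathd -k"resize map mpathb"    # tell multipathd to pick up the new size
# multipath -ll mpathb                # confirm the map shows the grown size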

3) Resize the physical volume, which will also resize the volume group
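For instance, with the same hypothetical map name:

# pvresize /dev/mapper/mpathb    # grows the PV and, with it, the VG capacity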

4) Resize your logical volume (the below command takes all available space in the vg)
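A sketch, assuming a VG named testvg and an LV named testlv:

# lvextend -l +100%FREE /dev/testvg/testlv    # take all free extents in the VG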

5) Resize your filesystem
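The command depends on the filesystem type; both lines below use placeholder names:

# resize2fs /dev/testvg/testlv    # ext3/ext4 (online grow supported)
# xfs_growfs /test                # XFS grows via the mount point instead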

6) Verify vg, lv and filesystem extension has worked appropriately
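For example:

# vgs       # VSize/VFree should reflect the new capacity
# lvs       # LSize should show the extended LV
# df -h     # the mounted filesystem should report the new size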

Simulated walkthrough: the array grows the LUN behind testlv, and the extra space is propagated up the stack (a consolidated command sketch follows this list):

Check the multipath state on the client

Rescan the storage on the client

Update the multipath device

Update the PV size

Extend the LV

Extend the filesystem
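Putting the steps together, a minimal end-to-end sketch under assumed names: the map is mpathb, the VG testvg, the LV testlv, mounted as ext4 at /test:

# multipath -ll mpathb                        # check current paths and size
# echo 1 > /sys/block/sdc/device/rescan       # rescan each sd path (sdc, sdf, ...)
# echo 1 > /sys/block/sdf/device/rescan
# multipathd -k"resize map mpathb"            # update the multipath device
# pvresize /dev/mapper/mpathb                 # grow the PV (and the VG with it)
# lvextend -l +100%FREE /dev/testvg/testlv    # extend the LV
# resize2fs /dev/testvg/testlv                # grow the ext4 filesystem online
# df -h /test                                 # verify the new size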

Viewing HDS storage multipath devices on Linux

On Red Hat, first identify the storage space to be allocated. In this example the space is multipath storage presented to the servers from an HDS AMS2000 array: sddlmad is the device to be allocated on ycdb1, and sddlmah is the device to be allocated on ycdb2. Details follow:

Check the environment

# rpm -qa|grep device-mapper

device-mapper-event-1.02.32-1.el5

device-mapper-multipath-0.4.7-30.el5

device-mapper-1.02.32-1.el5

# rpm -qa|grep lvm2
lvm2-2.02.46-8.el5

Check the disk space

# fdisk -l

Disk /dev/sddlmad: 184.2 GB, 184236900352 bytes
255 heads, 63 sectors/track, 22398 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sddlmah: 184.2 GB, 184236900352 bytes
255 heads, 63 sectors/track, 22398 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Check the storage

#cd /opt/DynamicLinkManager/bin/

#./dlnkmgr view -lu

Product      : AMS
SerialNumber : 83041424
LUs          : 8

iLU  HDevName Device     PathID Status
0000 sddlmaa  /dev/sdb   000000 Online
              /dev/sdj   000008 Online
              /dev/sdr   000016 Online
              /dev/sdz   000017 Online
0001 sddlmab  /dev/sdc   000001 Online
              /dev/sdk   000009 Online
              /dev/sds   000018 Online
              /dev/sdaa  000019 Online
0002 sddlmac  /dev/sdd   000002 Online
              /dev/sdl   000010 Online
              /dev/sdt   000020 Online
              /dev/sdab  000021 Online
0003 sddlmad  /dev/sde   000003 Online
              /dev/sdm   000011 Online
              /dev/sdu   000022 Online
              /dev/sdac  000023 Online
0004 sddlmae  /dev/sdf   000004 Online
              /dev/sdn   000012 Online
              /dev/sdv   000024 Online
              /dev/sdad  000025 Online
0005 sddlmaf  /dev/sdg   000005 Online
              /dev/sdo   000013 Online
              /dev/sdw   000026 Online
              /dev/sdae  000027 Online
0006 sddlmag  /dev/sdh   000006 Online
              /dev/sdp   000014 Online
              /dev/sdx   000028 Online
              /dev/sdaf  000029 Online
0007 sddlmah  /dev/sdi   000007 Online
              /dev/sdq   000015 Online
              /dev/sdy   000030 Online
              /dev/sdag  000031 Online

##############################################################

4. Modifying lvm.conf

For LVM to work correctly with the HDLM devices, its device filter needs to be modified:

# cd /etc/lvm
# vi lvm.conf

# By default we accept every block device

# filter = [ "a/.*/" ]

filter = [ "a|sddlm[a-p][a-p]|.*|","r|dev/sd|" ]

Example:

[root@bsrunbak etc]# ls -l lvm*

[root@bsrunbak etc]# cd lvm

[root@bsrunbak lvm]# ls

archive backup cache lvm.conf

[root@bsrunbak lvm]# more lvm.conf

[root@bsrunbak lvm]# pvs

Last login: Fri Jul 10 11:17:21 2015 from 172.17.99.198

[root@bsrunserver1 ~]#

[root@bsrunserver1 ~]#

[root@bsrunserver1 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda4 30G 8.8G 20G 32% /

tmpfs 95G 606M 94G 1% /dev/shm

/dev/sda2 194M 33M 151M 18% /boot

/dev/sda1 200M 260K 200M 1% /boot/efi

/dev/mapper/datavg-oraclelv

50G 31G 17G 65% /oracle

172.16.110.25:/Tbackup

690G 553G 102G 85% /Tbackup

/dev/mapper/tmpvg-oradatalv

345G 254G 74G 78% /oradata

/dev/mapper/datavg-lvodc

5.0G 665M 4.1G 14% /odc

[root@bsrunserver1 ~]# pvs

PV           VG     Fmt  Attr PSize   PFree
/dev/sda5    datavg lvm2 a--  208.06g 153.06g
/dev/sddlmba tmpvg  lvm2 a--  200.00g  49.99g
/dev/sddlmbb tmpvg  lvm2 a--  200.00g       0

[root@bsrunserver1 ~]# cd /etc/lvm

[root@bsrunserver1 lvm]# more lvm.conf

# Don't have more than one filter line active at once: only one gets used.

# Run vgscan after you change this parameter to ensure that

# the cache file gets regenerated (see below).

# If it doesn't do what you expect, check the output of 'vgscan -vvvv'.

# By default we accept every block device:

# filter = [ "a/.*/" ]

# Exclude the cdrom drive

# filter = [ "r|/dev/cdrom|" ]

# When testing I like to work with just loopback devices:

# filter = [ "a/loop/", "r/.*/" ]

# Or maybe all loops and ide drives except hdc:

# filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]

# Use anchors if you want to be really specific

# filter = [ "a|^/dev/hda8$|", "r/.*/" ]

filter = [ "a|/dev/sddlm.*|", "a|^/dev/sda5$|", "r|.*|" ]

[root@bsrunserver1 lvm]# df

Filesystem            1K-blocks      Used Available Use% Mounted on
/dev/sda4              30963708   9178396  20212448  32% /
tmpfs                  99105596    620228  98485368   1% /dev/shm
/dev/sda2                198337     33546    154551  18% /boot
/dev/sda1                204580       260    204320   1% /boot/efi
/dev/mapper/datavg-oraclelv
                       51606140  31486984  17497716  65% /oracle
172.16.110.25:/Tbackup
                      722486368 579049760 106736448  85% /Tbackup
/dev/mapper/tmpvg-oradatalv
                      361243236 266027580  76865576  78% /oradata
/dev/mapper/datavg-lvodc
                        5160576    680684   4217748  14% /odc

[root@bsrunserver1 lvm]#

You have new mail in /var/spool/mail/root

[root@bsrunserver1 lvm]#

[root@bsrunserver1 lvm]# pvs

PV           VG     Fmt  Attr PSize   PFree
/dev/sda5    datavg lvm2 a--  208.06g 153.06g
/dev/sddlmba tmpvg  lvm2 a--  200.00g  49.99g
/dev/sddlmbb tmpvg  lvm2 a--  200.00g       0

[root@bsrunserver1 lvm]#

Change into the HDLM tool directory:

[root@bsrunbak lvm]# cd /opt/D*/bin

and confirm where the glob landed:

[root@bsrunbak bin]# pwd

/opt/DynamicLinkManager/bin

Display the HDS storage LUs:

[root@bsrunbak lvm]# ./dlnkmgr view -lu

