Linux dm-multipath on local disk?
Compiled by: Hiu, Yen-Onn (yenonn@gmail.com), 7th May 2013
Problem: If you are running Linux (RHEL5 or RHEL6), some machines have local SCSI disks that get detected and claimed by dm-multipath. This is a known configuration fault, and multipathing is strongly not recommended for local disks. Please read this link from Red Hat for further clarification: (https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/DM_Multipath/#ignore_localdisk_procedure)
This is the scenario you see when you first log in to a problematic Linux/dm-multipath server: multipathing has acquired the / and /boot mount points. As a result, we cannot stop multipathd, because some of the disks are in use by the system.
Even with the local SCSI disk blacklisted in /etc/multipath.conf, we can see that the local disks are still not being ignored by dm-multipath.
[root@s11t0008c ~]# uname -r
2.6.32-220.4.2.el6.x86_64
[hiuy@s11t0008c ~]$ df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/mpathap3    1008M  459M  498M  48% /
tmpfs                     32G     0   32G   0% /dev/shm
/dev/mapper/mpathap1     186M   59M  118M  34% /boot
/dev/mapper/vg00-home    4.0G  158M  3.6G   5% /home
/dev/mapper/vg00-opt     4.0G  1.7G  2.1G  45% /opt
/dev/mapper/vg00-tmp     4.0G  138M  3.7G   4% /tmp
/dev/mapper/vg00-usr     4.0G  1.3G  2.6G  33% /usr
/dev/mapper/vg00-var     4.0G  634M  3.2G  17% /var
/dev/mapper/vg00-crash   7.7G  923M  6.4G  13% /var/crash
tmpfs                    4.0K     0  4.0K   0% /dev/vx
This is a snippet of /etc/multipath.conf:
blacklist {
#       devnode ".*"
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HP.*"
        }
        wwid 3600508b1001030363945374330300f00
}
[root@s11t0008c ~]# multipath -ll
mpatha (3600508b1001030363945374330300f00) dm-0 HP,LOGICAL VOLUME
size=279G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 0:0:0:1 sda 8:0 active ready running
[root@s11t0008c sysconfig]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Sep 28 08:12:03 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=e65ac106-53a5-4f0d-bd5e-ad563883c7d8 /          ext4   defaults        1 1
UUID=4d5a5077-8582-4f3d-8708-2588644686d1 /boot      ext3   defaults        1 2
/dev/mapper/vg00-home                     /home      ext4   defaults        1 2
/dev/mapper/vg00-opt                      /opt       ext4   defaults        1 2
/dev/mapper/vg00-tmp                      /tmp       ext4   defaults        1 2
/dev/mapper/vg00-usr                      /usr       ext4   defaults        1 2
/dev/mapper/vg00-var                      /var       ext4   defaults        1 2
UUID=5caa587b-f2d4-4063-9db3-f9b2901e816d swap       swap   defaults        0 0
tmpfs                                     /dev/shm   tmpfs  defaults        0 0
devpts                                    /dev/pts   devpts gid=5,mode=620  0 0
sysfs                                     /sys       sysfs  defaults        0 0
proc                                      /proc      proc   defaults        0 0
/dev/vg00/crash                           /var/crash ext3   defaults        2 2
In order to trace the UUID of the / mount point back to a device, we can query the blkid table for the device that has registered it:
[root@s11t0008c ~]# blkid -U e65ac106-53a5-4f0d-bd5e-ad563883c7d8
/dev/mapper/mpathap3
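As a quick cross-check (an optional step, assuming the same device names as above), running blkid against the mapper device should print back the same UUID, along these lines:
[root@s11t0008c ~]# blkid /dev/mapper/mpathap3
/dev/mapper/mpathap3: UUID="e65ac106-53a5-4f0d-bd5e-ad563883c7d8" TYPE="ext4"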
Solution to the problem
You have to make sure that the local disks have been blacklisted. In this case, you can specify individual devices by their WWID (World Wide Identifier) with a wwid entry in the blacklist section of the configuration file.
For example:
blacklist {
wwid 3600508b1001030363945374330300f00
}
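If you do not already know the WWID of the local disk, you can read it off the multipath -ll output above, or query the device directly with scsi_id. The command below is a sketch for RHEL6, where scsi_id lives in /lib/udev (on RHEL5 it is /sbin/scsi_id and takes different options); /dev/sda is the local disk in this example.
[root@s11t0008c ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sda
3600508b1001030363945374330300f00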
To verify that the devices have been blacklisted, you can use the command "multipath -v4"; you should see output like the one below.
===== paths list =====
uuid                              hcil    dev dev_t pri dm_st chk_st vend/pr
3600508b1001030363945374330300f00 0:0:0:1 sda 8:0   1   undef ready  HP,LOGI
3600508b4000756cf0000a000029d0000 1:0:0:1 sdb 8:16  10  undef ready  HP,HSV2
.
.
May 06 23:01:06 | sda: (HP:LOGICAL VOLUME) vendor/product blacklisted
May 06 23:01:06 | sdb: (HP:HSV210) vendor/product blacklisted
.
.
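The -v4 output is very long; if you only want to see the blacklist decisions, a simple filter such as the one below (a minimal sketch, assuming the debug messages land on stdout/stderr as shown above) is enough:
[root@s11t0008c ~]# multipath -v4 2>&1 | grep -i blacklist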
Then, you have to rebuild the initial ramdisk (initramfs) to make sure that during boot (init), the HP logical disk (the local disk) is already blacklisted.
1. Back up your existing initramfs:
[root@s11t0008c boot]# cp -p /boot/initramfs-`uname -r`.img /boot/initramfs-`uname -r`.img.bak
2. Generate the new initramfs and reboot the server:
[root@s11t0008c boot]# mkinitrd --force /boot/initramfs-`uname -r`.img `uname -r`
[root@s11t0008c boot]# init 6
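On RHEL6, mkinitrd is a thin wrapper around dracut, so the equivalent dracut call below should also work (a sketch; on RHEL5 stick with mkinitrd):
[root@s11t0008c boot]# dracut --force /boot/initramfs-`uname -r`.img `uname -r`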
Post configuration/verification
Once the reboot is done, you should see the following.
[root@s11t0008c ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda3               1008M  459M  499M  48% /
tmpfs                     32G     0   32G   0% /dev/shm
/dev/sda1                186M  168M  9.2M  95% /boot
/dev/mapper/vg00-home    4.0G  158M  3.6G   5% /home
/dev/mapper/vg00-opt     4.0G  1.7G  2.1G  45% /opt
/dev/mapper/vg00-tmp     4.0G  138M  3.7G   4% /tmp
/dev/mapper/vg00-usr     4.0G  1.3G  2.6G  33% /usr
/dev/mapper/vg00-var     4.0G  637M  3.2G  17% /var
/dev/mapper/vg00-crash   7.7G  923M  6.4G  13% /var/crash
tmpfs                    4.0K     0  4.0K   0% /dev/vx
[root@s11t0008c ~]# blkid -U e65ac106-53a5-4f0d-bd5e-ad563883c7d8
/dev/sda3
[root@s11t0008c ~]# mpathconf
multipath is enabled
find_multipaths is disabled
user_friendly_names is enabled
dm_multipath module is loaded
multipathd is chkconfiged on
[root@s11t0008c ~]# multipath -ll
[root@s11t0008c ~]#
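The empty multipath -ll output confirms that no multipath maps are left claiming the local disk. Should a stale, unused map ever survive (it should not after the reboot above), it can be flushed without another reboot, for example (a sketch, assuming the map is no longer mounted or otherwise in use):
[root@s11t0008c ~]# multipath -f mpatha    (flush one unused map by name)
[root@s11t0008c ~]# multipath -F           (flush all unused maps)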