A customer's installation had some Linux systems and an external drive array, with the array data mounted on Linux via iSCSI, yielding a number of mounted partitions. This installation used the LVM (logical volume manager), which added another layer of abstraction, and it took me a while to sort out where things came from. This post memorializes how I tracked this down.
Implementation notes: I was reverse engineering a fairly old system, so almost everything is dated. In particular, the Linux systems were running CentOS 5.3, which is long past being supported. As I write this, the current CentOS is around version 7.4.
I have not researched command differences between the old CentOS and anything newer.
Our ultimate goal is to figure out which iSCSI targets are found on the local system, and almost everybody starts with either the "mount" command, which shows only mount points, or "df", which shows mount points plus size information.
# df
Filesystem                           1K-blocks       Used  Available Use% Mounted on
/dev/mapper/VOLGR_SYS-VOL_ROOT         5078656    2076892    2739620  44% /
/dev/mapper/VOLGR_SYS-VOL_USR         30472188    1891600   27007724   7% /usr
/dev/mapper/VOLGR_SYS-VOL_TMP         10157368     154348    9478732   2% /tmp
/dev/sda1                                93307      27744      60746  32% /boot
/dev/mapper/VOLGR_DATA-VOL_DATABASE  304721920  256764784   32228496  89% /var/local/database
/dev/mapper/VOLGR_DATA-VOL_EXPORT    633788608  532349916   68724760  89% /export
/dev/mapper/VOLGR_DATA-VOL_HOME      101573920    5461968   90869072   6% /home
/dev/sdd1                           1922470928   67098676 1757716332   4% /rsync-backups
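If you're mapping a machine with many mounts, it can help to mechanically separate the device-mapper (LVM) entries from the raw partitions. A minimal Python sketch, fed a few rows of the df output above; the classify helper is just for illustration:

```python
# Classify df entries: device-mapper (LVM) paths vs. raw disk partitions.
# The sample text is a few rows of the df output from this post.
SAMPLE_DF = """\
/dev/mapper/VOLGR_SYS-VOL_ROOT 5078656 2076892 2739620 44% /
/dev/sda1 93307 27744 60746 32% /boot
/dev/mapper/VOLGR_DATA-VOL_HOME 101573920 5461968 90869072 6% /home
/dev/sdd1 1922470928 67098676 1757716332 4% /rsync-backups
"""

def classify(df_text):
    """Return a list of (device, mountpoint, kind) tuples."""
    rows = []
    for line in df_text.splitlines():
        fields = line.split()
        device, mountpoint = fields[0], fields[-1]
        kind = "LVM" if device.startswith("/dev/mapper/") else "raw"
        rows.append((device, mountpoint, kind))
    return rows

for dev, mnt, kind in classify(SAMPLE_DF):
    print(f"{kind:4} {dev:40} {mnt}")
```

Anything under /dev/mapper/ will need the LVM-decoding steps later in this post; the raw entries map straight to a partition on some disk.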
Some of the above are found on internal hard drives inside the Linux system itself, while others come from the drive array; you can't tell for sure from here (though you could take some guesses), which is why we're reverse engineering it.
Drive array configuration
This depends entirely on the array product: they are using a Promise m210i VTrak array, and this is always the first place to start in order to determine the logical unit numbers (LUNs). Each one is a separate "device" that can be exported to another system.
Most disk array products can provide data in many forms (NFS, CIFS/Windows network sharing, AppleTalk, rsync, etc.), but this case uses iSCSI. It's a fairly efficient protocol, but it allows each volume to be exported to only one system. If further sharing is required, the computer system mounting the data has to export it somehow.
I won't discuss how to create or manage logical devices—consult the product manual for that—we're only looking at logical devices that are already there.
Our example has two logical units, LD0 and LD1.
The two logical devices, LD0 and LD1, are exported via iSCSI to a Linux system; the client sees the size of each device, but not its name or RAID level.
The question now is how these are mapped on the Linux system.
iSCSI block devices
Our example assumes that the disk array is exporting only to a single Linux system, though this is not inherently the case: it's possible (and common) for a disk array device to provide space for many systems, though I believe each logical device can be mounted by only one machine at a time (hence: many clients means many logical devices).
In iSCSI parlance, the "initiator" is the Linux system trying to use the data, and the "target" is the disk array device. There's an odd naming scheme for devices that we don't really need to get into for our purposes of just mapping what mounts where, but these two terms, initiator and target, will come up now and then.
The iscsiadm command allows us to query what's mounted where; the output format is odd and takes a bit of getting used to (some fields removed):
# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-724
iscsiadm version 2.0-868
Target: iqn.1994-12.com.promise.4e.220.127.116.11.0.0.20
    Current Portal: 10.97.1.6:3260,1        ← which array device are we talking to?
    Persistent Portal: 10.97.1.6:3260,1
    ...
    ************************
    Attached SCSI devices:
    ************************
    Host Number: 5  State: running
    scsi5 Channel 00 Id 0 Lun: 0
        Attached scsi disk sdc    State: running   ← array LUN#0 = Linux /dev/sdc
    scsi5 Channel 00 Id 0 Lun: 1
        Attached scsi disk sdd    State: running   ← array LUN#1 = Linux /dev/sdd
The IP addresses show the connection to the disk array, and in a multi-array configuration you can use this to figure out which unit it's connected to.
But the part we care about is the last section, "Attached SCSI devices": it shows the mapping of each logical unit on the array to the attached disk device on the Linux system, and we can marry this to the logical unit information from the array itself.
Array LUN   Array name   Array size        Linux block device
---------   ----------   ---------------   ------------------
#0          DARY00_00    1 TB, RAID 5      /dev/sdc
#1          DARY00_01    1.82 TB, RAID 0   /dev/sdd
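The LUN-to-device half of that table can be scraped from the iscsiadm output rather than read by eye. A sketch in Python, assuming the two-line "Lun: N" / "Attached scsi disk sdX" layout shown above; lun_map is a hypothetical helper, and on a live system you'd feed it the actual output of iscsiadm -m session -P 3:

```python
import re

# Sample fragment of `iscsiadm -m session -P 3` output (from this post).
SAMPLE = """\
scsi5 Channel 00 Id 0 Lun: 0
        Attached scsi disk sdc          State: running
scsi5 Channel 00 Id 0 Lun: 1
        Attached scsi disk sdd          State: running
"""

def lun_map(text):
    """Map array LUN number -> Linux block device, e.g. {0: '/dev/sdc'}."""
    mapping, lun = {}, None
    for line in text.splitlines():
        m = re.search(r"Lun:\s+(\d+)", line)
        if m:
            lun = int(m.group(1))
            continue
        m = re.search(r"Attached scsi disk (\w+)", line)
        if m and lun is not None:
            mapping[lun] = "/dev/" + m.group(1)
            lun = None
    return mapping

print(lun_map(SAMPLE))   # {0: '/dev/sdc', 1: '/dev/sdd'}
```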
The Linux system will surely have other block devices, and in this case I'd expect to see /dev/sda and /dev/sdb used for internal hard drives. They will have filesystems as well, but they won't involve this iSCSI array.
Block devices are usually partitioned with the fdisk command (which brings back MS-DOS memories), and we can use it to see how each one is divided up. We are only looking at the partitioning, not changing anything, so tread carefully here and exit the tool (with the "q" command) as soon as you have what you need. Alternatively, "fdisk -l /dev/sdc" prints the partition table and exits without entering interactive mode at all.
# fdisk /dev/sdc
...
Command (m for help): p                       ← p=print partition table

Disk /dev/sdc: 1099.5 GB, 1099512152064 bytes
255 heads, 63 sectors/track, 133674 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1      133674  1073736373+  8e  Linux LVM   ← Our LVM partition

Command (m for help): q                       ← q=quit without saving
For systems not using the logical volume manager, there may well be multiple partitions (sdc1, sdc2, etc.), each holding a mountable filesystem; but in this case we're obviously using the logical volume manager, so we have to dig further.
For /dev/sdd, though, the partitioning shows /dev/sdd1 with type "Linux" rather than "Linux LVM", and we see that partition in our original "df" listing as /rsync-backups. It does not go through the logical volume manager; it was in fact set up later, on a single (non-RAID) drive, just to give me a place to park backups.
So, we have now taken care of array LUN#1, with array LUN#0 requiring the LVM deeper dive.
The Logical Volume Manager

The Logical Volume Manager is a powerful system that supports a very flexible storage subsystem, with filesystems spanning drives and many other options that are far beyond the scope of this post, but I'll touch on the three concepts we'll need to decode this setup.
The LVM subsystem has physical volumes (PV), volume groups (VG), and logical volumes (LV), with each having its own set of commands. In every case we are only looking, not changing anything. The goal here is to reverse engineer what's going on.
First is looking for all physical volumes (with uninteresting fields elided):
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               VOLGR_DATA
  PV Size               1023.99 GB / not usable 26.68 MB
  ...
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               VOLGR_SYS
  PV Size               110.72 GB / not usable 4.59 MB
  ...
This shows the /dev/sdc1 we saw in the fdisk listing, and it's associated with volume group VOLGR_DATA. These names are arbitrary, but it's easy enough to confuse volume groups with volumes that the person who set this all up helpfully encoded the notion of "volume group" into the names.
We also see /dev/sdb1, which is the internal hard drive on the system, not included in the external array, and we do not see anything from /dev/sdd because that device was partitioned directly and outside the LVM system (though still hosted on the array).
We don't see anything for /dev/sda either, and that's because it's the Linux system root drive. We don't care about that for this process.
So, we have a single physical volume (/dev/sdc1) associated with the drive array, and it's included in the VOLGR_DATA volume group. Let's look there as well.
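With only two physical volumes this is easy to eyeball, but on a bigger system the PV-to-VG pairing can be scraped from pvdisplay output the same way. A sketch (pv_to_vg is an illustrative helper, run here against a trimmed copy of the output above):

```python
# Build a PV -> VG map from `pvdisplay` text output.
# Trimmed sample of the pvdisplay output from this post.
SAMPLE_PV = """\
  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               VOLGR_DATA
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               VOLGR_SYS
"""

def pv_to_vg(text):
    """Pair each "PV Name" line with the "VG Name" line that follows it."""
    mapping, pv = {}, None
    for line in text.splitlines():
        fields = line.split(None, 2)
        if line.strip().startswith("PV Name"):
            pv = fields[2]
        elif line.strip().startswith("VG Name") and pv:
            mapping[pv] = fields[2]
            pv = None
    return mapping

print(pv_to_vg(SAMPLE_PV))   # {'/dev/sdc1': 'VOLGR_DATA', '/dev/sdb1': 'VOLGR_SYS'}
```

Any array-backed device (here, /dev/sdc1) that shows up in this map leads you to the volume group to dig into next.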
# vgdisplay
  --- Volume group ---
  VG Name               VOLGR_DATA        ← hosted on iSCSI array
  VG Size               1023.97 GB
  ...
  --- Volume group ---
  VG Name               VOLGR_SYS         ← hosted on internal Linux hard drive
  VG Size               110.72 GB
  ...
A single volume group can be composed of more than one physical volume (say, two hard drives spanned together to form a single volume), but we're not seeing that here: we just have a single volume group we care about (VOLGR_DATA) on the array, with VOLGR_SYS found on the internal hard drives.
A volume group can contain one or more logical volumes, so let's query that now with the lvdisplay command, providing the name of the volume group:
# lvdisplay VOLGR_DATA
  --- Logical volume ---
  LV Name               /dev/VOLGR_DATA/VOL_DATABASE
  VG Name               VOLGR_DATA
  LV Write Access       read/write
  LV Status             available
  # open                1
  LV Size               300.00 GB
  ...
  --- Logical volume ---
  LV Name               /dev/VOLGR_DATA/VOL_EXPORT
  VG Name               VOLGR_DATA
  LV Write Access       read/write
  LV Status             available
  # open                1
  LV Size               623.97 GB
  ...
  --- Logical volume ---
  LV Name               /dev/VOLGR_DATA/VOL_HOME
  VG Name               VOLGR_DATA
  LV Write Access       read/write
  LV Status             available
  # open                1
  LV Size               100.00 GB
  ...
Now we're seeing some familiar items: each of the logical volumes corresponds to a mounted filesystem on the Linux system, though via a slightly different path name.
A side note: if the "# open" count shows zero, then the logical volume is not mounted.
Each volume group has its own directory (e.g. /dev/VOLGR_DATA/) that contains the logical volumes within, though each is a symbolic link to a common /dev/mapper directory:
# ls -l /dev/VOLGR_DATA/
lrwxrwxrwx 1 root root 35 Jul  6  2017 VOL_DATABASE -> /dev/mapper/VOLGR_DATA-VOL_DATABASE
lrwxrwxrwx 1 root root 33 Jul  6  2017 VOL_EXPORT -> /dev/mapper/VOLGR_DATA-VOL_EXPORT
lrwxrwxrwx 1 root root 31 Jul  6  2017 VOL_HOME -> /dev/mapper/VOLGR_DATA-VOL_HOME
And at this point we pretty much have it: each of the /dev/mapper/ names (each a mashup of the volume group and volume names) corresponds to a mounted filesystem as shown in the original "df" listing.
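One wrinkle worth knowing when reading those /dev/mapper names: the device mapper joins the volume group and volume names with a single hyphen, and escapes any hyphen appearing inside either name by doubling it (a VG named my-vg shows up as my--vg-...). A small sketch that undoes the mashup (split_mapper_name is my own helper name):

```python
def split_mapper_name(name):
    """Split a /dev/mapper/VG-LV name into (vg, lv).

    The device mapper joins VG and LV with a single '-', and escapes
    any literal '-' inside either name as '--'; undo that here.
    """
    base = name.rsplit("/", 1)[-1]     # drop the /dev/mapper/ prefix
    # Hide escaped hyphens behind a sentinel, split on the one real
    # separator, then restore them.
    sentinel = "\0"
    vg, lv = base.replace("--", sentinel).split("-", 1)
    return vg.replace(sentinel, "-"), lv.replace(sentinel, "-")

print(split_mapper_name("/dev/mapper/VOLGR_DATA-VOL_DATABASE"))
# → ('VOLGR_DATA', 'VOL_DATABASE')
```

For the simple names in this post the split is trivial, but the double-hyphen escape is why a VG or LV with a hyphen in its name produces a mapper name that looks stranger than it is.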
There are, of course, many more possible permutations of how this can be organized, but for the purposes of reverse engineering a configuration, this ought to be sufficient in most cases.