We are setting up an ADAPT0 (RAID-60-like) configuration for a file server.
We have two disk pools. Each consists of 14 disks and is set up using ADAPT. According to Dell’s official white paper, ADAPT is similar to RAID 6 but distributes spare capacity. On page 13, it is indicated that the chunk size is 512 KiB and that the stripe width is 4 MiB (over 8 disks) for each disk pool.
My understanding is that with 14 disks, 2 disks' worth of capacity is reserved as spare; of the remaining 12 disks, 20% (2.4 disks' worth) is used for parity and 80% (9.6 disks' worth) for data. The chunk size remains 512 KiB and the stripe width remains 4 MiB, since each stripe is still written across only 8 disks in one contiguous block.
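To sanity-check that arithmetic, here is a minimal sketch. The 2-spare / 20%-parity split is our reading of the white paper, not a documented formula, and the 1 TB disk size is a placeholder:

```python
def adapt_capacity(disks: int, disk_tb: float,
                   spare_disks: int = 2, parity_fraction: float = 0.20):
    """Split one ADAPT pool's raw capacity into spare, parity and data,
    following the 2-spare / 20%-parity reading of the white paper."""
    usable = disks - spare_disks            # disks left after spare capacity
    parity = usable * parity_fraction       # disks' worth of parity
    data = usable - parity                  # disks' worth of data
    return spare_disks * disk_tb, parity * disk_tb, data * disk_tb

# 14 disks of 1 TB each (placeholder size):
spare, parity, data = adapt_capacity(14, 1.0)
print(spare, parity, data)  # roughly 2.0 / 2.4 / 9.6 disks' worth
```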
To achieve an ADAPT0 (RAID-60-like) configuration, we then created a logical volume that stripes over these two disk pools using LVM. We used a stripe size that matches that of the hardware RAID (512 KiB):
$ vgcreate vg-gw /dev/sda /dev/sdb
$ lvcreate -y --type striped -L 10T -i 2 -I 512k -n vol vg-gw
Next, we set up an XFS file system over the striped logical volume. Following guidelines from XFS.org and a few other sources, we matched the stripe unit (su) to the LVM and RAID stripe size (512k) and set the stripe width (sw) to 16, since we have 16 "data disks".
$ mkfs.xfs -f -d su=512k,sw=16 -l su=256k /dev/mapper/vg--gw-vol
$ mkdir -p /vol/vol
$ mount -o rw -t xfs /dev/mapper/vg--gw-vol /vol/vol
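One arithmetic point worth noting when weighing the two candidate geometries in question (2): both describe the same full stripe. XFS derives swidth as su × sw, so matching the per-disk chunk (512 KiB × 16) and "abstracting" the pools (4 MiB × 2) both yield an 8 MiB full data stripe; they differ only in the stripe unit. A quick sketch:

```python
KIB = 1024
MIB = 1024 * KIB

def xfs_geometry(su_bytes: int, sw: int):
    """Return (sunit, swidth) in bytes, with swidth = su * sw,
    i.e. the full data stripe as XFS computes it."""
    return su_bytes, su_bytes * sw

# Option (a): match the per-disk RAID chunk, count all 16 data disks.
a = xfs_geometry(512 * KIB, 16)
# Option (b): treat each pool as one "disk" with its 4 MiB stripe.
b = xfs_geometry(4 * MIB, 2)

print(a[1] // MIB, b[1] // MIB)  # both full stripes come out to 8 MiB
```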
Was this the correct approach? More specifically:
(1) Was it correct to align the LVM stripe size to that of the hardware RAID (512 KiB)?
(2) Was it correct to set the XFS stripe unit and width as we did (512 KiB stripe unit and 16 data disks), or are we supposed to "abstract" the underlying volumes (4 MiB stripe unit and 2 data disks)?
(3) Adding to the confusion is the self-reported output of the block devices here:
$ grep "" /sys/block/sda/queue/*_size
/sys/block/sda/queue/hw_sector_size:512
/sys/block/sda/queue/logical_block_size:512
/sys/block/sda/queue/max_segment_size:65536
/sys/block/sda/queue/minimum_io_size:4096
/sys/block/sda/queue/optimal_io_size:1048576
/sys/block/sda/queue/physical_block_size:4096
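To make those kernel-reported values easier to compare against the RAID geometry, here is a small sketch that expresses each one in units of the 512 KiB ADAPT chunk (how the controller populates these fields is our open question, so this is only a restatement of the numbers, not an interpretation):

```python
# Values exactly as reported by the kernel above (bytes).
queue = {
    "hw_sector_size": 512,
    "logical_block_size": 512,
    "max_segment_size": 65536,
    "minimum_io_size": 4096,
    "optimal_io_size": 1048576,
    "physical_block_size": 4096,
}

CHUNK = 512 * 1024  # the 512 KiB ADAPT chunk from the white paper

for name, size in queue.items():
    print(f"{name}: {size} B = {size / CHUNK:g} chunks")
```

Notably, optimal_io_size (1 MiB) is exactly two 512 KiB chunks rather than the 4 MiB stripe width, which is part of what makes the self-reported numbers confusing.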