raid – Confirming parameters for XFS filesystem and LVM volume striping over 2 ADAPT (RAID6-like) volumes

We are setting up an ADAPT0 (RAID-60-like) configuration for a file server.

We have two disk pools. Each consists of 14 disks and is set up using ADAPT. According to Dell’s official white paper, ADAPT is similar to RAID 6 but distributes spare capacity. On page 13, it is indicated that the chunk size is 512 KiB and that the stripe width is 4 MiB (over 8 disks) for each disk pool.

My understanding is that with 14 disks, 2 disks' worth of capacity is reserved as spare, 20% of the remaining 12 disks (2.4 disks' worth) is used for parity, and 80% (9.6 disks' worth) is used for data. However, the chunk size is still 512 KiB and the stripe width remains 4 MiB, since we only ever write to 8 disks in one contiguous block.
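As a sanity check, the capacity split described above can be reproduced with a quick back-of-the-envelope calculation (the spare count and the 20/80 parity/data split are the figures from the text above; actual ADAPT overhead may differ):

```shell
# Rough ADAPT capacity split for a 14-disk pool (figures from the text above)
awk 'BEGIN {
  disks  = 14
  spare  = 2                      # capacity reserved as distributed spare
  usable = disks - spare          # 12 disks participate in stripes
  parity = 0.20 * usable          # 2.4 disks-worth of parity
  data   = 0.80 * usable          # 9.6 disks-worth of data
  printf "parity=%.1f data=%.1f\n", parity, data
}'
# prints: parity=2.4 data=9.6
```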

To achieve an ADAPT0 (RAID-60-like) configuration, we then created a logical volume that stripes over these two disk pools using LVM. We used a stripe size that matches that of the hardware RAID (512 KiB):

$ vgcreate vg-gw /dev/sda /dev/sdb
$ lvcreate -y --type striped -L 10T -i 2 -I 512k -n vol vg-gw

Next, we set up an XFS file system over the striped logical volume. Following guidelines from a few sources, we matched the stripe unit su to the LVM and RAID stripe size (512k) and set the stripe width sw to 16, since we have 16 “data disks”.

$ mkfs.xfs -f -d su=512k,sw=16 -l su=256k /dev/mapper/vg--gw-vol
$ mkdir -p /vol/vol
$ mount -o rw -t xfs /dev/mapper/vg--gw-vol /vol/vol
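Once mounted, it is worth cross-checking what geometry was actually recorded at each layer (a sketch; note that xfs_info reports sunit/swidth in filesystem blocks, so with the default 4 KiB block size a 512 KiB stripe unit should show up as sunit=128):

```shell
# Sketch: confirm the stripe geometry recorded by XFS
xfs_info /vol/vol
# With bsize=4096, su=512k,sw=16 should appear as sunit=128 blks, swidth=2048 blks

# Cross-check the LVM side: expect 2 stripes with a 512 KiB stripe size
lvs --segments -o +stripes,stripe_size vg-gw/vol
```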

Was this the correct approach? More specifically:

(1) Was it correct to align the LVM stripe size to that of the hardware RAID (512 KiB)?

(2) Was it correct to align the XFS stripe unit and width as we have (512 KiB stripe unit and 16 data disks), or are we supposed to “abstract” the underlying volumes (4 MiB stripe unit and 2 data disks)?

(3) Adding to the confusion is the self-reported geometry of the block devices:

$ grep "" /sys/block/sda/queue/*_size

Thank you!

linux – Ubuntu 18.04 live server: issues setting up RAID 1

I installed Ubuntu 18.04.4 on a server that has UEFI. I followed several sets of online instructions for setting up RAID during the Ubuntu installation but failed. Frustrated, I decided to do a basic install and then set up RAID 1 on the existing system. I can create a partition table on the second drive that matches the main drive; the two drives are identical.
After copying the partition tables, lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT displays this:

   loop0   89.1M squashfs loop /snap/core/8268
   loop1   96.5M squashfs loop /snap/core/9436
   sda    931.5G          disk
   ├─sda1   512M vfat     part /boot/efi
   ├─sda2    32G swap     part (SWAP)
   ├─sda3   600G ext4     part /
   └─sda4   290G ext4     part /home
   sdb    931.5G          disk
   ├─sdb1   512M          part
   ├─sdb2    32G          part
   ├─sdb3   600G          part
   └─sdb4   290G          part

sda is the device on which Ubuntu is installed. sdb is the one I am adding to create a RAID 1.
In the follow-up instructions, I find that when changing the partition type with fdisk’s ‘t’ option, it offers integer rather than hex codes, and there is no option (fd) for Linux RAID autodetect. Instead there is option (29) for Linux RAID. Does this matter?
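For what it’s worth, on GPT disks fdisk presents numbered GUID partition types rather than the old MBR hex codes, and item 29, “Linux RAID”, is the GPT counterpart of the MBR fd type; modern mdadm arrays (1.2 superblocks) are assembled from their superblocks rather than fd-style kernel autodetect anyway. A non-interactive sketch using sfdisk (the GUID is the standard Linux RAID partition type; the device and partition numbers follow the listing above):

```shell
# Sketch: tag sdb's data partitions as "Linux RAID" on a GPT disk
for part in 2 3 4; do
    sfdisk --part-type /dev/sdb "$part" A19D880F-05FC-4D3B-A006-743F0F84911E
done
```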

Also, any references to Ubuntu 18.04 install with RAID 1 for UEFI (creating RAID 1 during Ubuntu install) would be helpful.

Does replacing the CMOS battery affect the RAID controller?

Recently I replaced my graphics card, and now the BIOS POSTs with an FF status code but hangs at the “Verify DMI Pool Data” step. I’m thinking of removing the CMOS battery to reset the board, but I have a RAID 0 configured with the Intel RAID controller included on the motherboard (GA-X58A-UD5), and I’m not sure whether the RAID 0 will be lost.

Can we move RAID-configured disks across different servers?

I have 2 SSDs configured as RAID 1 using the RAID feature in the BIOS. If my motherboard and processor are damaged, can I connect these 2 SSDs to another server? Will they boot on the other server? If not, is there any solution to make this work?


raid5 – RAID 5: Disk lost after boot

On Debian 9.12, a RAID5 array (md0) is always degraded after a reboot. The array has three disks: sdb, sdc, and sdd. sdd is always the one missing.
The system boots into emergency mode. From there, after “mdadm --run /dev/md0” and Ctrl-D, the system boots with a degraded array. I can then re-add sdd to md0, and six hours later everything seems calm, until the next reboot.
But if I’m right, this means there is no protection against drive failure, correct?

Before this, I replaced a 3 TB HDD with a 4 TB one, making it a 3×4 TB RAID5. I grew the array, so I now have 7.27 TB.

Does anyone have an idea why sdd always drops out of the RAID after a reboot, yet can always be re-added and recovers?

cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10]
md0 : active raid5 sdd[5] sdc[4] sdb[3]
      7813774336 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [==>..................]  recovery = 11.3% (444437704/3906887168) finish=441.1min speed=130822K/sec
      bitmap: 9/15 pages [36KB], 131072KB chunk
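For anyone digging into a disappearing member like this, a sketch of the usual comparison (an out-of-date /etc/mdadm/mdadm.conf baked into the initramfs is one common reason a device is not assembled at boot; device names follow the question):

```shell
# Sketch: compare what the superblocks and the running array report
mdadm --examine /dev/sdb /dev/sdc /dev/sdd
mdadm --detail /dev/md0

# If the ARRAY line in mdadm.conf is stale or missing, regenerate it
# and rebuild the initramfs so early boot sees the same configuration
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```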

Thank you!

amazon ec2 – Optimal RAID configuration for EC2 instance store used for HDFS

I’m trying to determine whether there is any practical advantage to configuring a RAID array on the instance stores of 3x d2.2xlarge instances being used for HDFS. Initially I planned to just mount each store and add it as an additional data directory for Hadoop, but it seems there could be some additional performance gain from a RAID 0 or 10 configuration. Since durability is handled by HDFS itself, there is no need to consider RAID 1 or 5 from that perspective (e.g., if one or all stores failed on an instance, durability is provided by replication from the other data nodes). RAID 6 seems impractical due to known issues with long rebuild times and reduced throughput from the double parity writes (again, it seems best to let HDFS handle durability). That leaves RAID 0 and 10, which both theoretically offer better disk I/O than a standard HDD. Would HDFS see observable performance gains from a RAID array on the instance store?
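For reference, a minimal sketch of the RAID 0 variant being considered (device names are assumptions; a d2.2xlarge exposes six instance-store HDDs, which typically appear as /dev/xvdb through /dev/xvdg):

```shell
# Sketch: stripe the six instance-store volumes into one device for HDFS
mdadm --create /dev/md0 --level=0 --raid-devices=6 /dev/xvd[b-g]

# Filesystem and mount point for use as a single HDFS data directory
mkfs.xfs /dev/md0
mkdir -p /mnt/hdfs-data
mount /dev/md0 /mnt/hdfs-data
```

The usual RAID 0 caveat applies: losing any one volume takes out the whole md device, which in this design is acceptable only because HDFS replication provides the durability.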

RAID array empty after reboot

After I rebooted my computer, I looked at the array and it was empty. Does anyone know why this happened and how I can fix it? I had already moved my entire movie collection onto the array; luckily I hadn’t deleted the source files.

system installation – Install Ubuntu 20.04 DESKTOP with Software RAID 1 on two disks

So I have looked far and wide (but maybe not enough) for the answer. Historically, you could avoid the live CD and use the alternate CD / text installer to configure software RAID 1 on two internal disks at the start of the install. Apparently this still works for the Ubuntu 20.04 SERVER edition, but I cannot find a solution for the DESKTOP edition, as it now only seems to come as a live CD. I need a display and Tk, since the application I am testing in different environments requires them. Am I missing something obvious? Is there a set of packages I could install to turn the server edition into the desktop?

The computer has two independent disks and no hardware RAID, either in a controller or in the BIOS, so it has to be set up in software. While I can configure each disk with the expected partitions, I cannot tie them together into a logical RAID volume to present to the live-CD installer. I could do this in earlier releases using the alternate-CD installer, but as best I can tell, everything except the live installer has been deprecated, with no intent to support the alternate installer in the desktop release. Ideas? Thanks in advance.
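On the last question: one commonly suggested route (a sketch, worth verifying against your release) is to install the server edition, configuring the software RAID 1 in its installer, and then pull in the desktop metapackage afterwards:

```shell
# Sketch: turn an Ubuntu server install into a desktop one
sudo apt update
sudo apt install ubuntu-desktop
```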

storage – RAID controller link speed with mixed drive types

Link speed negotiation is done on a per-port basis, so it should be possible to have SATA 3.0 (6 Gb/s) and SAS 3.0 (12 Gb/s) drives connected to the same PERC card, each at its maximum speed.

However, you cannot (and should not) mix them in the same virtual disk / array. In other words, keep them in separate arrays and everything should work without issues.
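To confirm the negotiated rate per drive, a quick check (a sketch; smartctl prints the current versus maximum SATA link speed, and behind a PERC you may need the -d megaraid,N device selector):

```shell
# Sketch: check the per-device negotiated link speed for a SATA drive
smartctl -i /dev/sda | grep -i 'SATA Version'
# For a drive behind a PERC: smartctl -i -d megaraid,0 /dev/sda
```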