[Slackdocs] RAID howto

AlBundy buzonazo at gmail.com
Sun Oct 14 22:19:40 CEST 2012


Niki Kovacs <info at xxx> wrote:
> So when I have to install Slackware on a server
> using RAID, I do the following:
>
> # mdadm -Ss
> 
> # mdadm --zero-superblock /dev/sda1
> # mdadm --zero-superblock /dev/sda2
> # mdadm --zero-superblock /dev/sda3
> # mdadm --zero-superblock /dev/sdb1
> # mdadm --zero-superblock /dev/sdb2
[...]
> # dd if=/dev/zero of=/dev/sda bs=512 count=64
> # dd if=/dev/zero of=/dev/sdb bs=512 count=64
>
> # mdadm --create /dev/md1 --level=1 --raid-devices=4 --metadata=0.90 /dev/sda1 /dev/sdb1 etc.
[...]
> Before launching the installer, I take care of defining the swap partition:
> # mkswap /dev/md2
>
> Now I do the following:
> # watch cat /proc/mdstat
>
> And here I wait for my disks to synchronize completely, which can take
> an hour and a half or more. Only when they are fully synchronized
> ([UUUU]) do I launch the installer:
> # setup

(Please note: the following applies to RAID-1; I have no 
experience with other RAID schemes.)
You don't need to wait an hour or more for the 
synchronization to complete!
Right after the
	mdadm --create /dev/md1 ....
command, you can run mkfs on /dev/md1, 
mount /dev/md1, and write data to it.

You can perform the complete Slackware installation on 
the RAID while it is synchronizing;
you can even exit setup and shut down the computer 
before the RAID is fully synchronized (on RAID-1).
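For example, the whole sequence might look like this (a sketch only; the 
device names, partition layout, and ext4 filesystem are assumptions — 
adjust them to your own setup):

```shell
# Create the mirror; the background resync starts immediately.
mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 \
    /dev/sda1 /dev/sdb1

# No need to wait for [UU]: the array is usable right away.
mkfs.ext4 /dev/md1
mount /dev/md1 /mnt

# The resync progress stays visible the whole time:
cat /proc/mdstat
```

The md driver serves reads from the in-sync disk and mirrors every new 
write to both disks, which is why using the array mid-resync is safe.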

After you boot into your new Slackware installation, 
the synchronization continues.

Data are written to disk even during synchronization, 
so read and write operations on the /dev/md* nodes 
complete successfully.

(A different question is what happens if one disk fails 
in a two-disk RAID-1 during synchronization.)




Paul Chavent <paul.chavent at xxx> wrote:

> I did the same, but as i explained it before, if i do :
> mdadm --create /dev/md2 --level 1 --raid-devices 2 /dev/sda2 /dev/sdb2
>
> I can use the device during the installation, but after reboot, the /dev/md2
> is NOT mounted.
>
> That's because the non-metadata=0.90 arrays are numbered decreasing from
> /dev/md127 ... or named as /dev/md/name

[...]
> As you say in the previous post "I used the --metadata=0.90
> option for every array: md1, md2 and md3".
> Try to not use "--metadata=0.90" for non boot devices. And you will probably
> encounter my problem.
[...]
> indeed, my root partition (/dev/md1) was created with the metadata=0.90
> and it has been mounted after the first reboot (that's why my system boot).
> 
> But my home and swap (which weren't metadata=0.90) haven't been mounted
> (because they were named /dev/md127, /dev/md126).

I use a RAID-1 with 2x1TB partitions here, metadata=1.2, mounted as
/opt/data.

A bit of theory:

When you play with mdadm RAID in Linux, there are three possibilities:
1.- the kernel has support for RAID autodetect, so you can use RAID devices 
with metadata=0.90 as the root filesystem.

2.- you can use /sbin/mdadm by hand, or from the /etc/rc.d scripts, to 
configure non-root RAID devices with any metadata version.

3.- if you want to use a RAID device with metadata!=0.90 as the root device, 
you MUST use an initrd; inside this initrd there will be an init script 
that invokes /sbin/mdadm with some --scan parameters.


So an on-disk RAID array can be assigned to a device node (/dev) in two ways:
- by the kernel (only metadata=0.90)
- by the /sbin/mdadm user-space program (any metadata version).
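As an illustration of case 3, the init script inside the initrd typically 
runs something like the following before mounting root (a sketch; the 
exact script your initrd tool generates may differ):

```shell
# Inside the initrd's init script, before the root filesystem is mounted:
# assemble every array listed in the initrd's embedded mdadm.conf, or,
# lacking one, every array whose superblock mdadm can find on the disks.
mdadm --assemble --scan

# At this point the root array (e.g. /dev/md/root, or /dev/md127 with
# metadata 1.x) exists and can be mounted for the switch to the real root.
```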



I think your problem is an incorrect /etc/mdadm.conf file.
Please try:
	cp /etc/mdadm.conf /etc/mdadm.bak
# Assemble your RAID by hand:
	mdadm --assemble /dev/md2  /dev/sda2 /dev/sdb2
# Save current config
	mdadm --detail --scan > /etc/mdadm.conf
# Try again
	reboot

If you can mount /dev/md2 after rebooting, add a line to /etc/fstab:
	/dev/md2  /home ......
and reboot again. Check if /home is mounted OK.
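For example, such a line might look like the following (the ext4 
filesystem type and the fsck pass numbers are assumptions; match them to 
your actual setup):

```
/dev/md2   /home   ext4   defaults   1 2
```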

