HOW TO BOOT LINUX FROM OCZ REVODRIVE

http://www.average.org/ssdroot/

A word of warning: be prepared to end up with an unbootable system and to revive it manually from the grub interactive shell and from the initramfs's busybox. Keep manuals and howtos at hand; you won't be able to look them up on a computer whose OS is not running at the moment.

I did this with a RevoDrive 3 120 GB. The procedure for other models may vary.

The RevoDrive is a "fakeraid": Linux sees it as two separate devices, but the BIOS implements RAID0 over them, so at boot time it appears as one device. If you want grub to be able to boot from it, you have to make the Linux mdraid exactly match the BIOS RAID; then grub will be able to see the boot filesystem.

Obviously, the BIOS RAID implementation knows nothing about Linux mdraid superblocks. So, in order to mimic the BIOS configuration, we must build a Linux RAID without superblocks, with "mdadm --build".

After setting up my system, I realized that Linux RAID superblocks (in the 0.90 and 1.0 metadata formats) are located at the end of the member devices, so this could also work with a "proper" Linux RAID, "mdadm --assemble". That would simplify things a lot and should not cause problems: the BIOS RAID would just have the last blocks of its members containing odd data, while Linux would interpret them as RAID superblocks. Anyway, I will be describing the "hard way" here.

In order to mount root, early userspace needs to either get the RAID configured by the kernel, or do it itself. The kernel has a command line attribute to configure superblock-less RAIDs at boot, described in Documentation/md.txt. Unfortunately, it does not work. So let's add a custom script to the initramfs. The easiest way is to make it specific to our particular instance of the card. Find out which devices are the two RevoDrive members:

$ ls -l /dev/disk/by-id|grep ata-OCZ-REVODRIVE
lrwxrwxrwx 1 root root 9 Mar 30 17:05 ata-OCZ-REVODRIVE3_OCZ-XXXXXXXXXXXXXXXX -> ../../sdg
lrwxrwxrwx 1 root root 9 Mar 30 17:05 ata-OCZ-REVODRIVE3_OCZ-YYYYYYYYYYYYYYYY -> ../../sdh

Apparently, the factory default chunk size for the device is 64K. If you have changed it in the maintenance tool, use that figure instead.

If your system uses initramfs-tools, add the following script (under any name, e.g. "revodrive") into the directory /etc/initramfs-tools/scripts/local-top:

===== Start script file =====
#!/bin/sh

# initramfs-tools boilerplate: this script has no prerequisites
if [ "$1" = "prereqs" ] ; then
	exit 0
fi

. /scripts/functions

# make sure the /dev/disk/by-id links exist before we use them
wait_for_udev

# build a superblock-less RAID0 that exactly mirrors the BIOS RAID
mdadm --build /dev/mdN --level=0 --raid-devices=2 --chunk=64 \
	/dev/disk/by-id/ata-OCZ-REVODRIVE3_OCZ-XXXXXXXXXXXXXXXX \
	/dev/disk/by-id/ata-OCZ-REVODRIVE3_OCZ-YYYYYYYYYYYYYYYY
===== End script file =====

I suggest doing this on the live system: you will be able to (1) check that it works, and (2) have a ready initramfs image to copy to the target drive. You can of course make the script more generic, not bound to the particular instance of the device.

Run "update-initramfs -u" and check that the md device gets created when you boot the system. Adding some sensible error handling to the script might be a good idea, too.

Next, I suggest creating a disklabel and partitioning the RAID. Use '-S 32 -H 32' to ensure that partitions start at "round" boundaries with respect to the flash erase blocks. This is optional, but you will need reserved space for the grub bootloader, and partitioning ensures that there is such space, onto which the filesystems will not try to stomp. Create partitions to your taste, and make one partition bootable in case the BIOS needs it. Note that you must partition the md device, not the member disks, because the BIOS will see the RAID as a single disk and expect the partition table there.
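For illustration, partitioning might look like this (a sketch, assuming the array came up as /dev/md6, the name used in the mount examples below; with the fake geometry of 32 heads and 32 sectors a "cylinder" is 32*32*512 bytes = 512 KiB, so cylinder-aligned partitions fall on 512 KiB boundaries, a multiple of common flash erase block sizes):

# fdisk -S 32 -H 32 /dev/md6

Inside fdisk: 'o' creates a new DOS disklabel, 'n' adds a partition, 'a' toggles the bootable flag, and 'w' writes the table and exits.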
Note: physically, the partition table will live in the first chunk of the first member of the RAID, so on boot the kernel will "think" that the partition table belongs to that member drive and complain that it specifies an incorrect size. This is harmless and should be ignored.

Create filesystems, providing the parameters suggested for SSD media, for example:

# mkfs.ext4 -b 1024 -E stride=64,stripe-width=128 -O ^has_journal /dev/mdNpM

Mount all the new filesystems under some mount point, arranged the way they will be on the target system, e.g.:

# mount -o noatime,discard /dev/md6p3 /mnt
# mkdir /mnt/boot
# mount -o noatime,discard /dev/md6p1 /mnt/boot

and copy everything you want to copy to the new place:

# cd /
# find . opt var -mount -depth -print | \
> grep -v lost+found | \
> time cpio -dumpv /mnt

(The "-mount" option keeps find on a single filesystem, which is why every filesystem to be copied, here /, /opt and /var, is listed explicitly.)

Edit /mnt/etc/fstab to mount the new root and the other filesystems and swap areas. Use "dumpe2fs /dev/mdNpM | grep UUID" to find the UUIDs, or you can safely put the device names there, as /dev/mdN is defined "by hand" and will never change on its own. It is recommended to specify the "noatime,discard" options for best filesystem performance on SSD media.

Next, you need to convince the grub installer that the RAID is what it will see as the boot disk. Create this entry in /mnt/boot/grub/device.map (possibly replacing the one that was there before):

(hd0)	/dev/mdN

Prepare to do things in a chrooted environment:

# mount -o bind /sys /mnt/sys
# mount -o bind /proc /mnt/proc
# mount -o bind /dev /mnt/dev
# chroot /mnt /bin/bash

In the chrooted environment:

# update-grub2
# grub-install /dev/mdN

Note that after you have switched to the new setup with root on the SSD, running "update-initramfs -u" will tell you that, because there is no /dev/mdN in mdadm.conf, your system will be unbootable. And indeed it would have been, had you not taken care of it by including the script in the initramfs.

If you go for the superblock-full RAID, "mdadm --assemble", there will be no need for a custom script in the initramfs, and "update-initramfs -u" will not complain about an unbootable system. All the other steps should remain the same. But as said, I did not actually test whether this approach works (see the sketch in the P.S. below).

== Eugene Crosser, 2013-04-01 (but not a joke).
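P.S. An untested sketch of the superblock-full variant: the array would be created once with the 1.0 metadata format, which stores the superblock at the end of each member device (mind that "mdadm --create" overwrites those last blocks, and will ask for confirmation if it finds existing data on the members):

# mdadm --create /dev/mdN --metadata=1.0 --level=0 --raid-devices=2 --chunk=64 \
	/dev/disk/by-id/ata-OCZ-REVODRIVE3_OCZ-XXXXXXXXXXXXXXXX \
	/dev/disk/by-id/ata-OCZ-REVODRIVE3_OCZ-YYYYYYYYYYYYYYYY

With the superblocks in place, an ordinary ARRAY line in /etc/mdadm/mdadm.conf plus the stock mdadm initramfs hook should assemble the array at boot, with no custom script needed.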