Rexxer

Some tips for me and others

Amazon EC2 + Ubuntu + RAID

I had to add RAID to an EC2 instance running Ubuntu.

I chose RAID10.

So,

1. Add disks to the instance (I added 4 disks for RAID10):

Adding Instance Store Volumes to an AMI

Amazon EBS-backed AMIs don’t include an instance store by default. However, you might want instances launched from your Amazon EBS-backed AMIs to include instance store volumes. This section describes how to create an AMI that includes instance store volumes.

AWS Management Console

To add instance store volumes to an AMI

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.

  2. In the navigation pane, click Instances.

  3. Select an instance and select Create Image (EBS AMI) from the Actions list.

  4. In the Create Image dialog, add a meaningful name and description to your image.

  5. Click Instance Store Volumes.

  6. For each instance store volume, select a volume from the Instance Store list and a device name from Device, and then click Add.

    (Screenshot: the Instance Store Volumes tab in the Create Image dialog box)
    Note that this step only creates a new AMI for future launches.
    For the current instance you have to create volumes on EBS and attach them to the instance (see the CLI sketch below).
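
    If you prefer the command line, creating and attaching the volumes looks roughly like this with the AWS CLI. This is only a sketch: the size, the availability zone, and the instance ID are placeholders you have to replace with your own values.

    # Create and attach four EBS volumes (size, zone, and instance ID are assumptions)
    for dev in f g h i; do
        vol=$(aws ec2 create-volume --availability-zone us-east-1b --size 100 --query VolumeId --output text)
        aws ec2 wait volume-available --volume-ids "$vol"
        aws ec2 attach-volume --volume-id "$vol" --instance-id i-0123456789abcdef0 --device /dev/sd$dev
    done

    Devices attached as /dev/sdf..sdi show up as /dev/xvdf..xvdi inside Ubuntu.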

    2. Install mdadm for RAID: sudo apt-get install mdadm (you will be asked about e-mail configuration during the install – I chose No configuration). A non-interactive variant is sketched below.
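
    If you are scripting the setup, a minimal sketch that skips the debconf prompt:

    # the noninteractive frontend accepts the default answers instead of asking
    sudo DEBIAN_FRONTEND=noninteractive apt-get install -y mdadm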

    3. Next, check whether Ubuntu has mounted these disks automatically (the stock EC2 images mount the first ephemeral disk on /mnt); unmount them if needed and edit fstab so they are not remounted at boot:
    sudo umount /mnt
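
    A sketch of the fstab edit, assuming the auto-mounted device is /dev/xvdb (check yours with mount first):

    # comment out the line that mounts /dev/xvdb on /mnt so it stays unmounted after reboot
    sudo sed -i 's|^/dev/xvdb|#/dev/xvdb|' /etc/fstab
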
    4. Create RAID10:
    sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/xvd[b-e]
    5. Monitor the creation: sudo cat /proc/mdstat or sudo watch cat /proc/mdstat
    6. Update the initramfs so the RAID comes up after reboot:
    sudo update-initramfs -u
    7. Format the new array: sudo mkfs.ext4 /dev/md0 (then create the mount point and mount it, see the sketch below)
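
    A minimal sketch for mounting it right away, assuming /usr/data as the mount point (matching step 9):

    sudo mkdir -p /usr/data
    sudo mount /dev/md0 /usr/data
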
    8. Edit mdadm.conf (on Ubuntu it lives at /etc/mdadm/mdadm.conf):
    get the UUID from sudo mdadm -Db /dev/md0
    and add lines like the following (the device names here are from my setup, use yours):
    DEVICE /dev/xvdk /dev/xvdh /dev/xvdi /dev/xvdl
    ARRAY /dev/md0 level=10 num-devices=4 metadata=1.2 UUID=2cfdb9f9:4126fc1b:3c32dc68:473d8c4e
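
    Instead of typing the ARRAY line by hand, you can let mdadm generate it and append it to the config:

    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
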
    9. Edit fstab. Note that a plain sudo echo cannot write to /etc/fstab (the redirection would run in your unprivileged shell), so pipe through tee:
    echo "/dev/md0 /usr/data ext4 defaults,nobootwait,noatime 0 0" | sudo tee -a /etc/fstab
    (/usr/data is the mount point)
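
    Before rebooting, check that the new fstab line actually works:

    # mounts everything from fstab that is not mounted yet; an error here means a bad line
    sudo mount -a
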
    10. Make changes in /etc/initramfs-tools/conf.d/mdadm to allow boot with degraded RAID:
    sudo nano /etc/initramfs-tools/conf.d/mdadm

    # BOOT_DEGRADED:
    # Do you want to boot your system if a RAID providing your root filesystem
    # becomes degraded?
    #
    # Running a system with a degraded RAID could result in permanent data loss
    # if it suffers another hardware fault.
    #
    # However, you might answer "yes" if this system is a server, expected to
    # tolerate hardware faults and boot unattended.
    BOOT_DEGRADED=yes
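
    A non-interactive way to set this (a sketch; it overwrites the file, which only holds this one option, then rebuilds the initramfs so the setting takes effect):

    echo "BOOT_DEGRADED=yes" | sudo tee /etc/initramfs-tools/conf.d/mdadm
    sudo update-initramfs -u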

    For Red Hat and similar distributions you need to run dracut -f (or mkinitrd on older releases) instead. What you need to do is something like this:

    mkinitrd /boot/initrd-<kernel-version>.img <kernel-version>

    I'm assuming here that the CentOS version you're using still uses mkinitrd – if it has switched to dracut then you will want:

    dracut /boot/initramfs-<kernel-version>.img <kernel-version>
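
    To avoid typing the kernel version by hand you can substitute the running one (a sketch for the dracut case):

    # -f overwrites the existing image for the currently running kernel
    dracut -f /boot/initramfs-$(uname -r).img $(uname -r)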
    
    11. To get e-mails about RAID status, add a line like MAILADDR mail@domain.com to mdadm.conf. You must have sendmail or another mailer installed. You can test the alerts without degrading anything (see below).
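
    mdadm can send a test event for each array it finds, which is an easy way to verify the mail path:

    sudo mdadm --monitor --scan --test --oneshot
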
    12. Reboot and check.
    13. Conclusions:
    1. I did this on Ubuntu 12 on Amazon. My first RAID10 didn't come up after reboot and I lost access to the instance.
    So I detached the root disk from the instance, made a new instance in the same availability zone (us-east-1b), attached the disk to the new VM, edited /etc/fstab, then detached the disk again and attached it to the old instance (/dev/sda1 by default – check it before detaching). That got the instance back, and I rebuilt the RAID successfully.
    2. If you have problems with the RAID and the system doesn't boot, you can detach the root disk and attach it to another VM, then mount it and chroot into it. Make your changes, run update-initramfs -u, and attach the disk back to the source VM. A sketch of this follows.
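
    A sketch of that rescue procedure, assuming the broken root volume shows up as /dev/xvdf on the helper VM:

    sudo mkdir -p /rescue
    sudo mount /dev/xvdf /rescue
    # bind the virtual filesystems so tools inside the chroot work
    for d in /dev /proc /sys; do sudo mount --bind $d /rescue$d; done
    sudo chroot /rescue
    # inside the chroot: fix fstab/mdadm.conf, then:
    update-initramfs -u
    exit
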
    3. Build the RAID only on EBS disks, because internal (ephemeral) disks are wiped when the instance stops.
    Script for the temporary (ephemeral) disks:
    #!/bin/sh
    # Recreate the RAID10 on the ephemeral disks after a stop/start
    sudo mdadm --force --run --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/xvd[b-e]
    sudo mkfs.ext4 /dev/md0
    sudo mkdir -p /usr/data
    sudo mount /dev/md0 /usr/data
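
    If you want this to run automatically at boot, one option is to call it from /etc/rc.local (a sketch; /usr/local/sbin/raid-ephemeral.sh is just a name I made up for the script above):

    # in /etc/rc.local, which runs as root at the end of boot
    # only rebuild if the array is absent (ephemeral data survives a plain reboot)
    [ -e /dev/md0 ] || /usr/local/sbin/raid-ephemeral.sh
    exit 0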
