[Openstack] Cannot boot from volume with 2 devices

Gaurav Gupta gauravgupta at gmail.com
Wed Nov 2 15:32:58 UTC 2011


On Tue, Nov 1, 2011 at 3:51 PM, Scott Moser <smoser at ubuntu.com> wrote:

> On Tue, 1 Nov 2011, Vishvananda Ishaya wrote:
>
> > Sounds like we can work around this pretty easily by sorting the disks
> > before we pass them into the XML template.
>
> The long-term solution here is not to load the kernel and the ramdisk
> outside the image, but rather to let grub load them with root=LABEL=xxxx
> or root=UUID=xxxx.
>
> If you boot one of the full-disk Ubuntu image (-disk1.img) files at
> https://cloud-images.ubuntu.com/releases/oneiric/release/ or
> https://cloud-images.ubuntu.com/server/natty/current/, then you won't have
> the problem.  You'll also be able to 'apt-get update && apt-get
> dist-upgrade && reboot' and get a new kernel.  That is not possible with
> the hypervisor doing the kernel and ramdisk loading.
>

Actually, you can create a bootable image where the kernel and ramdisk are
picked up from the root filesystem. AFAIU you can't do this with volumes,
though; you have to create an image and boot from it.
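For reference, the idea is that inside such an image grub finds the kernel on
the root filesystem itself and passes the root by UUID on the command line,
along these lines (kernel version and UUID are made up for illustration):

    linux  /boot/vmlinuz-2.6.38-11-virtual root=UUID=3a0cd36e-8bd9-4c3b-9a7e-1f2d3c4b5a6d ro
    initrd /boot/initrd.img-2.6.38-11-virtual

Since the root is identified by UUID rather than by /dev/vdX, the order in
which the disks get attached no longer matters to the kernel.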

> This is assuming that in the multiple-disks-attached scenario, the *real*
> root disk (the one with the bootloader on it) is found by the BIOS.
>
> Static device names were deprecated several years ago by all Linux
> distributions.  Let's move towards using the better solution.
>
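Agreed. In practice that means referencing filesystems by UUID (or label);
the UUID can be read with blkid and used directly in /etc/fstab, roughly like
this (UUID invented for the example):

    $ sudo blkid /dev/vdb
    /dev/vdb: UUID="7f2a9c41-6d3e-4b18-a5c2-0e9f8d7b6a54" TYPE="ext4"

    # /etc/fstab
    UUID=7f2a9c41-6d3e-4b18-a5c2-0e9f8d7b6a54  /data  ext4  defaults  0  2

With that, the data volume mounts correctly whether the guest ends up seeing
it as /dev/vdb or /dev/vda.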


> >
> > Vish
> >
> > On Nov 1, 2011, at 9:52 AM, Gaurav Gupta wrote:
> >
> > > Hi all, I asked a question on Launchpad, but haven't heard anything back
> > > yet. Trying this forum to see if someone has any idea how to resolve
> > > this issue:
> > > https://answers.launchpad.net/nova/+question/176938
> > >
> > > To summarize:
> > > ----------------------
> > >
> > > Say I had 2 disks, disk1 and disk2 (represented by 2 volumes). disk1 has
> > > the root filesystem and disk2 has some data. I boot an instance using the
> > > boot-from-volumes extension, and specify the 2 disks such that disk1 is
> > > attached to /dev/vda and disk2 to /dev/vdb. When the instance is launched
> > > it fails to boot, because it tries to find the root filesystem on disk2
> > > instead.
> > >
> > > The underlying problem is with virsh/libvirt. The boot fails because in
> > > the libvirt.xml file created by OpenStack, disk2 (/dev/vdb) is listed
> > > before disk1 (/dev/vda). So what happens is that the hypervisor attaches
> > > disk2 first (since it's listed first in the XML). Therefore, when these
> > > disks are attached to the guest, disk2 appears as /dev/vda and disk1 as
> > > /dev/vdb, which causes the boot failure. The kernel then tries to find
> > > the root filesystem on /dev/vda (because that's what is selected as the
> > > root) and fails for obvious reasons. I think it's a virsh bug; it should
> > > be smart about it and attach the devices in the right order.
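
Coming back to Vish's suggested workaround: something along these lines would
order the block device mappings by target device name before they are rendered
into libvirt.xml, so /dev/vda is always emitted first. This is only a sketch,
not the actual nova code, and the mapping structure is assumed for the sake of
the example:

    # Sketch: sort mappings lexicographically by device name so the root
    # disk (/dev/vda) always precedes /dev/vdb in the generated XML.
    # Good enough for vda..vdz; a natural sort would be needed beyond that.
    def sort_block_device_mappings(mappings):
        return sorted(mappings, key=lambda m: m['device_name'])

    mappings = [
        {'device_name': '/dev/vdb', 'volume_id': 'volume-data'},
        {'device_name': '/dev/vda', 'volume_id': 'volume-root'},
    ]
    for m in sort_block_device_mappings(mappings):
        print('%(device_name)s -> %(volume_id)s' % m)
    # /dev/vda -> volume-root
    # /dev/vdb -> volume-data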