<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">Hi Weiguo,<br>
      <br>
      my answers are inline.<br>
      <br>
      -martin<br>
      <br>
      On 30.05.2013 21:20, w sun wrote:<br>
    </div>
    <blockquote cite="mid:BAY178-W31F73387318764C8CE7676E2910@phx.gbl"
      type="cite">
      <style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 12pt;
font-family:Calibri
}
--></style>
      <div dir="ltr">I would suggest on nova compute host (particularly
        if you have separate compute nodes),
        <div><br>
        </div>
        <div>(1) make sure "rbd ls -l -p " works and /etc/ceph/ceph.conf
          is readable by user nova!!</div>
      </div>
    </blockquote>
    Yes to both.<br>
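    For reference, this is roughly how I verified it (the pool name is
    the one from the kvm command further down; adjust as needed):<br>
    <pre># list the volumes in the pool cinder uses (here: ceph-openstack-volumes)
rbd ls -l -p ceph-openstack-volumes

# confirm the nova user can read the ceph config
sudo -u nova head -n 3 /etc/ceph/ceph.conf</pre>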
    <blockquote cite="mid:BAY178-W31F73387318764C8CE7676E2910@phx.gbl"
      type="cite">
      <div dir="ltr">
        <div>(2) make sure you can start up a regular ephemeral instance
          on the same nova node (i.e., nova-compute is working correctly)</div>
      </div>
    </blockquote>
    An ephemeral instance works fine.<br>
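    (Tested with a plain nova boot; the image id and flavor below are
    just placeholders from my setup:)<br>
    <pre># boot a throwaway instance on local/ephemeral storage
nova boot --flavor m1.tiny --image &lt;image-id&gt; test-ephemeral
nova list</pre>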
    <blockquote cite="mid:BAY178-W31F73387318764C8CE7676E2910@phx.gbl"
      type="cite">
      <div dir="ltr">
        <div>(3) if you are using cephx, make sure the libvirt secret is
          set up correctly per the instructions at ceph.com</div>
      </div>
    </blockquote>
    I do not use cephx<br>
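    (Just for reference, in case cephx gets enabled later: the libvirt
    secret setup from the ceph.com docs looks roughly like this; the
    uuid and the client name are placeholders.)<br>
    <pre># secret.xml contains a uuid (from uuidgen) and a usage name
virsh secret-define --file secret.xml
# attach the cephx key of the openstack client to that secret
virsh secret-set-value --secret &lt;uuid&gt; --base64 $(ceph auth get-key client.volumes)</pre>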
    <blockquote cite="mid:BAY178-W31F73387318764C8CE7676E2910@phx.gbl"
      type="cite">
      <div dir="ltr">
        <div>(4) look at
          /var/lib/nova/instance/xxxxxxxxxxxxx/libvirt.xml and check
          that the disk file is pointing to the rbd volume</div>
      </div>
    </blockquote>
    For an ephemeral instance the folder is created; for a volume-based
    instance the folder is not created.<br>
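    When the folder does exist, the disk section of libvirt.xml should
    point at the rbd volume, roughly like this (a sketch using the pool
    and volume id from the kvm command below, not the exact file):<br>
    <pre>&lt;disk type="network" device="disk"&gt;
  &lt;driver name="qemu" type="raw"/&gt;
  &lt;source protocol="rbd" name="ceph-openstack-volumes/volume-3f964f79-febe-4251-b2ba-ac9423af419f"/&gt;
  &lt;target dev="vda" bus="virtio"/&gt;
&lt;/disk&gt;</pre>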
    <br>
    <blockquote cite="mid:BAY178-W31F73387318764C8CE7676E2910@phx.gbl"
      type="cite">
      <div dir="ltr">
        <div>(5) If all of the above look fine and you still can't
          perform a nova boot with the volume, as a last resort you can
          manually start a kvm session with the volume, similar to the
          command below. At least this will tell you whether your qemu
          has the correct rbd enablement.</div>
        <div><br>
        </div>
        <div>
          <pre>/usr/bin/kvm -m 2048 \
  -drive file=rbd:ceph-openstack-volumes/volume-3f964f79-febe-4251-b2ba-ac9423af419f,index=0,if=none,id=drive-virtio-disk0 \
  -boot c -net nic -net user -nographic -vnc :1000 \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1</pre>
        </div>
        <div><br>
        </div>
      </div>
    </blockquote>
    If I start kvm by hand, it works.<br>
    <br>
    <blockquote cite="mid:BAY178-W31F73387318764C8CE7676E2910@phx.gbl"
      type="cite">
      <div dir="ltr">
        <div>--weiguo</div>
        <div><br>
          <div>> Date: Thu, 30 May 2013 16:37:40 +0200<br>
            > From: <a class="moz-txt-link-abbreviated" href="mailto:martin@tuxadero.com">martin@tuxadero.com</a><br>
            > To: <a class="moz-txt-link-abbreviated" href="mailto:ceph-users@ceph.com">ceph-users@ceph.com</a><br>
            > CC: <a class="moz-txt-link-abbreviated" href="mailto:openstack@lists.launchpad.net">openstack@lists.launchpad.net</a><br>
            > Subject: [ceph-users] Openstack with Ceph, boot from
            volume<br>
            > <br>
            > Hi Josh,<br>
            > <br>
            > I am trying to use ceph with openstack (grizzly); I
            have a multi-host setup.<br>
            > I followed the instructions at
            <a class="moz-txt-link-freetext" href="http://ceph.com/docs/master/rbd/rbd-openstack/">http://ceph.com/docs/master/rbd/rbd-openstack/</a>.<br>
            > Glance is working without a problem.<br>
            > With cinder I can create and delete volumes without a
            problem.<br>
            > <br>
            > But I cannot boot from volumes.<br>
            > It doesn't matter whether I use horizon or the cli; the
            vm goes to the error state.<br>
            > <br>
            > From the nova-compute.log I get this.<br>
            > <br>
            > 2013-05-30 16:08:45.224 ERROR nova.compute.manager<br>
            > [req-5679ddfe-79e3-4adb-b220-915f4a38b532<br>
            > 8f9630095810427d865bc90c5ea04d35
            43b2bbbf5daf4badb15d67d87ed2f3dc]<br>
            > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]
            Instance failed block<br>
            > device setup<br>
            > .....<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError:
            [Errno 101]<br>
            > ENETUNREACH<br>
            > <br>
            > What is nova trying to reach? How can I debug this
            further?<br>
            > <br>
            > Full Log included.<br>
            > <br>
            > -martin<br>
            > <br>
            > Log:<br>
            > <br>
            > ceph --version<br>
            > ceph version 0.61
            (237f3f1e8d8c3b85666529860285dcdffdeda4c5)<br>
            > <br>
            > root@compute1:~# dpkg -l|grep -e ceph-common -e cinder<br>
            > ii ceph-common 0.61-1precise<br>
            > common utilities to mount and interact with a ceph
            storage<br>
            > cluster<br>
            > ii python-cinderclient 1:1.0.3-0ubuntu1~cloud0<br>
            > python bindings to the OpenStack Volume API<br>
            > <br>
            > <br>
            > nova-compute.log<br>
            > <br>
            > 2013-05-30 16:08:45.224 ERROR nova.compute.manager<br>
            > [req-5679ddfe-79e3-4adb-b220-915f4a38b532<br>
            > 8f9630095810427d865bc90c5ea04d35
            43b2bbbf5daf4badb15d67d87ed2f3dc]<br>
            > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]
            Instance failed block<br>
            > device setup<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] Traceback (most
            recent call last):<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] File<br>
            >
            "/usr/lib/python2.7/dist-packages/nova/compute/manager.py",
            line 1071,<br>
            > in _prep_block_device<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] return<br>
            > self._setup_block_device_mapping(context, instance,
            bdms)<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] File<br>
            >
            "/usr/lib/python2.7/dist-packages/nova/compute/manager.py",
            line 721, in<br>
            > _setup_block_device_mapping<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] volume =<br>
            > self.volume_api.get(context, bdm['volume_id'])<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] File<br>
            >
            "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py",
            line 193, in get<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc]<br>
            > self._reraise_translated_volume_exception(volume_id)<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] File<br>
            >
            "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py",
            line 190, in get<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] item =<br>
            > cinderclient(context).volumes.get(volume_id)<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] File<br>
            >
            "/usr/lib/python2.7/dist-packages/cinderclient/v1/volumes.py",
            line 180,<br>
            > in get<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] return
            self._get("/volumes/%s"<br>
            > % volume_id, "volume")<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] File<br>
            >
            "/usr/lib/python2.7/dist-packages/cinderclient/base.py",
            line 141, in _get<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] resp, body =<br>
            > self.api.client.get(url)<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] File<br>
            >
            "/usr/lib/python2.7/dist-packages/cinderclient/client.py",
            line 185, in get<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] return
            self._cs_request(url,<br>
            > 'GET', **kwargs)<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] File<br>
            >
            "/usr/lib/python2.7/dist-packages/cinderclient/client.py",
            line 153, in<br>
            > _cs_request<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] **kwargs)<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] File<br>
            >
            "/usr/lib/python2.7/dist-packages/cinderclient/client.py",
            line 123, in<br>
            > request<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] **kwargs)<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] File<br>
            > "/usr/lib/python2.7/dist-packages/requests/api.py",
            line 44, in request<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] return<br>
            > session.request(method=method, url=url, **kwargs)<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] File<br>
            >
            "/usr/lib/python2.7/dist-packages/requests/sessions.py",
            line 279, in<br>
            > request<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] resp =
            self.send(prep,<br>
            > stream=stream, timeout=timeout, verify=verify,
            cert=cert, proxies=proxies)<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] File<br>
            >
            "/usr/lib/python2.7/dist-packages/requests/sessions.py",
            line 374, in send<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] r =
            adapter.send(request,<br>
            > **kwargs)<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] File<br>
            >
            "/usr/lib/python2.7/dist-packages/requests/adapters.py",
            line 206, in send<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] raise
            ConnectionError(sockerr)<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError:
            [Errno 101]<br>
            > ENETUNREACH<br>
            > 2013-05-30 16:08:45.224 19614 TRACE
            nova.compute.manager [instance:<br>
            > 059589a3-72fc-444d-b1f0-ab1567c725fc]<br>
            > 2013-05-30 16:08:45.329 AUDIT nova.compute.manager<br>
            > [req-5679ddfe-79e3-4adb-b220-915f4a38b532<br>
            > 8f9630095810427d865bc90c5ea04d35
            43b2bbbf5daf4badb15d67d87ed2f3dc]<br>
            > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]
            Terminating instance<br>
            > _______________________________________________<br>
            > ceph-users mailing list<br>
            > <a class="moz-txt-link-abbreviated" href="mailto:ceph-users@lists.ceph.com">ceph-users@lists.ceph.com</a><br>
            > <a class="moz-txt-link-freetext" href="http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com">http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com</a><br>
          </div>
        </div>
      </div>
    </blockquote>
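    Looking at the trace once more: the failing call is
    python-cinderclient doing a GET against the cinder endpoint, so the
    ENETUNREACH presumably means the compute node has no route to that
    endpoint. A rough way to check (8776 is the default cinder API
    port; the host and IP below are placeholders):<br>
    <pre># which volume endpoint does the keystone catalog hand out?
keystone catalog --service volume

# can the compute node actually reach it?
curl -i http://&lt;cinder-api-host&gt;:8776/
ip route get &lt;cinder-api-ip&gt;</pre>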
    <br>
  </body>
</html>