[Openstack-operators] State of Juno in Production

Clayton O'Neill clayton at oneill.net
Thu Feb 19 02:13:07 UTC 2015


We ran into an interesting and counter-intuitive issue today. We use Ceph
as our image backend both for ephemeral boot disks and for Cinder volumes.
We had a few leftover qcow2 images, and instances booted from them would
come up acting as if there was no boot sector on the boot drive.

From what it looks like, the code path in imagebackend.py for the nova
libvirt driver checks whether a matching image ID already exists in
/var/lib/nova/instances/_base; if it does, it uploads that cached file to
Ceph as-is and tries to boot from it without converting the image. It took
a few hours to work out what was going on, since documentation on how the
_base directory works and what it's used for is pretty slim. Needless to
say, you can't boot from an unconverted qcow2 image, and we're not sure how
we ended up with qcow2 images in that directory in the first place.
Deleting those images fixed the problem for us, so add "clean up stray
qcow2 images in /var/lib/nova/instances/_base" to the list of Juno upgrade
tasks.
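
For anyone else who hits this, a quick way to audit the cache is to ask
qemu-img what each cached file's actual on-disk format is. This is only a
rough sketch -- it assumes the default _base location and that qemu-img is
installed on the compute node:

    #!/usr/bin/env python
    # List cached images in nova's _base directory whose detected format
    # is not raw. BASE_DIR is the default cache location; adjust it if
    # you've changed instances_path in nova.conf.
    import json
    import os
    import subprocess

    BASE_DIR = '/var/lib/nova/instances/_base'

    for name in sorted(os.listdir(BASE_DIR)):
        path = os.path.join(BASE_DIR, name)
        if not os.path.isfile(path):
            continue
        # qemu-img detects the format regardless of the file name
        out = subprocess.check_output(
            ['qemu-img', 'info', '--output=json', path])
        fmt = json.loads(out.decode('utf-8'))['format']
        if fmt != 'raw':
            print('%s: %s (candidate for cleanup)' % (path, fmt))

Anything that script reports as qcow2 is a candidate for the deletion
described above.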

Another thing I've not worked out is why we still have raw images in the
_base directory at all. I've verified that we're doing copy-on-write with
Ceph at boot time, and that appears to be working, so why do we need these
giant raw images sitting in /var/lib/nova/instances/_base? I thought the
way this was supposed to work is that Glance would just return a pointer
to the image in Ceph, nova would create a copy-on-write clone of that
image, and the instance would boot from that directly.
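
One way to sanity-check the copy-on-write part is to look for a parent on
the instance's RBD image: a clone shows a "parent:" line in rbd info,
while a full copy doesn't. Again just a rough sketch -- the 'vms' pool and
the '<uuid>_disk' naming below are common nova-on-Ceph conventions, not
guaranteed defaults, so adjust both for your deployment:

    #!/usr/bin/env python
    # Report whether an instance's disk in Ceph is a COW clone of a
    # Glance image. Usage: python check_cow.py <instance-uuid>
    import subprocess
    import sys

    pool = 'vms'                     # assumption: nova's images_rbd_pool
    image = '%s_disk' % sys.argv[1]  # assumption: default disk naming

    info = subprocess.check_output(
        ['rbd', 'info', '%s/%s' % (pool, image)])
    if 'parent:' in info.decode('utf-8'):
        print('%s/%s is a COW clone' % (pool, image))
    else:
        print('%s/%s is a full copy (no parent)' % (pool, image))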

On Tue, Feb 17, 2015 at 11:46 AM, Joe Topjian <joe at topjian.net> wrote:

> Hello,
>
> I'm beginning to plan for a Juno upgrade and wanted to get some feedback
> from anyone else who has gone through the upgrade and has been running Juno
> in production.
>
> The environment that will be upgraded is pretty basic: nova-network, no
> cells, Keystone v2. We do run a RabbitMQ cluster, though, and per other
> recent discussions, we see the same reported issues.
>
> The only issue I'm aware of is that live snapshotting is disabled. Has
> anyone re-enabled this and seen issues? What was the procedure to re-enable?
>
> Any other gotchas or significant differences seen from running Icehouse?
>
> Thanks,
> Joe
>