[Openstack] Nova compute manager: trying to understand rationale for kpartx atop qemu-nbd

Lee Schermerhorn Lee.Schermerhorn at hp.com
Wed May 2 18:37:55 UTC 2012


With diablo plus some of our own changes, we've discovered our compute
nodes in some of our test nova environments are littered with
orphaned /dev/mapper/nbd* links to /dev/dm-* devices that are holding
the respective nbd devices.  Of course, this causes injection failures
for VMs that attempt to reuse those wedged nbd devices, which still
appear available according to the method nova uses to determine nbd
device availability.
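For reference, when we hit one of these, the leftover mappings can be
inspected and torn down by hand along these lines (a hedged sketch only;
the device names are examples, and the exact mapping names depend on how
kpartx created them):

```shell
# List device-mapper entries; orphaned kpartx mappings over nbd
# typically show up with names like nbd0p1, nbd0p2, ...
dmsetup ls | grep nbd

# Drop the partition mappings kpartx created over the nbd device,
# then disconnect qemu-nbd so the device is genuinely free again
kpartx -d /dev/nbd0
qemu-nbd -d /dev/nbd0
```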

This could well be self-inflicted, and we haven't gotten down to the
root cause.  However, in looking at it, we're wondering why the compute
manager uses kpartx atop nbd devices specifically.  It does appear
rather fragile, and nbd devices do support partitioned images if the
module is loaded with max_part > 0, where zero is the unfortunate
default.

We're thinking that maybe we can dispense with kpartx and, ensuring that
the nbd module is loaded with max_part > 0 on our compute nodes, use the
resulting /dev/nbdXpY devices directly.  But before we charge off down
that path, we want to understand the rationale for using kpartx atop
nbd.  We've searched the wiki, the git logs, and the wider 'net for
enlightenment.  Finding none, we turn to the collective wisdom of the
Community.
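Concretely, the kpartx-free path we have in mind would look roughly like
this (a sketch, not something we've deployed; image.qcow2, the device
number, and max_part=16 are all placeholder choices):

```shell
# Load nbd with partition scanning enabled (0, the default, disables it)
modprobe nbd max_part=16

# Export the image over a free nbd device
qemu-nbd -c /dev/nbd0 image.qcow2

# The kernel now creates partition nodes directly -- no kpartx needed
ls /dev/nbd0p*

# ... mount /dev/nbd0p1, perform the injection, unmount ...

qemu-nbd -d /dev/nbd0
```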

Is there some problem with qemu-nbd and partitioned images that argues
against this approach?  Perhaps qemu-nbd doesn't recognize/support all
of the partition table types that kpartx does?  Something more
insidious?

Anyone know?

Regards,
Lee Schermerhorn
