[openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

Daniel P. Berrange berrange at redhat.com
Fri Jun 24 08:19:23 UTC 2016


On Thu, Jun 23, 2016 at 09:09:44AM -0700, Walter A. Boring IV wrote:
> 
> Will iSCSI volumes connected to QEMU instances eventually become directly
> connected?
> 
> > Our long term goal is that 100% of all network storage will be connected
> > to directly by QEMU. We already have the ability to partially do this with
> > iSCSI, but it is lacking support for multipath. As & when that gap is
> > addressed though, we'll stop using the host OS for any iSCSI stuff.
> > 
> > So if you're requiring access to host iSCSI volumes, it'll work in the
> > short-medium term, but in the medium-long term we're not going to use
> > that so plan accordingly.
> 
> What is the benefit of this largely monolithic approach?  It seems that
> moving everything into QEMU is diametrically opposed to the unix model
> itself and is just a re-implementation of what already exists in the
> linux world outside of QEMU.

There are many benefits to having it inside QEMU. First, it gives us
improved isolation between VMs, because we can apply cgroup resource
controls to the network I/O of each individual VM. It gives us improved
security, particularly in combination with LUKS encryption, since the
unencrypted block device is not directly visible / accessible to any
other process. It gives us improved reliability / manageability, since
we avoid having to spawn the iSCSI client tools, which have poor error
reporting and have been a frequent source of instability in our
infrastructure (e.g. see how we have to blindly re-run the same command
many times over because it randomly times out). It will also give us
improved I/O performance, because there is a shorter I/O path to get
requests from QEMU out to the network.
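
To make the direct-connection model concrete, here is a minimal sketch
(using the libvirt Python bindings, with a made-up instance name, IQN and
portal address) of what the in-QEMU attachment looks like: the volume is
described as a <disk type='network'> element, so QEMU's built-in initiator
opens the iSCSI session itself and no /dev/sdX node ever appears on the
host.

# Illustrative sketch only - hypothetical names/addresses. Attach an
# iSCSI volume as a network-backed disk so QEMU connects to the portal
# directly, rather than the host OS logging in via iscsiadm and exposing
# a /dev/sdX device.
import libvirt

disk_xml = """
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='iscsi' name='iqn.2016-06.com.example:volume-0001/1'>
    <host name='192.0.2.10' port='3260'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')

# Hot-plug the disk into the running guest; the block device is only
# visible inside that one QEMU process, and its network I/O is accounted
# against that process's cgroups.
dom.attachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)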

NB, this is not just about iSCSI; the same is true for RBD, where we've
also stopped using the in-kernel RBD client and do it all in QEMU.
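
As a rough illustration (pool and image names invented), the RBD case uses
the same kind of disk definition, with QEMU talking to the cluster through
librbd in-process instead of the host mapping a kernel rbd device:

# Illustrative only - the RBD variant of the network-backed disk element.
rbd_disk_xml = """
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='volumes/volume-0002'>
    <host name='203.0.113.10' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
"""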

> Does QEMU support hardware initiators? iSER?

No, this is only for the case where you're doing pure software-based
iSCSI client connections. If we're relying on local hardware, that's a
different story.

> 
> We regularly fix issues with iSCSI attaches in the release cycles of
> OpenStack, because it's all done in python using existing linux
> packages.  How often

This is a great example of the benefit the in-QEMU client gives us. The
Linux iSCSI client tools have proved very unreliable in use by OpenStack.
That is a reflection of the underlying architectural approach: we have
individual resources needed by distinct VMs, but we're having to manage
them as a host-wide resource, which creates unnecessary complexity and
has a poor effect on our reliability overall.

> are QEMU
> releases done and upgraded on customer deployments vs. python packages
> (os-brick)?

We're removing an entire layer of instability by removing the need to
deal with any command-line tools, and thus greatly simplifying our
setup on compute nodes. No matter what we might do in os-brick, it'll
never give us a simple or reliable system - we're just papering over
the flaws by doing things like blindly re-trying iSCSI commands upon
failure.
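
To make the "papering over" point concrete, the host-side approach ends
up looking roughly like the sketch below - this is not actual os-brick
code, just the shape of the workaround: shell out to iscsiadm and blindly
retry when it fails, because the tool gives us no useful error
information to act on.

# Illustrative only - not os-brick code. The kind of blind retry wrapper
# the host-side approach forces on us.
import subprocess
import time

def login_with_retries(portal, iqn, attempts=3, delay=2):
    cmd = ['iscsiadm', '-m', 'node', '-T', iqn, '-p', portal, '--login']
    for attempt in range(1, attempts + 1):
        try:
            subprocess.check_call(cmd)
            return
        except subprocess.CalledProcessError:
            # No way to distinguish a transient timeout from a real
            # failure, so just wait and run the same command again.
            if attempt == attempts:
                raise
            time.sleep(delay)

With the in-QEMU client, that whole layer simply disappears from the
compute node.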

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


