[openstack-dev] vhost-scsi support in Nova

Nicholas A. Bellinger nab at linux-iscsi.org
Thu Jul 24 05:32:44 UTC 2014


Hi Nova folks,

Please let me address some of the outstanding technical points that have
been raised recently in the following spec [1] for supporting vhost-scsi
[2] within Nova.

Mike and Daniel have been going back and forth on various details, so I
thought it might be helpful to open the discussion to a wider audience.

First, some background.  I'm the target (LIO) subsystem maintainer for the
upstream Linux kernel, and have been one of the primary contributors in that
community for a number of years.  This includes the target-core subsystem,
the backend drivers that communicate with kernel storage subsystems, and a
number of frontend fabric protocol drivers.

vhost-scsi is one of those frontend fabric protocol drivers that has been
included upstream, and which I and others have contributed to and improved
over the past three years.  Given this experience and commitment to
supporting upstream code, I'd like to address some of the specific points
wrt vhost-scsi here.

*) vhost-scsi doesn't support migration

Since its initial merge in QEMU v1.5, vhost-scsi has had a migration blocker
set.  This is primarily due to requiring some external orchestration in
order to setup the necessary vhost-scsi endpoints on the migration
destination to match what's running on the migration source.
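
As a concrete illustration of that orchestration step, below is a minimal
sketch (not part of the spec, and with purely illustrative WWNs, device
paths, and LUN layout) of recreating the source host's vhost-scsi endpoint
on the migration destination using the rtslib-fb bindings for LIO.
Depending on the rtslib and fabric module versions, the TPG nexus may also
need to be configured explicitly:

    # Sketch: recreate a vhost-scsi endpoint on the migration destination
    # so that it matches what is exposed on the source host.
    from rtslib_fb import BlockStorageObject, FabricModule, LUN, TPG, Target

    def create_vhost_endpoint(wwn, backing_dev, lun_id=0):
        # Backstore wrapping the shared block device (e.g. an iSCSI/iSER/FC
        # multipath device reachable from both source and destination).
        so = BlockStorageObject(name='mig-vol-%d' % lun_id, dev=backing_dev)

        # vhost fabric endpoint that QEMU's vhost-scsi device attaches to.
        target = Target(FabricModule('vhost'), wwn)
        tpg = TPG(target, 1)
        LUN(tpg, lun=lun_id, storage_object=so)
        return tpg

    # e.g. mirror the endpoint already exposed by the source host:
    # create_vhost_endpoint('naa.6001405abcdef000', '/dev/mapper/mpatha')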

Here are a couple of points that Stefan detailed some time ago about what's
involved in properly supporting live migration with vhost-scsi:

(1) vhost-scsi needs to tell QEMU when it dirties memory pages, either by
DMAing to guest memory buffers or by modifying the virtio vring (which also
lives in guest memory).  This should be straightforward since the
infrastructure is already present in vhost (it's called the "log") and used
by drivers/vhost/net.c.

(2) The harder part is seamless target handover to the destination host.
vhost-scsi needs to serialize any SCSI target state from the source machine
and load it on the destination machine.  We could be in the middle of
emulating a SCSI command.

An obvious solution is to only support active-passive or active-active HA
setups where tcm already knows how to fail over.  This typically requires
shared storage and maybe some communication for the clustering mechanism.
There are more sophisticated approaches, so this straightforward one is just
an example.

That said, we do intend to support live migration for vhost-scsi using
iSCSI/iSER/FC shared storage.

*) vhost-scsi doesn't support qcow2

Given that Cinder drivers, with the exception of the NetApp and Gluster
drivers, do not use QEMU qcow2 to access storage blocks, this argument is
not particularly relevant here.

However, this doesn't mean that vhost-scsi (and target-core itself) cannot
support qcow2 images.  There is currently an effort to add a userspace
backend driver for the upstream target (target_core_user [3]) that will
allow various disk formats to be supported in userspace.

The important part for vhost-scsi is that regardless of what type of target
backend driver is put behind the fabric LUNs (raw block devices using
IBLOCK, qcow2 images using target_core_user, etc.), the changes required in
Nova and libvirt to support vhost-scsi remain the same.  They do not change
based on the backend driver.
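
To make that concrete, here is a purely illustrative sketch (the field and
function names are hypothetical, not taken from the spec under review) of
why the Nova-facing attachment data stays the same regardless of which
backstore serves the LUN:

    # Two different target backends, one identical Nova-facing description.
    iblock_backed = {
        'driver_volume_type': 'vhost_scsi',
        'data': {'target_wwn': 'naa.6001405abcdef001', 'lun': 0},
    }
    tcmu_qcow2_backed = {
        'driver_volume_type': 'vhost_scsi',
        'data': {'target_wwn': 'naa.6001405abcdef002', 'lun': 0},
    }

    def attach_config(connection_info):
        # Nova/libvirt only consume the endpoint WWN and LUN; nothing here
        # identifies whether IBLOCK or target_core_user backs the device.
        data = connection_info['data']
        return {'type': 'vhost-scsi',
                'wwn': data['target_wwn'],
                'lun': data['lun']}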

*) vhost-scsi is not intended for production

vhost-scsi has been included in the upstream kernel since the v3.6 release,
and in QEMU since v1.5.  vhost-scsi runs unmodified out of the box on a
number of popular distributions including Fedora, Ubuntu, and openSUSE.  It
also works as a QEMU boot device with SeaBIOS, and even with the Windows
virtio-scsi mini-port driver.

There is at least one vendor who has already posted libvirt patches to
support vhost-scsi, so vhost-scsi has already moved beyond being a debugging
and development tool.

For instance, here are a few specific use cases where vhost-scsi is
currently the only option for virtio-scsi guests:

  - Low (sub 100 usec) latencies for AIO reads/writes with small iodepth
    workloads
  - 1M+ small-block IOPS at low CPU utilization with large iodepth
    workloads
  - End-to-end data integrity using T10 protection information (DIF)

So vhost-scsi can and will support essential features like live migration
and qcow2, and the virtio-scsi data plane effort should not block existing
alternatives that are already upstream.

With that, we'd like to see Nova officially support vhost-scsi because of
its wide availability in the Linux ecosystem, and the considerable
performance, efficiency, and end-to-end data-integrity benefits that it
already brings to the table.

We are committed to addressing the short and long-term items for this
driver, and making it a success in OpenStack Nova.

Thank you,

--nab

[1] https://review.openstack.org/#/c/103797/5/specs/juno/virtio-scsi-settings.rst
[2] https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/vhost/scsi.c
[3] http://www.spinics.net/lists/target-devel/msg07339.html



