[openstack-dev] [nova] vhost-scsi support in Nova

Nicholas A. Bellinger nab at linux-iscsi.org
Fri Jul 25 09:47:05 UTC 2014


Hey Stefan,

On Thu, 2014-07-24 at 21:50 +0100, Stefan Hajnoczi wrote:
> On Thu, Jul 24, 2014 at 7:45 PM, Vishvananda Ishaya
> <vishvananda at gmail.com> wrote:
> > As I understand this work, vhost-scsi provides massive perf improvements
> > over virtio, which makes it seem like a very valuable addition. I’m ok
> > with telling customers that it means that migration and snapshotting are
> > not supported as long as the feature is protected by a flavor type or
> > image metadata (i.e. not on by default). I know plenty of customers that
> > would gladly trade some of the friendly management features for better
> > i/o performance.
> >
> > Therefore I think it is acceptable to take it with some documentation that
> > it is experimental. Maybe I’m unique but I deal with people pushing for
> > better performance all the time.
> 
> Work to make userspace virtio-scsi scale well on multicore hosts has
> begun.  I'm not sure there will be a large IOPS scalability difference
> between the two going forward.  I have CCed Fam Zheng who is doing
> this.

The latency and efficiency gains with existing vhost-scsi vs.
virtio-scsi (minus data-plane) are pretty significant, even when a
single queue per virtio controller is used.
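For reference, the kind of vhost-scsi endpoint benchmarked here can be sketched with targetcli on the host plus a recent QEMU. This is only a sketch: the backing device, image file, and naa. WWPN are placeholders, and the exact targetcli paths may differ between versions.

```shell
# Sketch only -- /dev/sdb, guest.img, and the naa. WWPN are placeholders.
# Create a block backstore, expose it through the vhost fabric module,
# then point QEMU's vhost-scsi-pci device at the same WWPN.
targetcli /backstores/block create name=disk0 dev=/dev/sdb
targetcli /vhost create naa.600140554cf3a18e
targetcli /vhost/naa.600140554cf3a18e/tpg1/luns create /backstores/block/disk0

qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=guest.img,if=virtio \
    -device vhost-scsi-pci,wwpn=naa.600140554cf3a18e
```

The guest then sees the LUN through its virtio-scsi driver, with I/O submission handled by the in-kernel target rather than by QEMU userspace.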

Note that the sub-100 usec latencies we've observed with fio random 4k
iodepth=1 workloads are with vhost exposing guest I/O buffer memory as a
zero-copy direct data placement sink for remote RDMA WRITEs.
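For what it's worth, the numbers above come from a fio invocation along these lines, run inside the guest. A sketch only: /dev/sdX stands in for the actual virtio-scsi LUN, and the runtime is arbitrary.

```shell
# Sketch of the measurement workload: random 4k reads at queue depth 1,
# O_DIRECT to bypass the guest page cache.  /dev/sdX is a placeholder.
fio --name=randread --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=1 \
    --runtime=60 --time_based \
    --filename=/dev/sdX
```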

Also, average I/O latency is especially low when the guest is capable of
utilizing a blk-mq based virtio guest driver.  For the best possible
results in a KVM guest, virtio-scsi will want to utilize the upcoming
scsi-mq support in Linux, which will greatly benefit both QEMU data-plane
and vhost-scsi type approaches to SCSI target I/O submission.
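As an aside, on guest kernels where scsi-mq has landed, whether the multiqueue path is actually in use can be checked from sysfs. The parameter path below is an assumption about how the merged code will expose it, not something the current mainline kernel guarantees:

```shell
# Sketch, assuming scsi-mq exposes a module parameter once merged;
# "Y" would indicate the blk-mq submission path is active for SCSI.
cat /sys/module/scsi_mod/parameters/use_blk_mq
```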

> 
> In virtio-blk vs vhost-blk a clear performance difference was never
> observed.  At the end of the day, the difference is whether a kernel
> thread or a userspace thread submits the aio request.  virtio-blk
> efforts remain focussed on userspace where ease of migration,
> management, and lower security risks are favorable.
> 

All valid points, no disagreement here.

> I guess virtio-scsi will play out the same way, which is why I stopped
> working on vhost-scsi.  If others want to do the work to integrate
> vhost-scsi (aka tcm_vhost), that's great.  Just don't expect that
> performance will make the effort worthwhile.  The real difference
> between the two is that the in-kernel target is a powerful and
> configurable SCSI target, whereas the userspace QEMU target is
> focussed on emulating SCSI commands without all the configuration
> goodies.

Understood.

As mentioned, we'd like the Nova folks to consider vhost-scsi support as
an experimental feature for the Juno release of OpenStack, given the
known caveats.
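Along the lines Vish suggested, gating the feature behind a flavor extra spec might look something like the following. The extra-spec key here is purely hypothetical, standing in for whatever name the blueprint eventually settles on:

```shell
# Hypothetical extra-spec key -- shown only to illustrate per-flavor
# gating, so vhost-scsi is opt-in rather than enabled by default.
nova flavor-create vhost-scsi.medium auto 4096 40 2
nova flavor-key vhost-scsi.medium set hw:scsi_model=vhost-scsi
```

Image metadata would work equally well for the opt-in; the point is simply that operators choose the trade-off explicitly.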

Thanks for your comments!

--nab



