[openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

Preston L. Bannister preston at bannister.us
Fri Jun 24 05:02:16 UTC 2016


On 06/23/2016 Daniel Berrange wrote (lost attribution in thread):

> Our long term goal is that 100% of all network storage will be connected
> to directly by QEMU. We already have the ability to partially do this with
> iSCSI, but it is lacking support for multipath. As & when that gap is
> addressed though, we'll stop using the host OS for any iSCSI stuff.
>
> So if you're requiring access to host iSCSI volumes, it'll work in the
> short-medium term, but in the medium-long term we're not going to use
> that so plan accordingly.
>
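
For anyone who has not looked at the libvirt side of this: the difference
being described is whether the guest disk points at a host block device that
the host's iSCSI initiator logged into, or at a network source that QEMU's
built-in initiator opens itself. A rough illustrative sketch (not Nova code;
the helper names and values are made up):

    # Illustrative only: the two shapes of libvirt <disk> XML under discussion,
    # host-attached versus QEMU-native iSCSI.

    def host_attached_disk(by_path_dev):
        # The host OS initiator (iscsiadm, driven by os-brick) logged in and
        # created this block device; libvirt just hands it to QEMU.
        return """<disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='none'/>
          <source dev='%s'/>
          <target dev='vdb' bus='virtio'/>
        </disk>""" % by_path_dev

    def qemu_native_disk(portal, iqn, lun):
        # No host-side login; QEMU's built-in initiator (libiscsi) opens the
        # target itself, so the host never sees a /dev/sdX for the volume.
        return """<disk type='network' device='disk'>
          <driver name='qemu' type='raw' cache='none'/>
          <source protocol='iscsi' name='%s/%d'>
            <host name='%s' port='3260'/>
          </source>
          <target dev='vdb' bus='virtio'/>
        </disk>""" % (iqn, lun, portal)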

On 06/23/2016 10:09 AM, Walter A. Boring IV wrote:

> We regularly fix issues with iSCSI attaches in the release cycles of
> OpenStack, because it's all done in python using existing linux packages.
> How often are QEMU releases done and upgraded on customer deployments vs.
> python packages (os-brick)?
>
> I don't see a compelling reason for re-implementing the wheel,
> and it seems like a major step backwards.
>

On Thu, Jun 23, 2016 at 12:07:43PM -0600, Chris Friesen wrote:

> This is an interesting point.
>
> Unless there's a significant performance benefit to connecting
> directly from qemu, it seems to me that we would want to leverage
> the existing work done by the kernel and other "standard" iSCSI
> initiators.
>

On Thu, Jun 23, 2016 at 1:28 PM, Sean McGinnis <sean.mcginnis at gmx.com>
wrote:
>
> I'm curious to find out this as well. Is this for a performance gain? If
> so, do we have any metrics showing that gain is significant enough to
> warrant making a change like this?
>
> The host OS is still going to be involved. AFAIK, this just cuts out the
> software iSCSI initiator from the picture. So we would be moving from a
> piece of software dedicated to one specific functionality, to a
> different piece of software whose main reason for existence has nothing
> to do with IO path management.
>
> I'm not saying I'm completely opposed to this. If there is a reason for
> doing it then it could be worth it. But so far I haven't seen anything
> explaining why this would be better than what we have today.



First, I have not taken any measurements, so please ignore everything I
say. :)

Very generally, if you take out unnecessary layers, you can often improve
performance and reliability. Not in every case, but often.

Volume connections routed through the Linux kernel *might* lose performance
to the extra layer (measurements are needed), and they have to be managed.
The latter is easy to underestimate. Nova has to manage Linux's knowledge of
volume connections, yet in the strictest sense the nova-compute host Linux
does not *need* to know about volumes attached to Nova instances. The
hairiest part of the problem: what to do when the nova-compute Linux table
of attached volumes gets out of sync? My guess is there are error cases in
this area not yet well handled in Nova, and that Nova could be somewhat
simpler if all volumes were attached directly to QEMU.
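
To make that bookkeeping concrete, here is a rough, hypothetical sketch (not
Nova or os-brick code; the helper names and the "expected" input are
invented) of the kind of reconciliation the host-attached model forces
somebody to perform:

    # Hypothetical sketch: compare the host's view of iSCSI sessions with
    # what the control plane thinks should be attached to this host.
    import subprocess

    def host_iscsi_targets():
        # 'iscsiadm -m session' prints lines roughly like:
        #   tcp: [3] 10.0.0.5:3260,1 iqn.2010-10.org.openstack:volume-1234 (non-flash)
        try:
            out = subprocess.check_output(['iscsiadm', '-m', 'session'])
        except subprocess.CalledProcessError:
            return set()   # iscsiadm exits non-zero when there are no sessions
        targets = set()
        for line in out.decode().splitlines():
            parts = line.split()
            if len(parts) >= 4:
                targets.add(parts[3])
        return targets

    def find_stray_attachments(expected_iqns):
        # expected_iqns: IQNs the control plane believes this host should
        # have logged into (hypothetical input; Nova tracks this differently).
        actual = host_iscsi_targets()
        return {
            'leaked': actual - set(expected_iqns),   # host has it, Nova does not
            'missing': set(expected_iqns) - actual,  # Nova expects it, host lost it
        }

In the QEMU-direct model that whole class of host-side bookkeeping goes away,
because there is no host session table left to drift.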

(A bit of cheating in mentioning the out-of-sync case, as I got bitten by it
a couple of times in testing. It happens.)

But ... as mentioned earlier, I suspect you cannot get to 100% direct-to-QEMU
if there is specialized hardware that has to tie into the nova-compute Linux.
It seems unlikely you would get consensus, as this impacts major vendors.
Which means you have to keep managing the host map of volumes, which means
you cannot simplify Nova. (If someone knows how to use the specialized
hardware with less footprint in the host Linux, this answer could change.)

Where this will land, I do not know. I do not know the performance numbers
either.

Can OpenStack allow for specialized hardware, without routing through the
host Linux? (Probably not, but I would be happy to be wrong.)

And again, as an outsider, I could be wrong about everything. :)

