[Openstack-operators] [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

John Griffith john.griffith8 at gmail.com
Fri Jun 17 03:47:15 UTC 2016


On Wed, Jun 15, 2016 at 5:59 PM, Preston L. Bannister <preston at bannister.us>
wrote:

> QEMU has the ability to directly connect to iSCSI volumes. Running the
> iSCSI connections through the nova-compute host *seems* somewhat
> inefficient.
>

I know from tests I've run in the past that virtio actually does a really
good job here.  Granted, it's been a couple of years since I've spent any
time looking at this, so I really can't say definitively without looking
again.


>
> There is a spec/blueprint and implementation that landed in Kilo:
>
>
> https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html
> https://blueprints.launchpad.net/nova/+spec/qemu-built-in-iscsi-initiator
>
> From looking at the OpenStack Nova sources ... I am not entirely clear on
> when this behavior is invoked (just for Ceph?), and how it might change in
> future.
>

I actually hadn't seen that, glad you pointed it out :)  I haven't tried
configuring it, but I will and see what sort of performance difference
there is.  One other thing to keep in mind (I could be mistaken, but...):
the last time I looked at this, it wasn't vastly different from the model
we use now.  It's not actually using an iSCSI initiator in the instance;
it's still using an initiator on the compute node and passing the device
in, I believe.  I'm sure somebody will correct me if I'm wrong here.
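
For reference, the difference would show up in the libvirt domain XML
roughly as follows (a sketch only; the first form is based on the XML Xiao
posted further down this thread, the second is illustrative and not copied
from a real deployment).  With the host-based initiator, Nova attaches the
LUN on the compute node and hands libvirt a local block device:

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/disk/by-path/ip-...-iscsi-iqn...-lun-0'/>
      <target dev='vdb' bus='virtio'/>
    </disk>

If the built-in (libiscsi) initiator path works the way the spec describes,
the disk would instead be described as a network source and QEMU itself
would open the iSCSI session:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='iscsi' name='iqn.2010-10.org.openstack:volume-.../0'>
        <host name='10.75.195.205' port='3260'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>

Either way the guest just sees a virtio-blk disk; the difference is which
component runs the initiator.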

I'm not sure what your reference to Ceph has to do with this; it appears
to be a Cinder iSCSI mechanism.  You can see how to configure it in the
commit message (https://review.openstack.org/#/c/135854/19; again, I plan
to try it out).
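
For anyone else who wants to try it before I get to it, the knob that
review describes is the same one Xiao quotes at the bottom of this thread,
i.e. something along the lines of

    volume_drivers = iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver

in nova.conf (the exact option name and section may vary by release; I
haven't verified this myself).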

>
> Looking for a general sense of where this is headed. (If anyone knows...)
>

Seems like you should be able to configure it and run it, assuming the
work is actually done and hasn't broken while sitting.


>
> If there is some problem with QEMU and directly attached iSCSI volumes,
> that would explain why this is not the default. Or is this simple inertia?
>

Virtio is actually super flexible and lets us do all sorts of things with
various connector types.  I think you'd have to have some pretty compelling
data to change the default here.  Another thing to keep in mind, even if we
just consider iSCSI and leave out FC and other protocols: one thing we
absolutely wouldn't want is to give instances direct access to the iSCSI
network.  That raises all sorts of security concerns for folks running
public clouds.  It also means heavier-weight instances, due to the
additional networking requirements, the iSCSI stack, etc.  More
importantly, the last time I looked, hot-plugging didn't work with this
option; but again, I admit it's been a long time since I've looked at it
and my memory isn't always that great.

>
>
> I have a concrete concern. I work for a company (EMC) that offers backup
> products, and we now have backup for instances in OpenStack. To make this
> efficient, we need to collect changed-block information from instances.
>

Ahh, OK, so you don't really have a "concrete concern" about the virtio
driver or the way things work... or any data showing that one performs
better or worse than the other.  What you do have, apparently, is a
solution you'd like to integrate and sell with OpenStack.  Fair enough, but
we should probably be clear about the motivation until there's some data
(there may very well be compelling reasons to change this).

>
> 1)  We could put an intercept in the Linux kernel of the nova-compute host
> to track writes at the block layer. This has the merit of working for
> containers, and potentially bare-metal instance deployments. But it is not
> guaranteed for instances if the iSCSI volumes are directly attached to
> QEMU.
>
> 2)  We could use the QEMU support for incremental backup (first bit landed
> in QEMU 2.4). This has the merit of working with any storage, but only for
> virtual machines under QEMU.
>
> As our customers are (so far) only asking about virtual machine backup, I
> long ago settled on (2) as most promising.
>
> What I cannot clearly determine is where (1) will fail. Will all iSCSI
> volumes connected to QEMU instances eventually become directly connected?
>
>
> Xiao's unanswered query (below) presents another question. Is this a
> site-choice? Could I require my customers to configure their OpenStack
> clouds to always route iSCSI connections through the nova-compute host? (I
> am not a fan of this approach, but I have to ask.)
>

Certainly seems like you could.  The question is whether the distro in use
would support it, and whether it would work with multi-backend configs.
Honestly, it sounds like there's a lot of data collection and analysis you
could do here and contribute back to the community.  Perhaps you or Xiao
should try it out?
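
As an aside on Preston's option (2) above: the QEMU 2.4 pieces for that are
the dirty-bitmap QMP commands plus the incremental sync mode for
drive-backup, roughly along these lines (a sketch of the QMP calls as I
understand them, with made-up device/bitmap/target names, not verified
against a particular QEMU build):

    { "execute": "block-dirty-bitmap-add",
      "arguments": { "node": "drive-virtio-disk1", "name": "backup0" } }

    { "execute": "drive-backup",
      "arguments": { "device": "drive-virtio-disk1", "sync": "incremental",
                     "bitmap": "backup0", "target": "/backups/inc0.qcow2",
                     "format": "qcow2" } }

The bitmap tracks dirty blocks between backups, and the incremental
drive-backup copies only those blocks.  Since this all happens at the QEMU
layer, it should work the same whether the iSCSI session is opened by the
compute host or by QEMU itself.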

>
> To answer Xiao's question, can a site configure their cloud to *always*
> directly connect iSCSI volumes to QEMU?
>
>
>
> On Tue, Feb 16, 2016 at 4:54 AM, Xiao Ma (xima2) <xima2 at cisco.com> wrote:
>
>> Hi, All
>>
>> I want to make QEMU communicate with the iSCSI target directly using
>> libiscsi, so I followed https://review.openstack.org/#/c/135854/ and added
>> 'volume_drivers = iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver'
>> in nova.conf, then restarted the nova and cinder services, but the volume
>> configuration of the VM is still as below:
>>
>>     <disk type='block' device='disk'>
>>       <driver name='qemu' type='raw' cache='none'/>
>>       <source
>> dev='/dev/disk/by-path/ip-10.75.195.205:3260-iscsi-iqn.2010-10.org.openstack:volume-076bb429-67fd-4c0c-9ddf-0dc7621a975a-lun-0'/>
>>       <target dev='vdb' bus='virtio'/>
>>       <serial>076bb429-67fd-4c0c-9ddf-0dc7621a975a</serial>
>>       <address type='pci' domain='0x0000' bus='0x00' slot='0x06'
>> function='0x0'/>
>>     </disk>
>>
>>
>> I use CentOS 7 and the Liberty version of OpenStack.
>> Could anybody tell me how I can achieve this?
>>
>>
>> Thanks.
>>
>