[openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

Walter A. Boring IV walter.boring at hpe.com
Thu Jun 16 17:18:59 UTC 2016


One major disadvantage is lack of multipath support.

Multipath is still done outside of qemu, and from what I can tell there 
is no native multipath support inside of qemu.
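
As a point of reference, host multipath today just means logging into 
the same target through more than one portal and letting dm-multipath 
aggregate the paths.  A rough sketch with open-iscsi (the IQN and portal 
addresses here are made up):

    # log the host into the same target via two different portals
    iscsiadm -m node -T iqn.2016-06.com.example:vol1 -p 10.0.0.1:3260 --login
    iscsiadm -m node -T iqn.2016-06.com.example:vol1 -p 10.0.0.2:3260 --login
    # dm-multipath aggregates the resulting SCSI devices into one dm device
    multipath -ll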

Another disadvantage is that qemu's iSCSI support is all software-based, 
while there are hardware iSCSI initiators that os-brick supports today.  
I think migrating attaches into qemu itself isn't a good idea; it will 
always be behind the level of support already provided by the tools that 
have been around forever.

Also, what kind of support does QEMU have for target portal discovery?  
Can it discover all targets via a single portal, and can you pass in 
multiple portals to do discovery for the same volume?  This is also 
related to multipath support: some storage arrays can't do discovery on 
a single portal; discovery has to be run against each interface.
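
For comparison, host-side discovery with open-iscsi is one command per 
portal (reusing the portal address from Xiao's mail below):

    # sendtargets discovery asks one portal to enumerate its targets;
    # arrays that only answer per interface need this run once per portal
    iscsiadm -m discovery -t sendtargets -p 10.75.195.205:3260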

Do you have any actual numbers to prove that host-based attaches passed 
to libvirt are slower than QEMU direct attaches?

You can't really compare RBD to iSCSI.  RBD is a completely different 
beast.  The kernel rbd driver hasn't been as stable or as fast as the 
librbd client that qemu uses.
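
If I'm reading the libvirt docs right, when qemu owns the attach the 
disk element becomes a network disk instead of a host block device, 
roughly like this (reusing the target and volume from Xiao's mail below):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='iscsi'
              name='iqn.2010-10.org.openstack:volume-076bb429-67fd-4c0c-9ddf-0dc7621a975a/0'>
        <host name='10.75.195.205' port='3260'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>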

Walt


On 06/15/2016 04:59 PM, Preston L. Bannister wrote:
> QEMU has the ability to directly connect to iSCSI volumes. Running the 
> iSCSI connections through the nova-compute host *seems* somewhat 
> inefficient.
>
> There is a spec/blueprint and implementation that landed in Kilo:
>
> https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html
> https://blueprints.launchpad.net/nova/+spec/qemu-built-in-iscsi-initiator
>
> From looking at the OpenStack Nova sources ... I am not entirely clear 
> on when this behavior is invoked (just for Ceph?), and how it might 
> change in future.
>
> Looking for a general sense of where this is headed. (If anyone knows...)
>
> If there is some problem with QEMU and directly attached iSCSI 
> volumes, that would explain why this is not the default. Or is this 
> simply inertia?
>
>
> I have a concrete concern. I work for a company (EMC) that offers 
> backup products, and we now have backup for instances in OpenStack. To 
> make this efficient, we need to collect changed-block information from 
> instances.
>
> 1)  We could put an intercept in the Linux kernel of the nova-compute 
> host to track writes at the block layer. This has the merit of working 
> for containers, and potentially bare-metal instance deployments. But it 
> is not guaranteed to work for instances if the iSCSI volumes are 
> directly attached to QEMU.
>
> 2)  We could use the QEMU support for incremental backup (the first 
> bits landed in QEMU 2.4). This has the merit of working with any 
> storage, but only for virtual machines under QEMU.
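>
> A minimal QMP sketch of (2), assuming the primitives that landed in 
> QEMU 2.4 and a hypothetical drive name "drive0" (not something Nova 
> wires up for you today):
>
>     { "execute": "block-dirty-bitmap-add",
>       "arguments": { "node": "drive0", "name": "bitmap0" } }
>     { "execute": "drive-backup",
>       "arguments": { "device": "drive0", "sync": "incremental",
>                      "bitmap": "bitmap0", "target": "inc0.qcow2",
>                      "format": "qcow2" } }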
>
> As our customers are (so far) only asking about virtual machine 
> backup, I long ago settled on (2) as the most promising.
>
> What I cannot clearly determine is where (1) will fail. Will all iSCSI 
> volumes connected to QEMU instances eventually become directly connected?
>
>
> Xiao's unanswered query (below) presents another question. Is this a 
> site choice? Could I require my customers to configure their OpenStack 
> clouds to always route iSCSI connections through the nova-compute 
> host? (I am not a fan of this approach, but I have to ask.)
>
> To answer Xiao's question, can a site configure their cloud to 
> *always* directly connect iSCSI volumes to QEMU?
>
>
>
> On Tue, Feb 16, 2016 at 4:54 AM, Xiao Ma (xima2) <xima2 at cisco.com> wrote:
>
>     Hi, All
>
>     I want to make qemu communicate with the iSCSI target using
>     libiscsi directly, so I followed
>     https://review.openstack.org/#/c/135854/ and added
>
>         volume_drivers = iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver
>
>     to nova.conf, then restarted the nova and cinder services, but the
>     volume configuration of the vm is still as below:
>
>         <disk type='block' device='disk'>
>           <driver name='qemu' type='raw' cache='none'/>
>           <source
>     dev='/dev/disk/by-path/ip-10.75.195.205:3260-iscsi-iqn.2010-10.org.openstack:volume-076bb429-67fd-4c0c-9ddf-0dc7621a975a-lun-0'/>
>           <target dev='vdb' bus='virtio'/>
>     <serial>076bb429-67fd-4c0c-9ddf-0dc7621a975a</serial>
>           <address type='pci' domain='0x0000' bus='0x00' slot='0x06'
>     function='0x0'/>
>         </disk>
>
>
>     I use CentOS 7 and the Liberty version of OpenStack.
>     Could anybody tell me how I can achieve this?
>
>
>     Thanks.
