<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<br>
<blockquote cite="mid:20160624081923.GE25240@redhat.com" type="cite">
<blockquote type="cite">
<pre wrap="">Does QEMU support hardware initiators? iSER?
</pre>
</blockquote>
<pre wrap="">
No, this is only for the case where you're doing pure software based
iSCSI client connections. If we're relying on local hardware, that's
a different story.
</pre>
<blockquote type="cite">
<pre wrap="">
We regularly fix issues with iSCSI attaches in the release cycles of
OpenStack,
because it's all done in python using existing linux packages. How often
</pre>
</blockquote>
<pre wrap="">
This is a great example of the benefit that an in-QEMU client gives us. The
Linux iSCSI client tools have proved very unreliable in use by OpenStack.
This is a reflection of the underlying architectural approach: we have individual
resources needed by distinct VMs, but we're having to manage them as a host
wide resource, and that's creating unnecessary complexity for us and having a
poor effect on our reliability overall.</pre>
</blockquote>
I've been doing more digging and research into this,<br>
and it seems that Canonical removed libiscsi support from qemu due to
security problems<br>
during the 14.04 LTS release cycle.<br>
<br>
Trying to fire up a new VM manually with qemu, attaching an iSCSI
disk via<br>
the documented mechanism, ends up with qemu complaining that it can't<br>
open the disk: 'Unknown protocol'.<br>
<br>
qemu-system-x86_64 -drive
file=iscsi://10.52.1.11/iqn.2000-05.com.3pardata:20810002ac00383d/0
-iscsi initiator-name=iqn.walt-qemu-initiator<br>
qemu-system-x86_64: -drive
file=iscsi://10.52.1.11/iqn.2000-05.com.3pardata:20810002ac00383d/0:
could not open disk image
iscsi://10.52.1.11/iqn.2000-05.com.3pardata:20810002ac00383d/0:
Unknown protocol<br>
<br>
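A quick sanity check for whether the packaged binary was built with the
iscsi block driver at all (an assumption on my part: libiscsi gets linked
dynamically when it is enabled) is:<br>
<br>
ldd $(which qemu-system-x86_64) | grep libiscsi<br>
<br>
If that prints nothing, the 'Unknown protocol' error above is exactly what
you'd expect to see.<br>
<br>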
There was a bug filed against qemu back in 2014, and it was marked as
Won't Fix due to security issues:<br>
<a class="moz-txt-link-freetext" href="https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1271573">https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1271573</a><br>
<br>
That looks like it has since been fixed here:<br>
<a class="moz-txt-link-freetext" href="https://bugs.launchpad.net/ubuntu/+source/libiscsi/+bug/1271653">https://bugs.launchpad.net/ubuntu/+source/libiscsi/+bug/1271653</a><br>
But that's only Xenial (16.04) support and won't be in the 14.x tree.<br>
<br>
<br>
I have also confirmed that
nova.virt.libvirt.volume.net.LibvirtNetVolumeDriver<br>
fails for iSCSI for exactly the same reason against nova master.<br>
<br>
I modified nova/virt/libvirt/driver.py, changed the iscsi entry to
point to LibvirtNetVolumeDriver,<br>
and tried to attach an iSCSI volume. It failed, and the libvirtd log
showed the unknown protocol error.<br>
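<br>
For reference, the edit was roughly the following sketch of the volume
driver mapping in driver.py (a sketch only; surrounding entries are
trimmed and the exact list on master may differ a bit):<br>
<pre wrap="">
# nova/virt/libvirt/driver.py -- sketch of the change, other entries omitted;
# treat the neighbouring entries as approximate.
libvirt_volume_drivers = [
    # was: 'iscsi=nova.virt.libvirt.volume.iscsi.LibvirtISCSIVolumeDriver',
    'iscsi=nova.virt.libvirt.volume.net.LibvirtNetVolumeDriver',
    'rbd=nova.virt.libvirt.volume.net.LibvirtNetVolumeDriver',
    # ... remaining entries unchanged ...
]
</pre>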
<br>
The n-cpu.log entry:<br>
2016-06-24 08:09:21.555 8891 DEBUG nova.virt.libvirt.guest
[req-46954106-c728-43ba-b40a-5b91a1639610 admin admin] attach device
xml: <disk type="network" device="disk"><br>
<driver name="qemu" type="raw" cache="none"/><br>
<source protocol="iscsi"
name="iqn.2000-05.com.3pardata:20810002ac00383d/0"><br>
<host name="10.52.1.11" port="3260"/><br>
</source><br>
<target bus="virtio" dev="vdb"/><br>
<serial>a1d0c85e-d6e6-424f-9ca7-76ecd0ce45fb</serial><br>
</disk><br>
attach_device /opt/stack/nova/nova/virt/libvirt/guest.py:251<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[req-46954106-c728-43ba-b40a-5b91a1639610 admin admin] [instance:
74092b75-dc20-47e5-9127-c63367d05b29] Failed to attach volume at
mountpoint: /dev/vdb<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] Traceback (most
recent call last):<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] File
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1160, in
attach_volume<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29]
guest.attach_device(conf, persistent=True, live=live)<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] File
"/opt/stack/nova/nova/virt/libvirt/guest.py", line 252, in
attach_device<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29]
self._domain.attachDeviceFlags(device_xml, flags=flags)<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] File
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line
186, in doit<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] result =
proxy_call(self._autowrap, f, *args, **kwargs)<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] File
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line
144, in proxy_call<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] rv = execute(f,
*args, **kwargs)<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] File
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line
125, in execute<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] six.reraise(c,
e, tb)<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] File
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83,
in tworker<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] rv =
meth(*args, **kwargs)<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] File
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 517, in
attachDeviceFlags<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] if ret == -1:
raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)<br>
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver
[instance: 74092b75-dc20-47e5-9127-c63367d05b29] libvirtError:
operation failed: open disk image file failed<br>
<br>
<br>
The <b>/var/log/libvirtd.log</b> entry:<br>
<br>
2016-06-24 15:09:21.572+0000: 21000: debug :
qemuMonitorIOProcess:396 : QEMU_MONITOR_IO_PROCESS:
mon=0x7fd4f000c920 buf={"return": "could not open disk image
iscsi://10.52.1.11:3260/iqn.2000-05.com.3pardata%3A20810002ac00383d/0:
Unknown protocol\r\n", "id": "libvirt-18"}^M<br>
len=153<br>
<br>
<br>
<br>
<br>
So the argument that the Linux iSCSI client tools have proven unreliable
also holds true for libiscsi.<br>
This really isn't a win.<br>
<br>
<br>
<br>
As a side note:<br>
I am working on a performance report comparing bare metal iSCSI, the host
attach passed through to virsh<br>
(like we do in Nova today), and qemu's built-in iSCSI support. My
preliminary results show that libiscsi<br>
and the host attach passed through to virsh deliver about the same
relative IO performance, but both<br>
come in at roughly 50% of bare metal iSCSI.<br>
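<br>
(To give an idea of what the comparison looks like, one way to do it is to
run the same IO job against the disk in each configuration, e.g. an fio
invocation along the lines below. Treat it as illustrative only; the
report will spell out the exact jobs and parameters used.)<br>
<br>
fio --name=randread --filename=/dev/vdb --rw=randread --bs=4k
--ioengine=libaio --direct=1 --runtime=60 --time_based<br>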
<br>
Walt<br>
<br>
<blockquote cite="mid:20160624081923.GE25240@redhat.com" type="cite">
<blockquote type="cite">
<pre wrap="">are QEMU
releases done and upgraded on customer deployments vs. python packages
(os-brick)?
</pre>
</blockquote>
<pre wrap="">
We're removing the entire layer of instability by removing the need to
deal with any command line tools, and thus greatly simplifying our
setup on compute nodes. No matter what we might do in os-brick, it'll
never give us a simple or reliable system - we're just papering over
the flaws by doing stuff like blindly re-trying iscsi commands upon
failure.

Regards,
Daniel
</pre>
</blockquote>
<br>
</body>
</html>