On 06/23/2016 Daniel Berrange wrote (lost attribution in thread):
> Our long term goal is that 100% of all network storage will be connected
> to directly by QEMU. We already have the ability to partially do this with
> iSCSI, but it is lacking support for multipath. As & when that gap is
> addressed though, we'll stop using the host OS for any iSCSI stuff.
>
> So if you're requiring access to host iSCSI volumes, it'll work in the
> short-medium term, but in the medium-long term we're not going to use
> that so plan accordingly.

On 06/23/2016 10:09 AM, Walter A. Boring IV wrote:
> We regularly fix issues with iSCSI attaches in the release cycles of OpenStack,
> because it's all done in python using existing linux packages. How often are QEMU
> releases done and upgraded on customer deployments vs. python packages (os-brick)?
>
> I don't see a compelling reason for re-implementing the wheel,
> and it seems like a major step backwards.

On Thu, Jun 23, 2016 at 12:07:43PM -0600, Chris Friesen wrote:
> This is an interesting point.
>
> Unless there's a significant performance benefit to connecting
> directly from qemu, it seems to me that we would want to leverage
> the existing work done by the kernel and other "standard" iSCSI
> initiators.

On Thu, Jun 23, 2016 at 1:28 PM, Sean McGinnis <sean.mcginnis@gmx.com> wrote:
> I'm curious to find out this as well. Is this for a performance gain? If
> so, do we have any metrics showing that gain is significant enough to
> warrant making a change like this?
>
> The host OS is still going to be involved. AFAIK, this just cuts out the
> software iSCSI initiator from the picture. So we would be moving from a
> piece of software dedicated to one specific function to a different piece
> of software whose main reason for existence has nothing to do with I/O
> path management.
>
> I'm not saying I'm completely opposed to this. If there is a reason for
> doing it then it could be worth it. But so far I haven't seen anything
> explaining why this would be better than what we have today.


First, I have not taken any measurements, so please ignore everything I say. :)

Very generally, if you take out unnecessary layers, you can often improve performance and reliability. Not in every case, but often.

Volume connections routed through the Linux kernel *might* lose performance from the extra layer (measurements are needed), and they have to be managed. That last point is easy to underestimate: Nova has to manage Linux's knowledge of volume connections, even though, in the strictest sense, the nova-compute host's Linux does not *need* to know about volumes attached to Nova instances. The hairiest part of the problem: what to do when the nova-compute host's table of attached volumes gets out of sync? My guess is there are error cases in this area that are not yet well handled in Nova, and that Nova could be somewhat simpler if all volumes were attached directly by QEMU. (Rough sketches of the two attach paths, and of an out-of-sync check, are at the end of this mail.)

(I am cheating a bit in mentioning the out-of-sync case, as I got bitten by it a couple of times in testing. It happens.)

But ... as mentioned earlier, I suspect you cannot get to 100% direct-to-QEMU if there is specialized hardware that has to tie into the nova-compute host's Linux. It seems unlikely you would get consensus there, as this impacts major vendors. Which means you have to keep managing the host's map of volumes, which means you cannot simplify Nova. (If someone knows how to use the specialized hardware with less of a footprint in the host Linux, this answer could change.)

Where this will land, I do not know. I do not know the performance numbers.

Can OpenStack allow for specialized hardware without routing through the host Linux? (Probably not, but I would be happy to be wrong.)

And again, as an outsider, I could be wrong about everything. :)
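
In case a concrete example helps frame the "which layer owns the connection" question: below is a rough, untested sketch in Python (with made-up portal/IQN/LUN values) of the two libvirt disk definitions being discussed. The first leans on the host's initiator and hands QEMU a block device; the second hands QEMU the iSCSI coordinates and lets its built-in libiscsi initiator do the login.

```python
# Sketch only: I have not run this against a real Cinder backend. The portal,
# IQN, and LUN values below are made up for illustration.

def disk_xml_via_host(portal, iqn, lun):
    """Today's path: the host's iSCSI initiator (open-iscsi, driven by
    os-brick) logs into the target, udev creates a block device, and the
    guest disk just points at that device node."""
    dev = "/dev/disk/by-path/ip-{portal}-iscsi-{iqn}-lun-{lun}".format(
        portal=portal, iqn=iqn, lun=lun)
    return """
  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='{dev}'/>
    <target dev='vdb' bus='virtio'/>
  </disk>""".format(dev=dev)


def disk_xml_direct_qemu(portal, iqn, lun):
    """The proposed path: no host-side login at all. QEMU's built-in
    initiator (libiscsi) speaks iSCSI itself, so the host never sees a
    block device for the volume."""
    host, port = portal.split(":")
    return """
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source protocol='iscsi' name='{iqn}/{lun}'>
      <host name='{host}' port='{port}'/>
    </source>
    <target dev='vdb' bus='virtio'/>
  </disk>""".format(iqn=iqn, lun=lun, host=host, port=port)


if __name__ == "__main__":
    # Entirely made-up example values.
    portal = "192.168.0.10:3260"
    iqn = "iqn.2010-10.org.openstack:volume-0000002a"
    lun = 1
    print(disk_xml_via_host(portal, iqn, lun))
    print(disk_xml_direct_qemu(portal, iqn, lun))
```

The operational difference Sean is pointing at is visible right in the XML: in the first form the host carries state (a session plus a device node) that somebody has to create, track, and tear down; in the second form that state lives inside the QEMU process.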
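
And to make the "table of attached volumes gets out of sync" worry concrete, here is the kind of consistency check I have ended up hand-rolling while testing. It is a sketch, not Nova code: the expected_iqns set stands in for whatever Nova/Cinder believe should be attached to this host, and the iscsiadm output parsing is only best-effort.

```python
# Sketch only: a hand-rolled sanity check, not anything Nova actually ships.
import subprocess


def host_session_iqns():
    """Ask open-iscsi which targets the host is currently logged into."""
    try:
        out = subprocess.check_output(["iscsiadm", "-m", "session"])
    except subprocess.CalledProcessError:
        # iscsiadm exits non-zero when there are no sessions at all.
        return set()
    iqns = set()
    for line in out.decode().splitlines():
        # Typical line:
        # "tcp: [1] 192.168.0.10:3260,1 iqn.2010-10...:volume-2a (non-flash)"
        parts = line.split()
        if len(parts) >= 4:
            iqns.add(parts[3])
    return iqns


def report_drift(expected_iqns):
    actual = host_session_iqns()
    stale = actual - expected_iqns      # host logged in, Nova thinks detached
    missing = expected_iqns - actual    # Nova thinks attached, no host session
    return stale, missing


if __name__ == "__main__":
    # Made-up expectation for illustration.
    expected = {"iqn.2010-10.org.openstack:volume-0000002a"}
    stale, missing = report_drift(expected)
    print("stale sessions:", stale)
    print("missing sessions:", missing)
```

Every volume attached through the host adds to the set this check has to reason about; volumes attached directly by QEMU simply would not appear here, which is the simplification I was gesturing at, but only if everything can go that way.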