<div dir="ltr"><div class="gmail_default" style="font-family:monospace,monospace"><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jun 24, 2016 at 2:19 AM, Daniel P. Berrange <span dir="ltr"><<a href="mailto:berrange@redhat.com" target="_blank">berrange@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><span class="">On Thu, Jun 23, 2016 at 09:09:44AM -0700, Walter A. Boring IV wrote:<br>
><br>
> volumes connected to QEMU instances eventually become directly connected?<br>
><br>
> > Our long term goal is that 100% of all network storage will be connected<br></span></blockquote><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">Oh, I didn't know this at all. Is this something Nova has been working on for a while? I'd love to hear more about the reasoning, the plan, etc. It would also be really neat to have an opportunity to participate.</div></div><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline"></div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><span class="">
> > to directly by QEMU. We already have the ability to partially do this with<br>
> > iSCSI, but it is lacking support for multipath. As & when that gap is<br>
> > addressed though, we'll stop using the host OS for any iSCSI stuff.<br></span></blockquote><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">Does anyone have any insight into how to make this work? I tried configuring this last week and it appears to be broken in a few places.</div></div>
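<div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">For anyone else poking at this, here is my (possibly wrong) mental model of what the end result should be. Today Nova hands libvirt a host block device that the host initiator has already logged in to, something like:</div></div>
<div class="gmail_default" style="font-family:monospace,monospace">
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <!-- device created on the host by the iscsiadm login -->
  <source dev='/dev/disk/by-path/ip-192.168.0.10:3260-iscsi-iqn.2016-06.com.example:target0-lun-1'/>
  <target dev='vdb' bus='virtio'/>
</disk>
</div>
<div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">whereas with the QEMU-native path the guest disk should become a libvirt network disk, and QEMU's built-in initiator does the login itself:</div></div>
<div class="gmail_default" style="font-family:monospace,monospace">
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <!-- no host-side device at all; QEMU speaks iSCSI to the portal directly -->
  <source protocol='iscsi' name='iqn.2016-06.com.example:target0/1'>
    <host name='192.168.0.10' port='3260'/>
  </source>
  <auth username='cinder'>
    <secret type='iscsi' usage='libvirtiscsi'/>
  </auth>
  <target dev='vdb' bus='virtio'/>
</disk>
</div>
<div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">(The portal, IQN, CHAP username and secret usage name are all made-up placeholders; note the LUN is appended to the target IQN in the network-disk form.)</div></div>
<div><div class="gmail_default" style="font-family:monospace,monospace;display:inline"><br></div></div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><span class="">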
> ><br>
> > So if you're requiring access to host iSCSI volumes, it'll work in the<br>
> > short-medium term, but in the medium-long term we're not going to use<br>
> > that so plan accordingly.<br>
><br>
> What is the benefit of this largely monolithic approach? It seems that<br>
> moving everything into QEMU is diametrically opposed to the unix model<br>
> itself and<br>
> is just a re-implementation of what already exists in the linux world<br>
> outside of QEMU.<br>
<br>
</span>There are many benefits to having it inside QEMU. First, it gives us<br>
improved isolation between VMs, because we can control the network<br>
I/O directly against the VM using cgroup resource controls. It gives<br>
us improved security, particularly in combination with LUKS encryption<br>
since the unencrypted block device is not directly visible / accessible<br>
to any other process. It gives us improved reliability / manageability,<br>
since we avoid having to spawn the iSCSI client tools, which have poor<br>
error reporting and have been frequent sources of instability in our<br></blockquote><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">True, the iscsi tools aren't the greatest.</div></div><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline"></div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
infrastructure (e.g. see how we have to blindly re-run the same command<br>
many times over because it randomly times out). It will give us improved<br>
I/O performance because of a shorter I/O path to get requests from QEMU<br>
out to the network.<br></blockquote><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">I'd love to hear more about the design and how it all comes together, particularly the performance info. Like I said, I tried to set it up against master, but it seems I'm either missing something in the config or it's broken.</div> </div>
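<div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">On the isolation and encryption points, my (possibly wrong) reading is that once QEMU owns the connection these become per-disk settings in the domain XML rather than host-wide state. A rough sketch of what I imagine that looks like; the throttling numbers and secret UUID are placeholders, and I haven't verified the LUKS element against a current libvirt:</div></div>
<div class="gmail_default" style="font-family:monospace,monospace">
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='iscsi' name='iqn.2016-06.com.example:target0/1'>
    <host name='192.168.0.10' port='3260'/>
  </source>
  <!-- throttling applied by QEMU to this one disk, not to a shared host device -->
  <iotune>
    <total_bytes_sec>52428800</total_bytes_sec>
    <total_iops_sec>1000</total_iops_sec>
  </iotune>
  <!-- decryption happens inside QEMU, so no cleartext block device appears on the host -->
  <encryption format='luks'>
    <secret type='passphrase' uuid='11111111-2222-3333-4444-555555555555'/>
  </encryption>
  <target dev='vdb' bus='virtio'/>
</disk>
</div>
<div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">If that's roughly right, it would explain the isolation and security arguments above.</div></div>
<div><div class="gmail_default" style="font-family:monospace,monospace;display:inline"><br></div></div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">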
<br>
NB, this is not just about iSCSI, the same is all true for RBD where<br>
we've also stopped using in-kernel RBD client and do it all in QEMU.<br>
<span class=""><br>
> Does QEMU support hardware initiators? iSER?</span></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><span class="">
<br>
</span>No, this is only for the case where you're doing pure software-based<br>
iSCSI client connections. If we're relying on local hardware, that's<br>
a different story.<br></blockquote><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">I'm a bit confused, then: what about the iser driver referenced in the commit message of this patch: <a href="https://review.openstack.org/#/c/135854/">https://review.openstack.org/#/c/135854/</a>? Is that the "different story" you mean?</div></div>
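<div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">Separately, on the RBD comparison above: for reference, my understanding is that the in-QEMU librbd path already looks roughly like this in the domain XML, with no kernel rbd device on the host at all (monitor addresses, pool/image name and secret UUID are placeholders):</div></div>
<div class="gmail_default" style="font-family:monospace,monospace">
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <!-- QEMU talks to the Ceph cluster directly via librbd -->
  <source protocol='rbd' name='volumes/volume-0001'>
    <host name='10.0.0.1' port='6789'/>
    <host name='10.0.0.2' port='6789'/>
  </source>
  <auth username='cinder'>
    <secret type='ceph' uuid='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'/>
  </auth>
  <target dev='vdc' bus='virtio'/>
</disk>
</div>
<div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">Presumably the iSCSI work is about getting to the same place for iSCSI-backed volumes.</div></div>
<div><div class="gmail_default" style="font-family:monospace,monospace;display:inline"><br></div></div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">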
<span class=""><br>
><br>
> We regularly fix issues with iSCSI attaches in the release cycles of<br>
> OpenStack,<br>
> because it's all done in python using existing linux packages. How often<br>
<br>
</span>This is a great example of the benefit that the in-QEMU client gives us. The<br>
Linux iSCSI client tools have proved very unreliable in use by OpenStack.<br>
This is a reflection of the architectural approach itself: we have individual<br>
resources needed by distinct VMs, but we're having to manage them as a<br>
host-wide resource, and that's creating unnecessary complexity for us and<br>
having a poor effect on our reliability overall.<br>
<span class=""><br>
> are QEMU<br>
> releases done and upgraded on customer deployments vs. python packages<br>
> (os-brick)?<br>
<br>
</span>We're removing an entire layer of instability by eliminating the need to<br>
deal with any command line tools, and thus greatly simplifying our<br>
setup on compute nodes. No matter what we might do in os-brick, it'll<br>
never give us a simple or reliable system; we're just papering over<br>
the flaws by doing stuff like blindly re-trying iSCSI commands upon<br>
failure.<br></blockquote><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">This all sounds like it could be a good direction to go in. I'd love to see more info on the plan, how it works, and how to test it out a bit. I didn't find a spec; are there any links, reviews or config info available?</div></div><div><div class="gmail_default" style="font-family:monospace,monospace;display:inline"><br></div></div><div><div class="gmail_default" style="font-family:monospace,monospace">I wish I'd caught this on the ML or on IRC; I would have loved to participate a bit.</div><br></div>
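<div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">In the meantime, for anyone who wants to poke at the QEMU side independently of Nova, I think the built-in initiator can be exercised directly with qemu-img, assuming the QEMU build includes libiscsi support and you have a reachable test target (the portal and IQN below are placeholders):</div></div>
<div class="gmail_default" style="font-family:monospace,monospace">
# inspect a LUN with QEMU's own initiator; no iscsiadm login on the host involved
qemu-img info iscsi://192.168.0.10:3260/iqn.2016-06.com.example:target0/1

# the same URL syntax should also work for a throwaway guest boot test
qemu-system-x86_64 -m 1024 \
    -drive file=iscsi://192.168.0.10:3260/iqn.2016-06.com.example:target0/1,format=raw,if=virtio
</div>
<div><div class="gmail_default" style="font-family:monospace,monospace;display:inline">If those work, at least the QEMU side is known good before digging into the Nova/libvirt wiring.</div></div>
<div><div class="gmail_default" style="font-family:monospace,monospace"><br></div></div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">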
<span class="im"><br>
Regards,<br>
Daniel<br>
--<br>
|: <a href="http://berrange.com" rel="noreferrer" target="_blank">http://berrange.com</a> -o- <a href="http://www.flickr.com/photos/dberrange/" rel="noreferrer" target="_blank">http://www.flickr.com/photos/dberrange/</a> :|<br>
|: <a href="http://libvirt.org" rel="noreferrer" target="_blank">http://libvirt.org</a> -o- <a href="http://virt-manager.org" rel="noreferrer" target="_blank">http://virt-manager.org</a> :|<br>
|: <a href="http://autobuild.org" rel="noreferrer" target="_blank">http://autobuild.org</a> -o- <a href="http://search.cpan.org/~danberr/" rel="noreferrer" target="_blank">http://search.cpan.org/~danberr/</a> :|<br>
|: <a href="http://entangle-photo.org" rel="noreferrer" target="_blank">http://entangle-photo.org</a> -o- <a href="http://live.gnome.org/gtk-vnc" rel="noreferrer" target="_blank">http://live.gnome.org/gtk-vnc</a> :|<br>
<br>
</span><div class=""><div class="h5">__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br></div></div>