Hi Ian and VIF-Plugging crew,

Some more comments inline.

On 7 November 2012 16:36, Ian Wells <ijw.ubuntu@cack.org.uk> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On 7 November 2012 10:52, Salvatore Orlando <<a href="mailto:sorlando@nicira.com">sorlando@nicira.com</a>> wrote:<br>
> I have been following this thread, and I agree with the need of allowing<br>
> Nova to access information about internals of the Quantum plugin so that<br>
> it's allowed to plug interfaces using the appropriate driver.<br>
<br>
</div>I don't agree. I don't want to pass 'the details of the network' to<br>
nova. In fact, I want to get further away from that than we are now -<br>
I really don't want code in the nova project to be tinkering with<br>
networking in any significant way. I want to pass a realised network<br>
endpoint to nova - see below.<br></blockquote><div><br></div><div>I actually don't like this interaction between nova and Quantum either, as I've clarified later in this email.</div><div>Especially I don't like the part when details concerning the plugin are exposed to other services. </div>
>> However, I am now reading that there might be use cases in which Nova
>> pushes information back into Quantum concerning the way a VIF has been
>> plugged. I am failing to envision such a use case, and it would be
>> great if you could shed some light on it.
>
> The concept I had in mind is that, for instance, in the case where
> you're attaching a network to a libvirt VM, you require a bridge
> interface to put into the libvirt.xml file. You request that Quantum
> create the bridge interface. Quantum (rather than, at present, the
> Nova plugging driver) creates the bridge and returns its name to Nova.
> Nova provides that to the hypervisor driver for the new VM to be
> attached to.

Awesome. That's my vision too. Nova just needs to know where to plug a
VIF; it does not have to deal with the details of how to set up
connectivity for that VIF.
Kyle had a good point concerning PCI passthrough and similar situations.
My thinking is that in that case you can let the Quantum plugin manage
the Virtual Functions on the host and then just pass to Nova the one
that was selected for a given VM.
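To make this concrete, here is a minimal sketch of what the Nova side
could reduce to under this model. Everything below is hypothetical --
create_attachment() and the keys of the returned dict are invented for
illustration, not part of any existing Quantum API:

    # Hypothetical sketch: Nova only consumes a realised endpoint.
    def plug_vif(quantum, port_id, host):
        # Quantum sets up the endpoint on this host (bridge, VF, ...)
        # and tells Nova what it made.
        attachment = quantum.create_attachment(port_id, host=host)

        if attachment['type'] == 'bridge':
            # Software endpoint: the bridge already exists; Nova just
            # drops its name into the libvirt domain XML.
            return ('<interface type="bridge">'
                    '<source bridge="%s"/>'
                    '</interface>' % attachment['bridge_name'])
        elif attachment['type'] == 'pci':
            # Passthrough endpoint: Quantum confirms (or allocates) the
            # device, and Nova hands the address to the hypervisor.
            return attachment['pci_address']
        raise ValueError('unknown attachment type: %s'
                         % attachment['type'])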
> There has to be a bit of negotiation, because not all hypervisors are
> created equal and so endpoints will differ between them; there's not a
> single attachment point type that you would return. (E.g. for PCI
> passthrough I might request that a specific PCI device's port be wired
> up, and the returned object would be just a reiteration of the PCI
> device; or I might request that a virtual interface in a virtualisable
> NIC be set up and be passed back the PCI details of the vNIC
> allocated; and for completely software-based endpoints, while libvirt
> likes bridges, other hypervisors have other preferences.)

I think what you write here makes sense.
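One possible shape for that negotiation (names invented for
illustration): Nova declares which endpoint types the hypervisor in use
can consume, and Quantum realises one of them:

    # Hypothetical sketch; none of these names exist today.
    HYPERVISOR_ENDPOINT_TYPES = {
        'libvirt': ['bridge', 'pci'],  # bridges preferred, passthrough ok
        'xenapi': ['vswitch-port'],
        'hyperv': ['vswitch-port'],
    }

    def request_endpoint(quantum, port_id, host, hypervisor):
        # Quantum picks the first offered type its plugin can realise
        # on this host, sets it up, and returns the concrete details.
        return quantum.create_attachment(
            port_id, host=host,
            accepts=HYPERVISOR_ENDPOINT_TYPES[hypervisor])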
<div class="im"><br>
> I am interested in this because one of Quantum's<br>
> goals was to provide a clean separation between compute and networking<br>
> services. It seems that entanglement between the two it's now crawling back.<br>
<br>
</div>Now, I think they're currently about as entangled as they could<br>
possibly be - some of the networking happens in Quantum but a big<br>
messy chunk also happens in the plugging drivers - which are specific<br>
to both the nature of the Quantum plugin in use (or alternatively<br>
nova-network) and to the hypervisor. </blockquote><div><br></div><div>The VIF drivers have always bothered me a little. Indeed my perfect world is a world without them.</div><div>Just for the sake of precision, they're not really specific to the plugin, as several plugins use the same drivers, but they're definitely specific to the hypervisor.</div>
> If we did the above, then the interface to Quantum moves to encompass
> all of the administration of the networking within the host kernel
> (for virtual interfaces) and the physical networking (if the interface
> is a PCI port).

I personally agree with this view. I have said in the past that in a
host there is a compute part, which should be managed by Nova, and a
network part, which should be managed by Quantum. However, opinions
vary on this point, and I'm pretty sure there are arguments for keeping
the whole host under the control of nova-compute. Still, it would be a
shame if this discussion held up progress on this front.
> The huge advantage of this is that we can test all of this networking
> in Quantum; at the moment, the cross-project nature of the networking
> driver means that only system tests combining Nova and Quantum really
> give it a workout -- and because of the number of VIF plugging drivers
> around, many of the possible configurations don't get the
> comprehensive testing they need.

Yes, and it would also mean progress on decoupling Nova from Quantum.
<div class="im"><br>
> Personally, I would let Quantum figure out binding information once the VIF<br>
> is plugged, and keep the VIF plugging API as GET only.<br>
<br>
</div>I prefer that Quantum is defined as producing an endpoint to which the<br>
VM can then be attached. Otherwise the question in my mind is, what<br>
precisely are you passing from Quantum to Nova? </blockquote><div><br></div><div>I don't know, honestly. Previously in this thread there was an argument that Nova should send data to Quantum with a PUT or POST request. </div>
<div>You're asking the same question I asked, and I could not get a straight answer (or an answer I could understand).</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
> A woolly description of a network -- the nature of which is still
> entirely dependent on the plugin that Quantum happens to be using, so
> you need a driver compatible with that network type? I think an
> endpoint would be easier to describe, and there would be fewer types
> of attachment point.

Definitely. We started this discussion about 1.5 years ago, and then it
unfortunately got buried under a ton of other things to do. I am in
favour of exposing an endpoint which produces VIF plugging info to
Nova, in a way that reduces the VIF drivers to nothing, or to very
simple functions -- see the sketch below.

One counter-argument is that you would be making a remote call, which
might then involve further remote invocations (REST, message queues, or
otherwise) for something which could be handled entirely by logic on
the host. So that's something we need to plan carefully.
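As an illustration of how little would be left, and assuming the
hypothetical attachment dict from the sketch above, a libvirt VIF
driver would shrink to roughly this (invented names, not existing Nova
code):

    # Sketch only: a VIF driver with all the plugging logic removed.
    # It merely translates the endpoint Quantum realised into
    # hypervisor config; plug() and unplug() become no-ops.
    class QuantumEndpointVIFDriver(object):

        def plug(self, instance, vif):
            pass  # nothing to do: the endpoint already exists

        def unplug(self, instance, vif):
            pass  # teardown is Quantum's job too

        def get_config(self, instance, vif):
            attachment = vif['attachment']  # as returned by Quantum
            return {'type': 'bridge',
                    'bridge': attachment['bridge_name']}

The extra round trip is real, though; one mitigation would be returning
the attachment details along with the port data Nova already fetches,
so that no additional call is needed on the host.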
>> While VIF creation is clearly a task which pertains to the compute
>> service, VIF plugging is arguably borderline, and hence it's more
>> than understandable that there are different valuable approaches and
>> solutions.
>
> Absolutely. So there are many solutions that will work. I believe we
> should be evaluating them on simplicity and flexibility of interface
> and how well we can test them.
>
> --
> Ian.
>> On 7 November 2012 10:08, Gary Kotton <gkotton@redhat.com> wrote:
>>>
>>> On 11/06/2012 11:58 PM, Ian Wells wrote:
>>>>
>>>> On 6 November 2012 19:39, Gary Kotton <gkotton@redhat.com> wrote:
>>>>>
>>>>> GET /network-implementation-details/<net-id>
>>>>
>>>> A minor quibble, but these commands will probably change the state
>>>> on the host that you're getting an attachment for (or, at least,
>>>> they would the way I would do it -- you make the call and, e.g., a
>>>> bridge pops up, and Nova knows where to find it from the return of
>>>> the call). If that's the case, it is a POST rather than a GET, as
>>>> you're creating something.
>>>
>>> I need to update the blueprint. The idea in general is to have
>>> something like
>>>
>>> GET /port/<id>/binding
>>> and
>>> PUT /port/<id>/binding/<something>
>>>
>>> This will enable the information to be passed to Quantum.
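For clarity, I read this as an exchange along the following lines -- a
sketch only, with an invented payload, since the blueprint is still to
be updated:

    import json
    import requests  # any HTTP client would do; this is just a sketch

    QUANTUM = 'http://quantum-server:9696/v2.0'
    PORT = 'PORT-UUID'  # placeholder

    # Nova fetches whatever binding info Quantum already has:
    resp = requests.get('%s/ports/%s/binding' % (QUANTUM, PORT))
    binding = json.loads(resp.text)

    # ...and, with the PUT variant, reports back what it actually did
    # with the VIF, e.g. which bridge it ended up wired to:
    requests.put('%s/ports/%s/binding' % (QUANTUM, PORT),
                 headers={'Content-Type': 'application/json'},
                 data=json.dumps({'binding': {'vif_type': 'bridge',
                                              'bridge_name': 'br-int'}}))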
>>>
>>>> I'm sure you could do it the other way around (GET the details of
>>>> how to connect to the network and then do the work in Nova to make
>>>> an endpoint that the hypervisor could use), but I prefer that the
>>>> work of buggering about with the networking remain entirely within
>>>> Quantum. This seems eminently sensible for PCI passthrough in
>>>> particular, where the call would hand over the details of the card
>>>> to be attached and return that it had been attached -- versus
>>>> bridge creation, where you'd probably say 'give me a bridge' and be
>>>> told the details of the arbitrarily named bridge you'd just had
>>>> created.
>>>
>>> I would hope that the above PUT command enables Nova to provide this
>>> information to Quantum.
>>>
>>> Each plugin has its own way of allocating and managing the
>>> resources. Some may do it via agents, others may do it directly in
>>> Nova. It is all debatable whether this is good or bad. At this stage
>>> I would like to provide an API that ensures we have our bases
>>> covered, both for the interim period and for the long run.
>>>
>>>> The options seem to be:
>>>> - be explicit about which port we're attaching (and, presumably,
>>>>   that a port can only be attached once)
>>>> - implicitly create a port iff you attach to a network, and use an
>>>>   existing port otherwise
>>>> - drop ports altogether, or replace them with the attachments that
>>>>   we're talking about right now (get a 'realised' attachment point
>>>>   and you have effectively added a port to the network, after all).