[openstack-dev] [Quantum/Nova] Improving VIF plugin

Salvatore Orlando sorlando at nicira.com
Wed Nov 7 17:42:52 UTC 2012


Hi Ian and VIF-Plugging crew,

Some more comments inline

On 7 November 2012 16:36, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:

> On 7 November 2012 10:52, Salvatore Orlando <sorlando at nicira.com> wrote:
> > I have been following this thread, and I agree with the need to allow
> > Nova to access information about the internals of the Quantum plugin so
> > that it can plug interfaces using the appropriate driver.
>
> I don't agree.  I don't want to pass 'the details of the network' to
> nova.  In fact, I want to get further away from that than we are now -
> I really don't want code in the nova project to be tinkering with
> networking in any significant way.  I want to pass a realised network
> endpoint to nova - see below.
>

I actually don't like this interaction between nova and Quantum either, as
I clarify later in this email.
In particular, I don't like the part where details concerning the plugin are
exposed to other services.


>
> > However, I am now reading that there might be use cases in which nova
> > pushes information back into Quantum concerning the way a VIF has been
> > plugged. I am failing to envision such a use case, and it would be great
> > if you could shed some light on it.
>
> The concept I had in mind is that, for instance, in the case where
> you're attaching a network to a libvirt VM, you require
> a bridge interface to put into the libvirt.xml file.  You request that
> quantum create the bridge interface.  Quantum (rather than, at
> present, the nova plugging driver) creates the bridge and returns its
> name to nova.  Nova provides that to the hypervisor driver for the new
> VM to be attached to.
>

Awesome. That's my vision too. Nova just needs to know where to plug a VIF.
It does not have to deal with details concerning how to set up connectivity
for that VIF.
Kyle had a good point concerning PCI passthrough or similar situations. My
thinking is that in that case you can let the Quantum plugin manage the
Virtual Functions on the host and then just pass to Nova the one which was
selected for a given VM.
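
To make this concrete, here is a purely illustrative sketch (in Python, with
made-up field names - nothing below exists today) of the two kinds of binding
blob Quantum could hand back to Nova:

    # Hypothetical binding info returned by Quantum for a port.
    # Nova would only read 'vif_type' and the matching detail; it would not
    # need to know how the backend wired the network up.
    binding_for_bridge_case = {
        'vif_type': 'bridge',           # attach the VM to an existing bridge
        'bridge_name': 'qbr-3fa2c41',   # bridge created and owned by Quantum
    }

    binding_for_pci_case = {
        'vif_type': 'pci_passthrough',  # hand a virtual function to the VM
        'pci_address': '0000:03:10.1',  # VF selected by the Quantum plugin
    }

In the first case Nova just drops the bridge name into the guest definition;
in the second it attaches the device and is done.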



>
> There has to be a bit of negotiation because not all hypervisors are
> created equal and so endpoints will differ between them, so there's
> not a single attachment point type that you would return (e.g. PCI
> passthrough - I might request a specific PCI device's port be wired
> up, and the returned object would be just a re-iteration of the PCI
> device; or I might request that a virtual interface in a virtualisable
> NIC be set up and be passed back the PCI details of the vNIC
> allocated; and for completely software-based endpoints, while libvirt
> likes bridges, other hypervisors have other preferences).
>

I think what you write here makes sense.
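
One way the negotiation could work (again just a sketch; every name below is
invented): Nova states which attachment kinds its hypervisor driver can
consume, and Quantum answers with one of them, fully realised.

    # Hypothetical request from Nova when asking for an attachment point.
    attachment_request = {
        'port_id': 'port-uuid',
        'host': 'compute-12',
        # what this hypervisor driver knows how to plug
        'supported_attachment_types': ['bridge', 'pci_passthrough'],
    }

    # Hypothetical answer: Quantum picks a type it can realise on that host
    # and returns the concrete details Nova needs.
    attachment_response = {
        'attachment_type': 'bridge',
        'bridge_name': 'qbr-7d01a9e',
    }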

>
> > I am interested in this because one of Quantum's
> > goals was to provide a clean separation between compute and networking
> > services. It seems that entanglement between the two is now crawling
> > back.
>
> Now, I think they're currently about as entangled as they could
> possibly be - some of the networking happens in Quantum but a big
> messy chunk also happens in the plugging drivers - which are specific
> to both the nature of the Quantum plugin in use (or alternatively
> nova-network) and to the hypervisor.


The VIF drivers have always bothered me a little. Indeed my perfect world
is a world without them.
Just for the sake of precision, they're not really specific to the plugin,
as several plugins use the same drivers, but they're definitely specific to
the hypervisor.
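
To illustrate why they bother me: on the libvirt path the driver is the piece
that actually sets the bridge up before the guest boots, roughly along the
lines of the sketch below (heavily simplified, not the real nova code):

    import subprocess

    class SimplifiedBridgeVIFDriver(object):
        # Very rough approximation of what a hypervisor-specific VIF driver
        # does today; a real driver also checks whether the bridge already
        # exists, handles VLANs, and so on.
        def plug(self, instance, vif):
            bridge = vif['bridge_name']
            # Networking work happening inside Nova's tree:
            subprocess.check_call(['brctl', 'addbr', bridge])
            subprocess.check_call(['ip', 'link', 'set', bridge, 'up'])
            return bridge  # consumed by the libvirt XML generation code

All of that is networking logic, and in the model Ian describes it would move
behind the Quantum API.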


> If we did the above, then the
> interface to Quantum moves to encompass all of the administration of
> the networking within the host kernel (for virtual interfaces) and the
> physical networking (if the interface is a PCI port).
>

I personally agree with this view. I said in the past that in a host
there's a compute part, which should be managed by nova, and a network part
which should be managed by Quantum.
However, opinions vary on this point. I'm pretty sure that there are
arguments for keeping the whole host under the control of nova-compute.
Still, it would be a shame if this discussion held up progress on this
front.


>
> The huge advantage of this is that we can test all of this networking
> in Quantum; at the moment, the cross-project nature of the networking
> driver means that only system tests combining Nova and Quantum really
> give it a workout - and because of the number of VIF plugging drivers
> around, many of the possible configurations don't get the
> comprehensive testing they need.
>

Yes, and it would also be progress on decoupling nova from Quantum.


>
> > Personally, I would let Quantum figure out binding information once the
> > VIF is plugged, and keep the VIF plugging API as GET only.
>
> I prefer that Quantum is defined as producing an endpoint to which the
> VM can then be attached.  Otherwise the question in my mind is, what
> precisely are you passing from Quantum to Nova?


I don't know, honestly. Previously in this thread there was an argument
that Nova should send data to Quantum with a PUT or POST request.
You're asking the same question I asked, and I could not get a straight
answer (or an answer I could understand).


> A woolly description of a network - the nature of which is still entirely
> dependent on the
> plugin that Quantum happens to be using, so you need a driver
> compatible with that network type?  I think an endpoint would be
> easier to describe, and there would be fewer types of attachment
> point.
>

Definitely. We started this discussion about 1.5 years ago, and then it
unfortunately got buried under a ton of other stuff to do.
I am in favour of exposing an endpoint which produces VIF plugging info to
Nova, in a way that reduces VIF drivers to nothing or to very simple
functions.
One counter-argument is that you would make a remote call, which might then
involve more remote invocations (REST, message queues, or the like) for
something which could be handled entirely by logic on the host. So that's
something we need to plan carefully.
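
To give an idea of what that could look like (the URI and payload below are
invented for the sake of argument), the Nova-side logic could shrink to a
single lookup such as:

    import requests  # illustrative only; real code would go through the Quantum client

    def get_plugging_info(quantum_url, port_id, host, token):
        # One remote call per VIF at boot time: this is the cost we need to
        # weigh against keeping the logic local in a VIF driver.
        resp = requests.get(
            '%s/v2.0/ports/%s/binding' % (quantum_url, port_id),
            params={'host': host},
            headers={'X-Auth-Token': token})
        resp.raise_for_status()
        # e.g. {'vif_type': 'bridge', 'bridge_name': 'qbr-3fa2c41'}
        return resp.json()['binding']

with the per-hypervisor driver reduced to rendering that blob into the guest
definition.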


> > While VIF creation is clearly a task which pertains to the compute
> > service, VIF plugging is arguably borderline, and hence it's more than
> > understandable that there are different valuable approaches and
> > solutions.
>
> Absolutely.  So there are many solutions that will work.  I believe we
> should be evaluating them on simplicity and flexibility of interface
> and how well we can test them.
>

> --
> Ian.
>
> > On 7 November 2012 10:08, Gary Kotton <gkotton at redhat.com> wrote:
> >>
> >> On 11/06/2012 11:58 PM, Ian Wells wrote:
> >>>
> >>> On 6 November 2012 19:39, Gary Kotton<gkotton at redhat.com>  wrote:
> >>>>
> >>>> GET /network-implementation-details/<net-id>
> >>>
> >>> A minor quibble, but these commands will probably change the state on
> >>> the host that you're getting an attachment for (or, at least, they
> >>> would the way I would do it - you do the call, and e.g. a bridge pops
> >>> up and Nova knows where to find it by the return of the call).  If
> >>> that's the case, it is a POST rather than a GET as you're creating
> >>> something.
> >>
> >>
> >> I need to update the blueprint. The idea in general is to have something
> >> like
> >>
> >> GET /port/<id>/binding
> >> and
> >> PUT /port/<id>/binding/<something>
> >>
> >> This will enable the information to be passed to Quantum.
> >>
> >>
> >>>
> >>> I'm sure you could do it the other way around (GET the details of how
> >>> to connect to the network and then do the work in Nova to make an
> >>> endpoint that the hypervisor could use) but I prefer that the work of
> >>> buggering about with the networking remained entirely within Quantum.
> >>> This seems eminently sensible for PCI passthrough in particular, where
> >>> the call would hand over the details of the card to be attached and
> >>> return that it had been attached - versus bridge creation, where you'd
> >>> probably say 'give me a bridge' and be told the details of the
> >>> arbitrarily named bridge you'd just had created.
> >>
> >>
> >> I would hope that the above PUT command enables Nova to provide this
> >> information to Quantum.
> >>
> >> Each plugin has its own way of allocating and managing the resources.
> >> Some may be done via agents, others may be done directly in Nova. It is
> >> also debatable whether this is good or bad. At this stage I would like
> >> to provide an API that can ensure that we have our bases covered for
> >> the interim period and the long run.
> >>
> >>
> >>>
> >>> The options seem to be:
> >>>   - be explicit about which port we're attaching (and, presumably, that
> >>> a port can only be attached once)
> >>>   - implicitly create a port iff you attach to a network, use an
> >>> existing port otherwise
> >>>   - drop ports altogether, or replace them with these attachments that
> >>> we're talking about right now (get a 'realised' attachment point and
> >>> you have effectively added a port to the network, after all).
> >>>
> >>
> >>
> >> _______________________________________________
> >> OpenStack-dev mailing list
> >> OpenStack-dev at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
>

