[openstack-dev] [Quantum/Nova] Improving VIF plugin

Gary Kotton gkotton at redhat.com
Thu Nov 8 09:31:28 UTC 2012


Hi,
I am sorry that it has taken me so long to answer this mail
thread; I have been dealing with some blocking bugs in Quantum. All of
the points in the thread above and below are valid.
There are a number of problems with the VIF drivers. My understanding is 
that they should just return the network configuration, yet at the moment 
they also do a lot of network management. From the thread there seems 
to be agreement that Quantum should do the network management. As I 
suggested at the summit, this can be done either by the Quantum agent 
(for example, in the case of the linux bridge), by the Quantum plugin 
(in the case of a controller that does not require any local networking 
changes), or by a Quantum library. The presentation from the summit can 
be seen at 
https://docs.google.com/presentation/d/1vD2bc2WyqQzOLODjFrfLgg661WU0nteY7NEaVS4pv5g/edit.
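
To make the first point concrete, below is a minimal sketch of what a 
configuration-only VIF driver could look like. The class, method and 
field names are illustrative assumptions on my part, not the actual 
Nova driver interface:

# Hypothetical sketch: a VIF driver that only returns the network
# configuration and does no host networking management - that is
# left entirely to Quantum (agent, plugin, or library).

class ConfigOnlyVIFDriver(object):
    """Describes the device for the hypervisor; touches nothing on the host."""

    def plug(self, instance, network, mapping):
        # Only report what the hypervisor should attach to; creating
        # and wiring up the bridge is Quantum's job.
        return {'bridge_name': network['bridge'],
                'mac_address': mapping['mac'],
                'vif_uuid': mapping['vif_uuid']}

    def unplug(self, instance, network, mapping):
        # Nothing to tear down locally; Quantum owns the host state.
        pass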

My plan for addressing the problem is to do it in small steps instead of 
one major one.

The first step was to fix a critical bug with the linuxbridge plugin 
(still requires review love):
     - nova - https://review.openstack.org/#/c/14830/
     - quantum - https://review.openstack.org/#/c/14961/

The second step is to provide Quantum with an API which can be 
leveraged to enable the agents to perform the necessary networking updates.

After a number of discussions several different API ideas arose; the two 
relevant ones, in my opinion, are:
     - having an extension on ports which will manage the port bindings
     - having a port sub-attribute which manages the port bindings

A POST to the above will enable Nova to notify Quantum that it needs 
to do something. This may be relevant for some plugins and not 
for others. At the moment Nova updates the port with the device_id. A 
sketch of both API shapes follows below.
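
This is illustrative only - the endpoint paths, attribute names and 
tokens here are assumptions for the sake of the example, not an agreed 
Quantum API:

# Sketch of the two proposed API shapes, from Nova's side.
import requests

QUANTUM = 'http://quantum-server:9696/v2.0'
HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}  # placeholder auth token
PORT_ID = 'PORT_UUID'                      # placeholder port UUID

# Option 1: a port-bindings extension resource, created with a POST.
requests.post(QUANTUM + '/port-bindings', headers=HEADERS,
              json={'port_binding': {'port_id': PORT_ID,
                                     'host_id': 'compute-1'}})

# Option 2: a binding sub-attribute on the port itself, set with a PUT
# (analogous to how Nova already updates the port with the device_id).
requests.put(QUANTUM + '/ports/' + PORT_ID, headers=HEADERS,
             json={'port': {'binding': {'host_id': 'compute-1'},
                            'device_id': 'INSTANCE_UUID'}})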

I wanted to have an intermediate step prior to having Quantum do all 
of the networking. The rationale here is that at the moment we have 
something that works. The code will be backward compatible and will 
enable us all to evolve gradually from the current implementation to the 
new one. My idea here is to stick with the existing VIF implementations 
but to have one Quantum function that does the magic with the 
networking implementation received from the binding call. This will be a 
simple switch statement.
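
Something along these lines - the vif_type values and driver classes 
are stand-ins for the existing implementations, so treat this as a 
sketch rather than a final design:

# One entry point that switches on the networking implementation
# reported by the binding call and delegates to the existing drivers.

class BridgeVIFDriver(object):          # stands in for the existing
    def plug(self, instance, vif):      # linux bridge implementation
        print('bridge plug for %s' % instance)

class OVSVIFDriver(object):             # stands in for the existing
    def plug(self, instance, vif):      # Open vSwitch implementation
        print('ovs plug for %s' % instance)

_DRIVERS = {'bridge': BridgeVIFDriver(),
            'ovs': OVSVIFDriver()}

def plug_vif(vif_type, instance, vif):
    """The 'simple switch statement': pick the driver by vif_type."""
    try:
        driver = _DRIVERS[vif_type]
    except KeyError:
        raise ValueError('Unsupported vif_type: %s' % vif_type)
    return driver.plug(instance, vif)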

Once we have stabilized this, I think we can move forward to 
Quantum doing the relevant networking configuration - either internally 
or via a library. This will give all plugins time to provide the 
necessary support and give us time to understand and work out the best 
solution for Quantum. This step is even more complicated, as we need to 
take into account traditional networking, baremetal, etc.

That is my two cents. I would prefer to do it in a staged and monitored 
approach. I think that we all agree on the end result; it is just a 
matter of how we get there.

Thanks
Gary

On 11/07/2012 10:34 PM, Ian Wells wrote:
> On 7 November 2012 18:42, Salvatore Orlando <sorlando at nicira.com> wrote:
>> Kyle had a good point concerning PCI passthrough or similar situations. My
>> thinking is that in that case you can let the Quantum plugin manage the
>> Virtual Functions on the host and then just pass to Nova the one which was
>> selected for a given VM.
> Precisely.  There's also the more boring case where you dedicate a
> whole, unvirtualised NIC by tweaking the settings of the switch it's
> attached to, but it's a simpler case and can be handled in the same
> way.
>
>> The VIF drivers have always bothered me a little. Indeed my perfect world is
>> a world without them.
>> Just for the sake of precision, they're not really specific to the plugin,
>> as several plugins use the same drivers, but they're definitely specific to
>> the hypervisor.
> They're not *all* specific to a single Quantum plugin, but some are,
> and there's many for Quantum that don't work for nova-network (at
> that, slightly higher, level of plugin).
>
>>> If we did the above, then the
>>> interface to Quantum moves to encompass all of the administration of
>>> the networking within the host kernel (for virtual interfaces) and the
>>> physical networking (if the interface is a PCI port).
>> I personally agree with this view. I said in the past that in a host there's
>> a compute part, which should be managed by nova, and a network part which
>> should be managed by Quantum.
> The other thing about this, though whether we can make use of it I'm
> not sure, is that for something like router insertion it's another
> separate client of L2 Quantum at the same level as Nova.   I know this
> is all currently part of the greater Quantum, but if you see it as two
> levels - the straightforward L2 plugging and the more exciting L3
> features - then we almost have an L2 API here.
>
>>>> Personally, I would let Quantum figure out binding information once the
>>>> VIF
>>>> is plugged, and keep the VIF plugging API as GET only.
>>> I prefer that Quantum is defined as producing an endpoint to which the
>>> VM can then be attached.  Otherwise the question in my mind is, what
>>> precisely are you passing from Quantum to Nova?
>>
>> I don't know, honestly. Previously in this thread there was an argument that
>> Nova should send data to Quantum with a PUT or POST request.
>> You're asking the same question I asked, and I could not get a straight
>> answer (or an answer I could understand).
> I think it's a POST - simply because state is being created in Quantum
> (in as much as 'there is now a bridge on this hypervisor' is a form of
> state).  You could DELETE to clear the attachment up when you've done
> with it.
>
>> One counter argument is that you would make a remote call, with might then
>> involve more remote invocations (REST, message queues, or else) for
>> something which could be entirely handled by logic on the host. So that's
>> something we need to plan carefully.
> Implementing the above idea, I imagine we would make an API call to
> Quantum (which is good from a separation perspective) followed by
> whatever kind of comms happen between the q-api and the q-agent.  The
> issue would be that the interaction with the agent would now be
> synchronous.
>