[openstack-dev] [Quantum/Nova] Improving VIF plugin

Ian Wells ijw.ubuntu at cack.org.uk
Wed Nov 7 15:36:17 UTC 2012


On 7 November 2012 10:52, Salvatore Orlando <sorlando at nicira.com> wrote:
> I have been following this thread, and I agree with the need to allow
> Nova to access information about the internals of the Quantum plugin so
> that it can plug interfaces using the appropriate driver.

I don't agree.  I don't want to pass 'the details of the network' to
nova.  In fact, I want to get further away from that than we are now -
I really don't want code in the nova project to be tinkering with
networking in any significant way.  I want to pass a realised network
endpoint to nova - see below.

> However, I am now reading that there might be use cases in which nova pushes
> information back into Quantum concerning the way a VIF has been plugged. I am
> failing to envision such a use case, and it would be great if you could
> shed some light on it.

The concept I had in mind is, taking the example of attaching a network
to a libvirt VM, that you require a bridge interface to put into the
libvirt.xml file.  You request that Quantum create the bridge
interface.  Quantum (rather than, at present, the nova plugging driver)
creates the bridge and returns its name to nova.  Nova provides that to
the hypervisor driver for the new VM to be attached to.
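
To make that concrete, here is a rough sketch of what the exchange
might look like from Nova's side.  All of it is hypothetical - the
resource path, the function and the returned fields are made up purely
to show the shape of the flow, not any API that exists today:

import json
import httplib2

QUANTUM = 'http://quantum-server:9696/v2.0'


def realise_attachment(port_id, host):
    """Ask Quantum to realise an endpoint for a port on this host.

    Quantum does the host-side plumbing (here, creating a bridge) and
    returns whatever the hypervisor driver needs to attach the VM.
    This call is hypothetical - nothing like it exists in today's API.
    """
    body = {'attachment': {'host': host, 'endpoint_type': 'bridge'}}
    # A POST, not a GET, because Quantum is creating something on the host.
    resp, content = httplib2.Http().request(
        '%s/ports/%s/attachment' % (QUANTUM, port_id), 'POST',
        body=json.dumps(body),
        headers={'Content-Type': 'application/json'})
    return json.loads(content)['attachment']

# Nova's only job is then to hand the result to the hypervisor driver:
#   attachment = realise_attachment(port_id, 'compute-01')
#   bridge = attachment['bridge_name']      # e.g. 'qbr-3f2a1c'
# and 'bridge' goes straight into the <interface> element of
# libvirt.xml, with no plugging-driver logic left in nova.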

There has to be a bit of negotiation because not all hypervisors are
created equal, and endpoints will differ between them, so there's not a
single attachment point type that you would return.  For instance, with
PCI passthrough I might request that a specific PCI device's port be
wired up, and the returned object would be just a re-iteration of the
PCI device; or I might request that a virtual interface in a
virtualisable NIC be set up and be passed back the PCI details of the
vNIC allocated; and for completely software-based endpoints, while
libvirt likes bridges, other hypervisors have other preferences.
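
So the realised attachment that comes back would be a small tagged
structure whose contents depend on the endpoint type negotiated -
something along these lines (field names purely illustrative, not any
agreed format):

# Purely illustrative shapes for the endpoint Quantum might hand back.

# What libvirt prefers: a bridge created by Quantum on the compute host.
bridge_attachment = {
    'endpoint_type': 'bridge',
    'bridge_name': 'qbr-3f2a1c',
}

# PCI passthrough: little more than a re-iteration of the device that
# Nova asked to have wired up.
pci_passthrough_attachment = {
    'endpoint_type': 'pci-passthrough',
    'pci_address': '0000:07:00.1',
}

# A virtual interface carved out of a virtualisable NIC: Quantum picks
# the vNIC and reports its PCI details back to Nova.
vnic_attachment = {
    'endpoint_type': 'pci-vf',
    'pci_address': '0000:07:10.3',
}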

> I am interested in this because one of Quantum's
> goals was to provide a clean separation between compute and networking
> services. It seems that entanglement between the two is now crawling back.

Now, I think they're currently about as entangled as they could
possibly be - some of the networking happens in Quantum, but a big
messy chunk also happens in the plugging drivers, which are specific
to both the nature of the Quantum plugin in use (or alternatively
nova-network) and to the hypervisor.  If we did the above, the
interface to Quantum would move to encompass all of the administration
of the networking within the host kernel (for virtual interfaces) and
the physical networking (if the interface is a PCI port).

The huge advantage of this is that we can test all of this networking
in Quantum; at the moment, the cross-project nature of the networking
driver means that only system tests combining Nova and Quantum really
give it a workout - and because of the number of VIF plugging drivers
around, many of the possible configurations don't get the
comprehensive testing they need.

> Personally, I would let Quantum figure out binding information once the VIF
> is plugged, and keep the VIF plugging API as GET only.

I prefer that Quantum be defined as producing an endpoint to which the
VM can then be attached.  Otherwise the question in my mind is: what
precisely are you passing from Quantum to Nova?  A woolly description
of a network - the nature of which is still entirely dependent on the
plugin that Quantum happens to be using, so you need a driver
compatible with that network type?  I think an endpoint would be
easier to describe, and there would be fewer types of attachment
point.
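
To illustrate the difference (both structures here are only indicative,
not anything either project defines today): the description of a
network forces Nova to understand the plugin's technology, while the
endpoint looks the same however the plugin built it.

# What a woolly network description amounts to - its meaning depends
# entirely on which Quantum plugin is in use, so Nova needs a plugging
# driver that understands this particular flavour of network:
network_description = {
    'network_type': 'vlan',        # or 'gre', 'flat', ... plugin-specific
    'segmentation_id': 1024,
    'physical_network': 'physnet1',
}

# Versus a realised endpoint, which Nova can consume without knowing
# how the plugin implemented the network behind it:
endpoint = {
    'endpoint_type': 'bridge',
    'bridge_name': 'qbr-3f2a1c',
}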

> While VIF creation is clearly a task which pertains to the compute service,
> VIF plugging is arguably borderline, and hence it's more than understandable
> that there are different valuable approaches and solutions.

Absolutely.  So there are many solutions that will work.  I believe we
should be evaluating them on simplicity and flexibility of interface
and how well we can test them.

-- 
Ian.

> On 7 November 2012 10:08, Gary Kotton <gkotton at redhat.com> wrote:
>>
>> On 11/06/2012 11:58 PM, Ian Wells wrote:
>>>
>>> On 6 November 2012 19:39, Gary Kotton <gkotton at redhat.com> wrote:
>>>>
>>>> GET /network-implementation-details/<net-id>
>>>
>>> A minor quibble, but these commands will probably change the state on
>>> the host that you're getting an attachment for (or, at least, they
>>> would the way I would do it - you do the call, and e.g. a bridge pops
>>> up and Nova knows where to find it by the return of the call).  If
>>> that's the case, it is a POST rather than a GET, as you're creating
>>> something.
>>
>>
>> I need to update the blueprint. The idea in general is to have something
>> like
>>
>> GET /port/<id>/binding
>> and
>> PUT /port/<id>/binding/<something>
>>
>> This will enable the information to be passed to Quantum.
>>
>>
>>>
>>> I'm sure you could do it the other way around (GET the details of how
>>> to connect to the network and then do the work in Nova to make an
>>> endpoint that the hypervisor could use) but I prefer that the work of
>>> buggering about with the networking remain entirely within Quantum.
>>> This seems eminently sensible for PCI passthrough in particular, where
>>> the call would hand over the details of the card to be attached and
>>> return that it had been attached - versus bridge creation, where you'd
>>> probably say 'give me a bridge' and be told the details of the
>>> arbitrarily named bridge you'd just had created.
>>
>>
>> I would hope that the above PUT command enables Nova to provide this
>> information to Quantum.
>>
>> Each plugin has its own way of allocating and managing the resources. Some may
>> be done via agents, others may be done directly in Nova. It is also
>> debatable whether this is good or bad. At this stage I would like to provide
>> an API that can ensure that we have our bases covered for the interim period
>> and the long run.
>>
>>
>>>
>>> The options seem to be:
>>>   - be explicit about which port we're attaching (and, presumably, that
>>> a port can only be attached once)
>>>   - implicitly create a port iff you attach to a network, use an
>>> existing port otherwise
>>>   - drop ports altogether, or replace them with these attachments that
>>> we're talking about right now (get a 'realised' attachment point and
>>> you have effectively added a port to the network, after all).
>>>
>>
>>


