[openstack-dev] [Quantum/Nova] Improving VIF plugin

John Garbutt John.Garbutt at citrix.com
Wed Nov 7 19:13:17 UTC 2012


I am keen this works for XenServer, as well as libvirt.

The current flow is something like:
- create the VIF with XenServer's XenAPI, specifying the VM and Network (traditionally, this specifies the bridge or pseudo-bridge in OVS)
- configure the created port as required (see how to match them up here http://support.citrix.com/article/CTX122520)
- call the "plug" command on the VIF into the VM (you can specify the VIF via an uuid provided by XenAPI)

So if Quantum creates the VIF, it would need the uuid of the VM (assigned by XenServer on VM.create).
Presumably the uuid of the VIF could then be passed back to nova-compute so it can call VIF.plug when required?

When doing live-migration, things get trickier because XenServer creates the VIF and plugs it in for you as part of the live-migration operation.

It is not clear to me that this would fit with the suggested model. Or did I misunderstand something?

John

> -----Original Message-----
> From: Robert Kukura [mailto:rkukura at redhat.com]
> Sent: 07 November 2012 4:45 PM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [Quantum/Nova] Improving VIF plugin
> 
> On 11/07/2012 10:24 AM, Kyle Mestery (kmestery) wrote:
> > On Nov 7, 2012, at 8:22 AM, Robert Kukura <rkukura at redhat.com> wrote:
> >> On 11/07/2012 04:52 AM, Salvatore Orlando wrote:
> >>> Hi,
> >>>
> >>> I have been following this thread, and I agree with the need of
> >>> allowing Nova to access information about internals of the Quantum
> >>> plugin so that it's allowed to plug interfaces using the appropriate driver.
> >>> I think the APIs proposed by Gary are suitable, although the nature
> >>> of the binding object should be fleshed out a little bit better.
> >>> Also I think Kyle has a good point that this information should be
> >>> per-port, not per-network, as in some cases there will be
> >>> port-specific parameter that needs to be passed into the vif driver.
> >>> The trade-off here is that instead of 1 call per network there will
> >>> be 1 call per port. The /port/<port-id>/binding syntax in theory
> >>> also allows for retrieving logical port and binding info in one call.
> >>>
> >>> Those APIs can easily be structured as admin-only; however we need
> >>> to keep in mind that at the moment nova-quantum interaction is
> >>> performed within the tenant context. We can either change this logic
> >>> and say that Nova wil always have admin access to Quantum, or that
> >>> we use an elevated context only for fetching port binding details.
> >>> To this aim I would think about adding a "service" role to Quantum,
> >>> which should be used specifically for retrieving binding details.
> >>
> >> This makes sense to me.
> >>
> >>>
> >>> However, I am now reading that there might be use cases in which
> >>> nova pushes into back into Quantum concerning the way a VIF has been
> plugged.
> >>> I am failing at envisioning such use case, and it would be great if
> >>> you could shed some light on it. I am interested in this because one
> >>> of Quantum's goals was to provide a clean separation between compute
> >>> and networking services. It seems that entanglement between the two
> >>> it's now crawling back. Personally, I would let Quantum figure out
> >>> binding information once the VIF is plugged, and keep the VIF
> >>> plugging API as GET only.
> >>
> >> It seems to me that Quantum might need certain information about the
> >> VIF being plugged into the port to decide the details of the binding,
> >> at least in more complex deployments with multiple networking
> >> technologies and non-uniform connectivity.
> >>
> >> One such item is a node identifier. This would let Quantum
> >> potentially figure out whether/how that node has connectivity to the
> port's network.
> >> For example, if the node does not have a mapping for the physical
> >> network of a provider flat network or VLAN, this could be detected,
> >> and the VIF plugging could fail gracefully rather than the current
> >> situation where the VM comes up but does not have connectivity.
> >>
> >> Another such item is a list of VIF types that the VM can support. For
> >> example, if the Nova libvirt driver indicates that both OVS and Linux
> >> bridge VIF types are supported, then Quantum could select which to use.
> >> Otherwise, it seems we are somehow either hard-wiring or configuring
> >> Nova to know which type of bridge Quantum wants it to use.
> >>
> > Just a note, but won't this already be decided by the time the port is being
> plugged in? For instance, with the libvirt driver, the XML itself will indicate
> why type of VIF is in use, and by the time the port is being plugged in, that
> won't be configurable. Are you saying that before the VM is even built we
> somehow let Quantum influence how the XML is built for the VM
> definition?
> 
> I have been thinking this interaction would occur before the libvirt XML is
> created (when using libvirt). Letting Quantum influence the libvirt XML
> (rather than having this hard-coded or configured in Nova) was the main
> point of Gary's proposal (correct me if I'm wrong, Gary). But I'm not
> suggesting that Quantum know anything about libvirt or any specific
> hypervisor - these VIF types would be defined generically.
> 
> >
> >> A POST to /port/<id>/binding could pass these items from Nova to
> >> Quantum, and the resulting binding resource would contain the
> >> selected VIF type and any needed parameters for VIFs of that type
> >> (tap device name, bridge name, ...), or an error indicating the binding is
> not possible.
> >>
> >> -Bob
> >>
> >>> While VIF creation is clearly a task which pertains to the compute
> >>> service, VIF plugging is arguably borderline, and hence it's more
> >>> than understandable that there are different valuable approaches and
> solutions.
> >>>
> >>> Salvatore
> >>>
> >>> On 7 November 2012 10:08, Gary Kotton <gkotton at redhat.com
> >>> <mailto:gkotton at redhat.com>> wrote:
> >>>
> >>>    On 11/06/2012 11:58 PM, Ian Wells wrote:
> >>>
> >>>        On 6 November 2012 19:39, Gary Kotton<gkotton at redhat.com
> >>>        <mailto:gkotton at redhat.com>>  wrote:
> >>>
> >>>            GET /network-implementation-__details/<net-id>
> >>>
> >>>        A minor quibble, but these commands will probably change the
> >>>        state on
> >>>        the host that you're getting an attachment for for (or, at least, it
> >>>        would the way I would do it - you do the call, and e.g. a bridge
> >>>        pops
> >>>        up and Nova knows where to find it by the return of the call).  If
> >>>        that's the case, it is a POST rather than a GET as you're creating
> >>>        something.
> >>>
> >>>
> >>>    I need to update the blueprint. The idea in general is to have
> >>>    something like
> >>>
> >>>    GET /port/<id>/binding
> >>>    and
> >>>    PUT /port/<id>/binding/<something>
> >>>
> >>>    This will enable the information to be passed to Quantum.
> >>>
> >>>
> >>>
> >>>        I'm sure you could do it the other way around (GET the details
> >>>        of how
> >>>        to connect to the network and then do the work in Nova to make an
> >>>        endpoint that the hypervisor could use) but I prefer that the
> >>>        work of
> >>>        buggering about with the networking remained entirely within
> >>>        Quantum.
> >>>        This seems eminently sensible for PCI passthrough in particular,
> >>>        where
> >>>        the call would hand over the details of the card to be attached and
> >>>        return that it had been attached - versus bridge creation, where
> >>>        you'd
> >>>        probably say 'give me a bridge' and be told the details of the
> >>>        arbitrarily named bridge you'd just had created.
> >>>
> >>>
> >>>    I would hope that the above PUT command enables Nova to provide
> this
> >>>    information to Quantum.
> >>>
> >>>    Each plugin has its way of allocating and managing the resources.
> >>>    Some may be done via agents, others may be done directly in Nova. It
> >>>    is allo debatible whether this is good or bad. At this stage I would
> >>>    like to provide an API that can ensure that we have our bases
> >>>    covered for the interim period and the long run.
> >>>
> >>>
> >>>
> >>>        The options seem to be:
> >>>          - be explicit about which port we're attaching (and,
> >>>        presumably, that
> >>>        a port can only be attached once)
> >>>          - implicitly create a port iff you attach to a network, use an
> >>>        existing port otherwise
> >>>          - drop ports altogether, or replace them with these
> >>>        attachments that
> >>>        we're talking about right now (get a 'realised' attachment point and
> >>>        you have effectively added a port to the network, after all).



More information about the OpenStack-dev mailing list