[openstack-dev] [Quantum/Nova] Improving VIF plugin

Vishvananda Ishaya vishvananda at gmail.com
Wed Nov 7 19:01:35 UTC 2012


Hi guys!

I'm going to top-post here because I want to offer a similar approach.

I'm going to ignore the exact semantics of the POST and switch to python for a second. In my mind we should actually have two calls. In the volume world we have initialize_connection() and terminate_connection(), and I think we need something similar for ports. I've included the relevant code from cinder at the bottom so you can see how it works. The advantage of two calls is that it also allows us to do security checks and verify that two things aren't connecting to the same port, etc.
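To make that a bit more concrete, here is a very rough, untested sketch of what a port-side equivalent could look like, deliberately mirroring the cinder manager pasted at the bottom. None of these names (QuantumPortManager, port_get, attached_host, etc.) exist today; they are purely illustrative:

class PortInUse(Exception):
    pass

class QuantumPortManager(object):
    """Hypothetical quantum-side manager mirroring cinder's VolumeManager.
    'db' and 'driver' are whatever the plugin uses; nothing here is
    existing quantum code."""

    def __init__(self, db, driver):
        self.db = db
        self.driver = driver

    def initialize_connection(self, context, port_id, connector):
        port = self.db.port_get(context, port_id)
        # this is where the two-call model pays off: we can refuse a second
        # attachment or do other security checks before handing anything out
        if port.get('attached_host') not in (None, connector['host']):
            raise PortInUse(port_id)
        self.db.port_update(context, port_id,
                            {'attached_host': connector['host']})
        # returns the connection_info dict described below
        return self.driver.initialize_connection(port, connector)

    def terminate_connection(self, context, port_id, connector):
        port = self.db.port_get(context, port_id)
        self.db.port_update(context, port_id, {'attached_host': None})
        return self.driver.terminate_connection(port, connector)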

Basically it requires us to define

a) a specific format for 'connector_data', that is, the information about the connecting hypervisor/machine that we send in, and
b) a specific format for 'connection_info', the information returned by quantum to the connecting machine so it can make the connection.

connector_data could be something as simple as the hostname of the machine, or it could include all of the relevant network info that quantum will need to do the setup. In my mind connection_info should be a bit more extensible, something like {'driver_port_type': 'xxx', 'data': {...}} where driver_port_type is an arbitrary definition of type and data is the data needed for that type.
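As a strawman, connector_data for the simple case might be nothing more than:

 {'host': 'compute-01', 'hypervisor': 'libvirt'}

where 'host' is the hostname of the connecting machine and 'hypervisor' is a hint so quantum can pick a driver_port_type the machine can actually consume. The keys are just placeholders for the sake of discussion.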

Now there are two ways to define driver_port_type: one is to have a different type for every possible quantum backend; the other is to define two or three basic types and expect the drivers to return one of those types. For example you could have:
 {'driver_port_type': 'bridge', 'data': {'name': 'br100'}}
or:
 {'driver_port_type': 'interface', 'data': {'name': 'eth0'}}

This allows for drivers to do really fancy stuff by defining a new type:

 {'driver_port_type': 'openvswitch', 'data': {...}}

But we can encourage drivers to mostly use the simple types. In the volume world, most drivers return a type of iscsi, but there are some that have their own types, such as sheepdog or rbd.

This kind of setup allows us to have a simple plugin model on the hypervisor side that calls a different driver method for each 'type' and gets back the necessary config to attach the device to the vm.

If we want to do complicated setup on the compute host and keep that logic in quantum without having to do a call that goes:

nova-hosta.compute -> quantum_api -> hosta.quantum

we can have a type that works like an adaptor:

 {'driver_port_type': 'quantum-driver', 'data': {'quantum-backend': 'ovs-driver', 'data': {'driver_port_type': 'bridge', 'data': {'name': 'br100'}}}}

some messy untested code for that:

from quantum.hypervisor.libvirt import driver as quantum_driver

def get_driver(driver_port_type):
    # dispatch on the type string in connection_info
    return {
       'bridge': BridgeDriver,
       'quantum-driver': QuantumAdaptor,
    }[driver_port_type]()

# PortDriver and get_libvirt_xml are assumed to exist in the nova libvirt
# vif code; they're not defined here.
class BridgeDriver(PortDriver):
    def plug_vif(self, port_id, connection_data):
        # connection_data is the 'data' dict, e.g. {'name': 'br100'}
        return get_libvirt_xml(connection_data)

class QuantumAdaptor(PortDriver):
    def plug_vif(self, port_id, connection_data):
        q_driver = quantum_driver.get_driver(connection_data['quantum-backend'])
        connection_info = q_driver.plug_vif(connection_data['data'])
        # local setup happens above and the q libvirt driver returns normal
        # connection_info to be used by the normal code.
        l_driver = get_driver(connection_info['driver_port_type'])
        return l_driver.plug_vif(port_id, connection_info['data'])
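
And, roughly, the compute side would just tie the two pieces together (quantum_api.initialize_connection is a stand-in for however nova ends up calling the hypothetical two-call API sketched above; get_driver is from the snippet just above):

def attach_port(context, port_id, connector_data):
    # ask quantum to set up its end and tell us how to attach
    connection_info = quantum_api.initialize_connection(context, port_id,
                                                        connector_data)
    driver = get_driver(connection_info['driver_port_type'])
    # hands back e.g. the libvirt xml snippet for this vif
    return driver.plug_vif(port_id, connection_info['data'])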


Vish


Here is the code on the cinder side for reference:

    def initialize_connection(self, context, volume_id, connector):
        """Prepare volume for connection from host represented by connector.

        This method calls the driver initialize_connection and returns
        it to the caller.  The connector parameter is a dictionary with
        information about the host that will connect to the volume in the
        following format::

            {
                'ip': ip,
                'initiator': initiator,
            }

        ip: the ip address of the connecting machine

        initiator: the iscsi initiator name of the connecting machine.
        This can be None if the connecting machine does not support iscsi
        connections.

        driver is responsible for doing any necessary security setup and
        returning a connection_info dictionary in the following format::

            {
                'driver_volume_type': driver_volume_type,
                'data': data,
            }

        driver_volume_type: a string to identify the type of volume.  This
                           can be used by the calling code to determine the
                           strategy for connecting to the volume. This could
                           be 'iscsi', 'rbd', 'sheepdog', etc.

        data: this is the data that the calling code will use to connect
              to the volume. Keep in mind that this will be serialized to
              json in various places, so it should not contain any non-json
              data types.
        """
        volume_ref = self.db.volume_get(context, volume_id)
        return self.driver.initialize_connection(volume_ref, connector)

On Nov 7, 2012, at 9:42 AM, Salvatore Orlando <sorlando at nicira.com> wrote:

> Hi Ian and VIF-Plugging crew,
> 
> Some more comment inline
> 
> On 7 November 2012 16:36, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
> On 7 November 2012 10:52, Salvatore Orlando <sorlando at nicira.com> wrote:
> > I have been following this thread, and I agree with the need of allowing
> > Nova to access information about internals of the Quantum plugin so that
> > it's allowed to plug interfaces using the appropriate driver.
> 
> I don't agree.  I don't want to pass 'the details of the network' to
> nova.  In fact, I want to get further away from that than we are now -
> I really don't want code in the nova project to be tinkering with
> networking in any significant way.  I want to pass a realised network
> endpoint to nova - see below.
> 
> I actually don't like this interaction between nova and Quantum either, as I've clarified later in this email.
> In particular, I don't like the part where details concerning the plugin are exposed to other services.
>  
> 
> > However, I am now reading that there might be use cases in which nova pushes
> > information back into Quantum concerning the way a VIF has been plugged. I am
> > failing to envision such a use case, and it would be great if you could
> > shed some light on it.
> 
> The concept I had in mind is that, for instance, for the case that
> you're attaching a network to a libvirt VM as the example, you require
> a bridge interface to put into the libvirt.xml file.  You request that
> quantum create the bridge interface.  Quantum (rather than, at
> present, the nova plugging driver) creates the bridge and returns its
> name to nova.  Nova provides that to the hypervisor driver for the new
> VM to be attached to.
> 
> Awesome. That's my vision too. Nova just needs to know where to plug a VIF. It does not have to deal with details concerning how to set up connectivity for that VIF.
> Kyle had a good point concerning PCI passthrough or similar situations. My thinking is that in that case you can let the Quantum plugin manage the Virtual Functions on the host and then just pass to Nova the one which was selected for a given VM.
> 
>  
> 
> There has to be a bit of negotiation because not all hypervisors are
> created equal and so endpoints will differ between them, so there's
> not a single attachment point type that you would return (e.g. PCI
> passthrough - I might request a specific PCI device's port be wired
> up, and the returned object would be just a re-iteration of the PCI
> device; or I might request that a virtual interface in a virtualisable
> NIC be set up and be passed back the PCI details of the vNIC
> allocated; and for completely software-based endpoints, while libvirt
> likes bridges, other hypervisors have other preferences).
> 
> I think what you write here makes sense.
> 
> > I am interested in this because one of Quantum's
> > goals was to provide a clean separation between compute and networking
> > services. It seems that the entanglement between the two is now creeping back in.
> 
> Now, I think they're currently about as entangled as they could
> possibly be - some of the networking happens in Quantum but a big
> messy chunk also happens in the plugging drivers - which are specific
> to both the nature of the Quantum plugin in use (or alternatively
> nova-network) and to the hypervisor.  
> 
> The VIF drivers have always bothered me a little. Indeed my perfect world is a world without them.
> Just for the sake of precision, they're not really specific to the plugin, as several plugins use the same drivers, but they're definitely specific to the hypervisor.
>  
> If we did the above, then the
> interface to Quantum moves to encompass all of the administration of
> the networking within the host kernel (for virtual interfaces) and the
> physical networking (if the interface is a PCI port).
> 
> I personally agree with this view. I said in the past that in a host there's a compute part, which should be managed by nova, and a network part which should be managed by Quantum.
> However, opinions vary on this point. I'm pretty sure that there are arguments for keeping the whole host under the control of nova-compute, but it would be a shame if this discussion held up progress on this front.
>  
> 
> The huge advantage of this is that we can test all of this networking
> in Quantum; at the moment, the cross-project nature of the networking
> driver means that only system tests combining Nova and Quantum really
> give it a workout - and because of the number of VIF plugging drivers
> around, many of the possible configurations don't get the
> comprehensive testing they need.
> 
> Yes, and also making progress on decoupling nova from Quantum.
>  
> 
> > Personally, I would let Quantum figure out binding information once the VIF
> > is plugged, and keep the VIF plugging API as GET only.
> 
> I prefer that Quantum is defined as producing an endpoint to which the
> VM can then be attached.  Otherwise the question in my mind is, what
> precisely are you passing from Quantum to Nova?  
> 
> I don't know, honestly. Previously in this thread there was an argument that Nova should send data to Quantum with a PUT or POST request. 
> You're asking the same question I asked, and I could not get a straight answer (or an answer I could understand).
>  
> A woolly description of a network - the nature of which is still entirely dependent on the
> plugin that Quantum happens to be using, so you need a driver
> compatible with that network type?  I think an endpoint would be
> easier to describe, and there would be fewer types of attachment
> point.
> 
> Definitely. We started this discussion about 1.5 years ago, and then it got unfortunately buried under a ton of other stuff to do.
> I am in favour of exposing an endpoint which produces VIF plugging info to Nova, in a way that reduces VIF drivers to nothing or to very simple functions.
> One counter-argument is that you would make a remote call, which might then involve more remote invocations (REST, message queues, or otherwise) for something which could be entirely handled by logic on the host. So that's something we need to plan carefully.
> 
> 
> > While VIF creation is clearly a task which pertains to the compute service,
> > VIF plugging is arguably borderline, and hence it's more than understandable
> > that there are different valuable approaches and solutions.
> 
> Absolutely.  So there are many solutions that will work.  I believe we
> should be evaluating them on simplicity and flexibility of interface
> and how well we can test them.
> 
> --
> Ian.
> 
> > On 7 November 2012 10:08, Gary Kotton <gkotton at redhat.com> wrote:
> >>
> >> On 11/06/2012 11:58 PM, Ian Wells wrote:
> >>>
> >>> On 6 November 2012 19:39, Gary Kotton<gkotton at redhat.com>  wrote:
> >>>>
> >>>> GET /network-implementation-details/<net-id>
> >>>
> >>> A minor quibble, but these commands will probably change the state on
> >>> the host that you're getting an attachment for (or, at least, they
> >>> would the way I would do it - you do the call, and e.g. a bridge pops
> >>> up and Nova knows where to find it by the return of the call).  If
> >>> that's the case, it is a POST rather than a GET as you're creating
> >>> something.
> >>
> >>
> >> I need to update the blueprint. The idea in general is to have something
> >> like
> >>
> >> GET /port/<id>/binding
> >> and
> >> PUT /port/<id>/binding/<something>
> >>
> >> This will enable the information to be passed to Quantum.
> >>
> >>
> >>>
> >>> I'm sure you could do it the other way around (GET the details of how
> >>> to connect to the network and then do the work in Nova to make an
> >>> endpoint that the hypervisor could use) but I prefer that the work of
> >>> buggering about with the networking remained entirely within Quantum.
> >>> This seems eminently sensible for PCI passthrough in particular, where
> >>> the call would hand over the details of the card to be attached and
> >>> return that it had been attached - versus bridge creation, where you'd
> >>> probably say 'give me a bridge' and be told the details of the
> >>> arbitrarily named bridge you'd just had created.
> >>
> >>
> >> I would hope that the above PUT command enables Nova to provide this
> >> information to Quantum.
> >>
> >> Each plugin has its way of allocating and managing the resources. Some may
> >> be done via agents, others may be done directly in Nova. It is also
> >> debatable whether this is good or bad. At this stage I would like to provide
> >> an API that can ensure that we have our bases covered for the interim period
> >> and the long run.
> >>
> >>
> >>>
> >>> The options seem to be:
> >>>   - be explicit about which port we're attaching (and, presumably, that
> >>> a port can only be attached once)
> >>>   - implicitly create a port iff you attach to a network, use an
> >>> existing port otherwise
> >>>   - drop ports altogether, or replace them with these attachments that
> >>> we're talking about right now (get a 'realised' attachment point and
> >>> you have effectively added a port to the network, after all).
> >>>
> >>
> >>
> >
> >
> 
