[openstack-dev] [Quantum/Nova] Improving VIF plugin

Ian Wells ijw.ubuntu at cack.org.uk
Wed Nov 7 20:31:52 UTC 2012


Generally sounds like we're talking about the same thing, in fact, so
I shall slice and dice your post to pick out a couple of points:

On 7 November 2012 20:01, Vishvananda Ishaya <vishvananda at gmail.com> wrote:
> In my mind we should actually have two calls. In the volume world
> we have initialize_connection() and terminate_connection(), and I think we

No argument with this - there has to be cleanup (again it's obviously
required to free a PCI card if you're using one, and reasonable to
deallocate bridges and all the crap that builds up in the kernel).
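For what it's worth, a minimal sketch of that paired lifecycle, modelled on the volume API's initialize_connection()/terminate_connection(). Everything here is hypothetical and illustrative, not existing Quantum or Nova API:

```python
# Hypothetical sketch only: the class and method names are made up to
# illustrate the paired setup/teardown calls discussed above.

class PortAttachmentAPI(object):
    def __init__(self):
        self._attachments = {}

    def initialize_connection(self, port_id, connector_data):
        """Set up host-side plumbing (bridge, PCI function, ...) and
        record what was allocated so it can be torn down later."""
        attachment = {'port_id': port_id, 'connector': connector_data}
        self._attachments[port_id] = attachment
        return attachment

    def terminate_connection(self, port_id):
        """Free whatever initialize_connection() allocated - e.g.
        release a PCI card or delete a kernel bridge."""
        return self._attachments.pop(port_id, None)
```

The point is simply that anything the initialize call allocates is remembered, so the terminate call can clean it all up.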

> a) a specific format for 'connector_data', that is information about the
> connecting hypervisor/machine that we should send in and

I see this as a 'port type' plus a dictionary of data which, while it
would normally include the host, would otherwise be specific to the
type in question:

{ type: 'vif-bridge', host: ... }
{ type: 'vpci', host: ... }

and at the point I start fantasising and we stop thinking about Nova
as the client, you could also have:

{ type: 'switch-port', switch: 'name-of-switch-configured-under-quantum', port_number: 12 }
{ type: 'l2vpn-endpoint', endpoint-net: <uuid>, tunnel_ip: ..., tunnel_key: ...}

I'm sure we could come up with uses for these...
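To make the dispatch concrete, here's an illustrative sketch of switching host-side setup on the 'type' key of such a dictionary. The handler functions and type names beyond the examples above are made up:

```python
# Illustrative only: dispatch on the 'type' key of the connector
# dictionary. Handler bodies are placeholders for real plumbing.

def plug_bridge(data):
    # Real code would create/attach a kernel bridge on data['host'].
    return "bridge set up on %s" % data['host']

def plug_vpci(data):
    # Real code would pass a PCI function through on data['host'].
    return "PCI function passed through on %s" % data['host']

HANDLERS = {
    'vif-bridge': plug_bridge,
    'vpci': plug_vpci,
}

def plug(connector):
    try:
        handler = HANDLERS[connector['type']]
    except KeyError:
        raise ValueError('unsupported port type: %r' % connector['type'])
    return handler(connector)
```

A new port type then only means registering a new handler, not touching the callers.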

> If we want to do complicated setup on the compute host and keep that logic
> in quantum without having to do a call that goes:
>
> nova-hosta.compute -> quantum_api -> hosta.quantum
>
> we can have type that works like an adaptor:
>
>  {'driver_port_type': 'quantum-driver', 'data': {'quantum-backend':
> 'ovs-driver', 'data': {'driver_port_type': 'bridge', 'data': {'name':
> 'br100'}} }}
>
> some messy untested code for that:
>
> from quantum.hypervisor.libvirt import driver as quantum_driver
>
> def get_driver(driver_port_type):
>     return {
>        'bridge': BridgeDriver,
>        'quantum-driver': QuantumAdaptor,
>     }[driver_port_type]()
>
> class BridgeDriver(PortDriver):
>     def plug_vif(self, port_id, connection_data):
>         return get_libvirt_xml(connection_data)
>
> class QuantumAdaptor(PortDriver):
>     def plug_vif(self, port_id, connection_data):
>         q_driver = quantum_driver.get_driver(
>             connection_data['quantum-backend'])
>         connection_data = q_driver.plug_vif(port_id, connection_data['data'])
>         # local setup happens above and the q libvirt driver returns normal
>         # connection_data to be used by the normal code.
>         l_driver = get_driver(connection_data['driver_port_type'])
>         return l_driver.plug_vif(port_id, connection_data['data'])

While this does work, it brings us back to network code living in
Nova, and to issues with testing.  You could turn this around by
saying that you have:

# Note these interfaces are not actually nova-specific at all
class network.NetworkDriver:
    def init_attachment(type_details, network):
        ...

    def terminate_attachment(attachment):
        ...

class network.Attachment:
    @abstractmethod
    def realise():
        # Make the thing appear locally, if that's what you asked for
        ...

class quantum.nova.NetworkDriver(network.NetworkDriver):
    # lives in the Quantum project; does API calls

class quantum.QuantumAttachment(network.Attachment):
    # but lives in Quantum
    def realise():
        # takes return data, does useful local work, but nothing
        # hypervisor-specific
        ...

I still think I'm a fan of getting the quantum-agent to do all the
work, but the above is a clean interface.
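As a concrete (and entirely hypothetical) fleshing-out of that interface: the abstract classes would live in Nova's generic network code, the concrete ones in the Quantum tree. Only the class and method names follow the sketch above; the bodies are invented for illustration:

```python
# Hedged sketch: abstract base classes Nova would depend on, plus
# Quantum-side concrete implementations. Bodies are illustrative only.
import abc


class Attachment(abc.ABC):
    @abc.abstractmethod
    def realise(self):
        """Make the attachment appear locally, if that's what was asked."""


class NetworkDriver(abc.ABC):
    @abc.abstractmethod
    def init_attachment(self, type_details, network):
        """Return an Attachment for this port type and network."""

    @abc.abstractmethod
    def terminate_attachment(self, attachment):
        """Tear down whatever init_attachment()/realise() set up."""


class QuantumAttachment(Attachment):
    """Lives in the Quantum tree; nothing hypervisor-specific here."""

    def __init__(self, type_details, network):
        self.type_details = type_details
        self.network = network
        self.realised = False

    def realise(self):
        # Real code would take the API return data and do the useful
        # local work (bridge setup, tunnel endpoints, ...).
        self.realised = True
        return self


class QuantumNetworkDriver(NetworkDriver):
    """Lives in the Quantum project; real code would make API calls."""

    def init_attachment(self, type_details, network):
        return QuantumAttachment(type_details, network)

    def terminate_attachment(self, attachment):
        attachment.realised = False
```

This keeps all network knowledge behind the two abstract classes, so Nova can be tested against stub implementations without any Quantum code present.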
