[openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

Daniel P. Berrange berrange at redhat.com
Thu Dec 11 10:41:37 UTC 2014

On Thu, Dec 11, 2014 at 09:37:31AM +0800, henry hly wrote:
> On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
> > On 10 December 2014 at 01:31, Daniel P. Berrange <berrange at redhat.com>
> > wrote:
> >>
> >>
> >> So the problem of Nova review bandwidth is a constant problem across all
> >> areas of the code. We need to solve this problem for the team as a whole
> >> in a much broader fashion than just for people writing VIF drivers. The
> >> VIF drivers are really small pieces of code that should be straightforward
> >> to review & get merged in any release cycle in which they are proposed.
> >> I think we need to make sure that we focus our energy on doing this and
> >> not ignoring the problem by breaking stuff off out of tree.
> >
> >
> > The problem is that we effectively prevent running an out of tree Neutron
> > driver (which *is* perfectly legitimate) if it uses a VIF plugging mechanism
> > that isn't in Nova, as we can't use out of tree code and we won't accept in
> > code ones for out of tree drivers.
> The question is, do we really need such flexibility for so many nova vif types?
> I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER are good examples:
> nova shouldn't know too many details about the switch backend; it should
> only care about the VIF itself, while how the VIF is plugged into the
> switch belongs to the Neutron half.
> However, I'm not saying we should move the existing vif drivers out; those
> open backends have been used widely. But from now on the tap and vhostuser
> modes should be encouraged: one common vif driver serving many long-tail
> backends.

Yes, I really think this is a key point. When we introduced the VIF type
mechanism we never intended for there to be so many different VIF types
created. There is a very small, finite number of possible ways to configure
the libvirt guest XML, and it was intended that the VIF types pretty much
mirror that. This would have given us about 8 distinct VIF types at most.

I think the reason for the larger than expected number of VIF types is
that the drivers are being written to require some arbitrary tools to
be invoked in the plug & unplug methods. It would really be better if
those could be accomplished in the Neutron code rather than the Nova code,
via a host agent run & provided by the Neutron mechanism. This would let
us have a very small number of VIF types and so avoid the entire problem
that this thread is bringing up.

Failing that though, I could see a way to accomplish a similar thing
without a Neutron launched agent. If one of the VIF type binding
parameters were the name of a script, we could run that script on
plug & unplug. So we'd have a finite number of VIF types, and each
new Neutron mechanism would merely have to provide a script to invoke.

eg consider the existing midonet & iovisor VIF types as an example.
Both of them use the libvirt "ethernet" config, but have different
things running in their plug methods. If we had a mechanism for
associating a "plug script" with a vif type, we could use a single
VIF type for both.

eg iovisor port binding info would contain something like (the key name
and paths here are purely illustrative):

    vif_plug_script: /usr/bin/iovisor-vif-plug

while midonet would contain:

    vif_plug_script: /usr/bin/midonet-vif-plug

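The Nova side of this could then be one small generic driver. A minimal
sketch of the idea, assuming a binding details key named "vif_plug_script"
and a "plug <devname>" / "unplug <devname>" calling convention, none of
which exists in Nova or Neutron today:

```python
# Sketch only: a generic plug/unplug pair that executes whatever script
# the Neutron port binding names, so a single VIF type can serve many
# backends. The "vif_plug_script" key and the argument convention are
# hypothetical, not real Nova/Neutron fields.
import subprocess


def plug_vif(vif):
    # vif is assumed to be a dict carrying Neutron port binding details
    script = vif["details"]["vif_plug_script"]  # hypothetical key
    subprocess.check_call([script, "plug", vif["devname"]])


def unplug_vif(vif):
    script = vif["details"]["vif_plug_script"]
    subprocess.check_call([script, "unplug", vif["devname"]])
```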
And so you see implementing a new Neutron mechanism in this way would
not require *any* changes in Nova whatsoever. The work would be entirely
self-contained within the scope of Neutron. It is simply a packaging
task to get the vif script installed on the compute hosts, so that Nova
can execute it.
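The script a mechanism ships could itself be trivial. An illustrative
sketch, again assuming a made-up "plug"/"unplug" plus device-name argument
convention rather than any defined interface:

```python
#!/usr/bin/env python
# Illustrative sketch of the kind of self-contained plug script a Neutron
# mechanism could package onto compute hosts. The argument convention
# ("plug"/"unplug" followed by a device name) is hypothetical.
import sys


def main(argv):
    action, devname = argv[1], argv[2]
    if action == "plug":
        # A real script would attach devname to the backend's switch
        # here, e.g. by calling the backend's own CLI tooling.
        return "plugged %s" % devname
    elif action == "unplug":
        return "unplugged %s" % devname
    raise SystemExit("unknown action: %s" % action)


if __name__ == "__main__" and len(sys.argv) >= 3:
    print(main(sys.argv))
```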

This is essentially providing a flexible VIF plugin system for Nova,
without having to have it plug directly into the Nova codebase with
the API & RPC stability constraints that implies.

|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
