[openstack-dev] [Quantum] [Nova] improving vif-plugging

Daniel P. Berrange berrange at redhat.com
Wed Jan 23 11:02:42 UTC 2013

On Tue, Jan 22, 2013 at 11:41:55PM -0800, Dan Wendlandt wrote:
> Hi Daniel,
> More comments inline.
> On Wed, Jan 16, 2013 at 1:48 AM, Daniel P. Berrange <berrange at redhat.com>wrote:
> > On Tue, Jan 15, 2013 at 10:50:55PM -0800, Dan Wendlandt wrote:
> > > > As an example, when configuring OpenVSwitch one of the pieces of
> > > > information required is an 'interfaceid'. Previously the libvirt
> > > > VIF driver set the interfaceid based on the vif['uuid'] field,
> > > > since that is what quantum expected. This is not portable though.
> > > > The correct approach is for nova.network.model to have an explicit
> > > > 'ovs-interfaceid' field which the nova.network.quantumv2.api
> > > > driver sets based on the vif['uuid']. The libvirt VIF driver can
> > > > now configure OVS with any network driver, not simply Quantum.
> > > > Similarly for things like the OVS bridge name or the TAP device
> > > > names, all of which previously had to be hardcoded to
> > > > Quantum-specific data. This extends to bridging, 802.1qbh, etc,etc
> > >
> > > I have no real concerns here with respect to these representations
> > > within Nova.  Since you mentioned that you view this as not being
> > > libvirt specific, would the VIF/Network model in
> > > nova/network/model.py be extended with additional fields not
> > > currently represented in the spec
> > > (http://wiki.openstack.org/LibvirtVIFDrivers) for platforms like
> > > XenServer (plug into XenServer network UUIDs), vmware (plug into a
> > > port group-id), a hyper-v virtual network, etc?
> >
> > Sure, if we find additional pieces of information are needed in the models,
> > we may have to extend them further. I'm surprised that the Quantum server
> > actually
> > has knowledge of the hypervisor specific concepts you mention above though.
> >
> I'm confused, as my understanding of your proposal requires that Quantum
> knows this type of information.  Perhaps I'm missing something?  From the code
> review, you have comments like:
> # TODO(berrange) temporary hack until Quantum can pass over the
> # name of the OVS bridge it is configured with
> and
> # TODO(berrange) Quantum should pass the bridge name
> # in another binding metadata field
> In this case, the name of the bridge is a concept specific to certain Linux
> hypervisors. If I was using XenServer, the equivalent might be a Network
> UUID, or with ESX a port group uuid.
> My current thinking is that Quantum shouldn't have to know such information
> either, but based on your comments, I was assuming this was a point of
> disagreement. Can you clarify?

Actually the Xen OVS VIF driver does require a bridge name. The bridge
name is then used by the VIF driver to lookup the Xen network UUID.
So this is a good example of the same information being required by
multiple hypervisor drivers. Similarly the Xen OVS VIF driver also
requires the OVS interfaceid - again it currently hardcodes the
assumption that the interfaceid is based on the vif['id'] field.
So again my change to the VIF model to include an explicit ovs_interfaceid
parameter makes sense for Xen too.
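
To make the idea concrete, here is a minimal sketch (these are not the
real nova.network.model classes; the VIF class and quantum_to_vif helper
are hypothetical, while the 'bridge' and 'ovs_interfaceid' field names
follow the proposal above):

```python
# Hypothetical sketch of a generic VIF model carrying explicit,
# hypervisor-neutral fields instead of Quantum-specific assumptions.

class VIF(dict):
    """Generic VIF description, independent of any one network service."""

    def __init__(self, id=None, bridge=None, ovs_interfaceid=None):
        super(VIF, self).__init__(id=id, bridge=bridge,
                                  ovs_interfaceid=ovs_interfaceid)


def quantum_to_vif(port):
    # The quantumv2 API driver sets ovs_interfaceid explicitly from the
    # port UUID, so hypervisor VIF drivers no longer need to hardcode
    # that assumption themselves.
    return VIF(id=port['id'],
               bridge=port.get('bridge', 'br-int'),
               ovs_interfaceid=port['id'])


vif = quantum_to_vif({'id': 'port-uuid-1234'})
```

Any VIF driver (libvirt, XenAPI, ...) can then read vif['ovs_interfaceid']
directly, whatever network service populated it.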

> >  Surely over time nova will have to add
> > > entirely new vif_types.  Since all Nova code for a given deployment will
> > be
> > > in sync, this is not a problem within Nova, but what if you deploy a
> > newer
> > > Quantum with an older Nova?  Let's say in "H" Nova introduces vif_type
> > > "qbx" (presumably some "improved" form of VEPA :P) which requires a
> > different
> > > set of parameters to configure.  If a tenant is running an "H" version of
> > > Quantum, with a Grizzly version of Nova, how does it know that it can't
> > > specify vif-type qbx?
> >
> > I mostly think that this is a documentation / deployment issue for
> > admins to take care of. In the scenario you describe if you have an
> > Hxxxx Quantum running with a Grizzly Nova, the admin should expect
> > that they won't necessarily be able to use the latest pieces of Quantum.
> >
> I think we both agree that using vif_type qbx would not work (and that this
> is reasonable).  That wasn't my question though.  My question was: If
> Quantum returns qbx, presumably the older Nova would error-out when
> provisioning the VM, so how does Quantum from the "H" series know that it
> is or is not OK to return vif-type "qbx"?   If we have to have an admin
> configure quantum with the set of vif_types the Nova install supports,
> we're back to what we wanted to avoid: having an admin sync config
> between Nova + Quantum.

You seem to be considering the case of a new vif_type, which replaces a
previously used vif_type when upgrading Quantum. This is something which
should never be done, unless Quantum wants to mandate use of a newer Nova.
If you want to retain full version compatibility in all directions, then
when upgrading Quantum, the existing information should never change,
unless the admin has explicitly chosen to reconfigure the plugin in some
way.

What is more likely is that some brand new Quantum plugin is invented which
also invents a new vif_type. This doesn't really require any special config
handling. It is simply a documentation task to say that if you want to use
the new plugin, you also need to use a new Nova.
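
In other words, an older Nova handed an unknown vif_type can only fail
cleanly. A sketch of what that looks like (the function and constant
names here are illustrative, not the real libvirt driver API):

```python
# Hypothetical sketch: a Grizzly-era Nova knows a fixed set of
# vif_types; anything newer from Quantum is an explicit error telling
# the admin to upgrade Nova, not a silent misconfiguration.

SUPPORTED_VIF_TYPES = ('ovs', 'bridge', '802.1qbh')


def get_config(vif_type):
    if vif_type not in SUPPORTED_VIF_TYPES:
        raise NotImplementedError(
            "Unsupported vif_type=%s; a newer Nova is required to use "
            "this Quantum plugin" % vif_type)
    # Real code would build hypervisor-specific config here.
    return {'vif_type': vif_type}
```

So a hypothetical "qbx" from an "H" Quantum raises NotImplementedError
on Grizzly Nova, which is exactly the documented "upgrade Nova first"
contract rather than something Quantum has to discover at runtime.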

> > > The above items are possible with the existing vif-plugging, since
> > > different config flags can be passed to different nova-compute instances.
> > >
> > > It seems like in the new model, we would need Nova to pass some
> > information
> > > to Quantum so that Quantum can make the right decision about what
> > vif-type
> > > and data to send.
> >
> > Since the formal dependency is really Nova -> Quantum, IMHO we should
> > really just document that Nova must be at least as new as Quantum.
> > Running a Hxxxx
> > Nova against a Grizzly Quantum should present no problems, only the reverse
> > has issues.
> >
> This isn't just about old vs. new versions.  See the example above about a
> deployment that spans multiple clusters for a hypervisor like XenServer or
> ESX, thereby requiring that a different identifier is passed back depending
> on the cluster.  Or an even more extreme example (but one I've already seen
> in production with Quantum) is that there are multiple hypervisor types, so
> even the vif_type that would need to be passed back may well be different
> on different hypervisors.

I don't think the vif_type should ever need to be different for different
hypervisors. The vif_type is describing how Quantum has setup the network.
Based on this, the hypervisor VIF drivers then decide how to configure
the hypervisor. You seem to be thinking of the vif_type as a description
of how the hypervisors must configure the network, which is backwards to
how it should be.
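
That direction of dependency can be sketched as a simple dispatch: the
same vif_type (describing what Quantum set up) feeds different
per-hypervisor plug logic. All names below are made up for illustration:

```python
# Hypothetical sketch: one vif_type, many hypervisor interpretations.
# Quantum describes its side once; each VIF driver maps that onto its
# own configuration primitives.

def libvirt_plug(vif):
    # libvirt attaches a TAP device to the named OVS bridge directly.
    return {'type': 'ethernet', 'bridge': vif['bridge'],
            'interfaceid': vif['ovs_interfaceid']}


def xenapi_plug(vif):
    # XenAPI instead uses the same bridge name to look up a Xen network
    # (modelled here as a simple string; real code queries XenAPI).
    return {'network': 'xen-net-for-%s' % vif['bridge'],
            'interfaceid': vif['ovs_interfaceid']}


PLUG = {'libvirt': libvirt_plug, 'xenapi': xenapi_plug}

vif = {'bridge': 'br-int', 'ovs_interfaceid': 'port-uuid'}
```

Nothing here requires Quantum to know which hypervisor is on the other
end; the per-hypervisor knowledge lives entirely in the VIF drivers.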

> > It is a subtle distinction, but I guess I don't think of the data as solely
> > being "what is the required VIF configuration", but rather "what
> > information
> > is required to do VIF configuration". As such knowing whether Quantum has
> > applied filtering falls within scope.
> >
> Yes, but 'filtering' as a term seems very broad.  Here's a made-up example:
> what if the quantum plugin performs L3/L4 filtering (e.g., security groups)
> but did not have the ability to prevent mac/arp spoofing, and instead
> wanted the virt layer to handle that.  To me, having values that explicitly
> state what Quantum wants the virt layer to do would be more clear and less
> likely to run into problems in the future.  I'm guessing the current
> definition is used to map closely to whether to push down any <filterref>
> arguments to libvirt? I'm just trying to think about how the concept would
> apply to other platforms.

Given the example above, then we should not return a 'filtered: true/false'
value, but rather return more fine-grained data: l3_filter: true/false,
l4_filter: true/false, etc, etc. Nova can then decide what additional
filtering is required, if any.
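
A sketch of that decision, using the flag names from the discussion (the
binding dict layout and helper are hypothetical, not the real API):

```python
# Hypothetical sketch: Quantum reports fine-grained filtering flags in
# the port binding data; Nova computes which filters it must still
# apply itself at the virt layer.

def remaining_filters(binding):
    wanted = {'l3_filter', 'l4_filter', 'mac_spoof_filter'}
    done = {name for name in wanted if binding.get(name)}
    return wanted - done


# Example: the plugin does L3/L4 filtering but not MAC/ARP
# anti-spoofing, so Nova knows it must handle the latter itself.
binding = {'l3_filter': True, 'l4_filter': True, 'mac_spoof_filter': False}
```

The admin never configures this mapping by hand; it falls out of the
data Quantum returns.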

The key point is that we should not be expecting the administrator to try
to figure out what type of filtering a particular quantum plugin has applied
& then configure nova manually. This implies the admin has to have far too
much detailed knowledge of Quantum implementation details, which is just
not acceptable. Quantum needs to inform Nova so it can do the right thing.

|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
