[openstack-dev] [Quantum] [Nova] improving vif-plugging

Dan Wendlandt dan at nicira.com
Wed Jan 16 06:50:55 UTC 2013

Thanks for the reply Daniel. From your description, I think we've both been
pushing in the same general direction, but are primarily looking at the
question from two subtly different angles: you focusing on internal nova
APIs + data structures representing VIF configuration data, and me focusing
on the Quantum API used by Nova to fetch that data from the Quantum plugin.
Some responses inline to hopefully clarify some of my thinking in light of
this.  Thanks,


On Mon, Jan 14, 2013 at 1:07 PM, Daniel P. Berrange <berrange at redhat.com>wrote:

> On Mon, Jan 14, 2013 at 12:43:07PM -0800, Dan Wendlandt wrote:
> >
> > I think we agree on both goals and mechanism at a high-level.  The point
> I
> > was trying to make above is whether we have a FORMAL Quantum API
> definition
> > of what keys are included in this dictionary, and how this set of values
> > changes over time (in the "bad old days", many of the interfaces within
> > nova where just python dictionaries, where the only "definition" of what
> > they contained was the code that shoved k-v pairs into them... that is
> what
> > I want to avoid).
> I don't have a strong opinion about the specific format of the Quantum -> Nova
> data.
> What is ultimately important is the data model defined for the Nova virt
> drivers
> in nova/network/model.py, the VIF & Network classes. These are the
> integration
> point between the Nova network API driver and the virt drivers. The
> important
> thing is that the per-hypervisor VIF driver impls must be fully isolated
> from
> the implementation details of the network API driver.
> As an example, when configuring OpenVSwitch one of the pieces of
> information
> required is an 'interfaceid'. Previously the libvirt VIF driver set the
> interfaceid
> based on the vif['uuid'] field, since that is what quantum expected. This
> is not
> portable though. The correct approach is for nova.network.model to have an
> explicit 'ovs-interfaceid' field and the nova.network.quantumv2.api driver
> sets this based on vif['uuid']. The libvirt VIF driver can now
> configure
> OVS with any network driver, not simply Quantum. Similarly for things like
> the
> OVS bridge name, or the TAP device names, all of which previously had to be
> hardcoded to Quantum-specific data. This extends to bridging, 802.1Qbh,
> etc., etc.
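The translation Daniel describes might be sketched, purely hypothetically, like this (these are not the actual classes in nova/network/model.py; field names like 'ovs_interfaceid' are assumptions based on the discussion):

```python
# Hypothetical sketch of a virt-neutral VIF model that isolates the
# per-hypervisor VIF driver from the network API driver's implementation.

class VIF(dict):
    """Minimal stand-in for a nova.network.model-style VIF object."""
    def __init__(self, uuid, vif_type, **details):
        super(VIF, self).__init__(uuid=uuid, vif_type=vif_type, **details)


def quantum_api_to_vif(port):
    # The network API driver (e.g. nova.network.quantumv2.api) translates
    # Quantum-specific data into explicit, generic model fields.
    return VIF(uuid=port['id'],
               vif_type='ovs',
               ovs_interfaceid=port['id'],  # explicit, not inferred by virt driver
               bridge_name=port.get('bridge', 'br-int'))


def libvirt_ovs_config(vif):
    # The libvirt VIF driver reads only the generic model, so it can
    # configure OVS for any network driver that fills in 'ovs_interfaceid'.
    return {'bridge': vif['bridge_name'],
            'interfaceid': vif['ovs_interfaceid']}
```

The point of the sketch is the direction of knowledge: only the network API driver knows where 'ovs_interfaceid' comes from; the virt driver just consumes it.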

I have no real concerns here with respect to these representations within
Nova.  Since you mentioned that you view this as not being libvirt
specific, would the VIF/Network model in nova/network/model.py be extended
with additional fields not currently represented in the spec (
http://wiki.openstack.org/LibvirtVIFDrivers) for platforms like XenServer
(plug into XenServer network UUIDs), VMware (plug into a port group-id), a
Hyper-V virtual network, etc.?
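For concreteness, the extensions I have in mind might look something like this; these key names are purely illustrative guesses, none of them are in the spec today:

```python
# Hypothetical platform-specific fields the generic VIF model might grow
# if extended as suggested above; all names here are invented for
# illustration, not taken from the spec.
PLATFORM_VIF_EXAMPLES = {
    'xenserver': {'vif_type': 'xenapi_network',
                  'xenapi_network_uuid': 'net-uuid-placeholder'},
    'vmware':    {'vif_type': 'dvs_port_group',
                  'portgroup_id': 'pg-placeholder'},
    'hyperv':    {'vif_type': 'hyperv_vswitch',
                  'vswitch_name': 'vswitch-placeholder'},
}


def required_keys(platform):
    # A virt driver would validate that it received the fields its
    # platform needs before attempting to plug the VIF.
    vif = PLATFORM_VIF_EXAMPLES[platform]
    return set(vif) - {'vif_type'}
```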

> > Another interesting issue that I'd like to see discussed in the spec is
> > versioning as new network capabilities are added to a particular platform
> > like libvirt.  It seems like Quantum may need to know more about the
> > capabilities of the virt-layer itself in order to know what to pass back.
> >  An existing example of this is the fact that the libvirt XML constructs
> > for OVS are only available in libvirt 0.9.11.  If someone was using a old
> > version, presumably the port creation operation should either fall back
> to
> > using something that was available (e.g., plain bridge) or fail.
> Whether libvirt has support for this or not has no bearing on Quantum. It
> is purely an impl detail for the libvirt VIF driver - the following change
> demonstrates this
>    https://review.openstack.org/#/c/19125/

It looks like my example was not relevant given that your proposal masks
both types of OVS plugging behind a single type, but it seems like the
broader question still applies.  Surely over time nova will have to add
entirely new vif_types.  Since all Nova code for a given deployment will be
in sync, this is not a problem within Nova, but what if you deploy a newer
Quantum with an older Nova?  Let's say in "H" Nova introduces vif_type
"qbx" (presumably some "improved" form of VEPA :P) which requires a
different set of parameters to configure.  If a tenant is running an "H"
version of Quantum with a Grizzly version of Nova, how does Quantum know
that it can't specify vif_type qbx?

To me this points to an even larger question of how we handle heterogeneous
hosts.  A couple possible examples for discussion:
- Only the newer servers in your cloud have the fancy new NICs that support
the type of vif-plugging most desired by your Quantum plugin; for older
servers, the Quantum plugin needs to fall back to a legacy mechanism.
- Your identifier for indicating how a vif should be plugged (e.g., a
xenserver network UUID) is specific to each cluster of servers, and your
quantum deployment spans many clusters.  How does the quantum server know
what value to provide?
- A quantum deployment spans hypervisors of multiple types (e.g., KVM,
XenServer, and ESX) and the vif-type and values returned by the plugin need
to vary for different hypervisor platforms.

The above items are possible with the existing vif-plugging, since
different config flags can be passed to different nova-compute instances.
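For example, today you can simply put a different VIF driver in the nova.conf of each class of compute node (option names varied a bit by release; this is a Folsom-style sketch):

```ini
# nova.conf on newer hosts with OVS-capable plugging:
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchDriver

# nova.conf on legacy hosts (a different file on those nodes):
# libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtBridgeDriver
```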

It seems like in the new model, we would need Nova to pass some information
to Quantum so that Quantum can make the right decision about what vif-type
and data to send.
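To make that concrete, I'm imagining something like the following sketch; the key names ('binding:host_id', etc.) are illustrative, not a settled API:

```python
# Hypothetical sketch: Nova passes a hint at port-create/update time so the
# Quantum plugin can choose a vif_type the host actually supports.

def build_port_binding_hint(host_id, hypervisor, supported_vif_types):
    return {'port': {
        'binding:host_id': host_id,
        'binding:hypervisor': hypervisor,
        'binding:supported_vif_types': list(supported_vif_types),
    }}


def choose_vif_type(hint, plugin_preference):
    # Plugin side: pick the most-preferred vif_type the host supports,
    # falling back down the preference list for older/heterogeneous hosts.
    supported = hint['port']['binding:supported_vif_types']
    for vif_type in plugin_preference:
        if vif_type in supported:
            return vif_type
    raise ValueError('no mutually supported vif_type')
```

A Grizzly compute node would simply never advertise "qbx", so an "H" Quantum would never hand it back.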

> > There's also the opposite direction, which is how Nova knows whether the
> corresponding Quantum server is running an updated enough version to have
> > all of the key-value pairs it expects (for example, it looks like a
> recent
> > commit just extended garyk's original extension:
> >
> https://review.openstack.org/#/c/19542/3/quantum/extensions/portbindings.py
> ).
> >  In this case, Nova should probably be checking the version of this
> > extension that is running to know what k-v pairs to expect.
> The patches I've written so far will allow a Grizzly based Nova to talk
> to a Folsom based Quantum - you'll simply have to configure the old VIF
> driver classes as Nova did in Folsom. Meanwhile a Grizzly release of
> Nova will be able to talk to a Grizzly release of Quantum. What won't
> necessarily work is random development snapshots of Quantum & Nova.
> In general, for the future, Nova should treat any new fields as optional,
> so if Quantum does not provide them, Nova should fall back to some sensible
> backward-compatible behaviour.

Yeah, mandating such a defensive posture when writing the Nova side of
things is one option, and seems reasonable.  Also, since this will be an
official API extension in Quantum, it will need to be properly versioned
as it changes, so in theory Nova should know exactly what fields to expect
based on the version of the extension Quantum is running.
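Quantum's GET /v2.0/extensions already returns entries carrying an 'alias' and an 'updated' timestamp, so a crude discovery check is possible today; comparing timestamps as below is only a stand-in for real extension versioning:

```python
# Hedged sketch of extension discovery: 'extensions' is the list of dicts
# as returned by Quantum's /v2.0/extensions endpoint.  Comparing the
# 'updated' timestamps (ISO-8601 strings sort lexically) is an illustrative
# substitute for a proper per-extension version number.

def supports_extension(extensions, alias, min_updated=None):
    for ext in extensions:
        if ext.get('alias') == alias:
            return min_updated is None or ext.get('updated', '') >= min_updated
    return False
```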

One other minor comment and question:

I noticed the "filtered" value in the spec.  Its current description is a
bit of a deviation from the rest of the spec: it explicitly describes the
behavior of the Quantum plugin rather than the configuration value for the
virt layer (which I'm guessing is implicitly the inverse, such that if this
value is true, we do NOT configure filtering on the VIF).

And one last item that I was curious about: I noticed that one part of
VIF configuration not currently covered by the spec is the guest NIC
driver/model selection (e.g., in libvirt, choosing model=virtio and
specifying a <driver> element, or on ESX, choosing e1000 vs. vmxnet).  Is
your thinking that, even though this is vif-config from a compute
perspective, its value is unlikely to depend on Quantum and thus can still
just be configured as a Nova config value for the virt layer (e.g.,
'libvirt_use_virtio_for_bridges')?
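That is, the fragment in question is the model/driver part of the libvirt interface XML, which the virt layer emits independently of anything Quantum returns (illustrative fragment, values are examples only):

```xml
<!-- Illustrative libvirt domain XML: the NIC model/driver choice is made
     by the virt layer (e.g. via libvirt_use_virtio_for_bridges), not by
     the Quantum plugin. -->
<interface type='bridge'>
  <source bridge='br-int'/>
  <model type='virtio'/>
  <driver name='vhost'/>
</interface>
```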

> Regards,
> Daniel
> --
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/:|
> |: http://libvirt.org              -o-             http://virt-manager.org:|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/:|
> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc:|

Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt