[openstack-dev] [Quantum] [Nova] improving vif-plugging
Daniel P. Berrange
berrange at redhat.com
Wed Jan 16 09:48:53 UTC 2013
On Tue, Jan 15, 2013 at 10:50:55PM -0800, Dan Wendlandt wrote:
> > As an example, when configuring OpenVSwitch one of the pieces of
> > information required is an 'interfaceid'. Previously the libvirt VIF
> > driver set the interfaceid based on the vif['uuid'] field, since that
> > is what quantum expected. This is not portable though. The correct
> > approach is for nova.network.model to have an explicit 'ovs-interfaceid'
> > field which the nova.network.quantumv2.api driver sets based on the
> > vif['uuid']. The libvirt VIF driver can now configure OVS with any
> > network driver, not simply Quantum. Similarly for things like the OVS
> > bridge name, or the TAP device names, all of which previously had to be
> > hardcoded to Quantum specific data. This extends to bridging, 802.1Qbh,
> > etc, etc.
> >
>
> I have no real concerns here with respect to these representations within
> Nova. Since you mentioned that you view this as not being libvirt
> specific, would the VIF/Network model in nova/network/model.py be extended
> with additional fields not currently represented in the spec (
> http://wiki.openstack.org/LibvirtVIFDrivers) for platforms like XenServer
> (plug into XenServer network UUIDs), vmware (plug into port group-id), a
> hyper-v virtual network, etc?
Sure, if we find additional pieces of information are needed in the models,
we may have to extend them further. I'm surprised that the Quantum server actually
has knowledge of the hypervisor specific concepts you mention above though.
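To make that concrete, the rough shape I have in mind for the model is
below - the key names are purely illustrative, not a final schema:

    # Hypothetical sketch of an extended nova.network.model VIF entry.
    # Hypervisor-specific keys are optional & only filled in by the
    # network driver that actually knows the values.
    vif = {
        'id': '3cf41a99-...',               # generic VIF UUID
        'address': 'fa:16:3e:11:22:33',     # MAC address
        'type': 'ovs',                      # generic vif_type
        'ovs_interfaceid': '3cf41a99-...',  # set by quantumv2 API driver
        # a XenServer-aware network driver might instead populate e.g.
        # 'xenapi_network_uuid'
    }

The virt drivers then only ever key off the generic fields they understand.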
> > > Another interesting issue that I'd like to see discussed in the spec is
> > > versioning as new network capabilities are added to a particular platform
> > > like libvirt. It seems like Quantum may need to know more about the
> > > capabilities of the virt-layer itself in order to know what to pass back.
> > > An existing example of this is the fact that the libvirt XML constructs
> > > for OVS are only available in libvirt 0.9.11. If someone was using an
> > > old version, presumably the port creation operation should either fall
> > > back to using something that was available (e.g., plain bridge) or fail.
> >
> > Whether libvirt has support for this or not has no bearing on Quantum. It
> > is purely an impl detail for the libvirt VIF driver - the following change
> > demonstrates this
> >
> > https://review.openstack.org/#/c/19125/
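[ To illustrate the "impl detail" point, the shape of that change is
roughly as follows - the helper & constant names here are made up for
brevity, the review above has the real code:

    # The single 'ovs' vif_type hides the libvirt version dependency
    # inside the libvirt VIF driver itself.
    LIBVIRT_OVS_VPORT_VERSION = (0, 9, 11)

    def get_config(self, instance, vif):
        if vif['type'] == 'ovs':
            if self._libvirt_version() >= LIBVIRT_OVS_VPORT_VERSION:
                # native <virtualport type='openvswitch'> support
                return self._get_ovs_virtualport_config(instance, vif)
            # older libvirt: fall back to the hybrid veth + bridge scheme
            return self._get_ovs_hybrid_config(instance, vif)

Quantum never needs to know which branch was taken. ]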
>
>
> It looks like my example was not relevant given that your proposal masks
> both types of OVS plugging behind a single type, but it seems like the
> broader question still applies. Surely over time nova will have to add
> entirely new vif_types. Since all Nova code for a given deployment will be
> in sync, this is not a problem within Nova, but what if you deploy a newer
> Quantum with an older Nova? Let's say in "H" Nova introduces vif_type
> "qbx" (presumably some "improved" form of VEPA :P) which requires a
> different set of parameters to configure. If a deployment is running an
> "H" version of Quantum with a Grizzly version of Nova, how does Quantum
> know that it can't specify vif-type qbx?
I mostly think that this is a documentation / deployment issue for
admins to take care of. In the scenario you describe, if you have an
Hxxxx Quantum running with a Grizzly Nova, the admin should expect
that they won't necessarily be able to use the latest features of Quantum.
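In practice the failure mode would at least be explicit - a VIF driver
handed a vif_type it has never heard of should refuse to plug it rather
than guess. Something along these lines (a sketch only; the dispatch-table
name is hypothetical):

    # A Grizzly-era driver seeing a vif_type invented in Hxxxx raises
    # rather than guessing at semantics it does not understand.
    def plug(self, instance, vif):
        handler = self._plug_handlers.get(vif['type'])
        if handler is None:
            raise exception.NovaException(
                "Unsupported vif_type %s" % vif['type'])
        handler(instance, vif)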
>
> To me this points to an even larger question of how we handle heterogeneous
> hosts. A couple of possible examples for discussion:
> - Only the newer servers in your cloud have the fancy new NICs to support
> the type of vif-plugging most desired by your quantum plugin. For older
> servers, the quantum plugin needs to use a legacy plugging mechanism.
> - Your identifier for indicating how a vif should be plugged (e.g., a
> xenserver network UUID) is specific to each cluster of servers, and your
> quantum deployment spans many clusters. How does the quantum server know
> what value to provide?
> - A quantum deployment spans hypervisors of multiple types (e.g., kvm,
> xenserver, and esx) and the vif-type and values returned by the plugin need
> to vary for different hypervisor platforms.
>
> The above items are possible with the existing vif-plugging, since
> different config flags can be passed to different nova-compute instances.
>
> It seems like in the new model, we would need Nova to pass some information
> to Quantum so that Quantum can make the right decision about what vif-type
> and data to send.
Since the formal dependency is really Nova -> Quantum, IMHO we should
really just document that Nova must be at least as new as Quantum. Running
an Hxxxx Nova against a Grizzly Quantum should present no problems; only
the reverse has issues.
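As for Quantum giving a host-appropriate answer in the heterogeneous
cases you list, the port binding extension you mention below seems the
natural vehicle: Nova tells Quantum which host the port lands on, and the
plugin replies with data suited to that host. Roughly like this, where
the attribute names are assumptions on my part, not a settled API:

    # Illustrative request / response shape only.
    body = {'port': {'binding:host_id': 'compute-17'}}
    port = quantum.update_port(port_id, body)['port']
    # the plugin can pick vif_type per host / cluster / hypervisor
    vif_type = port.get('binding:vif_type', 'bridge')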
> > > There's also the opposite direction, which is how Nova knows whether the
> > > corresponding Quantum server is running an updated enough version to have
> > > all of the key-value pairs it expects (for example, it looks like a recent
> > > commit just extended garyk's original extension:
> > > https://review.openstack.org/#/c/19542/3/quantum/extensions/portbindings.py ).
> > > In this case, Nova should probably be checking the version of this
> > > extension that is running to know what k-v pairs to expect.
> >
> > The patches I've written so far will allow a Grizzly based Nova to talk
> > to a Folsom based Quantum - you'll simply have to configure the old VIF
> > driver classes as Nova had in Folsom. Meanwhile a Grizzly release of
> > Nova will be able to talk to a Grizzly release of Quantum. What won't
> > necessarily work is random development snapshots of Quantum & Nova.
> >
> > In general, for the future, Nova should treat any new fields as optional,
> > so if Quantum does not provide them, Nova should fallback to some sensible
> > back-compatible behaviour.
> >
>
> Yeah, mandating such a defensive posture when writing the Nova side of
> things is one option, and seems reasonable. Also, since this will be an
> official API extension in Quantum, it will need to be properly versioned
> as it changes, so in theory Nova should know exactly what fields to expect
> based on the version of the extension Quantum is running.
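Right, and "defensive" needn't be onerous. It mostly reduces to reading
any newer field with a back-compatible default, e.g. (sketch only):

    # Fields a newer Quantum may send are treated as optional, with
    # fallbacks matching the old hardcoded behaviour.
    filtered = vif.get('filtered', False)
    interfaceid = vif.get('ovs_interfaceid') or vif['id']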
>
> One other minor comment and question:
>
> I noticed the "filtered" value in the spec. The current description seems
> to be a bit of a deviation from the rest of the spec, as
> it explicitly describes the behavior of the quantum plugin, not the
> configuration value for the virt layer (which I'm guessing
> is implicitly the inverse, such that if this value is true, we do NOT
> configure filtering on the VIF).
It is a subtle distinction, but I guess I don't think of the data as solely
being "what is the required VIF configuration", but rather "what information
is required to do VIF configuration". As such, knowing whether Quantum has
applied filtering falls within scope.
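From the virt driver's POV the consumption is then simple (a sketch; the
key name follows the wiki spec, the rest is illustrative):

    # Only wire up Nova's own filtering when Quantum has not already
    # applied filtering on its side of the VIF.
    if not vif.get('filtered', False):
        self.firewall_driver.prepare_instance_filter(instance, network_info)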
> And one last item that I was curious about: I noticed that one part of
> VIF-configuration that is not part of the spec currently is information
> about the VIF-driver mechanism (e.g., in libvirt, choosing model=virtio
> and specifying a <driver> element, or on esx, choosing e1000 vs. vmxnet).
> Is your thinking on this that, even though it is vif-config from a compute
> perspective, its value is unlikely to depend on quantum and thus can still
> just be configured as a Nova config value for the virt layer (e.g.,
> 'libvirt_use_virtio_for_bridges')?
Although it is not clearly distinguished in the libvirt XML, you have to
consider the guest config as having two distinct groups of information. In
one group there is the machine hardware specification, and in the other
group there is the host resource mapping. There is no dependency between
the two groups. So the choice of NIC hardware model is completely unrelated
to the way you plug a VIF into the host network & as such it is not relevant
to Quantum.
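You can in fact see both groups side by side in a single <interface>
element (values illustrative):

    # Annotated libvirt guest XML fragment, carried in a Python string.
    GUEST_NIC_XML = """
    <interface type='bridge'>
      <model type='virtio'/>              <!-- machine hardware spec -->
      <source bridge='br-int'/>           <!-- host resource mapping -->
      <virtualport type='openvswitch'>    <!-- host resource mapping -->
        <parameters interfaceid='3cf41a99-...'/>
      </virtualport>
    </interface>
    """

Only the "host resource mapping" lines are ever Quantum's business; the
hardware model line is driven purely by what the guest needs.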
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|