[openstack-dev] [Quantum] [Nova] improving vif-plugging

Daniel P. Berrange berrange at redhat.com
Thu Jan 24 11:46:33 UTC 2013


On Wed, Jan 23, 2013 at 09:46:46PM -0800, Dan Wendlandt wrote:
> Hi Daniel,
> 
> Ok, I think we're zeroing in on the key differences in our points of view.
>  Let me know if you think it would be more efficient to discuss this via
> IRC, as email lag when we're on different sides of the globe is tough.

Yeah, it could well be. I'm 'danpb' in #openstack-dev IRC and usually
online in some fashion between 9am and 7pm GMT.

> On Wed, Jan 23, 2013 at 3:02 AM, Daniel P. Berrange <berrange at redhat.com> wrote:
> 
> > > I'm confused, as my understanding of your proposal requires Quantum know
> > > this type of information.  Perhaps I'm missing something?  From the code
> > > review, you have comments like:
> > >
> > > # TODO(berrange) temporary hack until Quantum can pass over the
> > > # name of the OVS bridge it is configured with
> > >
> > > and
> > >
> > > # TODO(berrange) Quantum should pass the bridge name
> > > # in another binding metadata field
> > >
> > > In this case, the name of the bridge is a concept specific to certain linux
> > > hypervisors. If I was using XenServer, the equivalent might be a Network
> > > UUID, or with ESX a port group uuid.
> > >
> > > My current thinking is that Quantum shouldn't have to know such information
> > > either, but based on your comments, I was assuming this was a point of
> > > disagreement. Can you clarify?
> >
> > Actually the Xen OVS VIF driver does require a bridge name. The bridge
> > name is then used by the VIF driver to lookup the Xen network UUID.
> >
> > So this is a good example of the same information being required by
> > multiple hypervisor drivers. Similarly the Xen OVS VIF driver also
> > requires the ovs interfaceid - again it currently hardcodes the
> > assumption that the interfaceid is based on the vif['id'] field.
> > So again my change to the VIF model to include an explicit ovs_interfaceid
> > parameter makes sense for Xen too.
> >
> 
> I completely agree that some fields make sense to multiple hypervisors... I
> certainly did not intend to say anything to the contrary.  The point I was
> making was that there is no single set of information that is relevant to all
> hypervisors.  Do you agree with that statement, or are you advocating that
> there is a single set of such information?
> 
> Also, I'm still trying to get confirmation to my question above, namely
> that you do intend that Quantum would provide all such data needed to plug
> a VIF, for example, providing a bridge name to a hypervisor running KVM, or
> a port-group id for a hypervisor running ESX.

In essence, yes. It is hard for me to answer your question about bridge
name vs port-group id for ESX because AFAIK there's no plugin that
exists for ESX + Quantum today - nova.virt.vmwareapi.vif certainly
doesn't appear to have any such code. I'm not overly concerned though.

I'm of the opinion that in general it should be possible to provide a
set of information that is usable by all hypervisors. It may be that
some hypervisors don't use all the pieces of information, but that's
OK, as long as it doesn't deteriorate to the point where we have
completely different data for every hypervisor. Given that the Quantum
plugins I've looked at don't change their behaviour based on the hypervisor
in use, I don't think we're going to have that problem in general.
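To make that concrete, here's a rough sketch of the kind of VIF data I
have in mind Quantum handing back (the field names here are illustrative
only, not the final model):

# Illustrative only: one common VIF description returned by Quantum,
# consumable by multiple hypervisor VIF drivers.
vif = {
    'id': 'port-uuid',
    'address': 'fa:16:3e:aa:bb:cc',
    'type': 'ovs',                    # how Quantum set the network up
    'bridge_name': 'br-int',          # used by the KVM and Xen OVS drivers
    'ovs_interfaceid': 'port-uuid',   # used wherever OVS is involved
}

A hypervisor driver that has no use for one of these fields simply
ignores it, which is fine as long as the overall set stays common.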

> If so, I do not see how the existing proposal can handle the situation
> where two different sets of hypervisors that need different information are
> deployed simultaneously.  This could happen either because the two sets of
> hypervisors identify vswitches differently (e.g., linux bridge names vs.
> port-group ids) or because deployment constraints make it impossible to use
> the same vswitch identifier across all hypervisors (e.g., a vswitch is
> identified by a uuid, but that UUID is a per-cluster ID and the deployment
> has multiple clusters).   Helping me understand how you see this working
> would help me out a lot.

The key is that in the Quantum plugin code, we're not doing any different
work based on the hypervisor in use. For example, whether using Xen or KVM,
the Linux Bridge plugin in Quantum does the same work to set up the bridge
and has the same requirements for port mapping. There's no 'if xen ... else ...'
code there.
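In other words, conceptually the plugin side looks something like this
(purely illustrative, not the real Linux Bridge plugin code):

# Purely illustrative: the plugin's answer does not depend on which
# hypervisor the compute node happens to run.
def get_vif_binding(port):
    # Same vif_type and same metadata whether the caller is Xen or KVM
    return {
        'vif_type': 'bridge',
        'bridge_name': 'brq' + port['network_id'][:11],
    }

If we ever found ourselves wanting an 'if xen ... else ...' in there,
that would be the signal that something is wrong with the model.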

> > > I think we both agree that using vif_type qbx would not work (and that this
> > > is reasonable).  That wasn't my question though.  My question was: If
> > > Quantum returns qbx, presumably the older Nova would error-out when
> > > provisioning the VM, so how does Quantum from the "H" series know that it
> > > is or is not OK to return vif-type "qbx"?   If we have to have an admin
> > > configure quantum with the set of vif_types the Nova install supports,
> > > we're back to what we wanted to avoid: having to have an admin sync config
> > > between Nova + Quantum.
> >
> > You seem to be considering the case of a new vif_type, which replaces a
> > previously used vif_type when upgrading Quantum. This is something which
> > should never be done, unless Quantum wants to mandate use of a newer Nova.
> 
> > If you want to retain full version compatibility in all directions, then
> > when upgrading Quantum, the existing information should never change,
> > unless the admin has explicitly chosen to reconfigure the plugin in some
> > way.
> >
> 
> 
> I don't think that's the use case I'm trying to describe.  If a Quantum
> plugin only supports one vif_type for a given release, it's trivial for it
> to know which one to respond with.  I'm talking about the case of a Quantum
> plugin that supports two different vif_types simultaneously, but needs to
> know information about what vif_types the corresponding nova-compute
> supports in order to know how to respond.  I believe some other Red Hat
> folks are keen on being able to realize this.

There are two possible answers, depending on how I interpret what you describe
above. If you are considering that one Quantum plugin can create multiple
networks and each network can require a different vif_type, then I don't
think there's any problem. Upon upgrading Quantum, any existing network
should retain use of its previous 'vif_type' value for
compatibility with all existing Nova deployments using it. If the
admin decides to create a new network with the new 'vif_type', then I
think it is perfectly OK for Nova to report an error if the admin tries
to start a VM on a Nova instance that doesn't support this new vif_type.

If on the other hand you are considering one Quantum plugin with one
network, where each port on the network can have a different vif-type
then we have a more complicated issue. In such a case, I think we would
have to make sure that when Nova issues the "create port" API call to
Quantum, it passes across a list of all the vif_types it is able to
handle.

Maybe we should just make Nova pass across its list of supported
vif types during 'create port' regardless of whether we need it
now and be done with it.
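Something along these lines, say (both 'binding:*' keys are hypothetical
names just to illustrate the idea, and the client call is simplified):

# Sketch only: Nova advertising the vif_types it can handle when it
# creates the port, so Quantum can pick a per-port vif_type the caller
# actually supports. 'quantum' is assumed to be a python-quantumclient
# Client instance; network_id and instance_uuid come from the caller.
body = {
    'port': {
        'network_id': network_id,
        'device_id': instance_uuid,
        'binding:vif_types_supported': ['bridge', 'ovs'],
    }
}
port = quantum.create_port(body)
# Quantum answers with whichever vif_type it actually chose for the port
vif_type = port['port']['binding:vif_type']

That way there's never a need for the admin to manually keep vif_type
configuration in sync between Nova and Quantum.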


> > > This isn't just about old vs. new versions.  See the example above about a
> > > deployment that spans multiple cluster for a hypervisor like XenServer or
> > > ESX, thereby requiring that a different identifier is passed back based on
> > > which cluster.  Or an even more extreme example (but one I've already seen
> > > in production with Quantum) is that there are multiple hypervisor types, so
> > > even the vif_type that would need to be passed back may well be different
> > > on different hypervisors.
> >
> > I don't think the vif_type should ever need to be different for different
> > hypervisors. The vif_type is describing how Quantum has set up the network.
> > Based off this, the hypervisor VIF drivers then decide how to configure
> > the hypervisor. You seem to be thinking of the vif_type as a description
> > of how the hypervisors must configure the network, which is backwards to
> > how it should be.
> >
> 
> Yes, I think this is where a lot of the difference of opinion lies.  To me
> the key question is whether you would use the same vif_type to plug a VM
> into an OVS bridge and into an ESX port group (assuming you had a quantum
> plugin that could manage both OVS and the VMware vswitch)?  And even if you
> did, I feel that the identifiers that quantum would need to return would be
> different.  Or are you suggesting that something like a port-group would be
> modeled as a "bridge"?  It seems pretty clear from this commit (
> https://review.openstack.org/#/c/19117/5/nova/network/model.py) that bridge
> means a linux bridge, as there's a variable that describes the length of
> the bridge name as the length of a linux device name.
> 
> I think if we can get to the bottom of this use case, we'll be well on our
> way to being on the same page.

Ok, so you're describing a Quantum plugin that has two completely separate
modes of operation, depending on the hypervisor in use.  For that to work,
Nova is going to have to be telling Quantum what hypervisor it needs the
vport created for, and Quantum will do different work based on that. The
'vif_type' reflects the type of setup that Quantum did, so if the Quantum
plugin has two different setups it can do, this implies two different
possible 'vif_type' values to be returned - one for each type of setup -
and each vif type will have its own appropriate data associated with it.
Yes, this leads to more possible 'vif_type' definitions, but I think
that is OK, as long as we consider their design carefully. The preference
should be to minimise the number of vif types and only introduce new ones
if there truly is a significant difference in needs. In other words we
want the same vif type to apply across multiple hypervisors wherever
it is reasonable to do so, and not needlessly create hypervisor specific
vif types.
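On the Nova side, the generic VIF driver then just dispatches on whatever
vif_type comes back, roughly like this (a simplified sketch of the shape
of the idea, not the exact code in the review Dan linked above):

# Simplified sketch of vif_type based dispatch in a generic VIF driver;
# the per-type plug helpers are elided here.
class GenericVIFDriver(object):

    def plug(self, instance, vif):
        vif_type = vif['type']
        if vif_type == 'bridge':
            self.plug_bridge(instance, vif)
        elif vif_type == 'ovs':
            self.plug_ovs(instance, vif)
        else:
            raise Exception("Unexpected vif_type=%s" % vif_type)

Adding support for a new vif_type is then just a matter of adding a new
branch and the corresponding plug/unplug helpers in the drivers that
care about it.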

Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


