[openstack-dev] [Quantum] [Nova] improving vif-plugging

Dan Wendlandt dan at nicira.com
Thu Jan 24 18:58:43 UTC 2013


Hi Daniel,

I actually think (hope?) that we're converging here.  I'll try one more
response below, as I think the context of the thread is helpful, but if you
see this before you leave today and want to chat directly, you can ping me
on IRC (I'm danwent).

On Thu, Jan 24, 2013 at 3:46 AM, Daniel P. Berrange <berrange at redhat.com> wrote:

> On Wed, Jan 23, 2013 at 09:46:46PM -0800, Dan Wendlandt wrote:
> > Hi Daniel,
> >
> > Ok, I think we're zeroing in on the key differences in our points of view.
> >  Let me know if you think it would be more efficient to discuss this via
> > IRC, as email lag when we're on different sides of the globe is tough.
>
> Yeah, it could well be. I'm 'danpb' in #openstack-dev IRC and usually
> online in some fashion between 9am and 7pm GMT.
>
> > On Wed, Jan 23, 2013 at 3:02 AM, Daniel P. Berrange <berrange at redhat.com> wrote:
> >
> > > > I'm confused, as my understanding of your proposal requires that Quantum
> > > > know this type of information.  Perhaps I'm missing something?  From the
> > > > code review, you have comments like:
> > > >
> > > > # TODO(berrange) temporary hack until Quantum can pass over the
> > > > # name of the OVS bridge it is configured with
> > > >
> > > > and
> > > >
> > > > # TODO(berrange) Quantum should pass the bridge name
> > > > # in another binding metadata field
> > > >
> > > > In this case, the name of the bridge is a concept specific to certain
> > > > Linux hypervisors. If I were using XenServer, the equivalent might be a
> > > > network UUID, or with ESX a port group UUID.
> > > >
> > > > My current thinking is that Quantum shouldn't have to know such
> > > > information either, but based on your comments, I was assuming this was
> > > > a point of disagreement. Can you clarify?
> > >
> > > Actually the Xen OVS VIF driver does require a bridge name. The bridge
> > > name is then used by the VIF driver to look up the Xen network UUID.
> > >
> > > So this is a good example of the same information being required by
> > > multiple hypervisor drivers. Similarly the Xen OVS VIF driver also
> > > requires the ovs interfaceid - again it currently hardcodes the
> > > assumption that the interfaceid is based on the vif['id'] field.
> > > So again my change to the VIF model to include an explicit ovs_interfaceid
> > > parameter makes sense for Xen too.
> > >
> >
> > I completely agree that some fields make sense to multiple hypervisors... I
> > certainly did not intend to say anything to the contrary.  The point I was
> > making was that there is no single set of information that is relevant to all
> > hypervisors.  Do you agree with that statement, or are you advocating that
> > there is a single set of such information?
> >
> > Also, I'm still trying to get confirmation of my question above, namely
> > that you do intend that Quantum would provide all such data needed to plug
> > a VIF, for example, providing a bridge name to a hypervisor running KVM, or
> > a port-group id for a hypervisor running ESX.
>
> In essence yes. It is hard for me to answer your question about bridge
> name vs port-group id for ESX because AFAICK there's no plugin that
> exists for ESX + Quantum today - nova.virt.vmwareapi.vif certainly
> doesn't appear to have any such code. I'm not overly concerned though.
>

I agree that if you look at the simple Linux bridge or OVS plugins, they
follow a very basic model where a vif_type and even bridge name would be
uniform for an all-KVM deployment.

But, for example, the NVP plugin can control KVM, XenServer, and soon ESX
(waiting on a code change to add some more logic to ESX vif-plugging, which
is one of the reasons I'm mentioning it as a specific example).  With KVM
vs. ESX, the data returned is different in kind (i.e., one is a Linux
bridge name, the other is a port-group).  And with KVM and XenServer, even
though they are the same in kind (both bridge names), they are very likely
to be different in form, since XenServer generates bridge names using its
own standard format (e.g., xapi0 or xenbr1).  Below you propose something
that, with a very minor tweak, would solve this concern, I believe.
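
To make that concrete, here is a rough sketch of the kind of per-node data
a plugin like NVP would need to hand back (purely illustrative; the field
names, vif_type values, and node identifiers below are hypothetical, not an
actual binding schema in either project):

# Illustrative only: hypothetical per-node binding data a plugin like NVP
# might need to return; all names and values are made up for this example.
vif_binding_examples = {
    # KVM node: the relevant identifier is an OVS/Linux bridge name.
    'kvm-node-1': {'vif_type': 'ovs', 'bridge': 'br-int'},
    # XenServer node: the same kind of data (a bridge name), but XenServer
    # generates the name itself (e.g., xapi0 or xenbr1), so the form differs.
    'xenserver-node-1': {'vif_type': 'ovs', 'bridge': 'xapi0'},
    # ESX cluster: a different kind of identifier entirely (a port-group),
    # and potentially a different value per cluster.
    'esx-cluster-a': {'vif_type': 'vmware_dvs', 'port_group': 'pg-cluster-a'},
}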


>
> I'm of the opinion that in general it should be possible to provide a
> set of information that is usable by all hypervisors. It may be that
> some hypervisors don't use all the pieces of information, but that's
> OK, as long as it doesn't deteriorate to the point where we have
> completely different data for every hypervisor. Given that the Quantum
> plugins I've looked at don't change their behaviour based on the hypervisor
> in use, I don't think we're going to have that problem in general.
>
> > If so, I do not see how the existing proposal can handle the situation
> > where two different sets of hypervisors that need different information are
> > deployed simultaneously.  This could happen either because the two sets of
> > hypervisors identify vswitches differently (e.g., Linux bridge names vs.
> > port-group ids) or because deployment constraints make it impossible to use
> > the same vswitch identifier across all hypervisors (e.g., a vswitch is
> > identified by a UUID, but that UUID is a per-cluster ID and the deployment
> > has multiple clusters).  Helping me understand how you see this working
> > would help me out a lot.
>
> The key is that in the Quantum plugin code, we're not doing any different
> work based on the hypervisor in use. E.g., whether using Xen or KVM, the
> Linux Bridge plugin in Quantum is doing the same work to set up the bridge
> and has the same requirements for port mapping. There's no "if xen ... else ..."
> code there.
>

I think this is the key point that we need to agree on.  This is certainly
true for very simple plugins, but not for more advanced ones (e.g., NVP
today) or what others are trying to build (e.g., Bob's efforts for a modular
L2 plugin).  I think the proposal below can handle this, though.



>
> > > > I think we both agree that using vif_type qbx would not work (and that
> > > > this is reasonable).  That wasn't my question though.  My question was: If
> > > > Quantum returns qbx, presumably the older Nova would error-out when
> > > > provisioning the VM, so how does Quantum from the "H" series know that it
> > > > is or is not OK to return vif_type "qbx"?  If we have to have an admin
> > > > configure Quantum with the set of vif_types the Nova install supports,
> > > > we're back to what we wanted to avoid: having to have an admin sync config
> > > > between Nova + Quantum.
> > >
> > > You seem to be considering the case of a new vif_type, which replaces a
> > > previously used vif_type when upgrading Quantum. This is something which
> > > should never be done, unless Quantum wants to mandate use of a newer Nova.
> > >
> > > If you want to retain full version compatibility in all directions, then
> > > when upgrading Quantum, the existing information should never change,
> > > unless the admin has explicitly chosen to reconfigure the plugin in some
> > > way.
> > >
> >
> >
> > I don't think that's the use case I'm trying to describe.  If a Quantum
> > plugin only supports one vif_type for a given release, it's trivial for it
> > to know which one to respond with.  I'm talking about the case of a Quantum
> > plugin that supports two different vif_types simultaneously, but needs to
> > know information about what vif_types the corresponding nova-compute
> > supports in order to know how to respond.  I believe some other Red Hat
> > folks are keen on being able to realize this.
>
> There are a couple of possible answers, depending on how I interpret what
> you describe above. If you're considering that one Quantum plugin can create
> multiple networks and each network can require a different vif_type, then I
> don't think there's any problem. Upon upgrading Quantum, any existing network
> should retain use of its previous 'vif_type' value for compatibility with
> all existing Nova deployments using it. If the admin decides to create a new
> network with the new 'vif_type', then I think it is perfectly OK for Nova to
> report an error if the admin tries to start a VM on a Nova instance that
> doesn't support this new vif_type.
>
> If, on the other hand, you are considering one Quantum plugin with one
> network, where each port on the network can have a different vif_type,
> then we have a more complicated issue. In such a case, I think we would
> have to make sure that when Nova issues the "create port" API call to
> Quantum, it passes across a list of all the vif_types it is able to
> handle.
>
> Maybe we should just make Nova pass across its list of supported
> vif types during 'create port', regardless of whether we need it
> now, and be done with it.
>

Yes, this is what I was thinking as well.  Somewhat of a "negotiation",
where Nova sends a certain amount of information over (e.g., its supported
vif_types, its node-id) and then Quantum can determine what vif_type to
respond with.  I'm thinking that node-id may be needed to handle the case
where Quantum needs to respond with different data even for the same
vif_type (e.g., two different ESX clusters that have different
port-group-ids).

This adds some more complexity to Quantum, as the centralized
quantum-server must know the mapping from a node-id to bridge-id +
vif_type, but some part of the Quantum plugin must know this information
already (e.g., an agent), so it would really just be a matter of shifting
bits around within Quantum, which seems reasonable given time to implement
this.
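
To sketch what that exchange might look like (hypothetical payload shapes
only, loosely modeled on a port-binding style extension; none of these
attribute names are the real Quantum API):

# Hypothetical request/response for the "negotiation": Nova advertises its
# node-id and the vif_types it can handle; Quantum picks one and returns the
# node-specific data that goes with it. Names here are illustrative only.
create_port_request = {
    'port': {
        'network_id': 'net-uuid',
        'binding:host_id': 'compute-42',           # node-id Nova reports
        'binding:vif_types': ['ovs', 'bridge'],    # vif_types this Nova supports
    }
}

# quantum-server (via whatever its agent on compute-42 has reported) knows
# which vswitch exists on that node, so it selects one of the advertised
# vif_types and returns the matching per-node data.
create_port_response = {
    'port': {
        'id': 'port-uuid',
        'binding:vif_type': 'ovs',
        'binding:vif_details': {'bridge': 'br-int',
                                'ovs_interfaceid': 'port-uuid'},
    }
}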

Dan



>
>
> > > > This isn't just about old vs. new versions.  See the example above about
> > > > a deployment that spans multiple clusters for a hypervisor like XenServer
> > > > or ESX, thereby requiring that a different identifier is passed back
> > > > based on which cluster.  Or an even more extreme example (but one I've
> > > > already seen in production with Quantum) is that there are multiple
> > > > hypervisor types, so even the vif_type that would need to be passed back
> > > > may well be different on different hypervisors.
> > >
> > > I don't think the vif_type should ever need to be different for different
> > > hypervisors. The vif_type describes how Quantum has set up the network.
> > > Based on this, the hypervisor VIF drivers then decide how to configure
> > > the hypervisor. You seem to be thinking of the vif_type as a description
> > > of how the hypervisors must configure the network, which is backwards to
> > > how it should be.
> > >
> >
> > Yes, I think this is where a lot of the difference of opinion lies.  To me
> > the key question is whether you would use the same vif_type to plug a VM
> > into an OVS bridge and into an ESX port group (assuming you had a Quantum
> > plugin that could manage both OVS and the VMware vswitch)?  And even if you
> > did, I feel that the identifiers that Quantum would need to return would be
> > different.  Or are you suggesting that something like a port-group would be
> > modeled as a "bridge"?  It seems pretty clear from this commit
> > (https://review.openstack.org/#/c/19117/5/nova/network/model.py) that bridge
> > means a Linux bridge, as there's a variable that describes the length of
> > the bridge name as the length of a Linux device name.
> >
> > I think if we can get to the bottom of this use case, we'll be well on our
> > way to being on the same page.
>
> Ok, so you're describing a Quantum plugin that has two completely separate
> modes of operation, depending on the hypervisor in use.  For that to work,
> Nova is going to have to be telling Quantum what hypervisor it needs the
> vport created for, and Quantum will do different work based on that. The
> 'vif_type' reflects the type of setup that Quantum did, so if the Quantum
> plugin has two different setups it can do, this implies two different
> possible 'vif_type' values to be returned - one for each type of setup -
> and each vif type will have its own appropriate data associated with it.
> Yes, this leads to more possible 'vif_type' definitions, but I think
> that is OK, as long as we consider their design carefully. The preference
> should be to minimise the number of vif types and only introduce new ones
> if there truly is a significant difference in needs. In other words, we
> want the same vif type to apply across multiple hypervisors wherever
> it is reasonable to do so, and not needlessly create hypervisor-specific
> vif types.
>
> Daniel
> --
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
>



-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~