[openstack-dev] [Quantum] [Nova] improving vif-plugging
Dan Wendlandt
dan at nicira.com
Thu Jan 24 05:46:46 UTC 2013
Hi Daniel,
Ok, I think we're zeroing in on the key differences in our points of view.
Let me know if you think it would be more efficient to discuss this via
IRC, as email lag when we're on different sides of the globe is tough.
More comments inline.
On Wed, Jan 23, 2013 at 3:02 AM, Daniel P. Berrange <berrange at redhat.com> wrote:
> On Tue, Jan 22, 2013 at 11:41:55PM -0800, Dan Wendlandt wrote:
> > Hi Daniel,
> >
> > More comments inline.
> >
> > On Wed, Jan 16, 2013 at 1:48 AM, Daniel P. Berrange <berrange at redhat.com> wrote:
> >
> > > On Tue, Jan 15, 2013 at 10:50:55PM -0800, Dan Wendlandt wrote:
> > > > > As an example, when configuring OpenVSwitch one of the pieces
> > > > > of information required is an 'interfaceid'. Previously the
> > > > > libvirt VIF driver set the interfaceid based on the vif['uuid']
> > > > > field, since that is what quantum expected. This is not portable
> > > > > though. The correct approach is for nova.network.model to have an
> > > > > explicit 'ovs-interfaceid' field and the nova.network.quantumv2.api
> > > > > driver sets this based on the vif['uuid']. The libvirt VIF driver
> > > > > can now configure OVS with any network driver, not simply Quantum.
> > > > > Similarly for things like the OVS bridge name, or the TAP device
> > > > > names, all of which previously had to be hardcoded to Quantum
> > > > > specific data. This extends to bridging, 802.1qbh, etc, etc.
> > > > >
> > > >
> > > > I have no real concerns here with respect to these representations
> > > > within Nova. Since you mentioned that you view this as not being
> > > > libvirt specific, would the VIF/Network model in nova/network/model.py
> > > > be extended with additional fields not currently represented in the
> > > > spec (http://wiki.openstack.org/LibvirtVIFDrivers) for platforms like
> > > > XenServer (plug into XenServer network UUIDs), vmware (plug into port
> > > > group-id), a hyper-v virtual network, etc?
> > >
> > > Sure, if we find additional pieces of information are needed in the
> > > models, we may have to extend them further. I'm surprised that the
> > > Quantum server actually has knowledge of the hypervisor specific
> > > concepts you mention above though.
> > >
> >
> > I'm confused, as my understanding of your proposal requires that
> > Quantum know this type of information. Perhaps I'm missing something?
> > From the code review, you have comments like:
> >
> > # TODO(berrange) temporary hack until Quantum can pass over the
> > # name of the OVS bridge it is configured with
> >
> > and
> >
> > # TODO(berrange) Quantum should pass the bridge name
> > # in another binding metadata field
> >
> > In this case, the name of the bridge is a concept specific to certain
> > Linux hypervisors. If I were using XenServer, the equivalent might be a
> > network UUID, or with ESX a port group UUID.
> >
> > My current thinking is that Quantum shouldn't have to know such
> > information either, but based on your comments, I was assuming this was
> > a point of disagreement. Can you clarify?
>
> Actually the Xen OVS VIF driver does require a bridge name. The bridge
> name is then used by the VIF driver to look up the Xen network UUID.
> So this is a good example of the same information being required by
> multiple hypervisor drivers. Similarly the Xen OVS VIF driver also
> requires the ovs interfaceid - again it currently hardcodes the
> assumption that the interfaceid is based on the vif['id'] field.
> So again my change to the VIF model to include an explicit ovs_interfaceid
> parameter makes sense for Xen too.
>
I completely agree that some fields make sense to multiple hypervisors... I
certainly did not intend to say anything to the contrary. The point I was
making was that there is no single set of information that is relevant to all
hypervisors. Do you agree with that statement, or are you advocating that
there is a single set of such information?
Also, I'm still trying to get confirmation on my question above, namely
that you do intend that Quantum would provide all such data needed to plug
a VIF, for example, providing a bridge name to a hypervisor running KVM, or
a port-group id for a hypervisor running ESX.
If so, I do not see how the existing proposal can handle the situation
where two different sets of hypervisors that need different information are
deployed simultaneously. This could happen either because the two sets of
hypervisors identify vswitches differently (e.g., linux bridge names vs.
port-group ids) or because deployment constraints make it impossible to use
the same vswitch identifier across all hypervisors (e.g., a vswitch is
identified by a UUID, but that UUID is a per-cluster ID and the deployment
has multiple clusters). Helping me understand how you see this working
would help me out a lot.
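To make that concrete, here is a rough sketch of the kind of per-hypervisor
data I have in mind (all field names below are invented for illustration,
not taken from model.py or from the review):

    # What a KVM/OVS compute node would need Quantum to hand back:
    vif_kvm = {
        'id': 'port-uuid',
        'type': 'ovs',
        'ovs_interfaceid': 'port-uuid',
        'bridge_name': 'br-int',           # a local Linux/OVS device name
    }

    # What an ESX compute node managed by the same plugin would need instead:
    vif_esx = {
        'id': 'port-uuid',
        'type': 'dvs',                      # made-up vif_type
        'portgroup_id': 'dvportgroup-17',   # a per-cluster identifier
    }

If Quantum has to choose one of these shapes without knowing which kind of
compute node will actually consume the port, I don't see how it can pick
correctly.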
>
> > > > Surely over time nova will have to add entirely new vif_types.
> > > > Since all Nova code for a given deployment will be in sync, this is
> > > > not a problem within Nova, but what if you deploy a newer Quantum
> > > > with an older Nova? Let's say in "H" Nova introduces vif_type "qbx"
> > > > (presumably some "improved" form of VEPA :P) which requires a
> > > > different set of parameters to configure. If a tenant is running an
> > > > "H" version of Quantum, with a Grizzly version of Nova, how does it
> > > > know that it can't specify vif-type qbx?
> > >
> > > I mostly think that this is a documentation / deployment issue for
> > > admins to take care of. In the scenario you describe, if you have an
> > > Hxxxx Quantum running with a Grizzly Nova, the admin should expect
> > > that they won't necessarily be able to use the latest pieces of Quantum.
> > >
> >
> > I think we both agree that using vif_type qbx would not work (and that
> > this is reasonable). That wasn't my question though. My question was: if
> > Quantum returns qbx, presumably the older Nova would error-out when
> > provisioning the VM, so how does Quantum from the "H" series know that it
> > is or is not OK to return vif-type "qbx"? If we have to have an admin
> > configure quantum with the set of vif_types the Nova install supports,
> > we're back to what we wanted to avoid: having an admin sync config
> > between Nova + Quantum.
>
> You seem to be considering the case of a new vif_type, which replaces a
> previously used vif_type when upgrading Quantum. This is something which
> should never be done, unless Quantum wants to mandate use of a newer Nova.
> If you want to retain full version compatibility in all directions, then
> when upgrading Quantum, the existing information should never change,
> unless the admin has explicitly chosen to reconfigure the plugin in some
> way.
>
I don't think that's the use case I'm trying to describe. If a Quantum
plugin only supports one vif_type for a given release, it's trivial for it
to know which one to respond with. I'm talking about the case of a Quantum
plugin that supports two different vif_types simultaneously, but needs to
know information about what vif_types the corresponding nova-compute
supports in order to know how to respond. I believe some other Red Hat
folks are keen on being able to realize this.
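To illustrate, the plugin-side logic would have to look something like the
following pseudo-code (nothing here exists in Quantum today; the names are
mine, purely to show where the missing piece of information sits):

    # vif_types this hypothetical plugin can set up, in order of preference
    PLUGIN_VIF_TYPES = ['qbx', 'ovs']

    def choose_vif_type(nova_supported_types):
        # nova_supported_types is exactly the piece of data the current
        # proposal never passes from nova-compute back to Quantum
        for vif_type in PLUGIN_VIF_TYPES:
            if vif_type in nova_supported_types:
                return vif_type
        raise ValueError("no vif_type supported by both Nova and the plugin")

Without something like nova_supported_types being communicated, the plugin
is left guessing.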
>
> What is more likely is that some brand new Quantum plugin is invented which
> also invents a new vif_type. This doesn't really require any special config
> handling. It is simply a documentation task to say that if you want to use
> the new plugin, you also need to use a new Nova.
>
Yeah, this is the easy case. I have no concerns there.
> > > > The above items are possible with the existing vif-plugging, since
> > > > different config flags can be passed to different nova-compute
> > > > instances.
> > > >
> > > > It seems like in the new model, we would need Nova to pass some
> > > > information to Quantum so that Quantum can make the right decision
> > > > about what vif-type and data to send.
> > >
> > > Since the formal dependency is really Nova -> Quantum, IMHO we should
> > > really just document that Nova must be at least as new as Quantum.
> > > Running an Hxxxx Nova against a Grizzly Quantum should present no
> > > problems, only the reverse has issues.
> > >
> >
> > This isn't just about old vs. new versions. See the example above about
> > a deployment that spans multiple clusters for a hypervisor like XenServer
> > or ESX, thereby requiring that a different identifier is passed back
> > based on which cluster. Or an even more extreme example (but one I've
> > already seen in production with Quantum) is that there are multiple
> > hypervisor types, so even the vif_type that would need to be passed back
> > may well be different on different hypervisors.
>
> I don't think the vif_type should ever need to be different for different
> hypervisors. The vif_type is describing how Quantum has setup the network.
> Based off this, the hypervisor VIF drivers then decide how to configure
> the hypervisor. You seem to be thinking of the vif_type as a description
> of how the hypervisors must configure the network, which is backwards to
> how it should be.
>
Yes, I think this is where a lot of the difference of opinion lies. To me
the key question is whether you would use the same vif_type to plug a VM
into an OVS bridge and into an ESX port group (assuming you had a quantum
plugin that could manage both OVS and the VMware vswitch)? And even if you
did, I feel that the identifiers that quantum would need to return would be
different. Or are you suggesting that something like a port-group would be
modeled as a "bridge"? It seems pretty clear from this commit (
https://review.openstack.org/#/c/19117/5/nova/network/model.py) that bridge
means a linux bridge, as there's a variable that describes the length of
the bridge name as the length of a linux device name.
I think if we can get to the bottom of this use case, we'll be well on our
way to being on the same page.
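Just to show how I'm currently reading the proposal, here's my own sketch of
the dispatch (not code from the review; the field names just mirror the ones
discussed above):

    def get_vif_config(vif):
        # Hypothetical generic VIF driver dispatching on vif_type
        if vif['type'] == 'ovs':
            # bridge_name / ovs_interfaceid are local device identifiers
            # that fit within the Linux device-name length limit
            return {'bridge': vif['bridge_name'],
                    'interfaceid': vif['ovs_interfaceid']}
        if vif['type'] == 'bridge':
            return {'bridge': vif['bridge_name']}
        # ...but what would an ESX port-group map to here? It isn't a Linux
        # device name, so reusing 'bridge' for it seems like a stretch.
        raise NotImplementedError(vif['type'])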
>
> > > It is a subtle distinction, but I guess I don't think of the data as
> > > solely being "what is the required VIF configuration", but rather
> > > "what information is required to do VIF configuration". As such,
> > > knowing whether Quantum has applied filtering falls within scope.
> > >
> >
> > Yes, but 'filtering' as a term seems very broad. Here's a made-up
> > example: what if the quantum plugin performs L3/L4 filtering (e.g.,
> > security groups) but does not have the ability to prevent mac/arp
> > spoofing, and instead wants the virt layer to handle that? To me, having
> > values that explicitly state what Quantum wants the virt layer to do
> > would be more clear and less likely to run into problems in the future.
> > I'm guessing the current definition is meant to map closely to whether
> > to push down any <filterref> arguments to libvirt? I'm just trying to
> > think about how the concept would apply to other platforms.
>
> Given the example above, we should not return a 'filtered: true/false'
> value, but rather return more fine-grained data: l3_filter: true/false,
> l4_filter: true/false, etc, etc. Nova can then decide what additional
> filtering is required, if any.
>
> The key point is that we should not be expecting the administrator to try
> to figure out what type of filtering a particular quantum plugin has
> applied & then configure nova manually. This implies the admin has to have
> far too much detailed knowledge of Quantum implementation details, which
> is just not acceptable. Quantum needs to inform Nova so it can do the
> right thing.
>
I completely agree :)
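For what it's worth, on the Nova side I'd picture it looking roughly like
this (a sketch only; the flag names are invented, not from the proposal):

    def extra_filtering_needed(vif):
        # Decide what filtering Nova still has to apply itself, based on
        # fine-grained hints Quantum would return for the port
        todo = []
        if not vif.get('l3_l4_filtered', False):
            todo.append('security-groups')    # Nova applies L3/L4 rules
        if not vif.get('mac_arp_spoof_filtered', False):
            todo.append('spoof-protection')   # Nova applies mac/arp filters
        return todo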
Dan
>
>
> Daniel
> --
> |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/:|
> |: http://libvirt.org -o- http://virt-manager.org:|
> |: http://autobuild.org -o- http://search.cpan.org/~danberr/:|
> |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc:|
>
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~