[openstack-dev] [Quantum] [Nova] improving vif-plugging

Dan Wendlandt dan at nicira.com
Wed Jan 23 07:41:55 UTC 2013


Hi Daniel,

More comments inline.

On Wed, Jan 16, 2013 at 1:48 AM, Daniel P. Berrange <berrange at redhat.com> wrote:

> On Tue, Jan 15, 2013 at 10:50:55PM -0800, Dan Wendlandt wrote:
> > > As an example, when configuring OpenVSwitch one of the pieces of
> > > information required is an 'interfaceid'. Previously the libvirt
> > > VIF driver set the interfaceid based on the vif['uuid'] field,
> > > since that is what quantum expected. This is not portable though.
> > > The correct approach is for nova.network.model to have an explicit
> > > 'ovs-interfaceid' field and the nova.network.quantumv2.api driver
> > > sets this based on the vif['uuid']. The libvirt VIF driver can now
> > > configure OVS with any network driver, not simply Quantum.
> > > Similarly for things like the OVS bridge name, or the TAP device
> > > names, all of which previously had to be hardcoded to
> > > Quantum-specific data. This extends to bridging, 802.1Qbh,
> > > etc., etc.
> > >
> >
> > I have no real concerns here with respect to these representations
> > within Nova.  Since you mentioned that you view this as not being
> > libvirt specific, would the VIF/Network model in
> > nova/network/model.py be extended with additional fields not
> > currently represented in the spec
> > (http://wiki.openstack.org/LibvirtVIFDrivers) for platforms like
> > XenServer (plug into XenServer network UUIDs), VMware (plug into a
> > port group id), a Hyper-V virtual network, etc.?
>
> Sure, if we find additional pieces of information are needed in the
> models, we may have to extend them further. I'm surprised that the
> Quantum server actually has knowledge of the hypervisor-specific
> concepts you mention above, though.
>

I'm confused, as my understanding of your proposal requires that
Quantum know this type of information.  Perhaps I'm missing something?
From the code review, you have comments like:

# TODO(berrange) temporary hack until Quantum can pass over the
# name of the OVS bridge it is configured with

and

# TODO(berrange) Quantum should pass the bridge name
# in another binding metadata field

In this case, the name of the bridge is a concept specific to certain
Linux hypervisors. If I were using XenServer, the equivalent might be a
network UUID, or with ESX a port group UUID.
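
To illustrate what that would mean, here is a made-up sketch of the
per-hypervisor binding metadata Quantum would end up having to return;
none of these field names come from the actual patch or API:

  # Made-up sketch; field names and values are purely illustrative.
  vif_on_kvm_with_ovs = {
      'type': 'ovs',
      'details': {
          'ovs-interfaceid': '4c7fa8e1-...',  # today derived from vif['uuid']
          'ovs-bridge': 'br-int',             # the bridge name from the TODOs
      },
  }

  vif_on_xenserver = {
      'type': 'xenserver',
      'details': {
          'network-uuid': 'f0e98a22-...',     # XenServer network UUID
      },
  }

  vif_on_esx = {
      'type': 'vmware',
      'details': {
          'portgroup-id': 'dvportgroup-42',   # ESX port group identifier
      },
  }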

My current thinking is that Quantum shouldn't have to know such information
either, but based on your comments, I was assuming this was a point of
disagreement. Can you clarify?



> > Surely over time nova will have to add entirely new vif_types.
> > Since all Nova code for a given deployment will be in sync, this is
> > not a problem within Nova, but what if you deploy a newer Quantum
> > with an older Nova?  Let's say in "H" Nova introduces vif_type "qbx"
> > (presumably some "improved" form of VEPA :P) which requires a
> > different set of parameters to configure.  If a tenant is running an
> > "H" version of Quantum, with a Grizzly version of Nova, how does it
> > know that it can't specify vif-type qbx?
>
> I mostly think that this is a documentation / deployment issue for
> admins to take care of. In the scenario you describe, if you have an
> Hxxxx Quantum running with a Grizzly Nova, the admin should expect
> that they won't necessarily be able to use the latest pieces of
> Quantum.
>

I think we both agree that using vif_type qbx would not work (and that
this is reasonable).  That wasn't my question, though.  My question
was: if Quantum returns qbx, presumably the older Nova would error out
when provisioning the VM, so how does Quantum from the "H" series know
whether it is OK to return vif-type "qbx"?  If we have to have an admin
configure Quantum with the set of vif_types the Nova install supports,
we're back to what we wanted to avoid: having an admin sync config
between Nova and Quantum.
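
To make the question concrete, here is a purely hypothetical sketch of
the kind of capability negotiation that would be needed (the
'binding:client_vif_types' attribute does not exist in either project
today):

  # Hypothetical: Nova advertises the vif_types it can plug when it
  # creates the port, so Quantum can pick a compatible one or fail fast.
  GRIZZLY_NOVA_VIF_TYPES = ['bridge', 'ovs', '802.1qbh']

  def create_port(quantum_client, network_id, device_id):
      body = {
          'port': {
              'network_id': network_id,
              'device_id': device_id,
              # made-up attribute, named here only for illustration
              'binding:client_vif_types': GRIZZLY_NOVA_VIF_TYPES,
          }
      }
      # An "H" Quantum seeing this list would know not to return "qbx".
      return quantum_client.create_port(body)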


>
>
> >
> > To me this points to an even larger question of how we handle
> > heterogeneous hosts.  A couple of possible examples for discussion:
> > - Only the newer servers in your cloud have the fancy new NICs to
> > support the type of vif-plugging most desired by your quantum
> > plugin.  For older servers, the quantum plugin needs to use a legacy
> > platform.
> > - Your identifier for indicating how a vif should be plugged (e.g.,
> > a xenserver network UUID) is specific to each cluster of servers,
> > and your quantum deployment spans many clusters.  How does the
> > quantum server know what value to provide?
> > - A quantum deployment spans hypervisors of multiple types (e.g.,
> > kvm, xenserver, and esx) and the vif-type and values returned by the
> > plugin need to vary for different hypervisor platforms.
> >
> > The above items are possible with the existing vif-plugging, since
> > different config flags can be passed to different nova-compute
> > instances.
> >
> > It seems like in the new model, we would need Nova to pass some
> > information to Quantum so that Quantum can make the right decision
> > about what vif-type and data to send.
>
> Since the formal dependency is really Nova -> Quantum, IMHO we should
> really just document that Nova must be at least as new as Quantum.
> Running an Hxxxx Nova against a Grizzly Quantum should present no
> problems; only the reverse has issues.
>

This isn't just about old vs. new versions.  See the example above
about a deployment that spans multiple clusters for a hypervisor like
XenServer or ESX, thereby requiring that a different identifier be
passed back depending on the cluster.  Or an even more extreme example
(but one I've already seen in production with Quantum) is that there
are multiple hypervisor types, so even the vif_type that would need to
be passed back may well be different on different hypervisors.

This is possible today with the existing vif-plugging mechanism, since
the equivalent of a vif-type + bridge name are configured on each
hypervisor, and thus can differ for different clusters or different
hypervisor types.  It's not clear to me how this would work in the new
model.
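
For reference, this is roughly what the per-host configuration looks
like today (flag names are from the Folsom/Grizzly-era libvirt driver;
values are illustrative):

  # nova.conf on compute nodes in KVM cluster A
  libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
  libvirt_ovs_bridge = br-int

  # nova.conf on compute nodes in KVM cluster B, using a different bridge
  libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
  libvirt_ovs_bridge = br-int-cluster-b

Because each nova-compute reads its own config, clusters and hypervisor
types can diverge freely; in the new model that knowledge has to come
from somewhere on the Quantum side.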


> >
> > One other minor comment and question:
> >
> > I noticed the "filtered" value in the spec.  The current description
> > seems to be a bit of a deviation from the rest of the spec, as it
> > explicitly describes the behavior of the quantum plugin, not the
> > configuration value for the virt layer (which I'm guessing is
> > implicitly the inverse, such that if this value is true, we do NOT
> > configure filtering on the VIF).
>
> It is a subtle distinction, but I guess I don't think of the data as
> solely being "what is the required VIF configuration", but rather
> "what information is required to do VIF configuration". As such,
> knowing whether Quantum has applied filtering falls within scope.
>

Yes, but 'filtering' as a term seems very broad.  Here's a made-up
example: what if the quantum plugin performed L3/L4 filtering (e.g.,
security groups) but did not have the ability to prevent mac/arp
spoofing, and instead wanted the virt layer to handle that?  To me,
having values that explicitly state what Quantum wants the virt layer
to do would be clearer and less likely to run into problems in the
future.  I'm guessing the current definition is meant to map closely to
whether to push down any <filterref> arguments to libvirt?  I'm just
trying to think about how the concept would apply to other platforms.
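
For instance (field names made up again), the spec could split the
single boolean into explicit requests to the virt layer:

  # Made-up field names; the current spec only has a single
  # 'filtered' value.
  vif = {
      'type': 'ovs',
      'details': {
          # Quantum already enforces L3/L4 rules (e.g. security
          # groups), so the virt layer should not duplicate them...
          'l3l4-filtering-applied': True,
          # ...but this plugin cannot prevent mac/arp spoofing, so the
          # virt layer should still push down e.g. libvirt's
          # no-mac-spoofing / no-arp-spoofing <filterref> rules.
          'spoof-filtering-applied': False,
      },
  }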


>
> > And one last item that I was curious about: I noticed that one part
> > of VIF configuration that is not currently part of the spec is
> > information about the VIF driver mechanism (e.g., in libvirt,
> > choosing model=virtio and specifying a <driver> element, or on esx,
> > choosing e1000 vs. vmxnet).  Is your thinking on this that, even
> > though it is vif-config from a compute perspective, its value is
> > unlikely to depend on quantum and thus can still just be configured
> > as a Nova config value for the virt layer (e.g.,
> > 'libvirt_use_virtio_for_bridges')?
>
> Although it is not clearly distinguished in the libvirt XML, you have
> to consider the guest config as having two distinct groups of
> information. In one group there is the machine hardware specification,
> and in the other group there is the host resource mapping. There is no
> dependency between the two groups. So the choice of NIC hardware model
> is completely unrelated to the way you plug a VIF into the host
> network and, as such, it is not relevant to Quantum.
>

I actually agree with you completely here.  I was more just trying to make
sure I understood your reasoning.
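
To spell out the distinction, here is a simplified libvirt <interface>
snippet annotated with the two groups Daniel describes (all values are
illustrative):

  <interface type='bridge'>
    <!-- machine hardware specification: what the guest sees;
         unrelated to Quantum -->
    <mac address='52:54:00:12:34:56'/>
    <model type='virtio'/>
    <!-- host resource mapping: how the VIF is plugged into the host
         network; the part Quantum cares about -->
    <source bridge='br-int'/>
    <virtualport type='openvswitch'>
      <parameters interfaceid='4c7fa8e1-0000-0000-0000-000000000000'/>
    </virtualport>
  </interface>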

Dan



>
> Regards,
> Daniel
> --
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
>



-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~

