[openstack-dev] [Quantum] [Nova] improving vif-plugging

Robert Kukura rkukura at redhat.com
Wed Jan 30 22:25:29 UTC 2013


On 01/30/2013 01:25 PM, Dan Wendlandt wrote:
> 
> 
> On Tue, Jan 29, 2013 at 4:30 AM, Daniel P. Berrange
> <berrange at redhat.com> wrote:
> 
> 
>     > This led me to wonder if we could do the following: if the
>     > Quantum plugin in use supports the vif-plugging extension, ignore
>     > the local vif-plugging config and use the "universal driver" with
>     > the config from quantum (probably while issuing a warning). I
>     > think it should be feasible in the "H" release to make the
>     > vif-plugging extension mandatory for Quantum, at which point
>     > there would always be a warning for any user that specifies an
>     > old-style vif-plugging option (assuming the H release was being
>     > used for Nova + Quantum). Starting with the H release we would no
>     > longer document the old-style vif-plugging options, and users
>     > would get warnings if they were left over in configs due to an
>     > upgrade, but nothing would break, because they weren't required
>     > to specify a new type of vif-plugging.
> 
>     Ah, you seem to be thinking that the 'libvirt_vif_driver' config
>     parameter will go away completely. This is not my intention. I just
>     want to have a situation where admins do not need to use it for any
>     configuration using in-tree OpenStack components.
> 
> 
> Yup, I understand that that is what you are proposing. But I was
> thinking about the issue that everyone currently using Quantum will
> have to change their vif-plugging configuration in order to take
> advantage of the Quantum API extensions in Havana. That just got me
> wondering if there could be a cleaner approach.
> 
>     If someone develops a new Quantum plugin out of tree, and needs to also
>     do some work on the Nova side to support that, we still need to have the
>     'libvirt_vif_driver' parameter around.
> 
> 
> I understand the motivation, but in my experience out-of-tree
> vif-drivers worked very poorly in practice: subtle changes within nova
> would end up breaking those drivers without triggering unit test
> failures, since the drivers were out of tree (the Cisco plugin relied
> on such an out-of-tree driver and was seemingly subject to frequent
> breakage).
> 
> So to play devil's advocate: if, starting in "H", we allow the quantum
> vif-plugging extension to return all the information needed for Nova
> to plug vifs, in what scenario would we rather have someone create an
> out-of-tree vif-plugging mechanism, rather than just return the data
> via their Quantum plugin?
> 
> 
>     So I don't want anything where we magically ignore the
>     'libvirt_vif_driver' parameter in some scenarios - we should always
>     honour that parameter. It's just that with it configured to
>     LibvirtGenericVIFDriver, 99% of users will never need to touch it.
> 
> 
> Yeah, I agree on the goal of having the LibvirtGenericVIFDriver as the
> default behavior, but I'm much less convinced that in the long term
> we'd even want to keep the vif-driver stuff around at all. Having
> Quantum automatically use the generic vif-driver behavior if the
> vif-plugging extension is in use seems to provide a smoother
> transition, and a cleaner and simpler system in the end.

I'm also becoming convinced that the generic vif-driver behavior should
become the default. But I've had some concerns about making this work
with the Modular L2 (ml2) Quantum plugin
(https://blueprints.launchpad.net/quantum/+spec/modular-l2). I've
discussed these concerns with Dan W. and Dan B. and want to follow up
here.

The ml2 plugin supports heterogeneous networking environments, using
type drivers to manage state for an extensible set of network types
(flat, vlan, gre, ...) along with mechanism drivers to support the
various ways (Linux bridging, Open vSwitch via an agent, various
OpenFlow controllers, ...) in which devices can access networks of
those types. The short-term plan is for ml2 to support the existing
linuxbridge, openvswitch, and hyperv agents, with a modular L2 agent
planned for the longer term.
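
To make that division of labor concrete, here is a minimal Python
sketch of how the type-driver/mechanism-driver split might look. The
class and method names are illustrative assumptions on my part, not the
actual ml2 driver API:

    # Illustrative sketch only -- the real ml2 driver interfaces are
    # still being designed; all names here are assumptions.

    class TypeDriver(object):
        """Manages segment state for one network type (flat, vlan, ...)."""

        def validate_segment(self, segment):
            raise NotImplementedError

    class VlanTypeDriver(TypeDriver):
        def validate_segment(self, segment):
            vlan_id = segment.get('segmentation_id')
            if vlan_id is None or not 1 <= vlan_id <= 4094:
                raise ValueError('invalid VLAN id: %s' % vlan_id)

    class MechanismDriver(object):
        """Handles one way of connecting devices to a segment (Linux
        bridge agent, Open vSwitch agent, an OpenFlow controller, ...)."""

        def bind_port(self, port, segment, host):
            """Return vif details if this mechanism can connect 'host'
            to 'segment', or None if it cannot."""
            raise NotImplementedError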

A basic use case for ml2 in Grizzly would be that the same network (a
VLAN on a particular physical network, for example) is accessed on some
nodes via the openvswitch agent, on others via the linuxbridge agent,
and on still others via a controller. Mechanism drivers could also be
implemented to give hardware devices such as load balancers and
firewalls direct access to that VLAN network.

My concern with the generic vif-driver behavior for ml2 is that the
Quantum server, and hence the ml2 plugin, needs some way to choose which
networking mechanism to use when nova is plugging a quantum port. With
the current plugins, this choice is hard-wired. But in the ml2 use case
above, that choice depends on how that particular Nova node can connect
to the requested network. Typically this will have been predetermined by
the administrator having configured the node to run some particular
agent, and/or having wired it to a switch managed by some controller.
The ml2 plugin design for Grizzly had assumed that the administrator of
the Nova compute node would configure the proper VIF driver for Nova to
use with that agent or controller.

I am now convinced that having Nova set a host identifier attribute on
the quantum port before getting the vif_type and other details from the
port should be sufficient. The mechanism drivers in the ml2 plugin
could correlate this host identifier with the identifiers of agents
that have registered with the plugin, interact with a controller to
find out how the host is connected, or do some sort of configuration or
DB lookup to find the details for that host.
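
As a rough sketch of what I have in mind (the agent registry and method
names here are hypothetical, not a settled API), a mechanism driver
might resolve the binding from the host identifier like this:

    # Hypothetical host-based binding resolution in an ml2 mechanism
    # driver; the registry and method names are assumptions.

    class AgentMechanismDriver(object):
        def __init__(self, vif_type):
            self.vif_type = vif_type
            # host -> agent details, populated as agents register or
            # report state back to the plugin
            self.agents_by_host = {}

        def agent_registered(self, host, agent_config):
            self.agents_by_host[host] = agent_config

        def bind_port(self, port, segment, host):
            agent = self.agents_by_host.get(host)
            if agent is None:
                return None  # this mechanism can't serve that host
            return {'vif_type': self.vif_type,
                    'vif_details': {'bridge': agent.get('bridge')}}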

For this to work, we need Nova to set the binding:host_id attribute on
the Quantum port before obtaining the binding:vif_type attribute from
the port. We might also need to define binding:vif_type values
indicating that a binding has not yet been selected for the port, or
that the port can't be bound (i.e. that the compute node can't reach
the requested network).
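
In rough pseudo-Python, the ordering I'm asking for on the Nova side
would look like the sketch below. The quantumclient calls are just
illustrative, and VIF_TYPE_UNBOUND / VIF_TYPE_BINDING_FAILED are
possible names for the values suggested above, not existing constants:

    # Sketch of the proposed ordering on the Nova side; the two
    # VIF_TYPE_* values below are proposals, not existing constants.

    VIF_TYPE_UNBOUND = 'unbound'                # no binding selected yet
    VIF_TYPE_BINDING_FAILED = 'binding_failed'  # host can't reach network

    def get_vif_type_for_boot(quantum, port_id, host):
        # 1. Tell Quantum which host the instance will run on *first*,
        #    so the plugin can choose an appropriate mechanism.
        quantum.update_port(port_id,
                            {'port': {'binding:host_id': host}})

        # 2. Only then read back the binding the plugin selected.
        port = quantum.show_port(port_id)['port']
        vif_type = port.get('binding:vif_type', VIF_TYPE_UNBOUND)

        if vif_type in (VIF_TYPE_UNBOUND, VIF_TYPE_BINDING_FAILED):
            raise Exception('port %s cannot be bound on host %s'
                            % (port_id, host))
        return vif_type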

Note that both Nova and Quantum need to agree on values of the
binding:host_id attribute. I don't think we want to get into another
situation where Nova and Quantum have to each be configured consistently
with each other, so we need something that works out of the box across
multiple unrelated processes on the host, and that is suitable for an
administrator to deal with. Is the hostname the best option for this?
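
For instance, if both sides simply derive the identifier from the
node's hostname, unrelated processes on the same host would agree out
of the box (a sketch, assuming no per-service overrides of the host
name):

    import socket

    # Both the nova-compute service and the Quantum agent running on
    # the same node would report the same identifier with no extra
    # configuration.
    host_id = socket.gethostname()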

As a fallback, being able to configure the vif type on the compute node
may still be useful with ml2, so I'd like to see that option preserved.
Rather than configuring a vif driver class name, and having to maintain
the old vif drivers, I'd prefer a way to configure the details used by
the generic vif driver when Quantum can't provide this information.
These details would be used either when the portbindings extension is
not supported by the Quantum plugin, or when indicated by the port's
binding:vif_type attribute (something like VIF_TYPE_USE_DEFAULT).
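
Concretely, the fallback selection in the generic vif driver could look
something like this sketch, where VIF_TYPE_USE_DEFAULT and the
'libvirt_generic_vif_type' option are proposals rather than existing
code:

    # Sketch of the fallback path; VIF_TYPE_USE_DEFAULT and the
    # 'libvirt_generic_vif_type' config option are proposed, not real.

    VIF_TYPE_USE_DEFAULT = 'use_default'

    def effective_vif_type(port, conf):
        vif_type = port.get('binding:vif_type')
        if vif_type is None or vif_type == VIF_TYPE_USE_DEFAULT:
            # Either the plugin doesn't support portbindings, or it
            # explicitly defers to this node's configured default.
            return conf.libvirt_generic_vif_type
        return vif_type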

Does all this seem workable and sensible?

-Bob

> 
> Thanks, 
> 
> Dan
> 
>     Regards,
>     Daniel
>     --
>     |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
>     |: http://libvirt.org -o- http://virt-manager.org :|
>     |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
>     |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
> 
> -- 
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Dan Wendlandt 
> Nicira, Inc: www.nicira.com
> twitter: danwendlandt
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~