[openstack-dev] [Quantum] Re: openvswitch provider patch

Chris Wright chrisw at sous-sol.org
Wed Sep 5 23:16:41 UTC 2012


* Robert Kukura (rkukura at redhat.com) wrote:
> [For those on openstack-dev, this is a discussion of the desired default
> behavior of the openvswitch plugin, which has changed somewhat with the
> recent provider network implementation. Specifically, normal "tenant"
> networks default to VLAN networks, which now require pre-configuration
> of an OVS bridge for the physical network on each node where the agent
> runs. Devstack by default configures openvswitch to use GRE tunnels
> instead, as before, but tunnels require kernel support not in some
> operating systems. See http://wiki.openstack.org/ConfigureOpenvswitch
> for a description of the current openvswitch plugin configuration.]
> 
> On 09/03/2012 02:00 PM, Dan Wendlandt wrote:
> 
> > I think we agree on the high-level goal of having a default setup that
> > does not require extra configuration that is not done automatically by
> > quantum.
> 
> Yes.
> 
> > 
> > In my thinking, using VLAN mode requires an understanding of which
> > physical network the traffic will go out onto, because we must provide
> > OVS with a range of VLANs that we already know that they are available
> > to be trunked on that physical network.  Thus, it doesn't really make
> > sense to allocate VLAN networks prior to knowing the physical network.
> 
> That is a very good point. I completely agree. To me, this means we
> should not create

Curious what you were thinking here?
I suspect we'd care about two things w.r.t. creation:

1) That the dynamic vlan range does not overlap the provider network
vlans on a particular physical network

2) If we need to create a specific named bridge paired w/ a named
provider network for a default (dynamic only) vlan case


> >  In tunnel mode, the segmentation_id is completely decoupled from the
> > physical network, meaning they can be assigned before the admin even
> > needs to think about multiple hosts and which physical network is in
> > use.  With this in mind, I would argue that the code should probably
> > default to creating tunnel networks.
> 
> I agree, assuming VLANs and GRE networks are the only options.
> 
> >                                       One option would be to have a
> > distro that does not support tunneling default to using VLANs, though
> > that distro would also likely have to automate the creation of the
> > external bridge, as I think defaulting to vlans without automating the
> > creation of the external bridge will lead to a lot of usability
> > complaints (the one thing I'm really sensitive to is people getting
> > the impression that quantum is 'busted' and then being scared away
> > from it more permemently).
> 
> I agree defaulting to VLANs would require auto-configuration of the
> bridge to achieve the single-box zero-config objective.
> 
> I'm very hesitant to try to make the defaults or anything else dependent
> on whether the OS supports OVS tunnels. This would just add confusion
> and complexity. In the long run, I expect OVS tunnel support will be
> moved into the Linux kernel source tree, and we'll just have to switch
> from using patch ports to using veths to make it work, at least on OSes
> with up-to-date kernels.

I agree.

> Other than defaulting to this sort of auto-configuring VLAN bridge
> approach, I can think of two other options:
> 
> 1) Default to allocating tenant networks as GRE networks, and make sure
> they work at least locally on OSes that do not support OVS tunnels. I
> think we'd mainly just have to detect the error when creating the patch
> port for the tunnel bridge, log the error, and continue. We'd probably
> also want to log an error every time the agent tried to bring up a GRE
> network. I'd generally prefer to have the agent exit when the tunnel
> bridge could not be properly configured, but logging and continuing may
> be acceptable. But I'm also thinking about non-uniform connectivity
> scenarios where some nodes support GRE tunnels and other do not - we
> don't want the agent to exit in that case.
>
> 2) We could introduce a new "local" provider:network_type that
> explicitly does not span systems. These networks would use the
> integration bridge with OVS, allowing communication between VMs and
> agents on the local host, but no external communication. This
> network_type would not require any segmentation_id to be allocated, and
> therefore not need a pool for allocation. A configuration flag in the
> plugin, defaulting to True, would cause tenant networks to be allocated
> as this network type. This seems like a clean solution, and very little
> work to implement, but it may just seem too odd to have this sort of
> network_type in quantum.

I think the ability to get off the box would come up quickly.  Even
basic tools default to a nat'd network so VMs can communicate w/ the
outside world.

> Any thoughts on either of these options vs. trying to auto-configure
> VLAN bridges? I'm including openstack-dev, and would welcome input from
> others as well.
> 
> > Regardless of the default, I definitely agree that we want to make
> > sure the agent fails cleanly (and loudly) if it is config says to use
> > tunnels, but the OVS version being used does not support tunnels.
> > There should probably be a check at the start of the agent to do so,
> > does that make sense?   I suspect we could just try to create a port
> > of type = 'gre', and if that fails, conclude that tunneling is not
> > supported.

Hmm, I don't think that will actually fail.

> I was thinking of just looking for an error when creating the patch port
> as the first indication GRE networks would not be supported. As you say,
> I'd prefer to have the agent "fail cleanly", but we if we go with option
> 1 above, we'd need to settle for "loudly".

We should think about this, esp. considering patch port support vs. tunnel
support different upstream plans.

> >>> That is, the 'basic' use case is single box, meaning no config of an
> >>> external bridge or tunnel-ip is needed.  This is how the plugin at
> >>> least used to work.
> >>
> >> I agree we want this basic use case to work without altering the default
> >> configuration.
> > 
> > Agreed, I think that's the most important point.  Thanks,
> > 
> > Dan
> 
> No problem. I should have tried to get a discussion going on the default
> configuration much earlier in the provider network development, but
> thought the defaults would turn out to be close enough to the previous
> ones to not be an issue.

I originally expected no change needed to the defaults, although I fully
understand how the patchset evolved.

thanks,
-chris



More information about the OpenStack-dev mailing list