[openstack-dev] [Neutron][L3] Representing a networks connected by routers

Assaf Muller amuller at redhat.com
Tue Jul 21 00:44:18 UTC 2015



----- Original Message -----
> I'm looking for feedback from anyone interested but, in particular, I'd
> like feedback from the following people for varying perspectives:
> Mark McClain (proposed alternate), John Belamaric (IPAM), Ryan Tidwell
> (BGP), Neil Jerram (L3 networks), Aaron Rosen (help understand
> multi-provider networks) and you if you're reading this list of names
> and thinking "he forgot me!"
> 
> We have been struggling to develop a way to model a network which is
> composed of disjoint L2 networks connected by routers.  The intent of
> this email is to describe the two proposals and request input on the
> two in an attempt to choose a direction forward.  But, first:
> requirements.
> 
> Requirements:
> 
> The network should appear to end users as a single network choice.
> They should not be burdened with choosing between segments.  It might
> interest them that L2 communications may not work between instances on
> this network but that is all.  This has been requested by numerous
> operators [1][4].  It can be useful for external networks and provider
> networks.

I think that [1] and [4] conflate the problem statement with the proposed solutions,
and are missing some lower-level details about the problem statement, which makes it
a lot harder to engage in a discussion.

I'm looking at [4]:
What I don't see explicitly mentioned is: does the same CIDR extend across racks,
or would each rack get its own CIDR(s)? I understand this can differ according to
the architectural choices you make in your data center, and that affects the
changes we'd need to make to Neutron in order to satisfy that requirement.

To clarify, option (1) means that a subnet is confined to a single rack, while option (2)
means that a subnet may span racks. I don't think we need to change the network/subnet
model at all to satisfy case (1). Each rack would have its own network/subnet
(or perhaps several, if more than a single VLAN or other characteristic is desired).
Each network would be tagged with an AZ (this ties in nicely with the already proposed
Neutron AZ spec), and the Nova scheduler would become aware of Neutron network AZs.
In this model you don't ask to connect to a particular network; you ask Nova to schedule
the VM, and Nova then picks the network available on that rack. If you want more than a
single network in a rack, the difference between those networks could be expressed in
tags (think: network flavors), such as the security zone. You'd then specify a tag that
must be satisfied by the network the VM ends up connecting to, so that the tag can be
added to the list of Nova scheduler filters. Again, this keeps the Neutron network and
subnet models just as they are, while doing some work with AZs, tagging and the Nova
scheduler. We've known for the past few years that the Nova scheduler must become
network aware; perhaps it's time to finally tackle that.
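
To make the scheduling idea a bit more concrete, here is a minimal sketch of the
selection step, assuming networks carry an AZ and a set of free-form tags. None of
these attributes or names are existing Neutron or Nova API; they're purely illustrative:

    # Illustrative only: hypothetical network records carrying an AZ and free-form tags.
    from typing import NamedTuple

    class Network(NamedTuple):
        id: str
        az: str            # availability zone, i.e. the rack the network lives in
        tags: frozenset    # e.g. a security zone or "network flavor"

    def pick_network(networks, host_az, required_tags):
        """Return a network local to the host's AZ that satisfies all required tags."""
        for net in networks:
            if net.az == host_az and required_tags <= net.tags:
                return net
        raise LookupError("no network in %s satisfies %s" % (host_az, required_tags))

    # Two racks, each with its own networks; the scheduler would pass the chosen host's AZ.
    networks = [
        Network("net-rack1-dmz", "rack-1", frozenset({"dmz"})),
        Network("net-rack1-int", "rack-1", frozenset({"internal"})),
        Network("net-rack2-dmz", "rack-2", frozenset({"dmz"})),
    ]
    print(pick_network(networks, "rack-2", {"dmz"}).id)   # -> net-rack2-dmz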

I can see why option (2) may require a fundamental change to how Neutron models
networks/subnets. I think it's essentially a different problem, and we'd have to see
how to remodel Neutron networks/subnets so that something like Calico would be a more
natural fit. That said, if option (1) is worth pursuing, it would be a reasonable first
step, because any changes required by option (2) are, I think, unrelated.

> 
> The model needs to be flexible enough to support two distinct types of
> addresses:  1) address blocks which are statically bound to a single
> segment and 2) address blocks which are mobile across segments using
> some sort of dynamic routing capability like BGP or programmatically
> injecting routes into the infrastructure's routers with a plugin.
> 
> Overlay networks are not the answer to this.  The goal of this effort
> is to scale very large networks with many connected ports by doing L3
> routing (e.g. to the top of rack) instead of using a large continuous
> L2 fabric.  Also, the operators interested in this work do not want
> the complexity of overlay networks [4].
> 
> Proposal 1:
> 
> We refined this model [2] at the Neutron mid-cycle a couple of weeks
> ago.  This proposal has already resonated reasonably with operators,
> especially those from GoDaddy who attended the Neutron sprint.  Some
> key parts of this proposal are:
> 
> 1.  The routed super network is called a front network.  The segments
> are called back(ing) networks.
> 2.  Backing networks are modeled as admin-owned private provider
> networks but otherwise are full-blown Neutron networks.
> 3.  The front network is marked with a new provider type.
> 4.  A Neutron router is created to link the backing networks with
> internal ports.  It represents the collective routing ability of the
> underlying infrastructure.
> 5.  Backing networks are associated with a subset of hosts.
> 6.  Ports created on the front network must have a host binding and
> are actually created on a backing network when all is said and done.
> They carry the ID of the backing network in the DB.
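
Just to check that I'm reading items 5 and 6 correctly, this is roughly the mapping I'd
expect on port create. The names below are made up for illustration and are not an
existing or proposed API:

    # Purely illustrative: my reading of items 5 and 6, not the proposed API.
    # A front network maps each host to the backing network serving that host's rack.

    host_to_backing = {                     # item 5: backing networks cover subsets of hosts
        "compute-rack1-01": "backing-net-rack1",
        "compute-rack1-02": "backing-net-rack1",
        "compute-rack2-01": "backing-net-rack2",
    }

    def create_front_port(front_net_id, host):
        """Item 6: a port on the front network needs a host binding and is actually
        realized on the backing network for that host; the backing ID is stored."""
        backing_net_id = host_to_backing[host]
        return {
            "network_id": front_net_id,             # the network the user asked for
            "binding:host_id": host,                # the required host binding
            "backing_network_id": backing_net_id,   # carried in the DB per item 6
        }

    print(create_front_port("front-net", "compute-rack2-01"))
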
> 
> Using Neutron networks to model the segments allows us to fully
> specify the details of each network using the regular Neutron model.
> They could be heterogeneous or homogeneous, it doesn't matter.
> 
> This proposal offers a clear separation between the statically bound
> and the mobile address blocks by associating the former with the
> backing networks and the latter with the front network.  The mobile
> addresses are modeled just like floating IPs are today but are
> implemented by some plugin code (possibly without NAT).
> 
> This proposal also provides some advantages for integrating dynamic
> routing.  Since each backing network will, by necessity, have a
> corresponding router in the infrastructure, the relationship between
> dynamic routing speaker, router, and network is clear in the model:
> network <-> speaker <-> router.
> 
> Proposal 2:
> 
> This alternate model has not been fully fleshed out.  Some parts of it
> are still unclear to me.  The basic idea is to give the IPAM system
> information about IP availability on a given host.  When creating a
> port, the binding information would be sent to the IPAM system and the
> system would choose an appropriate address block for the allocation.
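
If I understand proposal 2 correctly, the allocation step would look roughly like the
sketch below: the IPAM system is handed the host binding and picks an address block that
is valid for that host. This is not the actual Neutron pluggable IPAM interface; the
names and structures are invented for illustration:

    # Illustrative sketch only; the allocator is told where the port will land
    # and narrows the candidate address blocks accordingly.
    import ipaddress

    blocks_by_host = {
        "compute-rack1-01": [ipaddress.ip_network("10.1.0.0/24")],
        "compute-rack2-01": [ipaddress.ip_network("10.2.0.0/24")],
    }

    allocated = set()

    def allocate(host):
        """Pick a free address from an address block that is valid for the given host."""
        for block in blocks_by_host[host]:
            for addr in block.hosts():
                if addr not in allocated:
                    allocated.add(addr)
                    return addr
        raise RuntimeError("no addresses left for host %s" % host)

    print(allocate("compute-rack2-01"))   # e.g. 10.2.0.1
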
> 
> 1. This alternate model offers no way to distinguish the two types of
> address blocks.
> 2. We don't have the benefit of modeling the segments with Neutron networks.
> 
> It was suggested that hierarchical port binding could help here but I
> see it as orthogonal to this.  Hierarchical port binding extends the
> L2 properties of a port to a hierarchical infrastructure to achieve
> continuous L2 connectivity.  It is also intended for overlay networks.
> That isn't what we're doing here and I don't think it fits.
> 
> I have also considered the multi-provider extension [3] for this.
> This is not yet clear to me either.  First, my understanding was that
> this extension describes multi-segment continuous L2 fabrics.  Second,
> there doesn't seem to be any host binding aspect to the multi-provider
> extension.  Third, not all L2 plugins support this extension.  It
> seems silly to require L2 plugin support in order to enable routing
> between segments.
> 
> It isn't clear to me how a dynamic routing speaker will fit into this
> model.  My first thought is that it must be integrated with IPAM
> because the IPAM system has the understanding of how to map address
> blocks to infrastructure.  This pushes even more infrastructure
> knowledge down to the IPAM system.  If dynamic routing is pushed down
> to the IPAM system, it will also be necessary to push the association
> of mobile IPs or routed tenant subnets down into the IPAM system too.
> This means Neutron needs to tell IPAM about every floating IP
> association and every tenant subnet behind a Neutron router in the
> same address scope as the external network.  I'm not convinced that
> IPAM and routing really belong together like this.
> 
> If you made it this far in this email, you must have some feedback.
> Please help us out.
> 
> Carl Baldwin
> 
> 
> [1] https://bugs.launchpad.net/neutron/+bug/1458890
> [2] https://review.openstack.org/#/c/196812/
> [3] http://developer.openstack.org/api-ref-networking-v2-ext.html#network_multi_provider-ext
> [4] https://etherpad.openstack.org/p/Network_Segmentation_Usecases
> 