[openstack-dev] [Neutron][L3] Representing a networks connected by routers

Assaf Muller amuller at redhat.com
Wed Jul 22 23:44:21 UTC 2015



----- Original Message -----
> 
> 
> The issue with the availability zone solution is that we now force
> availability zones in Nova to be constrained by network configuration.
> In the L3 ToR/no overlay configuration, this means every rack is its
> own availability zone. This is pretty annoying for users to deal with
> because they have to choose from potentially hundreds of availability
> zones, and it rules out making AZs based on other things (e.g. current
> phase, cooling systems, etc.).
> 
> I may be misunderstanding, and you could be suggesting to not expose
> this availability zone to the end user and only make it available to
> the scheduler. However, this defeats one of the purposes of
> availability zones, which is to let users select different AZs to
> spread their instances across failure domains.

No, you understood me correctly. You're right that tying AZs to network
availability is problematic. We should then introduce a new Neutron API to
expose physnet mappings: For a given network, return all of the hosts
that can reach that network (Internally the Neutron server persists the
physnet mappings it gets from agent reports). That API call will serve as a
Nova filter in case a network/port_id was requested when booting a VM. If a
network/port_id was not specified, another API call will be used for the
inverse: Return a list of possible networks for a given host, or the
mappings between all hosts and the networks they can reach. So, for example
(see the sketch after this list):
neutron list-host-networks (Mappings between hosts and their networks)
neutron show-host-networks (Mapping between a host and its networks)
neutron show-network-hosts (Mapping between a network and the hosts that can reach it)
neutron list-networks-hosts (Mappings between networks and their hosts)
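
As a rough illustration, here's a minimal sketch of how Nova could
consume the network-to-hosts mapping (hypothetical code; none of these
endpoints exist yet, and show_network_hosts() below is a made-up client
method modeling the proposed "show-network-hosts" call):

    # Hypothetical sketch only: show_network_hosts() models the proposed
    # "show-network-hosts" API and is not a real neutronclient method.
    def host_can_reach_network(neutron, host, network_id):
        # The proposed call returns the hosts whose agents reported a
        # physnet mapping for the given network.
        return host in neutron.show_network_hosts(network_id)

    def filter_hosts(neutron, candidate_hosts, network_id=None):
        if network_id is None:
            # No network/port_id requested: keep today's behavior and
            # assume every host can reach every network.
            return candidate_hosts
        return [host for host in candidate_hosts
                if host_can_reach_network(neutron, host, network_id)]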

Everything else I wrote remains the same.


> On Jul 22, 2015 2:41 PM, "Assaf Muller" <amuller at redhat.com> wrote:
> 
> 
> I added a summary of my thoughts about the enhancements I think we could
> make to the Nova scheduler in order to better support the Neutron provider
> networks use case.
> 
> ----- Original Message -----
> > On Tue, Jul 21, 2015 at 1:11 PM, John Belamaric <jbelamaric at infoblox.com>
> > wrote:
> > > Wow, a lot to digest in these threads. Let me try to summarize my
> > > understanding of the two proposals, and let me know whether I get
> > > this right. There are a couple of problems that need to be solved:
> > > 
> > > a. Scheduling based on host reachability to the segments
> > > b. Floating IP functionality across the segments. I am not sure I am
> > > clear on this one, but it sounds like you want the routers attached
> > > to the segments to advertise routes to the specific floating IPs.
> > > Presumably then they would do NAT, or the instance would assign both
> > > the fixed IP and the floating IP to its interface?
> > > 
> > > In Proposal 1, (a) is solved by associating segments to the front
> > > network via a router - that association is used to provide a single
> > > hook into the existing API that limits the scope of segment selection
> > > to those associated with the front network. (b) is solved by tying
> > > the floating IP ranges to the same front network and managing the
> > > reachability with dynamic routing.
> > > 
> > > In Proposal 2, (a) is solved by tagging each network with some
> > > meta-data that the IPAM system uses to make a selection. This implies
> > > an IP allocation request that passes something other than a
> > > network/port to the IPAM subsystem. This is fine from the IPAM point
> > > of view, but there is no corresponding API for this right now. To
> > > solve (b), either the IPAM system has to publish the routes or the
> > > higher level management has to ALSO be aware of the mappings (rather
> > > than just IPAM).
> > 
> > John, from your summary above, you seem to have the best understanding
> > of the whole of what I was weakly attempting to communicate. Thank
> > you for summarizing.
> > 
> > > To throw some fuel on the fire, I would argue also that (a) is not
> > > sufficient and that address availability needs to be considered as
> > > well (as described in [1]). Selecting a host based on reachability
> > > alone will fail when addresses are exhausted. Similarly, with (b) I
> > > think there needs to be consideration, when associating a floating
> > > IP, of the effect on routing. That is, rather than a huge number of
> > > host routes, it would be ideal to allocate the floating IPs in blocks
> > > that can be associated with the backing networks (though we would
> > > want to be able to split these blocks as small as a /32 if necessary
> > > - but avoid it/optimize as much as possible).
> > 
> > Yes, address availability is a factor and must be considered in either
> > case. My email was getting long already and I thought that could be
> > considered separately since I believe it applies regardless of the
> > outcome of this thread. But, since it seems to be an essential part
> > of this conversation, let me say something about it.
> > 
> > Ultimately, we need to match up the host scheduled by Nova to the
> > addresses available to that host. We could do this by delaying
> > address assignment until after host binding or we could do it by
> > including segment information from Neutron during scheduling. The
> > latter has the advantage that we can consider IP availability during
> > scheduling. That is why GoDaddy implemented it that way.
> > 
> > > In fact, I think that these proposals are more or less the same -
> > > it's just that in #1 the meta-data used to tie the backing networks
> > > together is another network. This allows it to fit in neatly with the
> > > existing APIs. You would still need to implement something prior to
> > > IPAM, or within IPAM, that would select the appropriate backing
> > > network.
> > 
> > They are similar but to say they're the same is going a bit too far.
> > If they were the same then we'd be done with this conversation. ;)
> > 
> > > As a (gulp) third alternative, we should consider that the front
> > > network here is in essence a layer 3 domain, and we have modeled
> > > layer 3 domains as address scopes in Liberty. The user is essentially
> > > saying "give me an address that is routable in this scope" - they
> > > don't care which actual subnet it gets allocated on. This is
> > > conceptually more in line with [2] - modeling the L3 domain
> > > separately from the existing Neutron concept of a network being a
> > > broadcast domain.
> > 
> > I will consider this some more. This is an interesting thought.
> > Address scopes and subnet pools could play a role here. I don't yet
> > see how it can all fit together but it is worth some thought.
> > 
> > One nit: the Neutron network might have been conceived as being just
> > "a broadcast domain" but, in practice, it is L2 and L3. The Neutron
> > subnet is not really an L3 construct; it is just a CIDR and doesn't
> > make sense on its own without considering its association with a
> > network and the other subnets associated with the same network.
> > 
> > > Fundamentally, however we associate the segments together, this
> > > comes down to a scheduling problem. Nova needs to be able to
> > > incorporate data from Neutron in its scheduling decision. Rather than
> > > solving this with a single piece of meta-data like network_id, as
> > > described in proposal 1, it probably makes more sense to build out
> > > the general concept of utilizing network data for Nova scheduling. We
> > > could still model this as in #1, or using address scopes, or some
> > > arbitrary data as in #2. But the harder problem to solve is the
> > > scheduling, not how we tag these things to inform that scheduling.
> > 
> > Yet how we tag these things seems to be a significant point of
> > interest. Maybe not with you, but with Ian and Assaf it certainly is.
> > 
> > As I said above, I agree that the scheduling part is very important
> > and needs to be discussed, but in my mind I still separate it from
> > this question.
> 
> I'm basing these ideas off my understanding of the GoDaddy, YY and
> Yahoo requirements in
> https://etherpad.openstack.org/p/Network_Segmentation_Usecases. I am
> purposely not looking at the problems presented by Calico or similar
> /32 BGP-advertising implementations, nor at the idea of injecting
> floating IPs, as I believe those to be separate problems, and
> conflating them with everything else presented in that Etherpad would
> be a mistake. In other words, I'm not trying to solve all of the
> problems that have ever existed, just some of them :) I'd love to get
> feedback from the authors of that Etherpad to see how much progress
> we'd be making here and whether it's in the right direction.
> 
> Context:
> Neutron supports self-service networking, often implemented with
> overlay networks. An overlay (GRE, VXLAN) based network is not
> location-sensitive; that is, all compute nodes have access to such a
> network as long as the compute nodes can ping each other (And this may
> be realized over layer 2 or via routing in your data center). Some
> deployments opt out of this type of solution; instead, an admin
> pre-creates and shares networks that are realized via VLANs. Tenants
> connect their VMs to these pre-created networks and don't create
> networks of their own. It may be the case that not all compute nodes
> have access to such a network. Here's some pretty graphics:
> http://i.imgur.com/bHPgcTw.png
> 
> Problem 1:
> In this example, VLANs 11 and 12 and subnets 10.0.1.0/24 and
> 10.0.2.0/24 are only available in rack 1. In this case the admin would
> create four Neutron networks (VLANs 11, 12, 13 and 14, with their
> respective subnets). However, the Nova scheduler is not exposed to this
> information. This means that if a VM is booted on network 1 (And an AZ
> is not specified), Nova may try to start it in rack 2, where network 1
> is not available. Neutron port binding would fail in this case and the
> VM would end up in the error state.
> 
> Solution:
> Tag Neutron networks with an AZ as detailed here:
> http://specs.openstack.org/openstack/neutron-specs/specs/liberty/availability-zone.html
> This means that when the admin creates network 1 he'll put it in AZ 1.
> When a VM is booted on network 1, the Nova scheduler will only consider
> hosts in AZ 1. If an AZ is specified, then Nova will fail fast and yell
> if the specified AZ doesn't match the AZ the Neutron network is in. If
> a network is not in an AZ (AZ == None) then the behavior is backwards
> compatible. Currently it is assumed that all hosts have access to all
> networks, while now the assumption will be that all hosts in the same
> AZ have access to all networks in that AZ.
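> 
> To illustrate, here's a minimal sketch of what that check might look
> like, assuming networks grow an 'availability_zone' attribute as the
> spec above proposes (hypothetical code, not an existing Nova filter):
> 
>     # Hypothetical sketch: 'availability_zone' on a network is the
>     # attribute proposed in the availability-zone spec; it does not
>     # exist on the network model today.
>     def network_az_matches(host_az, network):
>         net_az = network.get('availability_zone')
>         if net_az is None:
>             # Backwards compatible: a network without an AZ is assumed
>             # to be reachable from every host.
>             return True
>         return host_az == net_az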
> 
> Problem & Solution 2:
> With tenant networking it makes sense to select the network a VM would
> boot on. For example, if a tenant created three networks (Say: DB,
> backend and web tiers, each with its own network and security group),
> then each VM would need to go on a specific network according to its
> role. With provider networking, you may want to boot a VM and let Nova
> select the appropriate network for you. To clarify, you would not
> specify a network_id or port_id when booting a VM. Nova would schedule
> the VM to host 1 in rack 1, and then randomly select network 1 or 2
> (Because those are available to the AZ that host 1 is in, and the Nova
> scheduler would know this with problem 1 solved).
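> 
> A minimal sketch of that selection step, building on the hypothetical
> AZ check above (again, none of this is existing Nova code):
> 
>     import random
> 
>     # Hypothetical sketch: pick a network at random from those
>     # reachable in the scheduled host's AZ (per the solution to
>     # problem 1 above).
>     def select_network_for_host(host_az, networks):
>         candidates = [net for net in networks
>                       if network_az_matches(host_az, net)]
>         return random.choice(candidates) if candidates else None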
> 
> Problem 3:
> In case the 'Nova selects a network for you' proposal (Marked as
> problem/solution 2) is implemented, you could run into an issue where
> the IP addresses on the Nova-selected network are exhausted, when
> another network available in that rack/AZ should have been chosen
> instead.
> 
> Solution:
> The Nova scheduler could depend on https://review.openstack.org/#/c/180803/
> - a new API to report IP availability per network. Then, as an
> additional built-in scheduling filter, when choosing a network, make
> sure that the network has an IP address available.
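> 
> For example, a sketch of such a filter step (hypothetical; the
> get_ip_availability() call below just models the per-network IP
> availability API proposed in that review):
> 
>     # Hypothetical sketch: get_ip_availability() models the proposed
>     # API from review 180803 and is not a real neutronclient method.
>     def pick_network_with_free_ips(neutron, candidate_networks):
>         for net in candidate_networks:
>             avail = neutron.get_ip_availability(network_id=net['id'])
>             if avail['total_ips'] - avail['used_ips'] > 0:
>                 return net
>         return None  # Every candidate network is exhausted.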
> 
> Problem 4 (Disclaimer: This one isn't as well thought out):
> When Nova selects a network (Either from a specific AZ or not),
> solution 2 suggests that it'll essentially be a random choice, apart
> from the IP availability check defined in the solution to problem 3.
> It may be the case that the user doesn't want to specify which network
> the VM will be connected to, but does want to specify a property that
> the network must satisfy, such as the security zone (This is taken
> straight from the Etherpad).
> 
> Solution:
> A new 'tags' property will be added to the network model, as a list of
> strings (Or perhaps couples consisting of a tag and its description).
> When creating a network you could specify arbitrary data to be placed
> in those tags. When a user boots a VM he could specify a tag (Or tags)
> instead of a network/port_id, and the Nova scheduler will filter out
> any networks that do not have those tags. Tags will be writable by the
> owner of the network (So an admin in the case of provider networks)
> and readable by everyone else. Here's some more pretty graphics that
> perhaps explain things better: http://i.imgur.com/89apoA8.png
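> 
> A minimal sketch of the tag match (hypothetical; the 'tags' property
> does not exist on the network model yet):
> 
>     # Hypothetical sketch: 'tags' is the proposed list-of-strings
>     # property on the network model.
>     def networks_matching_tags(networks, requested_tags):
>         wanted = set(requested_tags)
>         return [net for net in networks
>                 if wanted.issubset(net.get('tags', []))]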
> 
> > 
> > > The optimization of routing for floating IPs is also a scheduling
> > > problem, though one that would require a lot more changes to how
> > > FIPs are allocated and associated to solve.
> > > 
> > > John
> > > 
> > > [1] https://review.openstack.org/#/c/180803/
> > > [2] https://bugs.launchpad.net/neutron/+bug/1458890/comments/7