[openstack-dev] [Neutron][L3] Representing networks connected by routers

Carl Baldwin carl at ecbaldwin.net
Wed Jul 22 14:57:11 UTC 2015


On Tue, Jul 21, 2015 at 1:11 PM, John Belamaric <jbelamaric at infoblox.com> wrote:
> Wow, a lot to digest in these threads. Let me summarize my understanding
> of the two proposals - let me know whether I get this right. There are a
> couple of problems that need to be solved:
>
>  a. Scheduling based on host reachability to the segments
>  b. Floating IP functionality across the segments. I am not sure I am clear
> on this one but it sounds like you want the routers attached to the segments
> to advertise routes to the specific floating IPs. Presumably then they would
> do NAT or the instance would assign both the fixed IP and the floating IP to
> its interface?
>
> In Proposal 1, (a) is solved by associating segments with the front network
> via a router - that association is used to provide a single hook into the
> existing API that limits the scope of segment selection to those associated
> with the front network. (b) is solved by tying the floating IP ranges to the
> same front network and managing the reachability with dynamic routing.
>
> In Proposal 2, (a) is solved by tagging each network with some meta-data
> that the IPAM system uses to make a selection. This implies an IP allocation
> request that passes something other than a network/port to the IPAM
> subsystem. This is fine from the IPAM point of view, but there is no
> corresponding API for it right now. To solve (b), either the IPAM system
> has to publish the routes or the higher level management has to ALSO be
> aware of the mappings (rather than just IPAM).

John, from your summary above, you seem to have the best understanding
of the whole of what I was weakly attempting to communicate.  Thank
you for summarizing.

> To throw some fuel on the fire, I would argue also that (a) is not
> sufficient and address availability needs to be considered as well (as
> described in [1]). Selecting a host based on reachability alone will fail
> when addresses are exhausted. Similarly, with (b) I think the effect on
> routing needs to be considered when a floating IP is associated.
> That is, rather than a huge number of host routes it would be ideal to
> allocate the floating IPs in blocks that can be associated with the backing
> networks (though we would want to be able to split these blocks as small as
> a /32 if necessary - but avoid it/optimize as much as possible).
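
(To make the block idea above concrete, here is a rough sketch using
Python's stdlib ipaddress module; the /24 pool, the /26 split, and the
backing network names are made up purely for illustration.)

    import ipaddress

    # Illustrative floating IP pool; the goal is to advertise one route
    # per block rather than one /32 host route per floating IP.
    fip_pool = ipaddress.ip_network(u'203.0.113.0/24')

    # Hypothetical split: one /26 per backing network keeps the routing
    # table small; splitting further (down to /32) remains possible as a
    # fallback but should be the exception.
    backing_nets = ['backing-net-1', 'backing-net-2',
                    'backing-net-3', 'backing-net-4']
    blocks = dict(zip(backing_nets, fip_pool.subnets(new_prefix=26)))

    for net, block in blocks.items():
        print(net, block)  # e.g. backing-net-1 203.0.113.0/26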

Yes, address availability is a factor and must be considered in either
case.  My email was getting long already and I thought that could be
considered separately since I believe it applies regardless of the
outcome of this thread.  But, since it seems to be an essential part
of this conversation, let me say something about it.

Ultimately, we need to match up the host scheduled by Nova to the
addresses available to that host.  We could do this by delaying
address assignment until after host binding or we could do it by
including segment information from Neutron during scheduling.  The
latter has the advantage that we can consider IP availability during
scheduling.  That is why GoDaddy implemented it that way.
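
As a rough sketch of what the latter could look like (purely
illustrative - segments_for_host() and free_ips() are made-up helpers
here, not existing Nova or Neutron APIs):

    def viable_hosts(candidate_hosts, network_segments,
                     segments_for_host, free_ips):
        """Yield hosts that can reach a segment of the requested network
        and still have addresses available on such a segment."""
        for host in candidate_hosts:
            reachable = segments_for_host(host) & set(network_segments)
            # Reachability alone is not enough; a host whose only
            # reachable segments are exhausted should not be selected.
            if any(free_ips(segment) > 0 for segment in reachable):
                yield host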

> In fact, I think that these proposals are more or less the same - it's just
> that in #1 the meta-data used to tie the backing networks together is another
> network. This allows it to fit in neatly with the existing APIs. You would
> still need to implement something prior to IPAM or within IPAM that would
> select the appropriate backing network.

They are similar but to say they're the same is going a bit too far.
If they were the same then we'd be done with this conversation.  ;)

> As a (gulp) third alternative, we should consider that the front network
> here is in essence a layer 3 domain, and we have modeled layer 3 domains as
> address scopes in Liberty. The user is essentially saying "give me an
> address that is routable in this scope" - they don't care which actual
> subnet it gets allocated on. This is conceptually more in line with [2] -
> modeling L3 domain separately from the existing Neutron concept of a network
> being a broadcast domain.

I will consider this some more.  This is an interesting thought.
Address scopes and subnet pools could play a role here.  I don't yet
see how it can all fit together but it is worth some thought.
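
Just to sketch what "give me an address that is routable in this
scope" might mean for allocation (pools_in_scope() and allocate_from()
are made-up helpers, not Neutron APIs):

    def allocate_in_scope(scope_id, pools_in_scope, allocate_from):
        """Allocate from any subnet pool in the address scope; the
        caller does not care which subnet the address comes from."""
        for pool in pools_in_scope(scope_id):
            address = allocate_from(pool)  # assume None when exhausted
            if address is not None:
                return address
        raise RuntimeError('address scope %s is exhausted' % scope_id)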

One nit:  the Neutron network might have been conceived as being just
"a broadcast domain" but, in practice, it is both L2 and L3.  The
Neutron subnet is not really an L3 construct; it is just a CIDR and
doesn't make sense on its own without considering its association with
a network and the other subnets associated with the same network.
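
For illustration, this is what that association looks like with
python-neutronclient (the credentials and endpoint below are
placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    net = neutron.create_network({'network': {'name': 'example-net'}})
    net_id = net['network']['id']

    # Both subnets hang off the same network; a port on this network can
    # be addressed out of either CIDR, which is why the network behaves
    # as an L2 and L3 construct together.
    for cidr in ('10.0.0.0/24', '10.0.1.0/24'):
        neutron.create_subnet({'subnet': {'network_id': net_id,
                                          'ip_version': 4,
                                          'cidr': cidr}})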

> Fundamentally, however we associate the segments together, this comes down
> to a scheduling problem. Nova needs to be able to incorporate data from
> Neutron in its scheduling decision. Rather than solving this with a single
> piece of meta-data like network_id as described in proposal 1, it probably
> makes more sense to build out the general concept of utilizing network data
> for nova scheduling. We could still model this as in #1, or using address
> scopes, or some arbitrary data as in #2. But the harder problem to solve is
> the scheduling, not how we tag these things to inform that scheduling.

Yet how we tag these things seems to be a significant point of
interest.  Maybe not with you, but it certainly is with Ian and Assaf.

As I said above, I agree that the scheduling part is very important
and needs to be discussed, but I still separate it in my mind from
this question.

> The optimization of routing for floating IPs is also a scheduling problem,
> though one that would require a lot more changes to how FIPs are allocated
> and associated to solve.
>
> John
>
> [1] https://review.openstack.org/#/c/180803/
> [2] https://bugs.launchpad.net/neutron/+bug/1458890/comments/7


