[Openstack-operators] [nova] [neutron] Re: How do your end users use networking?
Kris G. Lindgren
klindgren at godaddy.com
Wed Jun 17 15:17:12 UTC 2015
Senior Linux Systems Engineer
On 6/17/15, 5:12 AM, "Neil Jerram" <Neil.Jerram at metaswitch.com> wrote:
>Apologies in advance for questions that are probably really dumb - but
>there are several points here that I don't understand.
>On 17/06/15 03:44, Kris G. Lindgren wrote:
>> We are doing pretty much the same thing - but in a slightly different
>> way. We extended the nova scheduler to help choose networks (IE. don't
>> put vm's on a network/host that doesn't have any available IP address).
>Why would a particular network/host not have any available IP address?
If a created network has 1024 ip's on it (/22) and we provision 1020 vms,
anything deployed after that will not have an ip address available, because
the network doesn't have any available ip addresses (you lose some ip's to
the network, gateway, and broadcast addresses).
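The arithmetic above (a /22 holds 1024 addresses, of which roughly 1020 are consumable) can be sketched with Python's stdlib `ipaddress` module. The number of extra reserved addresses beyond network/broadcast is an illustrative assumption and varies per deployment:

```python
import ipaddress

def usable_ips(cidr, extra_reserved=2):
    """Rough count of addresses tenants can consume on a subnet.

    The network and broadcast addresses are always unusable; deployments
    typically hold back a few more (gateway, DHCP ports).
    extra_reserved=2 is an assumption, not a neutron default.
    """
    net = ipaddress.ip_network(cidr)
    return net.num_addresses - 2 - extra_reserved

print(usable_ips("10.0.0.0/22"))  # 1020
```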
>> We add into the host-aggregate that each HV is attached to a metadata
>> item which maps to the names of the neutron networks that host
>> supports. This basically creates the mapping of which host supports which
>> networks, so we can correctly filter hosts out during scheduling. We do
>> allow people to choose a network if they wish and we do have the neutron
>> end-point exposed. However, by default if they do not supply a boot
>> command with a network, we will filter the networks down and choose one
>> for them. That way they never hit this issue. This also works well for us,
>> because the default UI that we provide our end-users is not horizon.
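A minimal, self-contained sketch of that filtering idea follows. This is not the actual nova filter described above; the `networks` metadata key and its comma-separated value format are assumptions made for illustration:

```python
class NetworkAggregateFilter:
    """Toy stand-in for a nova scheduler filter: a host passes only if
    the metadata of the host-aggregate it belongs to lists the
    requested neutron network name."""

    def host_passes(self, aggregate_metadata, requested_network):
        # e.g. {"networks": "public-1,service-1"} set on the aggregate
        supported = aggregate_metadata.get("networks", "").split(",")
        return requested_network in supported

f = NetworkAggregateFilter()
print(f.host_passes({"networks": "public-1,service-1"}, "public-1"))  # True
print(f.host_passes({"networks": "public-2"}, "public-1"))            # False
```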
>Why do you define multiple networks - as opposed to just one - and why
>would one of your users want to choose a particular one of those?
>(Do you mean multiple as in public-1, public-2, ...; or multiple as in
>public, service, ...?)
This is answered in the other email and original email as well. But in short:
we have multiple L2 segments that only exist on certain switches and are
only tied to certain hosts. With the way neutron is currently structured we
need to create a network for each L2. So that's why we define multiple
networks. For our end users - they only care about getting a vm with a single
ip in a "network" which is really a zone like "prod" or "dev" or "test". They
stop caring after that point. So in the scheduler filter that we created we do
exactly that. We will filter down from all the hosts and networks to a
combo that intersects at a host that has space, with a network that has
free ip's. And the network that was chosen is actually available to that host.
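The intersection described above - a host with capacity whose attached network is in the requested zone and still has free ip's - could be sketched like this (the data shapes are illustrative, not nova/neutron models):

```python
def pick_host_and_network(hosts, zone):
    """Return the first (host, network) pair where the host has room and
    an attached network in the requested zone still has free IPs."""
    for host in hosts:
        if host["free_slots"] <= 0:
            continue  # host has no capacity for another vm
        for net in host["networks"]:
            if net["zone"] == zone and net["free_ips"] > 0:
                return host["name"], net["name"]
    return None  # no valid intersection: scheduling fails

hosts = [
    {"name": "hv1", "free_slots": 0,
     "networks": [{"name": "prod-1", "zone": "prod", "free_ips": 10}]},
    {"name": "hv2", "free_slots": 4,
     "networks": [{"name": "prod-2", "zone": "prod", "free_ips": 3},
                  {"name": "dev-1", "zone": "dev", "free_ips": 0}]},
]
print(pick_host_and_network(hosts, "prod"))  # ('hv2', 'prod-2')
```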
>> We currently only support one network per HV via this configuration, but
>> we would like to be able to expose a network "type" or "group"
>> in the future.
>> I believe what you described below is also another way of phrasing the
>> idea that we had in mind. That you want to define multiple "top level"
>> networks in neutron: 'public' and 'service'. That is made up by multiple
>desperate? :-) I assume you probably meant "separate" here.
Sorry - yes.
>> L2 networks: 'public-1', 'public-2', etc. which are independently
>> constrained to a specific set of hosts/switches/datacenter.
>If I'm understanding correctly, this is one of those places where I get
>confused about the difference between Neutron-as-an-API and
>Neutron-as-a-software-implementation. I guess what you mean here is
>that your deployment hardware is really providing those L2 segments
>directly, and hence you aren't using Neutron's software-based simulation
>of L2 segments. Is that right?
>> We have talked about working around this under our configuration in one
>> of two ways. The first is to use availability zones to provide the
>> separation between 'public' and 'service', or in our case: 'prod',
>> 'pki', 'internal', etc.
>Why are availability zones involved here? Assuming you had 'prod',
>'pki','internal' etc. networks set up and represented as such in
>Neutron, why wouldn't you just say which of those networks each instance
>should connect to, when creating each instance?
Because neutron doesn't support our networking configuration, we need to
present a way for the end user to choose what network "zone" they end up in.
Neutron's current model is "any network, anywhere", where network here is
defined as an L2 domain. Which is simply not the way networks in our data
centers are being built these days. We have moved from that model to a
"folded Clos" design. In this design a "spine" interconnects all of the
TOR's (leafs). Each TOR terminates the L2 boundary for the servers attached
to it. All communication across the spine is routed (L3). This design pushes
the ARP/broadcast traffic down to the TOR's and frees up the CAM table space
in the spine, allowing it to scale much better. We put together an RFE on
this.
 - https://bugs.launchpad.net/neutron/+bug/1458890