[Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

Kris G. Lindgren klindgren at godaddy.com
Wed Jun 17 02:44:09 UTC 2015

We are doing pretty much the same thing, but in a slightly different way.
We extended the nova scheduler to help choose networks (i.e., don't put
VMs on a network/host that doesn't have any available IP addresses). Then,
on the host aggregate that each HV belongs to, we add a metadata item
which maps to the names of the neutron networks that host supports.
This creates the mapping of which host supports which networks, so we can
correctly filter hosts out during scheduling. We do allow people to choose
a network if they wish, and we do have the neutron endpoint exposed.
However, if they do not supply a network in the boot command, we will
filter the networks down and choose one for them. That way they never hit
[1]. This also works well for us, because the default UI that we provide
our end users is not horizon.
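As a rough illustration of the scheduling approach described above, here is a minimal sketch of a filter that matches host-aggregate metadata against networks that still have free IPs. The class name, the `networks` metadata key, and the method shape are all assumptions for illustration, not the actual GoDaddy patches.

```python
# Hypothetical sketch of the scheduler-side filtering described above.
# Metadata key name ("networks") and class name are illustrative only.

class NetworkAggregateFilter:
    """Pass only hosts whose aggregate metadata lists a neutron network
    that can satisfy the request (and that still has free IP addresses)."""

    def __init__(self, networks_with_free_ips):
        # e.g. {"prod-net-1", "prod-net-2"}, as reported by neutron
        self.networks_with_free_ips = set(networks_with_free_ips)

    def host_passes(self, aggregate_metadata, requested_network=None):
        # Aggregate metadata maps each host to the neutron networks it
        # supports, e.g. {"networks": "prod-net-1,prod-net-2"}.
        supported = set(aggregate_metadata.get("networks", "").split(","))
        if requested_network:
            # The user explicitly chose a network in the boot command.
            candidates = supported & {requested_network}
        else:
            # Otherwise choose for them: any supported net with free IPs.
            candidates = supported & self.networks_with_free_ips
        return bool(candidates)
```

A filter like this drops hosts attached only to full or unrelated networks before placement, which is what prevents users from ever landing on a network with no available addresses.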

We currently only support one network per HV via this configuration, but
we would like to be able to expose a network "type" or "group" via neutron
in the future.  

I believe what you described below is also another way of phrasing the ask
that we had in [2]: you want to define multiple "top level" networks in
neutron ('public' and 'service'), each made up of multiple disparate
L2 networks ('public-1', 'public-2', etc.) which are independently
constrained to a specific set of hosts/switches/datacenters.

We have talked about working around this under our configuration in one of
two ways. The first is to use availability zones to provide the separation
between 'public' and 'service', or in our case 'prod', 'pki', 'internal',
etc. This would work well for our current use case (one type of network
per HV), but would most likely be wasteful in yours. You could probably
make the change that allows a HV to exist in more than one availability
zone, which would let you specify the same hypervisors for both the
"public" and "service" AZs and thus not be wasteful. The second is to
create additional flavors that have a network_group attribute and do
some extra filtering on that. We had some other ideas as well, but a
number of open questions about how to get them fully implemented.
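The second workaround could be sketched roughly as follows: the flavor carries a `network_group` extra spec, and scheduling matches it against the host aggregate's metadata. The key names and function shape here are illustrative assumptions, not an existing implementation.

```python
# Hypothetical sketch of flavor-based network-group filtering.
# Extra-spec and metadata key names ("network_group") are assumed.

def flavor_host_matches(flavor_extra_specs, aggregate_metadata):
    """Return True if the host's aggregate serves the network group
    the flavor asks for (e.g. 'public' vs. 'service')."""
    wanted = flavor_extra_specs.get("network_group")
    if wanted is None:
        # Flavor doesn't constrain the network group; any host is fine.
        return True
    # Aggregate metadata lists the groups a host serves, comma-separated.
    offered = aggregate_metadata.get("network_group", "").split(",")
    return wanted in offered
```

Because a host can list several groups in its metadata, this variant avoids the wastefulness of dedicating hypervisors to a single availability zone.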

[2] https://bugs.launchpad.net/neutron/+bug/1458890

Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.

On 6/16/15, 6:31 PM, "Sam Morrison" <sorrison at gmail.com> wrote:

>We at NeCTAR are starting the transition to neutron from nova-net and
>neutron almost does what we want.
>We have 10 "public" networks and 10 "service" networks and depending on
>which compute node you land on you get attached to one of them.
>In neutron speak we have multiple shared externally routed provider
>networks. We don't have any tenant networks or any other fancy stuff yet.
>How I've currently got this set up is by creating 10 networks and
>subsequent subnets, e.g. public-1, public-2, public-3 … and service-1,
>service-2, service-3 and so on.
>In nova we have made a slight change in allocate_for_instance [1] whereby
>the compute node has designated hardcoded network_ids for the public
>and service networks it is physically attached to.
>We have also made changes in the nova API so users can't select a network,
>and the neutron endpoint is not registered in keystone.
>That all works fine, but ideally I want a user to be able to choose if
>they want a public and/or service network. We can't let them as we have
>10 public networks; we almost need something in neutron like a "network
>group" that allows a user to select "public" and have neutron
>allocate them a port in one of the underlying public networks.
>I tried going down the route of having 1 public and 1 service network in
>neutron and then creating 10 subnets under each. That works until you get
>to things like the dhcp-agent and metadata agent, although this looks like
>it could work with a few minor changes. Basically I need a dhcp-agent to
>be spun up per subnet, and to ensure they are spun up in the right place.
>I'm not sure what the correct way of doing this is. What are other people
>doing in the interim until this kind of use case can be done in Neutron?
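The nova-side change Sam describes (each compute node pinned to the provider networks it is physically attached to) could be sketched like this. The mapping, host names, and function are purely illustrative assumptions, not NeCTAR's actual patch [1].

```python
# Illustrative sketch: pick networks for an instance based solely on
# which compute node it landed on. Host and network names are made up.

COMPUTE_NODE_NETWORKS = {
    "cn-rack1-01": {"public": "public-1", "service": "service-1"},
    "cn-rack2-01": {"public": "public-2", "service": "service-2"},
}

def networks_for_instance(host, want_public=True, want_service=True):
    """Return the neutron network names to plug an instance into,
    determined entirely by the host it was scheduled to."""
    attached = COMPUTE_NODE_NETWORKS[host]
    nets = []
    if want_public:
        nets.append(attached["public"])
    if want_service:
        nets.append(attached["service"])
    return nets
```

The "network group" idea discussed in this thread would effectively move this host-to-network mapping out of per-node configuration and into neutron itself.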
>> On 17 Jun 2015, at 12:20 am, Jay Pipes <jaypipes at gmail.com> wrote:
>> Adding -dev because of the reference to the Neutron "Get me a network
>>spec". Also adding [nova] and [neutron] subject markers.
>> Comments inline, Kris.
>> On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
>>> During the Openstack summit this week I got to talk to a number of
>>> operators of large Openstack deployments about how they do networking.
>>>  I was happy, surprised even, to find that a number of us are using a
>>> similar type of networking strategy.  That we have similar challenges
>>> around networking and are solving it in our own but very similar way.
>>>  It is always nice to see that other people are doing the same things
>>> as you or see the same issues as you are and that "you are not crazy".
>>> So in that vein, I wanted to reach out to the rest of the Ops Community
>>> and ask one pretty simple question.
>>> Would it be accurate to say that most of your end users want almost
>>> nothing to do with the network?
>> That was my experience at AT&T, yes. The vast majority of end users
>>could not care less about networking, as long as the connectivity was
>>reliable, performed well, and they could connect to the Internet (and
>>have others connect from the Internet to their VMs) when needed.
>>> In my experience, what the majority of them (both internal and external)
>>> want is to consume from Openstack a compute resource, a property of
>>> which is that the resource has an IP address. They, at most, care about
>>> which "network" they are on, where a "network" is usually an arbitrary
>>> definition around a set of real networks that are constrained to a
>>> location and to which the company has attached some sort of policy. For
>>> example, I want to be in the production network vs. the xyz lab
>>> network, vs. the backup network, vs. the corp network. I would say
>>> for Godaddy, 99% of our use cases would be defined as: I want a compute
>>> resource in the production network zone, or I want a compute resource
>>> in this other network zone. The end user only cares that the IP the vm
>>> receives works in that zone; beyond that they don't care about any
>>> other property of that IP. They do not care what subnet it is in, what
>>> vlan it is on, what switch it is attached to, what router it's attached
>>> to, or how data flows in/out of that network. It just needs to work. We
>>> have also found that by giving the users a floating ip address that can
>>> be moved between VMs (but still constrained within a "network" zone) we
>>> can solve almost all of our users' asks. Typically, the internal need
>>> for a floating ip is when a compute resource needs to talk to another
>>> protected internal or external resource, where it is painful (read:
>>> slow) to have the ACLs on that protected resource updated. The external
>>> need is from our hosting customers who have a domain name (or many)
>>> pointed at an IP address, where changing IPs/DNS is particularly painful.
>> This is precisely my experience as well.
>>> Since the vast majority of our end users don't care about any of the
>>> technical network stuff, we spend a large amount of time/effort in
>>> abstracting or hiding the technical stuff from the users' view, which
>>> has led to a number of patches that we carry on both nova and neutron
>>> (and are available on our public github).
>> You may be interested to learn about the "Get Me a Network"
>>specification that was discussed in a session at the summit. I had
>>requested some time at the summit to discuss this exact use case --
>>where users of Nova actually didn't care much at all about network
>>constructs and just wanted to see Nova exhibit similar behaviour as the
>>nova-network behaviour of "admin sets up a bunch of unassigned networks
>>and the first time a tenant launches a VM, she just gets an available
>>network and everything is just done for her".
>> The spec is here:
>> https://review.openstack.org/#/c/184857/
>>> At the same time we also have a
>>> *very* small subset of (internal) users who are at the exact opposite
>>> end of the scale.  They care very much about the network details,
>>> possibly all the way down to that they want to boot a vm to a specific
>>> HV, with a specific IP address on a specific network segment.  The
>>> difference however, is that these users are completely aware of the
>>> topology of the network and know which HV's map to which network
>>> segments and are essentially trying to make a very specific ask for
>>> scheduling.
>> Agreed, at Mirantis (and occasionally at AT&T), we do get some
>>customers (mostly telcos, of course) that would like total control over
>>all things networking.
>> Nothing wrong with this, of course. But the point of the above spec is
>>to allow "normal" users to not have to think or know about all the
>>advanced networking stuff if they don't need it. The Neutron API should
>>be able to handle both sets of users equally well.
>> Best,
>> -jay
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
