[Openstack-operators] Routed provider networks...

Chris Marino chris at romana.io
Tue May 23 21:48:06 UTC 2017


Kevin, I should have been more clear.

For the specific operator that is running L3 to the host with only a few /20
blocks, dynamic routing is absolutely necessary.

The /16 scenario you describe is totally fine without it.
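
For a rough sense of the math (a sketch in Python; the /20 value and the
32-rack count are hypothetical):

    import ipaddress

    block = ipaddress.ip_network("10.10.0.0/20")  # 4096 addresses
    racks = list(block.subnets(new_prefix=25))    # static carve: 32 x /25
    usable = racks[0].num_addresses - 3           # minus network, broadcast, gateway
    print(len(racks), "racks x", usable, "usable IPs each")  # 32 racks x 125

Statically carved, each rack tops out at 125 VMs while addresses sit idle
elsewhere in the /20; dynamic routing keeps the whole block usable anywhere.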

CM



On Tue, May 23, 2017 at 2:40 PM, Kevin Benton <kevin at benton.pub> wrote:

> >Dynamic routing is absolutely necessary, though. Large blocks of RFC 1918
> addresses are scarce, even inside the DC.
>
> I just described a 65,000-VM topology and it used a /16. Dynamic
> routing is not necessary or even helpful in this scenario if you plan on
> ever running close to your max server density.
>
> Routed networks allow you to size your subnets specifically to the
> maximum number of VMs you can support in a segment, so there is very little
> IP waste once you actually start to use your servers to run VMs.
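>
> In Python terms, the back-of-the-envelope check (just arithmetic, nothing
> cloud-specific):
>
>     racks = 2 ** (23 - 16)       # a /16 carved into /23s -> 128 racks
>     usable = 2 ** (32 - 23) - 3  # 512 minus network, broadcast, gateway
>     print(racks * usable)        # 65152, i.e. the ~65k VM figure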
>
> On Tue, May 23, 2017 at 6:38 AM, Chris Marino <chris at romana.io> wrote:
>
>> On Mon, May 22, 2017 at 9:12 PM, Kevin Benton <kevin at benton.pub> wrote:
>>
>>> The operators that were asking for the spec were using private IP space
>>> and that is probably going to be the most common use case for routed
>>> networks. Splitting a /21 up across the entire data center isn't really
>>> something you would want to do because you would run out of IPs quickly
>>> like you mentioned.
>>>
>>> The use case for routed networks is almost exactly like your Romana project.
>>> For example, you have a large chunk of IPs (e.g. 10.0.0.0/16) and
>>> you've set up the infrastructure so each rack gets a /23 with the ToR as the
>>> gateway, which would buy you 509 VMs per rack across 128 racks.
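>>>
>>> Roughly, with the stdlib (a sketch; 10.0.0.0/16 is the example block from
>>> above):
>>>
>>>     import ipaddress
>>>
>>>     segs = list(ipaddress.ip_network("10.0.0.0/16").subnets(new_prefix=23))
>>>     print(len(segs))                  # 128 rack segments
>>>     print(segs[0].num_addresses - 3)  # 509 usable per rack (net, bcast, ToR gw)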
>>>
>>
>> Yes, it is. That's what brought me back to this. I'm working with an operator
>> that's using L2 provider networks today but will bring L3 to the host in their
>> new design.
>>
>> Dynamic routing is absolutely necessary, though. Large blocks of RFC 1918
>> addresses are scarce, even inside the DC, and VRFs and/or NAT are just not an
>> option.
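>>
>> Just to put numbers on the scarcity (plain Python):
>>
>>     import ipaddress
>>
>>     rfc1918 = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
>>     print(sum(ipaddress.ip_network(n).num_addresses for n in rfc1918))
>>     # 17891328 addresses in total, and only 256 /16s exist inside 10/8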
>>
>> CM
>>
>>
>>>
>>>
>>> On May 22, 2017 2:53 PM, "Chris Marino" <chris at romana.io> wrote:
>>>
>>> Thanks Jon, very helpful.
>>>
>>> I think a more common use case for provider networks (in enterprise,
>>> AFAIK) is that they'd have a small number of /20 or /21 networks (VLANs)
>>> that they would trunk to all hosts. The /21s are part of the larger
>>> datacenter network with segment firewalls and access to other datacenter
>>> resources (no NAT). Each functional area would get its own network (e.g.
>>> QA, Prod, Dev, Test), but users would have access to only certain
>>> networks.
>>>
>>> For various reasons, they're moving to spine/leaf L3 networks and they
>>> want to use the same provider network CIDRs with the new L3 network. While
>>> technically this is covered by the use case described in the spec,
>>> splitting a /21 into segments (i.e. one for each rack/ToR) severely limits
>>> the scheduler (since each rack only gets a part of the whole /21).
>>>
>>> This can be solved with route advertisement/distribution and/or IPAM
>>> coordination w/Nova, but this isn't possible today. Which brings me back to
>>> my earlier question: how useful are routed provider networks?
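>>>
>>> To make the scheduler limit concrete (a sketch; the 16-rack count is
>>> hypothetical):
>>>
>>>     import ipaddress
>>>
>>>     segs = list(ipaddress.ip_network("10.20.0.0/21").subnets(new_prefix=25))
>>>     print(len(segs), segs[0].num_addresses - 3)  # 16 segments, 125 usable each
>>>     # a rack is "full" at 125 VMs even while other segments of the /21
>>>     # still have free addresses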
>>>
>>> CM
>>>
>>> On Mon, May 22, 2017 at 1:08 PM, Jonathan Proulx <jon at csail.mit.edu>
>>> wrote:
>>>
>>>>
>>>> Not sure if this is what you're looking for but...
>>>>
>>>> For my private cloud in a research environment we have a public provider
>>>> network available to all projects.
>>>>
>>>> This is externally routed and has basically been in the same config
>>>> since Folsom (currently we're up to Mitaka). It provides public IPv4
>>>> addresses. DHCP is done in neutron (of course); the lower portion of
>>>> the allocated subnet is excluded from the dynamic range. We allow
>>>> users to register DNS names in this range (through pre-existing
>>>> custom, external IPAM tools) and to specify the fixed IP address when
>>>> launching VMs.
>>>>
>>>> This network typically has 1k VMs running. We've assigned a /18 to it,
>>>> which is obviously overkill.
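>>>>
>>>> In openstacksdk terms the shape of that is roughly as follows (a sketch:
>>>> the cloud name, physnet label, VLAN ID, and addresses are all made up,
>>>> and I'm using a /24 stand-in for the real range):
>>>>
>>>>     import openstack
>>>>
>>>>     conn = openstack.connect(cloud="mycloud")  # hypothetical clouds.yaml entry
>>>>     net = conn.network.create_network(
>>>>         name="public",
>>>>         provider_network_type="vlan",          # provider:network_type
>>>>         provider_physical_network="physnet1",  # hypothetical physnet label
>>>>         provider_segmentation_id=100,          # hypothetical VLAN ID
>>>>         is_shared=True,
>>>>     )
>>>>     conn.network.create_subnet(
>>>>         network_id=net.id,
>>>>         ip_version=4,
>>>>         cidr="203.0.113.0/24",                 # stand-in for the real block
>>>>         gateway_ip="203.0.113.1",
>>>>         # lower portion kept out of the pool for registered fixed IPs
>>>>         allocation_pools=[{"start": "203.0.113.128", "end": "203.0.113.254"}],
>>>>     )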
>>>>
>>>> A few projects also have provider networks plumbed in to bridge their
>>>> legacy physical networks into OpenStack. For these there's no dynamic
>>>> range and users must specify a fixed IP; these are generally considered
>>>> "a bad idea" and were used to facilitate dumping VMs from old Xen
>>>> infrastructures into OpenStack with minimal changes.
>>>>
>>>> These are old patterns I wouldn't necessarily suggest anyone
>>>> replicate, but they are the truth of my world...
>>>>
>>>> -Jon
>>>>
>>>> On Mon, May 22, 2017 at 12:47:01PM -0700, Chris Marino wrote:
>>>> :Hello operators, I will be talking about the new routed provider network
>>>> :<https://docs.openstack.org/ocata/networking-guide/config-routed-networks.html>
>>>> :features in OpenStack at a Meetup
>>>> :<https://www.meetup.com/openstack/events/239889735/> next week and would
>>>> :like to get a better sense of how provider networks are currently being
>>>> :used and if anyone has deployed routed provider networks?
>>>> :
>>>> :A typical L2 provider network is deployed as VLANs to every host, but I'm
>>>> :curious to know how many hosts or VMs an operator would allow on this
>>>> :network before wanting to split it into segments. Would you split hosts
>>>> :between VLANs, or trunk the VLANs to all hosts? How do you handle
>>>> :scheduling VMs across two provider networks?
>>>> :
>>>> :If you were to go with L3 provider networks, would it be L3 to the ToR,
>>>> :or L3 to the host?
>>>> :
>>>> :Are the new routed provider network features useful in their current form?
>>>> :
>>>> :Any experience you can share would be very helpful.
>>>> :CM
>>>> :
>>>
>
>