<div dir="auto"><div>The operators that were asking for the spec were using private IP space and that is probably going to be the most common use case for routed networks. Splitting a /21 up across the entire data center isn't really something you would want to do because you would run out of IPs quickly like you mentioned. </div><div dir="auto"><br></div><div dir="auto">The use case routed networks is almost exactly like your Romana project. For example, you have a large chunk of IPs (e.g. <a href="http://10.0.0.0/16">10.0.0.0/16</a>) and you've setup the infrastructure so each rack gets a /23 with the ToR as the gateway which would buy you 509 VMs across 128 racks. </div><div dir="auto"><br></div><div dir="auto"><br><div class="gmail_extra" dir="auto"><br><div class="gmail_quote">On May 22, 2017 2:53 PM, "Chris Marino" <<a href="mailto:chris@romana.io">chris@romana.io</a>> wrote:<br type="attribution"><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thanks Jon, very helpful.<div><br></div><div>I think a more common use case for provider networks (in enterprise, AFAIK) is that they'd have a small number of /20 or /21 networks (VLANs) that they would trunk to all hosts. The /21s are part of the larger datacenter network with segment firewalls and access to other datacenter resources (no NAT). Each functional area would get their own network (i.e. QA, Prod, Dev, Test, etc.) but users would have access to only certain networks. </div><div><br></div><div>For various reasons, they're moving to spine/leaf L3 networks and they want to use the same provider network CIDRs with the new L3 network. While technically this is covered by the use case described in the spec, splitting a /21 into segments (i.e.one for each rack/ToRs) severely limits the scheduler (since each rack only get a part of the whole /21).<br></div><div><div><br></div><div>This can be solved with route advertisement/distribution and/or IPAM coordination w/Nova, but this isn't possible today. Which brings me back to my earlier question, how useful are routed provider network?</div></div><div><br></div><div>CM</div></div><div hspace="streak-pt-mark" style="max-height:1px"><img alt="" style="width:0px;max-height:0px;overflow:hidden" src="https://mailfoogae.appspot.com/t?sender=aY2hyaXNAcGFuaW5ldHdvcmtzLmNvbQ%3D%3D&type=zerocontent&guid=efba9cce-16f0-4deb-8ef4-9e61293c743b"><font color="#ffffff" size="1">ᐧ</font></div><div class="elided-text"><div class="gmail_extra"><br><div class="gmail_quote">On Mon, May 22, 2017 at 1:08 PM, Jonathan Proulx <span dir="ltr"><<a href="mailto:jon@csail.mit.edu" target="_blank">jon@csail.mit.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
On Mon, May 22, 2017 at 1:08 PM, Jonathan Proulx <jon@csail.mit.edu> wrote:

Not sure if this is what you're looking for, but...

For my private cloud in a research environment, we have a public provider network available to all projects.

This is externally routed and has basically been in the same config since Folsom (currently we're up to Mitaka). It provides public IPv4 addresses. DHCP is done in neutron (of course); the lower portion of the allocated subnet is excluded from the dynamic range. We allow users to register DNS names in this range (through pre-existing custom, external IPAM tools) and to specify the fixed IP address when launching VMs.

This network typically has 1k VMs running. We've assigned a /18 to it, which is obviously overkill.
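[To put rough numbers on that layout, a minimal sketch with Python's ipaddress module; the /18 size and the ~1k VMs are from above, while the 100.64.0.0/18 placeholder prefix and the 1,024-address static cut-off are assumptions, not the real values.]

import ipaddress

# Placeholder /18 standing in for the real public allocation.
subnet = ipaddress.ip_network("100.64.0.0/18")
hosts = list(subnet.hosts())               # usable addresses: 16,382

# Assumption: the first 1,024 addresses are held out of DHCP for
# DNS-registered / fixed-IP use; the rest is the dynamic range.
reserved = 1024
static_low = hosts[:reserved]
dynamic = hosts[reserved:]

print(subnet.num_addresses, "addresses in the /18")                 # 16384
print("excluded lower range:", static_low[0], "-", static_low[-1])
print("dynamic range:       ", dynamic[0], "-", dynamic[-1])
print(len(dynamic), "dynamic addresses for ~1k VMs")                # 15358, hence "overkill"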

A few projects also have provider networks plumbed in to bridge their legacy physical networks into OpenStack. For these there's no dynamic range and users must specify a fixed IP; these are generally considered "a bad idea" and were used to facilitate dumping VMs from old Xen infrastructures into OpenStack with minimal changes.

These are old patterns I wouldn't necessarily suggest anyone replicate, but they are the truth of my world...

-Jon

On Mon, May 22, 2017 at 12:47:01PM -0700, Chris Marino wrote:
:Hello operators, I will be talking about the new routed provider network
:features <https://docs.openstack.org/ocata/networking-guide/config-routed-networks.html>
:in OpenStack at a Meetup <https://www.meetup.com/openstack/events/239889735/> next
:week and would like to get a better sense of how provider networks are
:currently being used and whether anyone has deployed routed provider networks.
:
:A typical L2 provider network is deployed as VLANs to every host. But I'm
:curious to know how many hosts or VMs an operator would allow on this
:network before wanting to split it into segments. Would you split hosts
:between VLANs, or trunk the VLANs to all hosts? How do you handle
:scheduling VMs across two provider networks?
:
:If you were to go with L3 provider networks, would it be L3 to the ToR, or
:L3 to the host?
:
:Are the new routed provider network features useful in their current form?
:
:Any experience you can share would be very helpful.
:CM
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators