<div dir="ltr">Thanks Kevin,<div><br></div><div>If I understood you well, scalability isn't impacted by number of tenants, but rather by number of ports by network / security group / tenant router; so, if I have a single giant tenant network with several thousands ports, perhaps I'll have a problem.</div><div><br></div><div>Partitioning the load into various "tenant networks" should mitigate these problems, independently of the total number of tenants. So I could I keep running the cloud fine with a *single* tenant owning *several* internal networks, right?</div><div><br></div><div>Gustavo</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, May 14, 2015 at 6:56 PM, Kevin Benton <span dir="ltr"><<a href="mailto:blak111@gmail.com" target="_blank">blak111@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Neutron scalability isn't impacted directly by the number of tenants<br>
so that shouldn't matter too much. The following are a few things to
consider.

Number of ports per security group: Every time a member of a security
group (a port) is removed/added or has its IP changed, a notification
goes out to the L2 agents so they can update their firewall rules. If
you have thousands of ports and lots of churn, the L2 agents will be
busy all of the time processing the changes and may fall behind,
impacting the time it takes for ports to gain connectivity.
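
For example (group names here are made up, and whether the split fits
depends on your app layout), spreading ports across a few smaller
per-tier security groups instead of one giant group means each
membership change fans out to far fewer agents:

    # Illustrative sketch: a per-tier group instead of one giant one.
    neutron security-group-create web-tier
    neutron security-group-rule-create --direction ingress \
        --protocol tcp --port-range-min 80 --port-range-max 80 \
        --remote-ip-prefix 0.0.0.0/0 web-tier
    # Boot instances into the smaller group (net-id is a placeholder):
    nova boot --image cirros --flavor m1.small \
        --nic net-id=<NET_UUID> --security-groups web-tier web01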

Number of ports per network: Each network is a broadcast domain, so a
single network with hundreds of ports will get pretty chatty with
broadcast and multicast traffic. Also, if you use l2pop, each L2 agent
has to know the location of every port that shares a network with the
ports on the agent. I don't think this has as much impact as the
security group updates, but it's something to keep in mind.
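
As a rough sketch (names and CIDRs are invented), a single tenant can
own several smaller networks, which keeps each broadcast domain and
each l2pop fanout small:

    # Several small broadcast domains under one tenant instead of one
    # giant network:
    neutron net-create app-net-1
    neutron subnet-create --name app-subnet-1 app-net-1 10.10.1.0/24
    neutron net-create app-net-2
    neutron subnet-create --name app-subnet-2 app-net-2 10.10.2.0/24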

Number of ports behind a single tenant router: Any traffic bound for
an external network from a port that doesn't have a floating IP
associated with it needs to go via the assigned centralized SNAT node
for that router. If a lot of your VMs don't have floating IPs and
generate lots of traffic, this single translation point will quickly
become a bottleneck.
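
To spread that out, each group of networks can get its own distributed
router, roughly like this (names are illustrative; 'ext-net' stands in
for your external network):

    # One router per group of networks, so each router gets its own
    # centralized SNAT namespace. --distributed is only needed if
    # router_distributed isn't already True in neutron.conf.
    neutron router-create --distributed True app-router-1
    neutron router-gateway-set app-router-1 ext-net
    neutron router-interface-add app-router-1 app-subnet-1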

Number of centralized SNAT agents: Even if you have lots of tenant
routers to address the issue above, you need to make sure you have
plenty of L3 agents with access to the external network and
'agent_mode' set to 'dvr_snat' so they can be used as centralized SNAT
nodes. Otherwise, if you only have one centralized SNAT node,
splitting the traffic across a bunch of tenant routers doesn't buy you
much.
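
Concretely, that's the 'agent_mode' option in l3_agent.ini; roughly
(exact paths depend on your distro's packaging):

    # /etc/neutron/l3_agent.ini on each node serving as a centralized
    # SNAT point (needs external network access):
    [DEFAULT]
    agent_mode = dvr_snat

    # /etc/neutron/l3_agent.ini on the compute nodes:
    [DEFAULT]
    agent_mode = dvr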

Let me know if you need me to clarify anything.

Cheers,
Kevin Benton
<div><div class="h5"><br>
On Thu, May 14, 2015 at 9:15 AM, Gustavo Randich<br>
<<a href="mailto:gustavo.randich@gmail.com">gustavo.randich@gmail.com</a>> wrote:<br>
> Hi!
>
> We are evaluating the migration of our private cloud of several
> thousand VMs from multi-host nova-network to neutron/DVR. For
> historical reasons, we currently use a single tenant because group
> administration is handled outside OpenStack (users don't talk to the
> OS API). The number of compute nodes we have now is approx. 400, and
> growing.
>
> My question is:
>
> Strictly regarding the scalability and performance of the DVR/Neutron
> virtual networking components inside compute nodes (OVS virtual
> switches, iptables, VXLAN tunnel mesh, etc.), should we maintain this
> single-tenant / single-network architecture in Neutron/DVR? Or should
> we partition our next cloud into several tenants, each corresponding
> to different groups/verticals inside the company, and possibly each
> with their own private networks?
>
> Thanks!
<span class="HOEnZb"><font color="#888888"><br>
<br>
<br>
--<br>
Kevin Benton<br>
</font></span></blockquote></div><br></div>