[Openstack-operators] Neutron/DVR scalability of one giant single tenant VS multiple tenants

Kevin Benton blak111 at gmail.com
Fri May 15 00:34:19 UTC 2015


Yes, correct. Tenants are basically just a tag used to filter and
restrict API operations.
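For what it's worth, here is a rough, untested sketch of that layout using
the openstacksdk Python library: one tenant owning several internal
networks, each behind its own router. The cloud name, external network
name, and CIDRs are made-up placeholders.

    import openstack

    # Assumes a clouds.yaml entry named 'mycloud' with the tenant's creds.
    conn = openstack.connect(cloud='mycloud')

    # Assumed name of the pre-existing external (provider) network.
    external = conn.network.find_network('public')

    for i in range(4):
        # One internal network + subnet per group/vertical.
        net = conn.network.create_network(name='internal-%d' % i)
        subnet = conn.network.create_subnet(
            network_id=net.id,
            name='internal-%d-subnet' % i,
            ip_version=4,
            cidr='10.%d.0.0/24' % i)
        # A dedicated router per network spreads the SNAT load
        # (see the discussion of centralized SNAT below).
        router = conn.network.create_router(
            name='router-%d' % i,
            external_gateway_info={'network_id': external.id})
        conn.network.add_interface_to_router(router, subnet_id=subnet.id)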
On May 14, 2015 4:35 PM, "Gustavo Randich" <gustavo.randich at gmail.com>
wrote:

> Thanks Kevin,
>
> If I understood you well, scalability isn't impacted by the number of
> tenants, but rather by the number of ports per network / security group /
> tenant router; so, if I have a single giant tenant network with several
> thousand ports, perhaps I'll have a problem.
>
> Partitioning the load into various "tenant networks" should mitigate these
> problems, independently of the total number of tenants. So could I keep
> running the cloud fine with a *single* tenant owning *several* internal
> networks, right?
>
> Gustavo
>
>
> On Thu, May 14, 2015 at 6:56 PM, Kevin Benton <blak111 at gmail.com> wrote:
>
>> Neutron scalability isn't directly impacted by the number of tenants,
>> so that shouldn't matter too much. The following are a few things to
>> consider.
>>
>> Number of ports per security group: Every time a member of a security
>> group (a port) is removed/added or has its IP changed, a notification
>> goes out to the L2 agents so they can update their firewall rules. If
>> you have thousands of ports and lots of churn, the L2 agents will be
>> busy all of the time processing the changes and may fall behind,
>> impacting the time it takes for ports to gain connectivity.
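>>
>> As a quick way to see where you stand, here is a rough, untested sketch
>> that tallies ports per security group with the openstacksdk Python
>> library (the 'mycloud' clouds.yaml entry is a made-up placeholder):
>>
>>     from collections import Counter
>>     import openstack
>>
>>     conn = openstack.connect(cloud='mycloud')
>>
>>     # Count how many ports reference each security group.
>>     per_sg = Counter()
>>     for port in conn.network.ports():
>>         for sg_id in (port.security_group_ids or []):
>>             per_sg[sg_id] += 1
>>
>>     # The biggest groups are where update churn will hurt the most.
>>     for sg_id, count in per_sg.most_common(10):
>>         print(sg_id, count)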
>>
>> Number of ports per network: Each network is a broadcast domain, so a
>> single network with hundreds of ports will get pretty chatty with
>> broadcast and multicast traffic. Also, if you use l2pop, each L2 agent
>> has to know the location of every port that shares a network with the
>> ports on the agent. I don't think this has as much impact as the
>> security group updates, but it's something to keep in mind.
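>>
>> The same kind of untested openstacksdk sketch works for spotting
>> oversized broadcast domains (again, 'mycloud' is a placeholder):
>>
>>     from collections import Counter
>>     import openstack
>>
>>     conn = openstack.connect(cloud='mycloud')
>>
>>     # Count ports per network; large counts mean large broadcast
>>     # domains and, with l2pop, more state pushed to every L2 agent.
>>     per_net = Counter(p.network_id for p in conn.network.ports())
>>     for net_id, count in per_net.most_common(10):
>>         print(net_id, count)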
>>
>> Number of ports behind a single tenant router: Any traffic bound for an
>> external network from a port that doesn't have a floating IP associated
>> with it needs to go via the centralized SNAT node assigned to that
>> router. If a lot of your VMs don't have floating IPs and generate lots
>> of traffic, this single translation point will quickly become a
>> bottleneck.
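>>
>> To estimate how much traffic would hairpin through centralized SNAT,
>> here is a rough, untested sketch that counts VM ports with no floating
>> IP attached (openstacksdk again; 'mycloud' is a placeholder):
>>
>>     import openstack
>>
>>     conn = openstack.connect(cloud='mycloud')
>>
>>     # Ports that already have a floating IP attached.
>>     with_fip = {fip.port_id for fip in conn.network.ips() if fip.port_id}
>>
>>     # VM ports have a device_owner like 'compute:nova'.
>>     vm_ports = [p for p in conn.network.ports()
>>                 if (p.device_owner or '').startswith('compute:')]
>>
>>     no_fip = [p for p in vm_ports if p.id not in with_fip]
>>     print('%d of %d VM ports rely on SNAT' % (len(no_fip), len(vm_ports)))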
>>
>> Number of centralized SNAT agents: Even if you have lots of tenant
>> routers to address the issue above, you need to make sure you have
>> plenty of L3 agents with access to the external network and
>> 'agent_mode' set to 'dvr_snat' so they can be used as centralized SNAT
>> nodes. Otherwise, if you only have one centralized SNAT node,
>> splitting the traffic across a bunch of tenant routers doesn't buy you
>> much.
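>>
>> Concretely, on each node that should be eligible as a centralized SNAT
>> node, l3_agent.ini would carry something like the following (compute
>> nodes run with agent_mode = dvr instead):
>>
>>     [DEFAULT]
>>     agent_mode = dvr_snat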
>>
>> Let me know if you need me to clarify anything.
>>
>> Cheers,
>> Kevin Benton
>>
>> On Thu, May 14, 2015 at 9:15 AM, Gustavo Randich
>> <gustavo.randich at gmail.com> wrote:
>> > Hi!
>> >
>> > We are evaluating the migration of our private cloud of several
>> > thousand VMs from multi-host nova-network to Neutron/DVR. For
>> > historical reasons, we currently use a single tenant, because group
>> > administration is handled outside OpenStack (users don't talk to the
>> > OpenStack API). The number of compute nodes we have now is approx.
>> > 400, and growing.
>> >
>> > My question is:
>> >
>> > Strictly regarding the scalability and performance of the Neutron/DVR
>> > virtual networking components inside compute nodes (OVS virtual
>> > switches, iptables, VXLAN tunnel mesh, etc.), should we maintain this
>> > single-tenant / single-network architecture in Neutron/DVR? Or should
>> > we partition our next cloud into several tenants, each corresponding
>> > to different groups/verticals inside the company, and possibly each
>> > with several private networks?
>> >
>> > Thanks!
>> >
>> >
>>
>>
>>
>> --
>> Kevin Benton
>>
>
>