[openstack-dev] [neutron] Neutron scaling datapoints?

Kevin Benton blak111 at gmail.com
Sat Apr 11 04:34:55 UTC 2015


Which periodic updates did you have in mind to eliminate? One of the few
remaining ones I can think of is sync_routers, but it would be great if you
could enumerate the ones you observed, because eliminating overhead in
agents is something I've been working on as well.

One of the most common is the heartbeat from each agent. However, I don't
think we can eliminate those, because they are used to determine whether the
agents are still alive for scheduling purposes. Did you have something else
in mind for determining whether an agent is alive?
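
To make the liveness mechanism concrete: the scheduler only needs to know how
recently each agent last reported in, so the server-side check boils down to
comparing the last heartbeat timestamp against a timeout. A simplified sketch
(illustrative only, not the actual Neutron code; the AGENT_DOWN_TIME name and
value are assumptions):

    from datetime import datetime, timedelta

    # Hypothetical timeout after which an agent that has sent no heartbeat
    # is treated as dead for scheduling purposes.
    AGENT_DOWN_TIME = timedelta(seconds=75)

    def record_heartbeat(agent_states, agent_id):
        """Called when an agent's periodic state report arrives."""
        agent_states[agent_id] = datetime.utcnow()

    def is_agent_alive(agent_states, agent_id):
        """An agent is alive if its last heartbeat is recent enough."""
        last_seen = agent_states.get(agent_id)
        return last_seen is not None and \
            datetime.utcnow() - last_seen < AGENT_DOWN_TIME

Anything that replaces the heartbeat would still have to feed an equivalent
freshness signal into the schedulers.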

On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas <afazekas at redhat.com> wrote:

> I'm 99.9% sure that, for scaling above 100k managed nodes,
> we do not really need to split OpenStack into multiple smaller OpenStacks,
> or use a significant number of extra controller machines.
>
> The problem is that OpenStack is using the right tools, SQL/AMQP/(zk),
> but in the wrong way.
>
> For example:
> Periodic updates can be avoided in almost all cases.
>
> New data can be pushed to the agent only when it is needed.
> The agent can know when the AMQP connection has become unreliable (queue or
> connection loss), and then needs to do a full sync.
> https://bugs.launchpad.net/neutron/+bug/1438159
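
To make that concrete, the push-and-resync pattern described above could be
sketched roughly like this (illustrative pseudocode; the class and method
names are assumptions, not the actual agent code):

    class ConnectionLost(Exception):
        """Raised when the AMQP queue or connection is lost."""

    def run_agent(rpc_connection, backend):
        """Event-driven agent loop: apply pushed updates, resync on loss."""
        backend.full_sync()          # fetch initial state from the server
        while True:
            try:
                # Block until the server pushes the next change; no polling.
                update = rpc_connection.wait_for_update()
                backend.apply(update)
            except ConnectionLost:
                # Pushed updates may have been missed while disconnected,
                # so fall back to one full sync, then resume.
                rpc_connection.reconnect()
                backend.full_sync()

In steady state the AMQP traffic is then proportional to the rate of actual
changes rather than to the number of agents.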
>
> Also, when the agents get a notification, they start asking for the details
> via AMQP -> SQL. Why do they not already know them, or get them with the
> notification?
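
The distinction being drawn here is essentially "thin" versus "fat"
notifications. Roughly (hypothetical method names, not the real RPC API):

    # Thin notification: the agent is only told *that* a port changed, and
    # must make another AMQP round trip (and the server another SQL query)
    # to learn the details.
    def port_updated_thin(agent, port_id):
        details = agent.rpc.get_port_details(port_id)   # extra round trip
        agent.apply_port(details)

    # Fat notification: the changed object travels with the notification,
    # so no follow-up query is needed.
    def port_updated_fat(agent, port_details):
        agent.apply_port(port_details)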
>
>
> ----- Original Message -----
> > From: "Neil Jerram" <Neil.Jerram at metaswitch.com>
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> > Sent: Thursday, April 9, 2015 5:01:45 PM
> > Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
> >
> > Hi Joe,
> >
> > Many thanks for your reply!
> >
> > On 09/04/15 03:34, joehuang wrote:
> > > Hi, Neil,
> > >
> > >  In theory, Neutron is like a "broadcast" domain: for example,
> > >  enforcement of DVR and security groups has to touch every host where a
> > >  VM of the project resides. Even when using an SDN controller, the
> > >  "touch" to each such host is inevitable. If there are many physical
> > >  hosts, for example 10k, inside one Neutron, it's very hard to overcome
> > >  the "broadcast storm" issue under concurrent operation; that's the
> > >  bottleneck for the scalability of Neutron.
> >
> > I think I understand that in general terms - but can you be more
> > specific about the broadcast storm?  Is there one particular message
> > exchange that involves broadcasting?  Is it only from the server to
> > agents, or are there 'broadcasts' in other directions as well?
> >
> > (I presume you are talking about control plane messages here, i.e.
> > between Neutron components.  Is that right?  Obviously there can also be
> > broadcast storm problems in the data plane - but I don't think that's
> > what you are talking about here.)
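
For context on what that "broadcast" usually means in Neutron terms: these are
control-plane messages, typically fanout RPC casts from the server to every
agent subscribed to a topic, so one API operation can turn into one message
per host. A minimal sketch with oslo.messaging (the topic and method names
here are illustrative, not necessarily the exact ones Neutron uses):

    from oslo_config import cfg
    import oslo_messaging as messaging

    transport = messaging.get_transport(cfg.CONF)
    # fanout=True means every agent listening on this topic gets a copy.
    target = messaging.Target(topic='q-agent-notifier-security_group-update',
                              fanout=True)
    client = messaging.RPCClient(transport, target)
    client.cast({}, 'security_groups_rule_updated',
                security_groups=['example-sg-id'])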
> >
> > > We need a layered architecture in Neutron to solve the "broadcast
> > > domain" scalability bottleneck. The test report from OpenStack cascading
> > > shows that through the layered architecture "Neutron cascading", Neutron
> > > can support up to a million ports and on the order of 100k physical
> > > hosts. You can find the report here:
> > >
> > > http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers
> >
> > Many thanks, I will take a look at this.
> >
> > > "Neutron cascading" also brings extra benefit: One cascading Neutron
> can
> > > have many cascaded Neutrons, and different cascaded Neutron can
> leverage
> > > different SDN controller, maybe one is ODL, the other one is
> OpenContrail.
> > >
> > > ----------------Cascading Neutron-------------------
> > >              /         \
> > > --cascaded Neutron--   --cascaded Neutron-----
> > >         |                  |
> > > ---------ODL------       ----OpenContrail--------
> > >
> > >
> > > And furthermore, if using Neutron cascading in multiple data centers,
> > > the DCI controller (data center inter-connection controller) can also be
> > > used under the cascading Neutron, to provide NaaS (network as a service)
> > > across data centers.
> > >
> > > -----------------------Cascading Neutron--------------------------
> > >                /               |               \
> > > --cascaded Neutron--    -DCI controller-    --cascaded Neutron--
> > >          |                     |                     |
> > > ---------ODL------             |          ----OpenContrail--------
> > >                                |
> > > --(Data center 1)--   --(DCI networking)--  --(Data center 2)--
> > >
> > > Is it possible for us to discuss this at the OpenStack Vancouver summit?
> >
> > Most certainly, yes.  I will be there from mid-Monday afternoon through
> > to the end of Friday.  But it will be my first summit, so I have no idea
> > yet how I might run into you - please suggest how!
> >
> > > Best Regards
> > > Chaoyi Huang ( Joe Huang )
> >
> > Regards,
> >       Neil
> >
> >



-- 
Kevin Benton