[openstack-dev] Moving task flow to conductor - concern about scale

Joe Gordon joe.gordon0 at gmail.com
Fri Jul 19 17:23:33 UTC 2013


On Jul 19, 2013 9:57 AM, "Day, Phil" <philip.day at hp.com> wrote:
>
> > -----Original Message-----
> > From: Dan Smith [mailto:dms at danplanet.com]
> > Sent: 19 July 2013 15:15
> > To: OpenStack Development Mailing List
> > Cc: Day, Phil
> > Subject: Re: [openstack-dev] Moving task flow to conductor - concern
> > about scale
> >
> > > There's nothing I've seen so far that causes me alarm,  but then again
> > > we're in the very early stages and haven't moved anything really
> > > complex.
> >
> > The migrations (live, cold, and resize) are moving there now. These are
> > some of the more complex stateful operations I would expect conductor
> > to manage in the near term, and maybe ever.
> >
> > > I just don't buy into this line of thinking - I need more than one API
> > > node for HA as well - but that doesn't mean I therefore want to put
> > > anything else that needs more than one node in there.
> > >
> > > I don't even think these do scale-with-compute in the same way;  DB
> > > proxy load scales with the number of compute hosts because each new
> > > host introduces an amount of DB load through its periodic tasks.  Task
> > > workflow load scales with the number of requests to create / modify
> > > servers - and that's not directly related to the number of hosts.
> >
> > Unlike API, the only incoming requests that generate load for the
> > conductor are things like migrations, which also generate database
> > traffic.
> >
> > > So rather than asking "what doesn't work / might not work in the
> > > future" I think the question should be "aside from them both being
> > > things that could be described as a conductor - what's the
> > > architectural reason for wanting to have these two separate groups of
> > > functionality in the same service ?"
> >
> > IMHO, the architectural reason is "lack of proliferation of services
> > and the added complexity that comes with it."
> >
>
> IMO I don't think reducing the number of services is a good enough reason
> to group unrelated services (db-proxy, task_workflow).  Otherwise why
> aren't we arguing to just add all of these to the existing scheduler
> service?
>
> > If one expects the proxy workload to always overshadow the task
> > workload, then making these two things a single service makes things a
> > lot simpler.
>
> Not if you have to run 40 services to cope with the proxy load, but don't
> want the risk/complexity of having 40 task workflow engines working in
> parallel.
>
> > > If they were separate services and it turns out that I can/want/need
> > > to run the same number of both then I can pretty easily do that - but
> > > the current approach is removing what seems to me a very important
> > > degree of freedom around deployment on a large-scale system.
> >
> > I guess the question, then, is whether other folks agree that the
> > scaling-separately problem is concerning enough to justify at least an
> > RPC topic split now, which would enable the services to be separated
> > later if need be.
> >
>
> Yep - that's the key question.  And in the interest of keeping the system
> stable at scale while we roll through this, I think we should be erring
> on the side of caution / keeping deployment options open rather than
> waiting to see if there's a problem.

++, unless there is some downside to an RPC topic split, this seems like a
reasonable precaution.
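
(For illustration only: a minimal sketch of what such a topic split could
look like on the client side, using an oslo.messaging-style API. The topic
names, version numbers and method signatures here are assumptions for the
sake of the example, not what Nova actually ships.)

    import oslo.messaging as messaging

    PROXY_TOPIC = 'conductor'        # existing db-proxy style calls
    TASK_TOPIC = 'conductor_tasks'   # long-running task workflows

    class ConductorAPI(object):
        """Client for db-proxy calls made on behalf of compute hosts."""
        def __init__(self, transport):
            target = messaging.Target(topic=PROXY_TOPIC, version='1.0')
            self.client = messaging.RPCClient(transport, target)

        def instance_get_by_uuid(self, ctxt, instance_uuid):
            # Synchronous call: the compute host needs the result back.
            return self.client.call(ctxt, 'instance_get_by_uuid',
                                    instance_uuid=instance_uuid)

    class ConductorTaskAPI(object):
        """Client for task-flow operations, on a separate topic so the
        workers serving it could be scaled independently later."""
        def __init__(self, transport):
            target = messaging.Target(topic=TASK_TOPIC, version='1.0')
            self.client = messaging.RPCClient(transport, target)

        def migrate_server(self, ctxt, instance, live=False):
            # Fire-and-forget cast: the conductor drives the workflow.
            self.client.cast(ctxt, 'migrate_server',
                             instance=instance, live=live)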

>
> > I would like to point out, however, that the functions are being split
> > into different interfaces currently. While that doesn't reach low
> > enough on the stack to allow hosting them in two different places, it
> > does provide organization such that if we later needed to split them,
> > it would be a relatively simple (hah) matter of coordinating an RPC
> > upgrade like anything else.
> >
> > --Dan
>
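
(Similarly illustrative: on the server side, split topics would let an
operator run both workloads in one conductor process today, or run many
proxy-only workers and only a few task workers later. The endpoints, topics
and helper below are hypothetical sketches, not Nova code.)

    import oslo.messaging as messaging

    class DbProxyEndpoint(object):
        def instance_get_by_uuid(self, ctxt, instance_uuid):
            pass  # would look the instance up in the DB for a compute host

    class TaskEndpoint(object):
        def migrate_server(self, ctxt, instance, live):
            pass  # would drive the (live/cold) migration workflow

    def start_conductor(transport, host, run_proxy=True, run_tasks=True):
        # Which topics this process serves becomes a deployment choice.
        servers = []
        if run_proxy:
            target = messaging.Target(topic='conductor', server=host)
            servers.append(messaging.get_rpc_server(transport, target,
                                                    [DbProxyEndpoint()]))
        if run_tasks:
            target = messaging.Target(topic='conductor_tasks', server=host)
            servers.append(messaging.get_rpc_server(transport, target,
                                                    [TaskEndpoint()]))
        for server in servers:
            server.start()
        return servers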