[openstack-dev] Moving task flow to conductor - concern about scale

Joshua Harlow harlowja at yahoo-inc.com
Fri Jul 19 20:21:24 UTC 2013


I remember trying to make this argument myself about a month or two ago. I agree with the thought and the "split it up" principle, just unsure of the timing.

TaskFlow (the library) is something I'm hoping can become a useful library for making these complications less complex. WIP of course :)
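
For anyone who hasn't looked at it yet, here is a minimal sketch of what a TaskFlow-style flow can look like (the task names are made up for illustration, and since it's WIP the exact API may still shift):

    import taskflow.engines
    from taskflow import task
    from taskflow.patterns import linear_flow


    class AllocateNetwork(task.Task):
        def execute(self):
            # Hypothetical step; a real task would talk to the network service.
            print("allocating network")


    class SpawnInstance(task.Task):
        def execute(self):
            # Hypothetical step; a real task would call down into the virt driver.
            print("spawning instance")


    # Compose the steps into a flow and run it. Because each step is a
    # declared task, the engine can revert completed tasks on failure and
    # (eventually) resume interrupted flows, which is where the "making
    # complications less complex" part comes in.
    flow = linear_flow.Flow("boot-instance").add(
        AllocateNetwork(),
        SpawnInstance(),
    )
    taskflow.engines.run(flow)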

Honestly I think it's not just nova that sees this issue with flows and how to scale them outwards reliably. But this is one of the big challenges (changing the tires on the car while it's moving)...
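
To make the "split it up" idea (which Phil lays out below) a bit more concrete, here is a rough sketch of what separately scalable conductor roles could look like. The topic and class names are hypothetical, purely to show the shape of it, not how nova-conductor is wired up today:

    # Illustrative only: two conductor roles, each on its own RPC topic, so
    # operators can run and scale them independently, or co-locate them in
    # one process if they prefer. All names below are made up.

    DB_PROXY_TOPIC = 'conductor.db'       # scales roughly with compute count
    TASK_FLOW_TOPIC = 'conductor.tasks'   # scales with orchestration load


    class DbProxyManager(object):
        """Answers object/DB RPC calls on behalf of compute nodes."""

        def object_action(self, context, objinst, objmethod, args, kwargs):
            pass  # forward the call to the database layer


    class TaskFlowManager(object):
        """Drives long-running workflows (build, resize, migrate, ...)."""

        def build_instance(self, context, request_spec):
            pass  # kick off the corresponding task flow


    def serve(roles):
        # An operator could start one process per role, or co-locate them by
        # passing both, e.g. serve(['db-proxy', 'tasks']).
        managers = {
            'db-proxy': (DB_PROXY_TOPIC, DbProxyManager()),
            'tasks': (TASK_FLOW_TOPIC, TaskFlowManager()),
        }
        for role in roles:
            topic, manager = managers[role]
            # ... create an RPC consumer on `topic` backed by `manager` ...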

Sent from my really tiny device...

On Jul 19, 2013, at 7:01 AM, "Day, Phil" <philip.day at hp.com> wrote:

> Hi Josh,
> 
> My idea's really pretty simple - make "DB proxy" and "Task workflow" separate services, and allow people to co-locate them if they want to.
> 
> Cheers.
> Phil
> 
>> -----Original Message-----
>> From: Joshua Harlow [mailto:harlowja at yahoo-inc.com]
>> Sent: 17 July 2013 14:57
>> To: OpenStack Development Mailing List
>> Cc: OpenStack Development Mailing List
>> Subject: Re: [openstack-dev] Moving task flow to conductor - concern about
>> scale
>> 
>> Hi Phil,
>> 
>> I understand and appreciate your concern, and I think everyone is trying to keep
>> that in mind. It still appears to me to be too early in this refactoring and task
>> restructuring effort to tell where it may "end up". I think that's also good news,
>> since we can get these kinds of ideas (componentized conductors, if you will) in
>> early enough to handle your (and my) scaling concerns. It would be pretty neat if said
>> conductors could be scaled at different rates depending on their component,
>> although as you said we need to get much much better at handling such
>> patterns (just 2 schedulers is a pita right now). I believe we can do it,
>> given the right kind of design and scaling "principles" we build in from the start
>> (right now).
>> 
>> Would like to hear more of your ideas so they get incorporated earlier rather
>> than later.
>> 
>> Sent from my really tiny device...
>> 
>> On Jul 16, 2013, at 9:55 AM, "Dan Smith" <dms at danplanet.com> wrote:
>> 
>>>> In the original context of using Conductor as a database proxy, the
>>>> number of conductor instances is directly related to the number
>>>> of compute hosts I need them to serve.
>>> 
>>> Just a point of note, as far as I know, the plan has always been to
>>> establish conductor as a thing that sits between the api and compute
>>> nodes. However, we started with the immediate need, which was the
>>> offloading of database traffic.
>>> 
>>>> What I'm not sure about is whether I would also want the same number of
>>>> conductor instances for task control flow - historically even running
>>>> 2 schedulers has been a problem, so the thought of having 10's of
>>>> them makes me very concerned at the moment. However, I can't see any
>>>> way to specialise a conductor to only handle one type of request.
>>> 
>>> Yeah, I don't think the way it's currently being done allows for
>>> specialization.
>>> 
>>> Since you were reviewing actual task code, can you offer any specifics
>>> about the thing(s) that concern you? I think that scaling conductor
>>> (and its tasks) horizontally is an important point we need to achieve,
>>> so if you see something that needs tweaking, please point it out.
>>> 
>>> Based on what is there now and proposed soon, I think it's mostly
>>> fairly safe, straightforward, and really no different than what two
>>> computes do when working together for something like resize or migrate.
>>> 
>>>> So I guess my question is, given that it may have to address two
>>>> independent scale drivers, is putting task workflow and DB proxy
>>>> functionality into the same service really the right thing to do - or
>>>> should there be some separation between them?
>>> 
>>> I think that we're going to need more than one "task" node, and so it
>>> seems appropriate to locate one scales-with-computes function with
>>> another.
>>> 
>>> Thanks!
>>> 
>>> --Dan
>>> 
>> 
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


