<p dir="ltr"><br>
On Jul 19, 2013 9:57 AM, "Day, Phil" <<a href="mailto:philip.day@hp.com">philip.day@hp.com</a>> wrote:<br>
><br>
> > -----Original Message-----<br>
> > From: Dan Smith [mailto:<a href="mailto:dms@danplanet.com">dms@danplanet.com</a>]<br>
> > Sent: 19 July 2013 15:15<br>
> > To: OpenStack Development Mailing List<br>
> > Cc: Day, Phil<br>
> > Subject: Re: [openstack-dev] Moving task flow to conductor - concern about<br>
> > scale<br>
> ><br>
> > > There's nothing I've seen so far that causes me alarm, but then again
> > > we're in the very early stages and haven't moved anything really
> > > complex.
> >
> > The migrations (live, cold, and resize) are moving there now. These are
> > some of the more complex stateful operations I would expect conductor
> > to manage in the near term, and maybe ever.
> >
> > > I just don't buy into this line of thinking - I need more than one
> > > API node for HA as well - but that doesn't mean that therefore I want
> > > to put anything else that needs more than one node in there.
> > >
> > > I don't even think these do scale-with-compute in the same way; DB
> > > proxy scales with the number of compute hosts because each new host
> > > introduces an amount of DB load through its periodic tasks. Task
> > > workflow load, on the other hand, scales with the number of requests
> > > to create / modify servers - and that's not directly related to the
> > > number of hosts.
> >
> > Unlike API, the only incoming requests that generate load for the
> > conductor are things like migrations, which also generate database
> > traffic.
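
Just to put rough numbers on those two load sources (every figure below is
invented purely for illustration):

    # Back-of-envelope comparison of the two load sources.
    compute_hosts = 1000
    periodic_db_calls_per_host_per_min = 6   # one proxied call every 10s
    proxy_calls_per_sec = (compute_hosts *
                           periodic_db_calls_per_host_per_min) / 60.0

    user_requests_per_sec = 5                # create / resize / migrate
    db_calls_per_task = 20
    task_calls_per_sec = user_requests_per_sec * db_calls_per_task

    # proxy: 100 calls/s, driven by host count even on an idle cloud;
    # tasks: 100 calls/s, driven by user activity, flat as hosts grow.
    print("proxy: %.0f/s, tasks: %.0f/s"
          % (proxy_calls_per_sec, task_calls_per_sec))

Doubling the host count doubles the first number and leaves the second
alone, which is why the two workloads may want different worker counts.
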
> >
> > > So rather than asking "what doesn't work / might not work in the
> > > future" I think the question should be "aside from them both being
> > > things that could be described as a conductor - what's the
> > > architectural reason for wanting to have these two separate groups of
> > > functionality in the same service?"
> >
> > IMHO, the architectural reason is "lack of proliferation of services
> > and the added complexity that comes with it."
> >
>
> IMO, reducing the number of services is not a good enough reason to
> group unrelated services (db-proxy, task_workflow) together. Otherwise,
> why aren't we arguing to just add all of these to the existing scheduler
> service?
>
> > If one expects the proxy workload to always overshadow the task
> > workload, then making these two things a single service makes things a
> > lot simpler.
>
> Not if you have to run 40 services to cope with the proxy load, but
> don't want the risk/complexity of having 40 task workflow engines
> working in parallel.
>
> > > If they were separate services and it turns out that I can/want/need
> > > to run the same number of both then I can pretty easily do that - but
> > > the current approach is removing what seems to me a very important
> > > degree of freedom around deployment on a large scale system.
> >
> > I guess the question, then, is whether other folks agree that the
> > scaling-separately problem is concerning enough to justify at least an
> > RPC topic split now, which would enable the services to be separated
> > later if need be.
> >
>
> Yep - that's the key question. And in the interest of keeping the system
> stable at scale while we roll through this, I think we should be erring
> on the side of caution / keeping deployment options open, rather than
> waiting to see if there's a problem.
<p dir="ltr">++, unless there is some downside to a RPC topic split, this seems like a reasonable precaution.</p>
<p dir="ltr">><br>
> > I would like to point out, however, that the functions are being split
> > into different interfaces currently. While that doesn't reach low
> > enough on the stack to allow hosting them in two different places, it
> > does provide organization such that if we later needed to split them,
> > it would be a relatively simple (hah) matter of coordinating an RPC
> > upgrade like anything else.
> >
> > --Dan
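
On that interface split: as I understand it, the client side already keeps
the two APIs in separate classes, so re-pointing the task API at its own
topic later would be a contained change. A sketch of the shape (class and
method names here are illustrative, not nova's exact code):

    import oslo_messaging as messaging

    class ConductorAPI(object):
        """Client for the db-proxy side of conductor."""
        def __init__(self, transport):
            target = messaging.Target(topic='conductor', version='1.0')
            self.client = messaging.RPCClient(transport, target)

        def object_action(self, ctxt, objinst, objmethod, args, kwargs):
            return self.client.call(ctxt, 'object_action', objinst=objinst,
                                    objmethod=objmethod, args=args,
                                    kwargs=kwargs)

    class ComputeTaskAPI(object):
        """Client for the task-workflow side.

        Today this can share the 'conductor' topic under its own
        namespace; later it could target, say, 'conductor_tasks' without
        touching the proxy API at all.
        """
        def __init__(self, transport):
            target = messaging.Target(topic='conductor',
                                      namespace='compute_task',
                                      version='1.0')
            self.client = messaging.RPCClient(transport, target)

        def migrate_server(self, ctxt, instance, dest, live=False):
            return self.client.call(ctxt, 'migrate_server',
                                    instance=instance, dest=dest, live=live)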