[openstack-dev] Conductor?

Joshua Harlow harlowja at yahoo-inc.com
Mon Feb 4 22:41:19 UTC 2013


Thanks for the input, Russell.

Has anyone been running these types of scaling tests? I know I've mostly
been focused on getting Folsom working at the moment.

It would be nice to have that info earlier rather than later. I'll see
what I can do to help, but no guarantees. Does RH have a perf testing lab
that they are running?

I'm not really worried about the horizontal scalability of the conductor
itself, but more about the lack of horizontal scalability in the message
queue, where a large amount of new traffic could cause a meltdown.

Hopefully we can get phase 2 started soon; more details to come :-P

-Josh

On 2/2/13 3:48 PM, "Russell Bryant" <rbryant at redhat.com> wrote:

>On 02/02/2013 02:18 AM, Joshua Harlow wrote:
>> Hi all,
>> 
>> Rohit (from NTT) and I have been tracking the nova-conductor service
>> that is being built, and I at least had some questions that the
>> community might be able to answer and/or enlighten us on.
>> 
>> It seems like right now you have been moving the nova-compute DB calls
>> over to message queue calls. In general this seems like a good move for
>> security, but has there been any examination of the performance impact
>> this will have in medium -> large clusters? Is there any plan for such
>> analysis, and for determining side effects, so that deployers can
>> prepare for the change? I'd just like to avoid a change that could be a
>> big problem later by preemptively determining its effects on the MQ.
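>> 
>> (Roughly, and with illustrative names only, my understanding is the
>> change goes from compute writing to the DB directly:
>> 
>>     from nova import db
>> 
>>     def update_task_state(context, instance, state):
>>         # direct DB write from the compute host
>>         db.instance_update(context, instance['uuid'],
>>                            {'task_state': state})
>> 
>> to compute asking nova-conductor to do the write over RPC:
>> 
>>     def update_task_state(context, instance, state):
>>         # conductor_api: compute's handle on the nova-conductor
>>         # RPC API, which performs the DB write on our behalf
>>         conductor_api.instance_update(context, instance['uuid'],
>>                                       task_state=state)
>> 
>> so the compute host itself no longer needs DB credentials.)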
>
>There hasn't been extensive performance testing on it.  nova-conductor
>is horizontally scalable like other services.  If the message volume
>becomes a problem, that would be good feedback to hear.
>
>Note that if you don't care about actually removing db access from
>nova-compute, you can configure it to bypass the use of messaging
>completely and still do all db access locally:
>
>in nova.conf,
>
>[conductor]
>use_local=True
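>
>The switch is just a small factory, roughly like this (a simplified
>sketch, not the exact code):
>
>    from oslo.config import cfg
>
>    from nova.conductor import api as conductor_api
>
>    def API(*args, **kwargs):
>        # use_local=True keeps db access in-process and skips the MQ
>        if cfg.CONF.conductor.use_local:
>            cls = conductor_api.LocalAPI  # direct db calls
>        else:
>            cls = conductor_api.API       # casts/calls over the MQ
>        return cls(*args, **kwargs)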
>
>> After moving some of the compute node's DB calls over to the MQ, I was
>> thinking about what happens next.
>> 
>> Personally I would like to see if we can get said conductor MQ calls
>> moved up a layer (so that the compute node doesn't make such calls at
>> all) 
>
>Quite a bit of refactoring was done to send more data to compute to
>avoid db lookups.  In the cases where it was clear up front what data
>was needed, nova-api just passes it along so compute doesn't have to
>look anything up.
>
>Not all cases are that straightforward.  For example, writes had to be
>handled some other way.  Also, lookups that are a bit more dynamic
>couldn't be done up front, either.
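>
>So, illustratively (names approximate), instead of compute re-fetching
>the instance, the API layer resolves it once and hands it down:
>
>    # nova-api side: look the record up once...
>    instance = db.instance_get_by_uuid(context, instance_uuid)
>    # ...and ship the whole thing over RPC, so nova-compute works
>    # from the data it was handed instead of doing its own db lookup
>    self.compute_rpcapi.start_instance(context, instance=instance)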
>
>> but instead have a layer above it (aka the orchestration layer) invoke
>> those calls 'on behalf' of the compute node, since the orchestration
>> layer should ensure all resources can be obtained before attempting to
>> interact with them. After obtaining those resources, the orchestration
>> layer would form a fully qualified virtualization 'document' for the
>> hypervisor to start (plus or minus some acknowledgements of resource
>> usage it would have to send out to claim those resources).
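>> 
>> (As a strawman, such a 'document' might look something like:
>> 
>>     vm_spec = {
>>         'name': 'instance-00000001',
>>         'image': {'id': image_uuid, 'location': image_url},
>>         'flavor': {'vcpus': 2, 'memory_mb': 4096, 'root_gb': 20},
>>         'networks': [{'port_id': port_uuid, 'mac': mac_addr}],
>>         'block_devices': [{'volume_id': volume_uuid, 'dev': 'vda'}],
>>     }
>> 
>> i.e. everything the hypervisor driver needs to spawn, fully resolved
>> ahead of time, with no further lookups needed on the compute node.)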
>
>> Rohit and I and others at Y! want to make sure that code gets in
>> sometime in the future (it would drastically simplify and centralize
>> the maze that is nova state-transition management today). It would also
>> allow for resource rollbacks (and easier-to-understand retries; the
>> current retry logic is 'hairy' to say the least), better error
>> messaging, more comprehensive scheduling, and the like.
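>> 
>> (For rollbacks, the shape we have in mind is the usual apply/revert
>> pairing; e.g., with entirely hypothetical names:
>> 
>>     class AllocateNetwork(object):
>>         """One reversible step in a provisioning flow (sketch)."""
>> 
>>         def __init__(self):
>>             self.port = None
>> 
>>         def apply(self, context, instance):
>>             self.port = network_api.allocate_port(context, instance)
>> 
>>         def rollback(self, context, instance):
>>             # undo our own work when a later step in the flow fails
>>             if self.port is not None:
>>                 network_api.deallocate_port(context, self.port)
>> 
>> so a failure partway through unwinds cleanly instead of leaving the
>> instance wedged half-built.)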
>> 
>> Was wondering what the community's thoughts on this were, since Rohit
>> and I want to make sure we don't go too far astray implementing this.
>
>As discussed in an earlier thread on nova-conductor, this sort of thing
>is the vision for what conductor should become.  None of the more
>complex rework of compute operations has been done yet, but feel free to
>work on it, of course.  conductor is the right place to put this type of
>thing.  We've still just been focused on phase 1, no-db-compute.
>
>For reference, the earlier conductor thread:
>
>http://lists.openstack.org/pipermail/openstack-dev/2012-November/002573.html
>
>-- 
>Russell Bryant



