[nova][ironic] Lock-related performance issue with update_resources periodic job

Jason Anderson jasonanderson at uchicago.edu
Mon May 13 19:38:47 UTC 2019


Hey OpenStackers,

I work on a cloud that allows users to reserve and provision bare metal instances with Ironic. We recently performed a long-overdue upgrade of our core components, all the way from Ocata up through Rocky. During this, we noticed that instance build requests were taking 4-5x (!!) as long as before. We have two deployments, one with ~150 bare metal nodes, and another with ~300. These are each managed by one nova-compute process running the Ironic driver.

After investigation, the root cause appeared to be contention between the update_resources periodic task and the instance claim step. There is a single semaphore, "compute_resources", that guards every access within the resource_tracker. In our case, the update_resources job, which runs every minute by default, was constantly queuing up acquisitions of this semaphore: each hypervisor is updated independently, in series, and the semaphore is held for the duration of each node's update (about 2-5 seconds in practice). Multiply that by 150 and our update task was effectively running constantly. Because an instance claim also needs this semaphore, instances were getting stuck in the "Build" state, after scheduling, for tens of minutes on average. There also seemed to be some probabilistic effect, which I hypothesize is related to the locking mechanism not being a "fair" (first-come, first-served) lock by default.
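For illustration, here is roughly the shape of the problem as I understand it. This is a toy Python sketch, not Nova's actual resource_tracker code (in Nova the lock is taken via a synchronized decorator on the relevant methods):

    import threading
    import time

    # Stand-in for the single "compute_resources" semaphore.
    COMPUTE_RESOURCES = threading.Lock()

    def update_available_resource(nodes):
        # The periodic task syncs each node in series, holding the
        # shared lock for each node's update (~2-5 seconds for us).
        # With ~150 nodes it re-acquires the lock almost continuously.
        for node in nodes:
            with COMPUTE_RESOURCES:
                time.sleep(2)  # stand-in for the per-node sync work

    def instance_claim(node):
        # A build request needs the same lock, so it waits behind
        # however many per-node updates happen to be queued ahead.
        with COMPUTE_RESOURCES:
            print("claimed resources on %s" % node)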

Our fix was to drastically increase the interval this task runs at, from every minute to every 12 hours. We only provision bare metal, so my rationale was that the periodic full resource sync is less critical for us; it's mostly helpful for repairing cases where Placement's state has somehow drifted out of sync with Nova's.
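Concretely, that was just a one-line change in nova.conf (option name as of Rocky; 43200 seconds = 12 hours):

    [DEFAULT]
    # Run the resource update periodic task every 12 hours instead
    # of the default of every 60 seconds.
    update_resources_interval = 43200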

I'm wondering, after all this, if it makes sense to rethink the single-semaphore approach and instead take a per-hypervisor semaphore when syncing resources. I can't think of a reason why the entire set of hypervisors needs to be considered as a whole when doing this task, but I could very well be missing something. A sketch of what I mean follows.
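For example, something like this (just a sketch of the idea, not a patch; a real change would also need to create the per-node locks safely):

    import threading
    import time
    from collections import defaultdict

    # One lock per Ironic node instead of one global semaphore.
    # (defaultdict creation isn't itself thread-safe; a real patch
    # would guard lock creation too.)
    _node_locks = defaultdict(threading.Lock)

    def update_available_resource(nodes):
        # The periodic sync only locks the node it is currently
        # updating, so a claim on any other node proceeds at once.
        for node in nodes:
            with _node_locks[node]:
                time.sleep(2)  # stand-in for the per-node sync work

    def instance_claim(node):
        # Contends only with an in-flight update of the same node.
        with _node_locks[node]:
            print("claimed resources on %s" % node)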

TL;DR: if you have one nova-compute process managing lots of Ironic hypervisors, consider raising update_resources_interval, especially if you're seeing instances stuck in the Build state for a while.

Cheers,

Jason Anderson

Cloud Computing Software Developer
Consortium for Advanced Science and Engineering, The University of Chicago
Mathematics & Computer Science Division, Argonne National Laboratory
