[openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

Nadathur, Sundar sundar.nadathur at intel.com
Fri Mar 23 04:27:23 UTC 2018


Hi all,
     There seems to be a race condition in the 
Cyborg/Nova flow. Apologies for missing this earlier. (See the 
proposed Cyborg/Nova spec 
<https://review.openstack.org/#/c/554717/1/doc/specs/rocky/cyborg-nova-sched.rst> 
for details.)

Consider the scenario where the flavor specifies a resource class for a 
device type, and also specifies a function (e.g. encrypt) in the extra 
specs. The Nova scheduler tracks only the device type as a resource; 
Cyborg needs to track the availability of functions. Further, to keep 
it simple, assume all the functions exist all the time (no 
reprogramming involved).

To recap, here is the scheduler flow for this case:

  * A request spec with a flavor comes to Nova conductor/scheduler. The
    flavor has a device type as a resource class, and a function in the
    extra specs.
  * Placement API returns the list of RPs (compute nodes) which contain
    the requested device types (but not necessarily the function).
  * Cyborg will provide a custom scheduler filter that queries the
    Cyborg DB. The filter checks which hosts have the needed function
    and filters out the rest (a sketch follows this list).
  * The scheduler selects one node from the filtered list, and the
    request goes to the compute node.
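
To make the filter step concrete, here is a minimal sketch, assuming 
Nova's standard filter interface. The extra-spec key 'accel:function' 
and the get_free_units() helper are placeholders, not settled API:

    # Minimal sketch of the proposed Cyborg filter (illustrative only).
    from nova.scheduler import filters

    def get_free_units(host, function_type):
        # Hypothetical helper: query the Cyborg DB/API for the number
        # of free units of the given function type on the given host.
        raise NotImplementedError

    class CyborgFunctionFilter(filters.BaseHostFilter):
        """Reject hosts with no free unit of the requested function."""

        def host_passes(self, host_state, spec_obj):
            # 'accel:function' is an illustrative extra-spec key.
            function = spec_obj.flavor.extra_specs.get('accel:function')
            if not function:
                return True  # nothing accelerator-specific requested
            return get_free_units(host_state.host, function) > 0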

For the filter to work, the Cyborg DB needs to maintain a table of 
triples: (host, function type, number of free units). The filter checks 
whether a given host has one or more free units of the requested 
function type. But, to keep the free-unit counts up to date, Cyborg on 
the selected compute node needs to notify the Cyborg API to decrement 
the count when an instance is spawned, and to increment it when 
resources are released.
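
As a rough sketch of that table and the decrement path, assuming 
SQLAlchemy (the table and column names here are made up for 
illustration, not the actual Cyborg schema):

    # Illustrative schema: one row per (host, function type) pair.
    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class FunctionInventory(Base):
        __tablename__ = 'function_inventory'  # hypothetical name
        id = Column(Integer, primary_key=True)
        host = Column(String(255), nullable=False)
        function_type = Column(String(255), nullable=False)  # e.g. 'encrypt'
        free_units = Column(Integer, nullable=False)

    def decrement_free_units(session, host, function_type):
        """Naive read-modify-write, run when an instance spawns."""
        row = (session.query(FunctionInventory)
                      .filter_by(host=host, function_type=function_type)
                      .one())
        row.free_units -= 1  # no guard against going negative
        session.commit()

Note that this decrement is a plain read-modify-write; nothing 
serializes it against the filter's read, which is exactly where the 
problem below comes from.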

Therein lies the catch: this loop from the compute node to the 
controller is susceptible to race conditions. For example, if two 
simultaneous requests each ask for function A, and only one free unit 
of it is available, the Cyborg filter will approve both, both may land 
on the same host, and one will fail. This is because Cyborg on the 
controller does not decrement the free count for the first request 
before the filter processes the second.
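
The interleaving is easy to reproduce in a toy form (this assumes 
nothing about the real code paths; the sleep just widens the window 
between the filter's read and the compute-side decrement):

    # Toy illustration: two requests read the same stale count, both
    # pass the filter check, and the count goes negative (overcommit).
    import threading
    import time

    free_units = {('hostX', 'encrypt'): 1}

    def handle_request(name):
        key = ('hostX', 'encrypt')
        if free_units[key] > 0:      # filter step: non-atomic read
            time.sleep(0.1)          # scheduling + RPC to compute node
            free_units[key] -= 1     # decrement on instance spawn
            print('%s spawned; free_units = %d' % (name, free_units[key]))

    threads = [threading.Thread(target=handle_request, args=(n,))
               for n in ('request-1', 'request-2')]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Both requests "spawn" and free_units ends at -1: in the real
    # flow, the second instance would fail on the host.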

This is similar to a previous Nova scheduling issue 
<https://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/placement-claims.html>. 
That was solved by having the scheduler claim the resource in Placement 
for the selected node. I don't see an analog for Cyborg, since Cyborg 
would not know which node was selected.

Thanks in advance for suggestions and solutions.

Regards,
Sundar