[openstack-dev] [ironic] [nova] Ironic virt driver resources reporting

Vladyslav Drok vdrok at mirantis.com
Fri Dec 30 16:40:45 UTC 2016


Hi all!

There is a long-standing problem with resource reporting in the ironic virt
driver, described in a couple of bugs I've found - [0], [1]. Switching
to the placement API will make things better, but some problems will remain.
For example, there are cases when ironic needs to say "this node is
not available", and it currently reports vcpus = memory_mb = local_gb = 0 in
that case. The placement API does not allow 0s, so in [2] it is proposed to
remove the inventory records instead.
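For context, here is a minimal sketch of the difference between reporting
zeros and removing inventory records. The field names are assumed to follow
the placement inventory schema; this is not the actual driver code, and the
helper name is hypothetical:

```python
# Hypothetical helper: build the body of a
# PUT /resource_providers/{uuid}/inventories request.
# Field names assumed from the placement inventory schema.

def inventory_payload(vcpus, memory_mb, local_gb, generation=0):
    inventories = {}
    # Placement rejects zero totals, so instead of reporting 0 we simply
    # omit the resource class - an empty dict removes all inventory.
    for rc, total in (('VCPU', vcpus),
                      ('MEMORY_MB', memory_mb),
                      ('DISK_GB', local_gb)):
        if total > 0:
            inventories[rc] = {'total': total}
    return {'resource_provider_generation': generation,
            'inventories': inventories}
```

An unavailable node would then PUT an empty `inventories` dict rather than
one full of zeros.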

But the whole logic here [3] does not seem obvious to me, so I'd like to
discuss when we need to report 0s to the placement API. I'm thinking about
the following (copy-pasted from my comment on [2]):


   - If there is an instance_uuid on the node, no matter what
   provision/power state it's in, consider the resources as used. In case
   it's an orphan, an admin will need to take some manual action anyway.
   - If there is no instance_uuid and the node is in cleaning/clean wait
   after tear down, that is part of the normal node lifecycle, so report
   all resources as used. This means we need a way to determine whether
   it's a manual or an automated clean.
   - If there is no instance_uuid, and the node:
      - has a bad power state, or
      - is in maintenance,
      - or actually in any other case, consider it unavailable and report
      available resources = used resources = 0. Provision state does not
      matter in this logic; all cases that we wanted to take into account
      are described in the first two bullets.
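The three bullets above could be sketched as a single predicate. This is a
hypothetical illustration, not the actual code in [3]; the node field names
(instance_uuid, provision_state, automated_clean) are assumptions:

```python
# Sketch of the proposed decision: should a node's resources be reported
# as used (True), or zeroed out / removed from placement (False)?
# Node is modeled as a plain dict with assumed field names.

def node_resources_used(node):
    # Bullet 1: an instance_uuid means the node is consumed, regardless
    # of provision/power state (an orphan needs manual admin action anyway).
    if node.get('instance_uuid'):
        return True
    # Bullet 2: automated cleaning after tear down is part of the normal
    # lifecycle, so keep reporting the resources as used.
    if (node.get('provision_state') in ('cleaning', 'clean wait')
            and node.get('automated_clean', False)):
        return True
    # Bullet 3: everything else (bad power state, maintenance, ...) is
    # unavailable - report available = used = 0.
    return False
```

Note that a manual clean falls through to the last branch, which is why the
second bullet needs a way to tell manual and automated cleaning apart.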


Any thoughts?

[0]. https://bugs.launchpad.net/nova/+bug/1402658
[1]. https://bugs.launchpad.net/nova/+bug/1637449
[2]. https://review.openstack.org/414214
[3].
https://github.com/openstack/nova/blob/1506c36b4446f6ba1487a2d68e4b23cb3fca44cb/nova/virt/ironic/driver.py#L262

Happy holidays to everyone!
-Vlad
