[openstack-dev] [ironic][nova][grenade] Ironic CI grenade job degradation

Vasyl Saienko vsaienko at mirantis.com
Tue Aug 15 06:10:45 UTC 2017


Hello Community!

With the recent CI performance degradation, the ironic team has run into the
following problem. Quick automated cleaning is enabled on the grenade jobs and
starts as soon as a nova instance is deleted.
The nova virt driver does not wait for cleaning to finish before marking the
instance as deleted [0], so new tests may start while ironic is still cleaning
the nodes used by previous tests.
Lately CI has become much slower, which leads to grenade job failures.
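
To illustrate the race, here is a rough sketch (not the actual nova driver
code) using openstacksdk against a DevStack-style cloud; the cloud name and
the server name are made up for illustration:

    # Sketch only: an instance delete can return while the backing ironic
    # node is still running automated cleaning.
    import openstack

    conn = openstack.connect(cloud='devstack-admin')  # assumed clouds.yaml entry

    server = conn.compute.find_server('baremetal-smoke-instance')  # hypothetical
    conn.compute.delete_server(server)
    conn.compute.wait_for_delete(server)  # returns once nova reports it gone

    # The node that backed the instance may still be cleaning, so it is not
    # yet schedulable for the next test/grenade phase.
    for node in conn.baremetal.nodes(details=True):
        print(node.name, node.provision_state)  # e.g. 'cleaning' or 'clean wait'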

To fix this we need to wait for cleaning to complete before starting new
tests/grenade phases (ironic resources should be available again after the
base smoke tests / resources destroy phase).
Since grenade cleans up resources in reverse order [1], there is currently no
way to wait on the ironic side for the resources to become available again.

The possible options are:

   1. Wait in the ironic grenade plugin for the resources to become available
      again after the base smoke tests finish, before running the resources
      phase (see the sketch below).
   2. Ensure that the ironic node is available again right after the destroy
      phase. Two options are available here:
      1. Modify the nova resources destroy phase [2] to handle the ironic
         case and wait for the resources there.
      2. Add a new phase right after 'destroy'. (Previously there was a
         'force_destroy' phase which we tried to use [3].)
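
A minimal sketch of option 1, assuming the ironic grenade plugin gains a step
that polls the bare metal API until every node has finished cleaning (the
timeout, polling interval, cloud name and helper name are all illustrative,
not existing grenade code):

    import time
    import openstack

    CLEANING_STATES = {'deleting', 'cleaning', 'clean wait'}

    def wait_for_nodes_available(conn, timeout=1200, interval=10):
        """Block until no ironic node is cleaning, or raise on timeout."""
        deadline = time.time() + timeout
        while True:
            busy = [n.name for n in conn.baremetal.nodes(details=True)
                    if n.provision_state in CLEANING_STATES]
            if not busy:
                return
            if time.time() > deadline:
                raise TimeoutError('nodes still cleaning after %ss: %s'
                                   % (timeout, busy))
            time.sleep(interval)

    if __name__ == '__main__':
        wait_for_nodes_available(openstack.connect(cloud='devstack-admin'))

Such a wait would run after the base smoke tests finish and before the
resources phase, so the new tests only start once the nodes are back to
'available'.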

[0]
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L1137
[1]
https://github.com/openstack-dev/grenade/blob/11dd94308ed5c25a8f28f86b03b20b251f0a05a1/inc/plugin#L111
[2]
https://github.com/openstack-dev/grenade/blob/11dd94308ed5c25a8f28f86b03b20b251f0a05a1/projects/60_nova/resources.sh#L142
[3] https://review.openstack.org/#/c/489410/