[ironic] Tips on testing custom hardware manager?

Arne Wiebalck arne.wiebalck at cern.ch
Thu Sep 19 15:50:07 UTC 2019


Jason,

One thing we do is have the image pull in the custom hardware
manager branch via git: we build the image once, then make changes
on the branch, which are pulled in on the next iteration.
Since this avoids rebuilding and re-uploading the image for every
change, our dev cycle has become much shorter.

Another thing we do to debug our custom hardware manager is
to add (debug) steps to it. These steps wait for a certain file to
appear before moving on: the IPA will basically spin in this step
until we log in and touch the flag file. With one or two steps like
this we can set "breakpoints" to check things while developing our
hardware manager.
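A minimal sketch of such a "breakpoint" step, assuming a flag-file path
of our choosing (the function name, path, and timeout handling are
illustrative; in a real hardware manager this would be a clean-step
method on a HardwareManager subclass in the IPA):

```python
import os
import time


def wait_for_flag_file(path="/tmp/continue-cleaning",
                       poll_interval=5, timeout=None):
    """Spin until someone creates the flag file (e.g. via `touch`).

    Log in to the ramdisk and `touch` the file to let cleaning
    continue; an optional timeout keeps the step from hanging forever.
    """
    waited = 0
    while not os.path.exists(path):
        time.sleep(poll_interval)
        waited += poll_interval
        if timeout is not None and waited >= timeout:
            raise TimeoutError("flag file %s never appeared" % path)
    # Remove the flag so the next debug step blocks again.
    os.remove(path)
```

Calling this at the top of a clean step effectively pauses the clean
there until you log in and touch the file.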

HTH,
  Arne

On 19.09.19 03:34, Jason Anderson wrote:
> Hi all,
> 
> I am hoping to get some tips on how to test out a custom hardware 
> manager. One of my colleagues is working on a project that involves 
> implementing a custom in-band cleaning step, which we are implementing 
> by creating our own ramdisk image that includes an extra library, which 
> is necessary for the clean step. We already have created the image and 
> ensured it has IPA installed and that all seems to work fine (in that, 
> it executes on the node and we see our code running--and failing!)
> 
> The issue we are having is that we encounter some issues in our fully 
> integrated environment (such as the provisioning network having 
> different networking rules) and replicating this environment in some 
> local development context is very difficult. Right now our workflow is 
> really onerous as a result: my colleague has to rebuild the ramdisk 
> image, re-upload it to Glance, update the test Ironic node to reference 
> that image, then perform a rebuild. One cycle of this takes a while as 
> you can imagine. I was wondering: is it possible to somehow interrupt or 
> give a larger window for some interactive debugging? The amount of time 
> we have to run some quick tests/debugging is small because the deploy 
> will time out and cancel itself or it will proceed and fail.
> 
> Thus far I haven't found any documentation or written experience on this 
> admittedly niche task. Perhaps somebody has already gone down this road 
> and can advise on some tips? It would be much appreciated!
> 
> Cheers,
> 
> -- 
> Jason Anderson
> 
> Chameleon DevOps Lead
> *Consortium for Advanced Science and Engineering, The University of Chicago*
> *Mathematics & Computer Science Division, Argonne National Laboratory*
