something about ironic-python-agent

Kaifeng Wang kaifeng.w at
Sun Dec 27 06:04:38 UTC 2020

On Sun, Dec 27, 2020 at 12:12 AM Ankele zhang <ankelezhang at> wrote:

> Hi ~
> I have deployed an OpenStack platform at the Rocky version and integrated
> Ironic (Rocky) into it according to the official documents. I downloaded the
> CoreOS vmlinuz and ramdisk images from
> and if I don't clean my devices I can deploy the bare metal node server
> correctly. When I clean my disks using "openstack baremetal node clean
> --clean-steps '[{"interface": "deploy", "step": "erase_devices"}]' my_bm",
> I get the error message "Clean step failed: Error performing clean_step
> erase_devices: No HardwareManager found to handle method: Could not find
> method: erase_block_device" in ironic-conductor.log, and I see the same
> error in CoreOS via "sudo journalctl -u ironic-python-agent -f". I downloaded
> the Rocky ironic-python-agent source code from GitHub and searched for
> 'erase_block_devices'; it returned from +781. So, the first
> problem: I don't know why I get this error message.

The CoreOS-based ramdisk has not been maintained for a while; you could try a
Rocky release at

> I downloaded the source code of ironic-python-agent and tried to build the
> CoreOS image with its 'Makefile', but I always get the following error; this
> is my second problem.
Building an IPA ramdisk has moved from diskimage-builder to
ironic-python-agent-builder; you can find the documentation here:

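As an illustration, a diskimage-builder-based build with
ironic-python-agent-builder looks roughly like the following. The output
prefix "my-ipa" and the base distribution "centos" are only examples, not
requirements; check the project documentation for the options supported by
your release:

```shell
# Install the builder, for example into a virtualenv.
pip install ironic-python-agent-builder

# Build a DIB-based ramdisk; this produces my-ipa.kernel and
# my-ipa.initramfs in the current directory.
ironic-python-agent-builder -o my-ipa centos
```

The resulting kernel and ramdisk pair can then be uploaded to Glance and
referenced from the node's driver_info, the same way the CoreOS images were
used before.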
> Finally, I want to code my own 'custom HardwareManager' to support 'raid
> config' for the ironic-python-agent, but I don't know how to get started.
For in-band RAID support, you'll need to create a new hardware manager and
implement the deploy steps that handle RAID configuration, specifically the
create_configuration and delete_configuration steps.
The following links may provide some starting points for you.
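A minimal sketch of such a manager is below. To keep it self-contained a stub
base class stands in for ironic_python_agent.hardware.HardwareManager, and the
class name and return values are illustrative only; in a real agent image you
would subclass the real base class and register the manager through the
'ironic_python_agent.hardware_managers' entry point:

```python
class HardwareManager:
    """Stand-in for ironic_python_agent.hardware.HardwareManager."""


class ExampleRAIDHardwareManager(HardwareManager):
    HARDWARE_MANAGER_NAME = 'ExampleRAIDHardwareManager'
    HARDWARE_MANAGER_VERSION = '1.0'

    def evaluate_hardware_support(self):
        # Higher values win when several managers implement the same method;
        # the real constants live in hardware.HardwareSupport.
        return 3  # e.g. hardware.HardwareSupport.SERVICE_PROVIDER

    def get_deploy_steps(self, node, ports):
        # Advertise the two RAID steps on the 'raid' interface so the
        # conductor can dispatch them to this manager.
        return [
            {'step': 'create_configuration', 'interface': 'raid',
             'priority': 0, 'reboot_requested': False, 'abortable': True},
            {'step': 'delete_configuration', 'interface': 'raid',
             'priority': 0, 'reboot_requested': False, 'abortable': True},
        ]

    def create_configuration(self, node, ports):
        # Read the requested layout and build the arrays here, e.g. by
        # driving mdadm or a vendor tool. Returning the applied
        # configuration is the expected contract.
        raid_config = node.get('target_raid_config', {})
        return raid_config

    def delete_configuration(self, node, ports):
        # Tear down any existing arrays here.
        return None
```

Once packaged into the ramdisk, the agent discovers the manager via the entry
point and prefers it over the generic one for the methods it implements.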

Hope this helps.

> Looking forward to your help. Thank you.
> Ankele

More information about the openstack-discuss mailing list