[openstack-dev] [Ironic] A ramdisk agent

Devananda van der Veen devananda.vdv at gmail.com
Fri Mar 7 00:41:06 UTC 2014


The Ironic team has been discussing the need for a "deploy agent" since
well before the last summit -- we even laid out a few blueprints along
those lines. That work was deferred, and we have been using the same deploy
ramdisk that nova-baremetal used; we will continue to use that ramdisk
for the PXE driver in the Icehouse release.

That being the case, at the sprint this week, a team from Rackspace shared
work they have been doing to create a more featureful hardware agent and an
Ironic driver which utilizes that agent. Early drafts of that work can be
found here:


I've updated the original blueprint and assigned it to Josh. For reference:


I believe this agent falls within the scope of the baremetal provisioning
program, and welcome their contributions and collaboration on this. To that
effect, I have suggested that the code be moved to a new OpenStack project
named "openstack/ironic-python-agent". This would follow an independent
release cycle, and reuse some components of tripleo (os-*-config). To keep
the collaborative momentum up, I would like this work to be done now (after
all, it's not part of the Ironic repo or release). The new driver which
will interface with that agent will need to stay on github -- or in a
gerrit feature branch -- until Juno opens, at which point it should be
proposed to Ironic.

The agent architecture we discussed is roughly:
- a pluggable JSON transport layer by which the Ironic driver will pass
information to the ramdisk. Their initial implementation is a REST API.
- a collection of hardware-specific utilities (python modules, bash
scripts, whatever) which take JSON as input and perform specific actions
(whether gathering data about the hardware or applying changes to it).
- and an agent which routes the incoming JSON to the appropriate utility,
and routes the response back via the transport layer.
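To make the discussion concrete, here is a minimal sketch in Python of how
that routing might look. All class and method names below are hypothetical
illustrations of the architecture, not the actual ironic-python-agent code:

```python
# Sketch of the proposed agent architecture: JSON comes in over a pluggable
# transport, is routed to a hardware-specific utility, and the response goes
# back out over the same transport. Names are illustrative only.
import json


class HardwareUtility(object):
    """A hardware-specific utility: takes parameters, performs an action."""

    def run(self, params):
        raise NotImplementedError


class GatherInventory(HardwareUtility):
    """Example utility that gathers (fake) data about the hardware."""

    def run(self, params):
        return {"cpus": 8, "memory_mb": 16384}


class Agent(object):
    """Routes incoming JSON commands to the appropriate utility and
    returns the response for the transport layer to send back."""

    def __init__(self):
        self.utilities = {}

    def register(self, name, utility):
        self.utilities[name] = utility

    def handle(self, raw_json):
        # The transport layer (e.g. a REST API) would call this with the
        # JSON payload received from the Ironic driver.
        message = json.loads(raw_json)
        utility = self.utilities[message["command"]]
        result = utility.run(message.get("params", {}))
        return json.dumps({"command": message["command"], "result": result})


agent = Agent()
agent.register("gather_inventory", GatherInventory())
response = agent.handle('{"command": "gather_inventory"}')
```

The transport itself stays out of the Agent class, so a REST API (the
initial implementation) or any other JSON transport can be swapped in
without touching the hardware utilities.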
