[openstack-dev] [ironic] [inspector] Auto discovery extension for Ironic Inspector

Sam Betts (sambetts) sambetts at cisco.com
Mon Nov 2 16:07:11 UTC 2015


Auto discovery is a topic which has been discussed a few times in the past for
Ironic, and it's interesting to solve because it's a bit of a chicken-and-egg
problem. Ironic inspector allows us to inspect nodes whose MAC addresses we
don't know yet; to do this we run a global DHCP PXE rule that responds to all
MAC addresses and PXE boots any machine that requests it. This means it's
possible for machines we haven't been asked to inspect to boot into the
inspector ramdisk and send their information to inspector's API. To prevent
that data from being processed further when it comes from a machine we
shouldn't care about, we do a node lookup; if the data failed this lookup we
used to drop it and continue no further.

In release 2.0.0 we added a hook point to intercept this state, called the
Node Not Found hook point, which allows us to run some Python code at this
point in processing, before failing and dropping the inspection data. One use
we've discussed for this hook point is enrolling a node that fails the lookup
into Ironic, and then having inspector continue to process the inspection
data as it would for any other node that had inspection requested for it.
This allows us to auto-discover unknown nodes into Ironic.
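
To make the hook idea concrete, here is a minimal sketch of what an
enroll-on-not-found hook could look like. The hook signature, the
get_ironic_client() helper, and the 'macs' field are assumptions for
illustration, not a final interface; node.create and port.create are the
real python-ironicclient calls:

    # Illustrative sketch only: enroll the node when lookup fails instead
    # of dropping its inspection data.
    def enroll_node_not_found_hook(introspection_data):
        ironic = get_ironic_client()  # hypothetical helper returning a
                                      # python-ironicclient handle
        # Driver choice is the open question discussed below; 'fake' is
        # just the simplest placeholder.
        node = ironic.node.create(driver='fake')
        # Assuming the ramdisk data carries a list of MAC addresses.
        for mac in introspection_data.get('macs', []):
            ironic.port.create(node_uuid=node.uuid, address=mac)
        return node  # inspector then continues processing as normal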


If this auto-discovery hook were enabled, this would be the flow when
inspector receives inspection data from the inspector ramdisk (a rough code
sketch of the lookup step follows the list):


- Run pre-processing on the inspection data to sanitise it and ready it for
  the rest of the process.

- Node lookup using fields from the inspection data:

  - If the node is in the inspector node cache, return its node info.

  - If it is not in the inspector node cache but is in the ironic node
    database, fail inspection, because it's a known node and inspection
    hasn't been requested for it.

  - If it is in neither the inspector node cache nor the ironic node
    database, enroll the node in ironic and return its node info.

- Process the inspection data.
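
In code, the lookup branching above might look roughly like this;
find_in_cache, exists_in_ironic and enroll_in_ironic are hypothetical
placeholders for inspector's real internals:

    # Rough sketch of the lookup step with the auto-discovery hook enabled.
    def lookup_node(introspection_data):
        node_info = find_in_cache(introspection_data)
        if node_info is not None:
            # Inspection was requested for this node; carry on as usual.
            return node_info
        if exists_in_ironic(introspection_data):
            # Known node, but inspection wasn't requested for it.
            raise RuntimeError('known node, inspection not requested')
        # Unknown node: auto-discover it, e.g. via the hook sketched above.
        return enroll_in_ironic(introspection_data)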


The remaining question for this idea is how to handle the driver settings for
each node that we discover. We've currently discussed three different options:


1. Enroll the node in ironic using the fake driver, and leave it to the
   operator to set the driver type and driver info before they move the node
   from enroll to manageable.


2. Allow a default driver and default driver info to be set in the ironic
   inspector configuration file; these would be set on every node that is
   auto-discovered (see the sketch after this list). Possible config file
   example:

   [autodiscovery]
   driver = pxe_ipmitool
   address_field = <ipmi_address>
   username_field = <ipmi_username>
   password_field = <ipmi_password>


3. A possibly vendor-specific option that was suggested at the summit:
   provide the ability to look up out-of-band credentials from an external
   CMDB.
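
To make options 2 and 3 concrete, here is a rough sketch of both lookups.
Everything here is an illustrative assumption: the flat inspection-data
layout, the function names, and the CMDB endpoint and response shape are
invented; only requests.get is a real library call:

    import requests

    # Sketch for option 2: map the [autodiscovery] *_field options onto
    # driver_info by pulling the named keys out of the inspection data.
    def driver_info_from_config(conf, introspection_data):
        return {
            'ipmi_address': introspection_data.get(conf['address_field']),
            'ipmi_username': introspection_data.get(conf['username_field']),
            'ipmi_password': introspection_data.get(conf['password_field']),
        }

    # Sketch for option 3: fetch out-of-band credentials from an external
    # CMDB keyed by MAC address; URL and response format are hypothetical.
    def driver_info_from_cmdb(mac, cmdb_url='http://cmdb.example.com/nodes'):
        resp = requests.get(cmdb_url, params={'mac': mac}, timeout=10)
        resp.raise_for_status()
        record = resp.json()
        return record['driver'], record['driver_info']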


The first option is technically achievable using the second option, by setting
the driver to fake and leaving the driver info blank.


With IPMI-based drivers, most IPMI-related information can be retrieved from
the node by the inspector ramdisk; however, for non-IPMI drivers such as the
cimc/ucs drivers, this information isn't accessible via an in-band OS command.
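
For reference, the kind of in-band retrieval meant here is running ipmitool
against the local BMC from inside the ramdisk, along these lines (a sketch
assuming ipmitool and the host's IPMI kernel modules are available):

    import subprocess

    # Sketch: read the BMC's IP address in-band from the ramdisk.
    def local_bmc_address(channel='1'):
        out = subprocess.check_output(['ipmitool', 'lan', 'print', channel],
                                      text=True)
        for line in out.splitlines():
            # Match the 'IP Address' line but not 'IP Address Source'.
            if line.startswith('IP Address') and 'Source' not in line:
                return line.split(':', 1)[1].strip()
        return None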


A problem with option 2 is that it cannot account for a mixed-driver
environment.


We have also discussed that, for IPMI-based drivers, inspector could set a new
randomly generated password on the freshly discovered node. The idea is that
fresh hardware often comes with a default password; if you used inspector to
discover it, inspector could set a unique password on the node and
automatically make ironic aware of it.
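
Generating and recording such a password might look like the sketch below.
Actually setting it on the BMC (e.g. with ipmitool from the ramdisk) is
elided, and the ironic client handle and node are assumed to come from the
enrollment step; node.update with a JSON patch is real python-ironicclient
API:

    import secrets
    import string

    # Sketch: generate a random BMC password and record it in ironic so
    # the node stays manageable after discovery.
    def random_bmc_password(length=16):
        alphabet = string.ascii_letters + string.digits
        return ''.join(secrets.choice(alphabet) for _ in range(length))

    password = random_bmc_password()
    # ... set the password on the BMC itself here (elided) ...
    ironic.node.update(node.uuid, [{'op': 'add',
                                    'path': '/driver_info/ipmi_password',
                                    'value': password}])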


We're throwing this idea out onto the mailing list because we'd like to get
feedback from the community, to see if this would be useful for people using
inspector, and to see if people have opinions on the right way to handle the
node driver settings.


Sam (sambetts)
