[openstack-dev] [ironic] Question about pxe_ssh drivers

Julia Kreger juliaashleykreger at gmail.com
Tue Nov 21 19:25:26 UTC 2017


On Tue, Nov 21, 2017 at 10:59 AM, Waines, Greg
<Greg.Waines at windriver.com> wrote:
> i am now thinking that perhaps i am thinking of a USE CASE that is NOT the
> typical IRONIC USE CASE.

\o/

> i.e.
>
> I think the ‘typical’ IRONIC USE CASE is that there are a pool of physical
> servers
> that are available to run requested Instances/Images on.
>
>
> However,
>
> the USE CASE that i am thinking of is where there are ‘DEDICATED’ physical
> servers
>
> deployed for specific purposes where I was thinking of using IRONIC to
> manage the Images running on these servers.
>
> ( This is for say a Industrial / Control type scenario )
>
> ( It’s sort of using IRONIC as simply a generic boot server ... of bare
> metal servers )

As it stands, your assessment of the typical use case is correct. At
the same time, we do see some operators use ironic directly to deploy
dedicated servers, orchestrating against ironic's API on its own.
Largely those use cases have been managed services/lab environments in
data centers where machines may need to be rebuilt or redeployed
fairly often.

I believe it would be possible, at least conceptually, as long as
there is some form of power control, since ironic expects to be able
to check and assert power state. Part of this behavior can be
disabled, or driven by a human pushing a button, but from experience
it is much easier to be able to remotely tell something to cycle its
power.

Operating within ironic's management framework without modifying the
process or concepts, I suspect an industrial/controls scenario would
end up with some sort of PLC-integrated power driver, with enough
per-node configuration for ironic to control power on each baremetal
node. The closest thing we have to that right now is the SNMP power
driver (https://docs.openstack.org/ironic/pike/admin/drivers/snmp.html),
which is intended for use with power distribution units whose outlets
can be switched on and off remotely.
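
For illustration only, here is a rough sketch of what enrolling such a
node might look like with python-ironicclient, assuming the classic
pxe_snmp driver and an APC-style PDU. Every address, credential, image
URL, node name and the outlet number below are placeholders; the exact
driver_info fields your PDU needs are listed in the driver
documentation linked above.

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from ironicclient import client

    # Authenticate to keystone (all values here are placeholders).
    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_name='Default',
                       project_domain_name='Default')
    ironic = client.get_client(1, session=session.Session(auth=auth))

    # Enroll a node whose power is switched via outlet 3 of a PDU that
    # ironic can reach over SNMP.
    node = ironic.node.create(
        name='plc-cabinet-01',
        driver='pxe_snmp',
        driver_info={
            'snmp_driver': 'apc_masterswitch',   # PDU type (placeholder)
            'snmp_address': '10.0.0.50',
            'snmp_outlet': 3,
            'snmp_community': 'private',
            'deploy_kernel': 'http://deploy.example.com/deploy.kernel',
            'deploy_ramdisk': 'http://deploy.example.com/deploy.ramdisk',
        })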

> Any comments on whether it makes sense to do this with Ironic ?

I think it comes down to what you need in terms of automating such a
deployment process, or possibly to defining more clearly how you see
that deployment process working.


> Is there an alternative OpenStack project to do this sort of thing ?

People do use bifrost to build a standalone ironic installation for
things along these lines. Bifrost is not intended for long-lived
management of machines, but it does now support hooking up keystone as
well, so it could be a happy medium, and we're always happy to accept
patches to bifrost that solve someone's problem.

> ( in my use case, the end user is already using OpenStack in a traditional
> sense for
>   VMs on compute nodes ... but would like to extend the solution to manage
> images
>   on bare metal servers )

Totally viable, either with or without nova, depending on the process
that needs to be fulfilled. One thing worth noting with baremetal
hardware is the security implications: ironic's API is built around
the concept that the user is not a normal user but an administrative
one. I'd personally love to change that over time, but it is far from
a simple undertaking in a generic sense.

Going back to your earlier question about reset functions: ironic
actually turns the power off and then back on to perform a reset. With
something like wake-on-lan there is nothing that can be done to power
the machine off; I believe that driver simply does not attempt it,
hence my comment about a PLC power driver of some sort, or perhaps the
SNMP driver.
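
In other words, when you ask ironic to reboot a node, the conductor
performs the cycle through whichever power driver is configured. A
minimal sketch, reusing the ironic client and node objects from the
earlier snippet:

    # Request a power cycle; the conductor performs it as "power off"
    # followed by "power on" via the node's power driver.
    ironic.node.set_power_state(node.uuid, 'reboot')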

As for whether anything is deployed immediately upon enrollment:
ironic presently does not do that, as it expects you to step through
the state machine so the node passes through cleaning. If your
hardware or use case does not require cleaning, you can run through
the process quite quickly with a script or an ansible playbook (which
is exactly what bifrost does for CI testing).
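
As a rough sketch of what such a script could look like with
python-ironicclient (again reusing the client and node objects from
the snippets above; the image URL, checksum and disk size are
placeholders, and a real script would poll the node's provision_state
between these asynchronous calls):

    # Step the node through the provision state machine:
    # enroll -> manageable -> available (cleaning runs here if enabled).
    ironic.node.set_provision_state(node.uuid, 'manage')
    ironic.node.set_provision_state(node.uuid, 'provide')

    # Tell ironic what to deploy, then trigger the actual deployment.
    ironic.node.update(node.uuid, [
        {'op': 'add', 'path': '/instance_info/image_source',
         'value': 'http://images.example.com/centos7.qcow2'},
        {'op': 'add', 'path': '/instance_info/image_checksum',
         'value': '<md5-of-the-image>'},
        {'op': 'add', 'path': '/instance_info/root_gb', 'value': 40},
    ])
    ironic.node.set_provision_state(node.uuid, 'active')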

Hope this helps!

-Julia


