[openstack-dev] [Ironic][Agent]

Josh Gachnang josh at pcsforeducation.com
Tue Apr 8 18:16:48 UTC 2014


>
> I'm more accustomed to using PDUs for this type of thing. I.e., a
> power strip you can ssh into or hit via a web API to toggle power to
> individual ports.
> Machines are configured to power up on power restore, plus PXE boot.
> You have less control than with IPMI -- all you can do is toggle power
> to the outlet -- but it works well, even for some desktop machines I
> have in a lab.
> I don't have a compelling need, but I've often wondered if such a
> driver would be useful. I can imagine it also being useful if people
> want to power up non-compute stuff, though that's probably not a top
> priority right now.


I believe someone was talking about this yesterday in the meeting. It would
be entirely possible to write a power driver (the IPMI driver interface may
be renamed for exactly this reason) that controls a node's power through a
PDU. You could then plug that into the agent driver as its power interface
to create something like an AgentAndPDUDriver. The current agent driver
doesn't use IPMI for anything except setting the boot device. As far as I
can see, the inability to set the boot device would be the biggest issue
with a PDU driver, but it's not insurmountable.
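
Roughly, I'm picturing something along these lines (a sketch only: the names
PDUPower and AgentAndPDUDriver are made up, the import path for the agent
deploy interface is assumed, and the base-class method signatures are
approximate rather than copied from the current tree):

    # Sketch only: PDUPower/AgentAndPDUDriver are hypothetical names, and
    # the interface signatures are approximate, not from the current tree.
    from ironic.drivers import base
    from ironic.drivers.modules import agent  # assumed home of AgentDeploy


    class PDUPower(base.PowerInterface):
        """Power interface that toggles a node's PDU outlet."""

        def validate(self, task, node):
            # Check driver_info for the PDU address and outlet number.
            pass

        def get_power_state(self, task, node):
            # Ask the PDU (ssh or web API) whether the outlet is on or off.
            pass

        def set_power_state(self, task, node, pstate):
            # Turn the outlet on or off; the machine PXE boots on power-on.
            pass

        def reboot(self, task, node):
            # Power cycle the outlet.
            pass


    class AgentAndPDUDriver(base.BaseDriver):
        """Agent deploy plus PDU power, by analogy with the agent driver."""

        def __init__(self):
            self.power = PDUPower()
            self.deploy = agent.AgentDeploy()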

> How much hardware information do we intend to store in Ironic? (Note
> that I'm genuinely asking this, not challenging your assertion.) It
> seems reasonable, but I think there's a lot of hardware information
> that could be useful (say, lspci output, per-processor flags, etc.),
> but stuffing it all in extra[] seems kind of messy.


Right now the hardware manager on the agent is pluggable, so what we store
is currently "whatever you want!" In our current iteration I think it's
just the MACs of the NICs. We haven't fully fleshed this out yet.
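
To give a feel for it, a hardware manager could be as simple as something
like this (purely illustrative; this is not the agent's actual plugin API,
just the shape of a pluggable collector that reports NIC MACs):

    # Illustrative only -- not the agent's real hardware manager API.
    import os


    def list_nic_macs():
        """Return {interface name: MAC} for every NIC except loopback."""
        macs = {}
        for nic in os.listdir('/sys/class/net'):
            if nic == 'lo':
                continue
            with open('/sys/class/net/%s/address' % nic) as f:
                macs[nic] = f.read().strip()
        return macs


    class ExampleHardwareManager(object):
        """A deployer could drop in their own manager to report more."""

        def get_hardware_info(self):
            # Today: just the NIC MACs. lspci output, CPU flags, etc.
            # could be added by a custom manager later.
            return {'interfaces': list_nic_macs()}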

---
Josh Gachnang
Tech Blog: ServerCobra.com, @ServerCobra
Github.com/PCsForEducation


On Tue, Apr 8, 2014 at 10:46 AM, Matt Wagner <matt.wagner at redhat.com> wrote:

> On 08/04/14 14:04 +0400, Vladimir Kozhukalov wrote:
> <snip>
>
>> 0) There is plenty of old hardware which does not have IPMI/iLO at all.
>> How is Ironic supposed to power it off and on? SSH? But Ironic is not
>> supposed to interact with the host OS.
>>
>
> I'm more accustomed to using PDUs for this type of thing. I.e., a
> power strip you can ssh into or hit via a web API to toggle power to
> individual ports.
>
> Machines are configured to power up on power restore, plus PXE boot.
> You have less control than with IPMI -- all you can do is toggle power
> to the outlet -- but it works well, even for some desktop machines I
> have in a lab.
>
> I don't have a compelling need, but I've often wondered if such a
> driver would be useful. I can imagine it also being useful if people
> want to power up non-compute stuff, though that's probably not a top
> priority right now.
>
>
>> 1) We agreed that Ironic is the place where we can store hardware info
>> (the 'extra' field in the node model). But many modern hardware
>> configurations support hot-pluggable hard drives, CPUs, and even memory.
>> How will Ironic know that the hardware configuration has changed? Does it
>> need to know about hardware changes at all? Is some monitoring agent (NOT
>> the Ironic agent) supposed to be used for that? But if we already have a
>> discovery extension in the Ironic agent, then it seems rational to use
>> that extension for monitoring as well. Right?
>>
>
> How much hardware information do we intend to store in Ironic? (Note
> that I'm genuinely asking this, not challenging your assertion.) It
> seems reasonable, but I think there's a lot of hardware information
> that could be useful (say, lspci output, per-processor flags, etc.),
> but stuffing it all in extra[] seems kind of messy.
>
> I don't have an overall answer for this question; I'm curious myself.
>
> -- Matt
>
>

