[openstack-dev] [Ironic][Agent]

Dickson, Mike (HP Servers) mike.dickson at hp.com
Tue Apr 8 13:50:55 UTC 2014



From: Jim Rollenhagen [mailto:jim at jimrollenhagen.com]
Sent: Tuesday, April 08, 2014 9:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic][Agent]


Guys, thank you very much for your comments,

I thought a lot about why we need to be so limited in IPA use cases, and now it is much clearer to me. Indeed, having some kind of agent running inside the host OS is not what many people want to see, and I now tend to agree with that.

But there are still some questions that are difficult for me to answer.
0) There is plenty of old hardware which does not have IPMI/iLO at all. How is Ironic supposed to power those machines off and on? SSH? But Ironic is not supposed to interact with the host OS.

I’m not sure about this yet. I’m inclined to say “we don’t support such hardware”, at least in the short-term. How does Ironic handle hardware without a power management interface today?

[Dickson, Mike (HP Servers)] I'd be inclined to agree. Server-class hardware would have a BMC of some sort. I suppose you could alternatively do a driver for a smart PDU and let it control power by brute force. But regardless, I don't think relying on OS-level power control is enough, so essentially any "server" without some sort of power control outside of the OS is a non-starter.
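To make the "power control outside the OS" point concrete, here is a minimal sketch of what a smart-PDU power driver could look like. It only mirrors the general shape of an Ironic power interface (get/set power state, reboot); the SmartPDUPower name, the PDU's HTTP endpoint layout, and the outlet mapping are all assumptions for illustration, not an existing Ironic driver.

    import time
    import urllib.request

    POWER_ON = 'power on'
    POWER_OFF = 'power off'

    class SmartPDUPower(object):
        """Hypothetical driver that toggles a PDU outlet over HTTP.

        The /outlets/<n>/state endpoint is an assumed PDU API,
        not part of Ironic.
        """

        def __init__(self, pdu_address, outlet):
            self.base_url = 'http://%s/outlets/%d' % (pdu_address, outlet)

        def get_power_state(self):
            # Ask the PDU whether the outlet is energized.
            with urllib.request.urlopen(self.base_url + '/state') as resp:
                return POWER_ON if resp.read().strip() == b'on' else POWER_OFF

        def set_power_state(self, pstate):
            target = b'on' if pstate == POWER_ON else b'off'
            req = urllib.request.Request(self.base_url + '/state',
                                         data=target, method='PUT')
            urllib.request.urlopen(req)

        def reboot(self):
            # Brute-force power cycle: drop the outlet, wait, bring it back.
            self.set_power_state(POWER_OFF)
            time.sleep(5)
            self.set_power_state(POWER_ON)

The point is that every call goes to the PDU, never to the host OS, which is exactly the property a BMC-less "server" would otherwise lack.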

1) We agreed that Ironic is the place where we can store hardware info (the 'extra' field in the node model). But many modern hardware configurations support hot-pluggable hard drives, CPUs, and even memory. How will Ironic know that the hardware configuration has changed? Does it need to know about hardware changes at all? Is it assumed that some monitoring agent (NOT the Ironic agent) will be used for that? But if we already have a discovery extension in the Ironic agent, then it seems rational to use that extension for monitoring as well. Right?

I believe that hardware changes should not be made while an instance is deployed to a node (except maybe swapping a dead stick of RAM or something). If a user wants a node with more RAM (for example), they should provision a new node and destroy the old one, just like they would do with VMs provisioned by Nova.

[Dickson, Mike (HP Servers)] I think this would depend on the driver in use. iLO, for instance, can get many hardware details in real time, and I don't see a reason why a driver couldn't support that. Maybe some attributes that describe the driver's capabilities? In the absence of that you could run a ramdisk and inventory the server on reboots. It wouldn't catch hot-plug changes until a reboot occurred, of course.
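As an illustration of the "inventory from a ramdisk on reboot" idea, the discovery step can be as simple as reading CPU and memory counts from /proc and reporting them so the conductor can store them in the node's 'extra' field. This is a sketch under assumptions: the callback URL and payload shape below are made up for the example, not the agent's actual discovery API.

    import json
    import urllib.request

    def collect_inventory():
        # Count logical CPUs and total memory from /proc; enough for a sketch.
        with open('/proc/cpuinfo') as f:
            cpus = sum(1 for line in f if line.startswith('processor'))
        with open('/proc/meminfo') as f:
            mem_kb = int(f.readline().split()[1])  # first line: MemTotal
        return {'cpus': cpus, 'memory_mb': mem_kb // 1024}

    def report_inventory(callback_url, node_uuid):
        # Hypothetical callback; the real discovery extension defines its own API.
        payload = json.dumps({'node': node_uuid,
                              'inventory': collect_inventory()}).encode()
        req = urllib.request.Request(
            callback_url, data=payload,
            headers={'Content-Type': 'application/json'})
        urllib.request.urlopen(req)

Run from a ramdisk on each reboot, this would refresh the stored inventory without any agent living inside the host OS, which matches Mike's caveat that hot-plug changes only show up after the next reboot.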

Mike

2) When I deal with a hypervisor, I can always use the 'virsh list --all' command to find out which nodes are running and which aren't. How am I supposed to know which nodes are still alive in the case of Ironic? IPMI? Again, IPMI is not always available. And if IPMI is available, then why do we need a heartbeat in the Ironic agent?

Every power driver today has some sort of “power status” command that Ironic relies on to tell if the node is alive, and I think we can continue to rely on this. We have a heartbeat in the agent to ensure that the agent process is still alive and reachable, as the agent might run for a long time before an instance is deployed to the node, and bugs happen.
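For reference, the agent side of that heartbeat can be as simple as periodically POSTing "I am alive, reach me here" to the Ironic API, so the conductor can distinguish an idle-but-healthy agent from a dead one. The endpoint path and interval below are placeholders for the sketch; the real agent negotiates its heartbeat URL and timeout with Ironic.

    import json
    import time
    import urllib.request

    HEARTBEAT_INTERVAL = 30  # seconds; placeholder, the real value is negotiated

    def heartbeat_forever(api_url, node_uuid, agent_url):
        # Keep telling the conductor this agent is alive and how to reach it.
        payload = json.dumps({'agent_url': agent_url}).encode()
        while True:
            req = urllib.request.Request(
                '%s/nodes/%s/heartbeat' % (api_url, node_uuid),
                data=payload,
                headers={'Content-Type': 'application/json'})
            try:
                urllib.request.urlopen(req)
            except Exception:
                # A missed heartbeat is what lets Ironic notice a dead agent.
                pass
            time.sleep(HEARTBEAT_INTERVAL)

Note that this only tells Ironic the agent process is healthy; the power driver's "power status" check remains the authority on whether the node itself is up.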

Is that helpful?

// jim



Vladimir Kozhukalov

On Fri, Apr 4, 2014 at 9:46 PM, Ezra Silvera <EZRA at il.ibm.com> wrote:
> Ironic's responsibility ends where the host OS begins. Ironic is a bare metal provisioning service, not a configuration management service.
I agree with the above, but just to clarify, I would say that Ironic shouldn't *interact* with the host OS once it has booted. Obviously it can still perform bare-metal tasks underneath the OS (while it's up and running) if needed (e.g., force shutdown through IPMI, etc.).
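To illustrate the "underneath the OS" distinction: a forced power-off goes to the BMC over the network and never touches the host OS. A minimal sketch, assuming ipmitool is installed and the BMC host and credentials are placeholders:

    import subprocess

    def force_power_off(bmc_host, user, password):
        # Talks to the BMC over its network interface; the host OS is never involved.
        subprocess.check_call(['ipmitool', '-I', 'lanplus',
                               '-H', bmc_host, '-U', user, '-P', password,
                               'chassis', 'power', 'off'])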





Ezra



_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev