[openstack-dev] [Ironic][Ceilometer] get IPMI data for ceilometer
Devananda van der Veen
devananda.vdv at gmail.com
Thu Nov 21 20:37:09 UTC 2013
On Thu, Nov 21, 2013 at 12:08 AM, Ladislav Smola <lsmola at redhat.com> wrote:
> Responses inline.
>
>
> On 11/20/2013 07:14 PM, Devananda van der Veen wrote:
>
> Responses inline.
>
> On Wed, Nov 20, 2013 at 2:19 AM, Ladislav Smola <lsmola at redhat.com> wrote:
>
>> Ok, I'll try to summarize what will be done in the near future for
>> Undercloud monitoring.
>>
>> 1. There will be a central agent running on the same host (or hosts, once
>> the central agent horizontal scaling is finished) as Ironic
>>
>
> Ironic is meant to be run with >1 conductor service. By the i-2 milestone we
> should be able to do this, and running at least 2 conductors will be
> recommended. When will Ceilometer be able to run with multiple agents?
>
>
> Here it is described and tracked:
> https://blueprints.launchpad.net/ceilometer/+spec/central-agent-improvement
>
>
Thanks - I've subscribed to it.
> On a side note, it is a bit confusing to call something a "central
> agent" if it is meant to be horizontally scaled. The ironic-conductor
> service has been designed to scale out in a similar way to nova-conductor;
> that is, there may be many of them in an AZ. I'm not sure that there is a
> need for Ceilometer's agent to scale in exactly a 1:1 relationship with
> ironic-conductor?
>
>
> Yeah, we have already talked about that. Maybe some renaming will be in
> order later. :-) I don't think it has to be a 1:1 mapping. The only
> requirement was to have the "Hardware agent" on hosts with ironic-conductor,
> so that it has access to the management network, right?
>
>
Correct.
>> 2. It will have an SNMP pollster, which will be able to get the list of
>> hosts and their IPs from Nova (last time I checked it was in Nova) so it
>> can poll them for stats. Hosts to poll can also be defined statically in
>> the config file.
>>
>
> Assuming all the undercloud images have an SNMP daemon baked in, which
> they should, then this is fine. And yes, Nova can give you the IP addresses
> for instances provisioned via Ironic.
>
>
>
> Yes.
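
For illustration, here is a rough sketch of what I imagine that pollster
doing (hypothetical code, not actual Ceilometer; it assumes
python-novaclient and pysnmp, a 'public' SNMP community on the polled
hosts, and reads a single example OID):

    # Hypothetical sketch: fetch instance IPs from Nova and read one
    # SNMP value (the 1-minute load average) from each host.
    from novaclient.v1_1 import client as nova_client
    from pysnmp.entity.rfc3413.oneliner import cmdgen

    LOAD_OID = '1.3.6.1.4.1.2021.10.1.3.1'  # UCD-SNMP laLoad.1

    def poll_load_averages(username, api_key, project, auth_url):
        nova = nova_client.Client(username, api_key, project, auth_url)
        snmp = cmdgen.CommandGenerator()
        samples = {}
        for server in nova.servers.list():
            # server.networks maps network label -> list of IPs
            for ips in server.networks.values():
                error, status, _index, var_binds = snmp.getCmd(
                    cmdgen.CommunityData('public'),
                    cmdgen.UdpTransportTarget((ips[0], 161)),
                    LOAD_OID)
                if not (error or status):
                    samples[server.name] = str(var_binds[0][1])
        return samples

Statically configured hosts from the config file could simply be merged
into the same polling list.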
>
>
>> 3. It will have an IPMI pollster that will poll the Ironic API, getting a
>> list of hosts and a fixed set of stats (basically everything
>> that we can get :-))
>>
>
> No -- I thought we just agreed that Ironic will not expose an API for
> IPMI data. You can poll Nova to get a list of instances (that are on bare
> metal) and you can poll Ironic to get a list of nodes (either nodes that
> have an instance associated, or nodes that are unprovisioned) but this will
> only give you basic information about the node (such as the MAC addresses
> of its network ports, and whether it is on/off, etc).
>
>
> Ok, sorry, I misunderstood this:
> "If there is a fixed set of information (eg, temp, fan speed, etc) that
> ceilometer will want,let's make a list of that and add a driver interface
> within Ironic to abstract the collection of that information from physical
> nodes. Then, each driver will be able to implement it as necessary for that
> vendor. Eg., an iLO driver may poll its nodes differently than a generic
> IPMI driver, but the resulting data exported to Ceilometer should have the
> same structure."
>
> I thought I'd read that the data would be exposed, but it will just be an
> internal Ironic abstraction that will be polled by Ironic and sent directly
> to the Ceilometer collector. So it's the same as point 4, right? Yeah, I
> guess this will be easier to implement.
>
>
Yes -- you are correct. I was referring to an internal abstraction around
different hardware drivers.
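
To make that concrete, here is a rough sketch of the sort of abstraction
I mean (the class and method names are made up, not actual Ironic code):

    # Hypothetical sketch: a vendor-neutral sensor interface inside
    # Ironic.  Each hardware driver implements collection however it
    # needs to, but all drivers return the same data structure.
    import abc

    class SensorInterface(object):
        """Collect a fixed set of sensor data from a physical node."""
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def get_sensor_data(self, node):
            """Return a dict like {'temperature': ..., 'fan_speed': ...,
            'voltage': ...} for the given node."""

    class IPMISensor(SensorInterface):
        def get_sensor_data(self, node):
            # e.g. query the node's management controller via IPMI
            # (ipmitool sdr or similar) and parse into the common dict
            return {'temperature': None, 'fan_speed': None}

    class ILOSensor(SensorInterface):
        def get_sensor_data(self, node):
            # an iLO driver may poll its nodes differently, but it
            # returns the same structure
            return {'temperature': None, 'fan_speed': None}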
>
>
>
>> 4. Ironic will also emit messages (basically all events regarding the
>> hardware) and send them directly to the Ceilometer collector
>>
>
> Correct. I've updated the BP:
>
> https://blueprints.launchpad.net/ironic/+spec/add-ceilometer-agent
>
> Let me know if that looks like a good description.
>
>
> Yeah, seems great. I would maybe remove the word 'Agent', since it seems
> Ironic will send the data directly to the Ceilometer collector, so Ironic
> itself acts as the agent, right?
>
Fair point - I have updated the BP and renamed it to
https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer
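
For what it's worth, the mechanics could be as simple as something like
this (a hypothetical sketch using oslo.messaging's notifier; the event
type and payload fields here are illustrative, nothing is settled yet):

    # Hypothetical sketch: ironic-conductor pushing sensor data onto
    # the notification bus, where Ceilometer's collector can pick it
    # up and turn it into samples.
    from oslo import messaging
    from oslo.config import cfg

    transport = messaging.get_transport(cfg.CONF)
    notifier = messaging.Notifier(transport,
                                  publisher_id='ironic.conductor')

    def emit_sensor_data(context, node_uuid, sensor_data):
        notifier.info(context, 'hardware.ipmi.metrics',
                      {'node': node_uuid, 'payload': sensor_data})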
>
>
>
> -Devananda
>
>
>
>> Does that seem correct? I think that is the bare minimum we need in order
>> to have the Undercloud monitored. We can then build on that.
>>
>> Kind regards,
>> Ladislav
>>
>>
>
>> On 11/20/2013 09:22 AM, Julien Danjou wrote:
>>
>>> On Tue, Nov 19 2013, Devananda van der Veen wrote:
>>>
>>>> If there is a fixed set of information (eg, temp, fan speed, etc) that
>>>> ceilometer will want,
>>>>
>>> Sure, we want everything.
>>>
>>>> let's make a list of that and add a driver interface
>>>> within Ironic to abstract the collection of that information from
>>>> physical
>>>> nodes. Then, each driver will be able to implement it as necessary for
>>>> that
>>>> vendor. Eg., an iLO driver may poll its nodes differently than a generic
>>>> IPMI driver, but the resulting data exported to Ceilometer should have
>>>> the
>>>> same structure.
>>>>
>>> I like the idea.
>>>
>>>> An SNMP agent doesn't fit within the scope of Ironic, as far as I see, so
>>>> this would need to be implemented by Ceilometer.
>>>>
>>> We're working on adding a pollster for that, indeed.
>>>
>>>> As far as where the SNMP agent would need to run, it should be on the
>>>> same host(s) as ironic-conductor so that it has access to the
>>>> management network (the physically-separate network for hardware
>>>> management, IPMI, etc). We should keep the number of applications with
>>>> direct access to that network to a minimum, however, so a thin agent
>>>> that collects and forwards the SNMP data to the central agent would be
>>>> preferable, in my opinion.
>>>>
>>> We can keep things simple by having the agent only do that polling, I
>>> think. Building a new agent sounds like it would complicate deployment
>>> again.
>>>
>>>
>>
>
>