<div dir="ltr">Also, some systems have more sophisticated <span class="" style>IPMI</span> topology than a single node instance, like in case of chassis-based systems. Some other systems might use vendor-specific <span class="" style>IPMI</span> extensions or alternate platform management protocols, that could require vendor-specific drivers to terminate.<div>

Going for #2 might require Ceilometer to implement vendor-specific drivers in the end, slightly overlapping what Ironic is doing today. From a purely architectural point of view, having a single driver layer is much preferable.
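
To make the single-driver argument concrete, here is a minimal, purely hypothetical sketch (the class, function, and event-type names below are invented for illustration and are not actual Ironic or Ceilometer code) of how vendor-specific sensor collection could stay behind one driver abstraction inside Ironic, with Ceilometer only consuming the emitted notifications as in approach #1:

import abc


class SensorDriver(abc.ABC):
    """Hypothetical single abstraction for platform management in Ironic."""

    @abc.abstractmethod
    def get_sensor_data(self, node_uuid):
        """Return a dict of sensor readings for the given node."""


class GenericIPMIDriver(SensorDriver):
    def get_sensor_data(self, node_uuid):
        # Plain IPMI against a single-node BMC (e.g. via ipmitool or pyghmi).
        return {"Temperature": {"CPU Temp": {"Sensor Reading": "42 C"}}}


class VendorChassisDriver(SensorDriver):
    def get_sensor_data(self, node_uuid):
        # Terminate a vendor-specific chassis management protocol here,
        # so the vendor knowledge lives in exactly one place.
        return {"Fan": {"Chassis Fan 1": {"Sensor Reading": "5000 RPM"}}}


def emit_sensor_notification(driver, node_uuid, notify):
    # Approach #1: Ironic polls at its own cadence and emits a
    # notification; Ceilometer only turns the payload into samples.
    # The event type string is illustrative.
    payload = {"node_uuid": node_uuid,
               "sensors": driver.get_sensor_data(node_uuid)}
    notify("hardware.ipmi.metrics", payload)

With something along these lines, supporting a new vendor means adding one driver inside Ironic; nothing on the Ceilometer side would need to change.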

Regards,
Gergely


On Wed, Mar 26, 2014 at 8:02 PM, Devananda van der Veen <devananda.vdv@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p dir="ltr">I haven't gotten to my email back log yet, but want to point out that I agree with everything Robert just said. I also raised these concerns on the original ceilometer BP, which is what gave rise to all the work in ironic that Haomeng has been doing (on the linked ironic BP) to expose these metrics for ceilometer to consume.</p>
<p dir="ltr">Typing quickly on a mobile,<br>
Deva</p><div class="HOEnZb"><div class="h5">
<div class="gmail_quote">On Mar 26, 2014 11:34 AM, "Robert Collins" <<a href="mailto:robertc@robertcollins.net" target="_blank">robertc@robertcollins.net</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 27 March 2014 06:28, Eoghan Glynn <eglynn@redhat.com> wrote:
>
>
>> On 3/25/2014 1:50 PM, Matt Wagner wrote:
>> > This would argue to me that the easiest thing for Ceilometer might be
>> > to query us for IPMI stats, if the credential store is pluggable.
>> > "Fetch these bare metal statistics" doesn't seem too off-course for
>> > Ironic to me. The alternative is that Ceilometer and Ironic would both
>> > have to be configured for the same pluggable credential store.
>>
>> There is already a blueprint with a proposed patch here for Ironic to do
>> the querying:
>> https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer
>
> Yes, so I guess there are two fundamentally different approaches that
> could be taken here:
>
> 1. ironic controls the cadence of IPMI polling, emitting notifications
> at whatever frequency it decides, carrying whatever level of
> detail/formatting it deems appropriate, which are then consumed by
> ceilometer which massages these provided data into usable samples
>
> 2. ceilometer acquires the IPMI credentials either via ironic or
> directly from keystone/barbican, before calling out over IPMI at
> whatever cadence it wants and transforming these raw data into
> usable samples
>
> IIUC approach #1 is envisaged by the ironic BP[1].
>
> The advantage of approach #2 OTOH is that ceilometer is in the driving
> seat as far as cadence is concerned, and the model is far more
> consistent with how we currently acquire data from the hypervisor layer
> and SNMP daemons.

The downsides of #2 are:
- more machines require access to IPMI on the servers (if a given
ceilometer is part of the deployed cloud, not part of the minimal
deployment infrastructure). This sets off security red flags in some
organisations.
- multiple machines (ceilometer *and* Ironic) talking to the same
IPMI device. IPMI has a limit on sessions, and in fact the controllers
are notoriously buggy - having multiple machines talking to one IPMI
device is a great way to exceed session limits and cause lockups.

These seem fundamental showstoppers to me.

-Rob

--
Robert Collins <rbtcollins@hp.com>
Distinguished Technologist
HP Converged Cloud