[openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone
gergely.matefi2 at gmail.com
Wed Mar 26 20:22:32 UTC 2014
Also, some systems have a more sophisticated IPMI topology than a single-node
instance, as in the case of chassis-based systems. Other systems might
use vendor-specific IPMI extensions or alternate platform-management
protocols that require vendor-specific drivers to terminate them.
Going for #2 might require Ceilometer to implement vendor-specific drivers
in the end, slightly overlapping what Ironic is doing today. From a purely
architectural point of view, having a single driver is very preferable.
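To make the single-driver argument concrete, here is a minimal sketch of a
pluggable management-driver abstraction. All class and method names here are
illustrative assumptions, not Ironic's actual driver API; the point is that
vendor-specific topology (e.g. a chassis BMC) is terminated inside one driver,
so consumers such as a metering agent stay vendor-agnostic.

```python
from abc import ABC, abstractmethod


class ManagementDriver(ABC):
    """One driver per platform-management protocol or vendor extension."""

    @abstractmethod
    def get_sensor_data(self, node_id: str) -> dict:
        """Return normalized sensor readings for a node."""


class PlainIPMIDriver(ManagementDriver):
    def get_sensor_data(self, node_id):
        # A real driver would issue IPMI commands here; canned data
        # just shows the normalized shape every driver must return.
        return {"node": node_id, "temperature_c": 42.0, "fan_rpm": 5400}


class ChassisIPMIDriver(ManagementDriver):
    """Chassis-based systems route node queries through a chassis BMC."""

    def __init__(self, chassis_addr):
        self.chassis_addr = chassis_addr

    def get_sensor_data(self, node_id):
        # The vendor-specific chassis topology is hidden from the caller.
        return {"node": node_id, "via": self.chassis_addr,
                "temperature_c": 39.5, "fan_rpm": 4800}


def collect_metrics(driver: ManagementDriver, node_ids):
    """A consumer (e.g. a metering agent) never sees vendor details."""
    return [driver.get_sensor_data(n) for n in node_ids]
```

Swapping in a chassis-aware driver changes nothing for the consumer, which is
the overlap Ceilometer would otherwise have to duplicate under approach #2.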
On Wed, Mar 26, 2014 at 8:02 PM, Devananda van der Veen <
devananda.vdv at gmail.com> wrote:
> I haven't gotten to my email backlog yet, but want to point out that I
> agree with everything Robert just said. I also raised these concerns on the
> original ceilometer BP, which is what gave rise to all the work in ironic
> that Haomeng has been doing (on the linked ironic BP) to expose these
> metrics for ceilometer to consume.
> Typing quickly on a mobile,
> On Mar 26, 2014 11:34 AM, "Robert Collins" <robertc at robertcollins.net> wrote:
>> On 27 March 2014 06:28, Eoghan Glynn <eglynn at redhat.com> wrote:
>> >> On 3/25/2014 1:50 PM, Matt Wagner wrote:
>> >> > This would argue to me that the easiest thing for Ceilometer might be
>> >> > to query us for IPMI stats, if the credential store is pluggable.
>> >> > "Fetch these bare metal statistics" doesn't seem too off-course for
>> >> > Ironic to me. The alternative is that Ceilometer and Ironic would
>> >> > have to be configured for the same pluggable credential store.
>> >> There is already a blueprint with a proposed patch here for Ironic to do
>> >> the querying:
>> >> https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer.
>> > Yes, so I guess there are two fundamentally different approaches that
>> > could be taken here:
>> > 1. ironic controls the cadence of IPMI polling, emitting notifications
>> > at whatever frequency it decides, carrying whatever level of
>> > detail/formatting it deems appropriate, which are then consumed by
>> > ceilometer which massages these provided data into usable samples
>> > 2. ceilometer acquires the IPMI credentials either via ironic or
>> > directly from keystone/barbican, before calling out over IPMI at
>> > whatever cadence it wants and transforming these raw data into
>> > usable samples
>> > IIUC approach #1 is envisaged by the ironic BP.
>> > The advantage of approach #2 OTOH is that ceilometer is in the driving
>> > seat as far as cadence is concerned, and the model is far more
>> > consistent with how we currently acquire data from the hypervisor layer
>> > and SNMP daemons.
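Approach #1 described above can be sketched as follows. This is an assumption-laden
illustration, not Ironic's implementation: the event type, payload shape, and
function names are hypothetical, and the in-memory list stands in for a real
notification bus. The emitter owns the polling cadence and formatting; the
metering side only consumes the notifications.

```python
import time


def poll_ipmi_sensors(node_id):
    # Stand-in for a real IPMI query; returns raw sensor readings.
    return {"temperature_c": 41.0, "power_w": 180}


def emit_notification(bus, node_id, readings):
    """Publish a timestamped payload for the metering service to
    massage into samples; the consumer never touches IPMI itself."""
    bus.append({
        "event_type": "hardware.ipmi.metrics",  # illustrative name
        "node_id": node_id,
        "timestamp": time.time(),
        "payload": readings,
    })


def polling_cycle(bus, node_ids):
    # Under approach #1 the driver owner decides when this runs
    # and what level of detail each payload carries.
    for node in node_ids:
        emit_notification(bus, node, poll_ipmi_sensors(node))
```

Under approach #2 the consumer would instead call something like
poll_ipmi_sensors directly, which is what puts it in control of cadence.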
>> The downsides of #2 are:
>> - more machines require access to IPMI on the servers (if a given
>> ceilometer is part of the deployed cloud, not part of the minimal
>> deployment infrastructure). This sets off security red flags in some
>> organisations.
>> - multiple machines (ceilometer *and* Ironic) talking to the same
>> IPMI device. IPMI has a limit on sessions, and in fact the controllers
>> are notoriously buggy - having multiple machines talking to one IPMI
>> device is a great way to exceed session limits and cause lockups.
>> These seem fundamental showstoppers to me.
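The session-limit concern can be illustrated with a small sketch: if all IPMI
access to one device is funnelled through a single owner, concurrent callers
are serialized and the BMC never sees more sessions than it tolerates. The
class below is a hypothetical guard, and the limit of 1 is illustrative; real
controllers vary, and as noted above some lock up when the limit is exceeded.

```python
import threading


class BMCSessionGuard:
    """Serialize access to one IPMI device across many callers."""

    def __init__(self, max_sessions=1):
        self._sem = threading.BoundedSemaphore(max_sessions)
        self._stats_lock = threading.Lock()
        self._active = 0
        self.peak = 0  # highest concurrent session count observed

    def query(self, fn):
        # Acquire a session slot, run the IPMI operation, release.
        with self._sem:
            with self._stats_lock:
                self._active += 1
                self.peak = max(self.peak, self._active)
            try:
                return fn()
            finally:
                with self._stats_lock:
                    self._active -= 1
```

With two independent services (Ceilometer *and* Ironic) each opening their own
sessions, no such single choke point exists, which is the showstopper Robert
describes.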
>> Robert Collins <rbtcollins at hp.com>
>> Distinguished Technologist
>> HP Converged Cloud
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org