[openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone
Eoghan Glynn eglynn at redhat.com
Wed Mar 26 20:21:56 UTC 2014
----- Original Message -----
> On 27 March 2014 06:28, Eoghan Glynn <eglynn at redhat.com> wrote:
> >> On 3/25/2014 1:50 PM, Matt Wagner wrote:
> >> > This would argue to me that the easiest thing for Ceilometer might be
> >> > to query us for IPMI stats, if the credential store is pluggable.
> >> > "Fetch these bare metal statistics" doesn't seem too off-course for
> >> > Ironic to me. The alternative is that Ceilometer and Ironic would both
> >> > have to be configured for the same pluggable credential store.
> >> There is already a blueprint with a proposed patch here for Ironic to do
> >> the querying:
> >> https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer.
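(As an aside: a pluggable credential store needn't be more than a small
lookup interface that both services would have to configure identically.
A minimal sketch, where the class names and the get_secret() backend call
are assumptions rather than anything ironic or keystone actually defines:

    import abc

    class CredentialStore(abc.ABC):
        """Hypothetical interface for a pluggable IPMI credential store."""

        @abc.abstractmethod
        def get_ipmi_credentials(self, node_uuid):
            """Return (address, username, password) for the node's BMC."""

    class KeystoneCredentialStore(CredentialStore):
        """Illustrative backend fetching secrets from keystone/barbican;
        get_secret() is a placeholder, not a real client method."""

        def __init__(self, client):
            self.client = client

        def get_ipmi_credentials(self, node_uuid):
            secret = self.client.get_secret('ipmi/%s' % node_uuid)
            return (secret['address'], secret['username'],
                    secret['password'])

which illustrates Matt's point: both ceilometer and ironic would need to
load the same backend with the same configuration.)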
> > Yes, so I guess there are two fundamentally different approaches that
> > could be taken here:
> > 1. ironic controls the cadence of IPMI polling, emitting notifications
> > at whatever frequency it decides, carrying whatever level of
> > detail/formatting it deems appropriate, which are then consumed by
> > ceilometer, which massages the provided data into usable samples
> > 2. ceilometer acquires the IPMI credentials either via ironic or
> > directly from keystone/barbican, before calling out over IPMI at
> > whatever cadence it wants and transforming these raw data into
> > usable samples
> > IIUC approach #1 is envisaged by the ironic BP.
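For concreteness, under approach #1 the emitting side might look roughly
like this (a minimal sketch using oslo.messaging's Notifier; the event
type 'hardware.ipmi.metrics' and the payload shape are my guesses at the
BP's intent, not what it actually specifies):

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(
        transport,
        publisher_id='ironic.conductor',
        driver='messagingv2',
        topics=['notifications'])

    # ironic polls the BMC on its own schedule, then emits the readings;
    # ceilometer consumes the notification and turns it into samples.
    payload = {
        'node_uuid': 'abc-123',  # hypothetical node
        'sensors': {'Temp_CPU0': {'value': 48, 'unit': 'C'}},
    }
    notifier.info({}, 'hardware.ipmi.metrics', payload)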
> > The advantage of approach #2 OTOH is that ceilometer is in the driving
> > seat as far as cadence is concerned, and the model is far more
> > consistent with how we currently acquire data from the hypervisor layer
> > and SNMP daemons.
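Whereas under approach #2 the pollster side might look roughly like this
(again only a sketch, using pyghmi for the IPMI call and the hypothetical
credential store from the earlier sketch; the cadence would then be
ceilometer's own pipeline interval):

    from pyghmi.ipmi import command

    def poll_node(store, node_uuid):
        # ceilometer fetches the credentials itself (via ironic, keystone
        # or barbican) and controls when the poll happens.
        address, user, password = store.get_ipmi_credentials(node_uuid)
        ipmicmd = command.Command(bmc=address, userid=user,
                                  password=password)
        # each reading carries name/value/units, which the pollster
        # would map into samples.
        return [(r.name, r.value, r.units)
                for r in ipmicmd.get_sensor_data()]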
> The downsides of #2 are:
> - more machines require access to IPMI on the servers (if a given
> ceilometer is part of the deployed cloud, not part of the minimal
> deployment infrastructure). This sets off security red flags in some
> environments.
> - multiple machines (ceilometer *and* Ironic) talking to the same
> IPMI device. IPMI has a limit on sessions, and in fact the controllers
> are notoriously buggy - having multiple machines talking to one IPMI
> device is a great way to exceed session limits and cause lockups.
> These seem like fundamental showstoppers to me.
Thanks Robert, that's really useful information, and I agree it's a
compelling argument to invert control in this case.