[openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

Devananda van der Veen devananda.vdv at gmail.com
Wed Mar 26 19:02:44 UTC 2014


I haven't gotten to my email backlog yet, but want to point out that I
agree with everything Robert just said. I also raised these concerns on the
original ceilometer BP, which is what gave rise to all the work in ironic
that Haomeng has been doing (on the linked ironic BP) to expose these
metrics for ceilometer to consume.

Typing quickly on a mobile,
Deva
On Mar 26, 2014 11:34 AM, "Robert Collins" <robertc at robertcollins.net>
wrote:

> On 27 March 2014 06:28, Eoghan Glynn <eglynn at redhat.com> wrote:
> >
> >
> >> On 3/25/2014 1:50 PM, Matt Wagner wrote:
> >> > This would argue to me that the easiest thing for Ceilometer might be
> >> > to query us for IPMI stats, if the credential store is pluggable.
> >> > "Fetch these bare metal statistics" doesn't seem too off-course for
> >> > Ironic to me. The alternative is that Ceilometer and Ironic would both
> >> > have to be configured for the same pluggable credential store.
> >>
> >> There is already a blueprint with a proposed patch here for Ironic to do
> >> the querying:
> >> https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer.
> >
> > Yes, so I guess there are two fundamentally different approaches that
> > could be taken here:
> >
> > 1. ironic controls the cadence of IPMI polling, emitting notifications
> >    at whatever frequency it decides, carrying whatever level of
> >    detail/formatting it deems appropriate, which are then consumed by
> >    ceilometer which massages these provided data into usable samples
> >
> > 2. ceilometer acquires the IPMI credentials either via ironic or
> >    directly from keystone/barbican, before calling out over IPMI at
> >    whatever cadence it wants and transforming these raw data into
> >    usable samples
> >
> > IIUC approach #1 is envisaged by the ironic BP[1].
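> >
> > As a minimal, purely illustrative sketch of approach #1 (assuming a
> > hypothetical emit_notification() hook and node dict; neither is the
> > real ironic API), the conductor side might look something like:
> >
> >     # Hypothetical sketch: ironic polls IPMI itself and emits a
> >     # notification for ceilometer to consume. emit_notification()
> >     # and the node dict layout are placeholders, not real APIs.
> >     import subprocess
> >     import time
> >
> >     def read_sensors(host, user, password):
> >         # 'ipmitool sdr' prints lines like "Temp | 42 degrees C | ok"
> >         out = subprocess.check_output(
> >             ['ipmitool', '-I', 'lanplus', '-H', host,
> >              '-U', user, '-P', password, 'sdr'],
> >             universal_newlines=True)
> >         readings = {}
> >         for line in out.splitlines():
> >             parts = [p.strip() for p in line.split('|')]
> >             if len(parts) == 3:
> >                 readings[parts[0]] = parts[1]
> >         return readings
> >
> >     def poll_and_notify(node, emit_notification):
> >         # ironic controls cadence and payload format in this model
> >         payload = {'node_uuid': node['uuid'],
> >                    'timestamp': time.time(),
> >                    'sensors': read_sensors(node['address'],
> >                                            node['ipmi_username'],
> >                                            node['ipmi_password'])}
> >         emit_notification('hardware.ipmi.metrics', payload)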
> >
> > The advantage of approach #2 OTOH is that ceilometer is in the driving
> > seat as far as cadence is concerned, and the model is far more
> > consistent with how we currently acquire data from the hypervisor layer
> > and SNMP daemons.
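> >
> > A similarly rough sketch of approach #2 (reusing read_sensors() from
> > the sketch above; credential_store and publish_sample() are invented
> > placeholders, not the actual ceilometer pollster plugin interface):
> >
> >     # Hypothetical sketch: ceilometer fetches the IPMI credentials
> >     # itself and polls at a cadence it controls, the way the
> >     # hypervisor and SNMP pollsters already do.
> >     import time
> >
> >     def ipmi_poll_cycle(nodes, credential_store, publish_sample,
> >                         interval=600):
> >         while True:
> >             for node in nodes:
> >                 # credentials via ironic, keystone or barbican
> >                 creds = credential_store.get(node['uuid'])
> >                 sensors = read_sensors(node['address'],
> >                                        creds['username'],
> >                                        creds['password'])
> >                 for name, value in sensors.items():
> >                     publish_sample(node['uuid'], name, value)
> >             time.sleep(interval)  # ceilometer decides the cadence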
>
> The downsides of #2 are:
>  - more machines require access to IPMI on the servers (if a given
> ceilometer is part of the deployed cloud, not part of the minimal
> deployment infrastructure). This sets off security red flags in some
> organisations.
>  - multiple machines (ceilometer *and* Ironic) talking to the same
> IPMI device. IPMI has a limit on sessions, and in fact the controllers
> are notoriously buggy - having multiple machines talking to one IPMI
> device is a great way to exceed session limits and cause lockups.
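>
> (Purely illustrative: inside a single service, a per-node lock like
> the sketch below is enough to serialise BMC access, but no such lock
> can span two independent services polling the same controller.)
>
>     import threading
>     from collections import defaultdict
>
>     # one lock per BMC; only the holder opens an IPMI session
>     _bmc_locks = defaultdict(threading.Lock)
>
>     def with_bmc(node_uuid, ipmi_call):
>         with _bmc_locks[node_uuid]:
>             return ipmi_call()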
>
> These seem like fundamental showstoppers to me.
>
> -Rob
>
> --
> Robert Collins <rbtcollins at hp.com>
> Distinguished Technologist
> HP Converged Cloud
>