[openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

Devananda van der Veen devananda.vdv at gmail.com
Tue Apr 1 01:49:34 UTC 2014


On the ceilometer integration front, I think that, over the course of
Icehouse, the proposed Ironic driver API for gathering metrics was fleshed
out and agreed upon internally. I am hoping that work can be completed
early in Juno, at which point we'll be looking to Ceilometer to start
consuming it.
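
For illustration only, a minimal sketch of how such metrics might be published
as notifications for Ceilometer to pick up, using oslo.messaging; the event
type, publisher_id, and payload layout below are assumptions made for the
example, not the agreed driver API:

    # Illustrative sketch only: a conductor-side helper that publishes one
    # batch of sensor readings as a notification. The event type, publisher
    # id, and payload shape are assumptions, not the agreed driver API.
    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(transport,
                                       driver='messaging',
                                       publisher_id='ironic.conductor',
                                       topics=['notifications'])

    def emit_sensor_data(context, node_uuid, readings):
        # `readings` is assumed to look something like:
        # {'Temperature': {'CPU Temp': {'Sensor Reading': '42 degrees C'}}}
        payload = {'node_uuid': node_uuid, 'payload': readings}
        notifier.info(context, 'hardware.ipmi.metrics', payload)

A notification listener on the ceilometer side subscribed to that event type
would then massage each payload into samples.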

On the "where does Ironic store credentials" front, yes, I think we do need
to talk at the summit about that. It might not warrant a whole session, but
it seems like we need to chat with Keystone and Barbican to figure out the
right place and format for secure hardware/BMC credential storage. I'm
still leaning heavily towards this being natively inside Ironic to preserve
the layer separation; otoh, if it is reasonable for operators to run a
"private keystone" and a "public keystone" (or private/public barbican),
then it may be reasonable to put the BMC credentials in there...
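
Purely to make that option concrete, a minimal sketch (using
python-barbicanclient; the auth details and the per-node naming convention
are placeholders, not anything Ironic implements) of storing and fetching a
BMC password as a Barbican secret:

    # Illustrative sketch only: all endpoints, credentials and naming
    # conventions below are placeholders.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from barbicanclient import client

    auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                       username='ironic', password='secret',
                       project_name='service',
                       user_domain_id='default', project_domain_id='default')
    barbican = client.Client(session=session.Session(auth=auth))

    def store_bmc_password(node_uuid, password):
        # One secret per node; the name is just a convention for this sketch.
        secret = barbican.secrets.create(name='bmc-%s' % node_uuid,
                                         payload=password)
        return secret.store()      # returns the secret reference URL

    def fetch_bmc_password(secret_ref):
        return barbican.secrets.get(secret_ref).payload

The only point of the sketch is that the secret lives behind a separate
endpoint, which is exactly where the private/public split question comes in.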

Regards,
Devananda



On Wed, Mar 26, 2014 at 1:25 PM, Eoghan Glynn <eglynn at redhat.com> wrote:

>
>
> > I haven't gotten to my email backlog yet, but want to point out that I
> > agree with everything Robert just said. I also raised these concerns on
> > the original ceilometer BP, which is what gave rise to all the work in
> > ironic that Haomeng has been doing (on the linked ironic BP) to expose
> > these metrics for ceilometer to consume.
>
> Thanks Devananda, so it seems like closing out the ironic work started
> in the Icehouse BP[1] is the way to go, while on the ceilometer side
> we can look into consuming these notifications.
>
> If Haomeng needs further input from the ceilometer side, please shout.
> And if there are some non-trivial cross-cutting issues to discuss, perhaps
> we could consider having another joint session at the Juno summit?
>
> Cheers,
> Eoghan
>
> [1] https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer
>
> > Typing quickly on a mobile,
> > Deva
> > On Mar 26, 2014 11:34 AM, "Robert Collins" < robertc at robertcollins.net >
> > wrote:
> >
> >
> > On 27 March 2014 06:28, Eoghan Glynn < eglynn at redhat.com > wrote:
> > >
> > >
> > >> On 3/25/2014 1:50 PM, Matt Wagner wrote:
> > >> > This would argue to me that the easiest thing for Ceilometer
> > >> > might be to query us for IPMI stats, if the credential store is
> > >> > pluggable. "Fetch these bare metal statistics" doesn't seem too
> > >> > off-course for Ironic to me. The alternative is that Ceilometer
> > >> > and Ironic would both have to be configured for the same
> > >> > pluggable credential store.
> > >>
> > >> There is already a blueprint with a proposed patch here for Ironic
> > >> to do the querying:
> > >> https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer.
> > >
> > > Yes, so I guess there are two fundamentally different approaches that
> > > could be taken here:
> > >
> > > 1. ironic controls the cadence of IPMI polling, emitting notifications
> > >    at whatever frequency it decides, carrying whatever level of
> > >    detail/formatting it deems appropriate, which are then consumed by
> > >    ceilometer, which massages the provided data into usable samples
> > >
> > > 2. ceilometer acquires the IPMI credentials either via ironic or
> > > directly from keystone/barbican, before calling out over IPMI at
> > > whatever cadence it wants and transforming these raw data into
> > > usable samples
> > >
> > > IIUC approach #1 is envisaged by the ironic BP[1].
> > >
> > > The advantage of approach #2 OTOH is that ceilometer is in the driving
> > > seat as far as cadence is concerned, and the model is far more
> > > consistent with how we currently acquire data from the hypervisor layer
> > > and SNMP daemons.
> >
> > The downsides of #2 are:
> > - more machines require access to IPMI on the servers (if a given
> > ceilometer is part of the deployed cloud, not part of the minimal
> > deployment infrastructure). This sets off security red flags in some
> > organisations.
> > - multiple machines (ceilometer *and* Ironic) talking to the same
> > IPMI device. IPMI has a limit on sessions, and in fact the controllers
> > are notoriously buggy - having multiple machines talking to one IPMI
> > device is a great way to exceed session limits and cause lockups.
> >
> > These seem like fundamental showstoppers to me.
> >
> > -Rob
> >
> > --
> > Robert Collins < rbtcollins at hp.com >
> > Distinguished Technologist
> > HP Converged Cloud
> >