[openstack-dev] [nova] Next steps for resource providers work

Sean Dague sean at dague.net
Wed Aug 31 11:57:16 UTC 2016


On 08/31/2016 05:07 AM, Jay Pipes wrote:
> On 08/29/2016 12:40 PM, Matt Riedemann wrote:
>> I've been out for a week and not very involved in the resource providers
>> work, but after talking about the various changes up in the air at the
>> moment a bunch of us thought it would be helpful to lay out next steps
>> for the work we want to get done this week.
> 
> Apologies to all. BT Internet has been out most of the time in the house
> I've been staying at in Cheshire while on holiday and so I've been
> having to trek to a Starbucks to try and get work done. :(
> 
>> Keep in mind feature freeze is more or less Thursday 9/1.
>>
>> Also keep in mind the goal from the midcycle:
>>
>> "Jay's personal goal for Newton is for the resource tracker to be
>> writing inventory and allocation data via the placement API. Get the
>> data pumping into the placement API in Newton so we can start using it
>> in Ocata."
> 
> Indeed, that is the objective...
> 
>> 1. The ResourceTracker work starts here:
>>
>> https://review.openstack.org/#/c/358797/
>>
>> That relies on the placement service being in the service catalog and
>> will be optional for Newton.
> 
> Technically, the original revision of that patch *didn't* require the
> placement API service to be in the service catalog. If it wasn't
> there, the scheduler reporting client wouldn't bomb out; it would just
> log a warning, and an admin could restart nova-compute once a
> placement API service entry was added to Keystone's service catalog.
> 
> But then I was asked to "robustify" things instead of using a simple
> error-marker variable in the reporting client to indicate an
> unrecoverable problem with connectivity to the placement service. And
> the patch I pushed for that robustification failed all over the place,
> quite predictably. I was originally trying to keep the patch size to a
> minimum and incrementally add robust retry logic and better error
> handling. I also, as noted in the commit message, used the exact same
> code we were using in the Cinder volume driver for finding the service
> endpoint via the keystone service catalog:
> 
> https://github.com/openstack/nova/blob/master/nova/volume/cinder.py#L71-L83
> 
> That has been changed in the latest patch from Sean to use the
> keystoneauth1.session.Session object instead of a requests.Session
> object directly. Not sure why, but that's fine, I suppose.

Because the requests code literally couldn't work: it had no keystone
auth pieces. With keystoneauth sessions we actually get the token
handling, and still get the low-level interface that was asked for.
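
For the record, the shape of what the keystoneauth version does is
roughly this (just a sketch; the [placement] config group name is
illustrative, not necessarily what the patch uses):

    from oslo_config import cfg
    from keystoneauth1 import loading as ks_loading

    CONF = cfg.CONF

    # Register and load auth/session options from a [placement]
    # config section (section name illustrative).
    ks_loading.register_auth_conf_options(CONF, 'placement')
    ks_loading.register_session_conf_options(CONF, 'placement')

    auth = ks_loading.load_auth_from_conf_options(CONF, 'placement')
    session = ks_loading.load_session_from_conf_options(
        CONF, 'placement', auth=auth)

    # Still a low-level HTTP interface, but keystoneauth fetches and
    # refreshes the service token for us on every request.
    resp = session.get('/resource_providers',
                       endpoint_filter={'service_type': 'placement'},
                       raise_exc=False)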

The cinder code isn't a good analog here, because it's actually acting
on behalf of a user in their request context.

https://github.com/openstack/nova/blob/1abb6f7b4e190c6ef3f409c7d418fda1c857423e/nova/volume/cinder.py#L71
only works because the context is user-generated, and we can convert our
context back into something we can send to the keystone client.
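
The shape of that pattern is roughly this (a sketch only, with names
abridged from the cinder module):

    from cinderclient import client as cinder_client
    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF

    def cinderclient(context):
        session = ks_loading.load_session_from_conf_options(
            CONF, 'cinder')
        # This only works because 'context' came from a real user
        # request: get_auth_plugin() wraps the user's own token and
        # service catalog so keystoneauth can replay them.
        auth = context.get_auth_plugin()
        return cinder_client.Client('2', session=session, auth=auth)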

This doesn't work in the placement API case, because we're not doing it
with a user context; we're doing it behind the scenes with a service
user. Doing context.elevated() and then trying to make this kind of call
(which is what the original patch did) just doesn't work: elevated()
only flips the admin flag on the context object, it can't conjure
keystone admin credentials out of thin air. If it could, we'd have a
crazy security issue. :)

Neutron is a better analog here, because we have to do some actions
without a user context.
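
What does work is configuring real service-user credentials, the same
way other behind-the-scenes callers do. Something like this (every
value here is illustrative):

    from keystoneauth1 import session as ks_session
    from keystoneauth1.identity import v3

    # Service-user auth: credentials come from configuration, not
    # from any user's request context.
    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='nova',
                       password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = ks_session.Session(auth=auth)

    # Every call made with this session authenticates as the service
    # user, no matter which code path triggered it.
    resp = sess.get('/resource_providers',
                    endpoint_filter={'service_type': 'placement'},
                    raise_exc=False)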

	-Sean

-- 
Sean Dague
http://dague.net
