[openstack-dev] [nova] vendordata v2 ocata summit recap
Matt Riedemann
mriedem at linux.vnet.ibm.com
Wed Nov 9 20:11:28 UTC 2016
Michael Still led a session on completing the vendordata v2 work that
was started in the Newton release. The full etherpad is here:
https://etherpad.openstack.org/p/ocata-nova-summit-vendoradatav2
Michael started by explaining what vendordata v2 is, since it's a new
feature: it's meant to replace the old class-path loading way of
getting vendor metadata into instances (and ultimately allows us to
remove hooks).
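For anyone who hasn't looked at it yet, a vendordata v2 target is
essentially just a small REST service: nova POSTs a JSON description of
the instance being built, and whatever JSON the service returns is
exposed to the guest as vendor metadata. Here is a minimal sketch of
such a service; the request field names and the response contents are
illustrative assumptions, not the exact wire format:

    # Minimal sketch of a vendordata v2 style REST endpoint. Nova is assumed
    # to POST a JSON document about the instance; whatever JSON this returns
    # would show up in the guest's vendor metadata.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class VendordataHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get('Content-Length', 0))
            request = json.loads(self.rfile.read(length) or b'{}')

            # Field names like 'hostname' are illustrative assumptions.
            response = {
                'hostname-as-seen-by-vendordata': request.get('hostname'),
                'example-licensing-key': 'not-a-real-key',
            }

            body = json.dumps(response).encode('utf-8')
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == '__main__':
        HTTPServer(('127.0.0.1', 8080), VendordataHandler).serve_forever()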
The majority of the session was spent discussing a gap we have in
providing token information on the request to the vendordata server.
For example, when creating a server we have a user context and token and
can provide that information to the vendordata REST API, but on
subsequent GETs from the guest itself we don't have a token. After quite
a bit of discussion in the room, including with Adam and Dolph from the
keystone team, we decided to:
1. Stash the user's roles from the initial create in the nova database
and re-use those on subsequent GET requests.
2. Use a service token to pass the other information to the vendordata
v2 REST API so that it knows the request is coming from Nova (a rough
sketch follows this list). This was considered a bug fix rather than a
new feature, so the functionality can be backported.
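To make those two decisions concrete, here is a hedged sketch of what
the call to the vendordata service could look like once the stashed
roles and a service token are in play. The header and field names below
are assumptions for illustration, not the actual nova implementation:

    # Hedged sketch of the call from nova's metadata service after the two
    # changes above: roles stashed at create time go in the request body and
    # a service token identifies nova itself. Header and field names are
    # assumptions, not the actual implementation.
    import requests

    def call_vendordata(url, instance, stashed_roles, service_token,
                        user_token=None):
        headers = {'X-Service-Token': service_token}
        if user_token:
            # Only available on the initial create; later GETs from the
            # guest have no user token to forward.
            headers['X-Auth-Token'] = user_token

        payload = {
            'instance-id': instance['uuid'],
            'project-id': instance['project_id'],
            'user-roles': stashed_roles,  # re-used from the original create
        }
        resp = requests.post(url, json=payload, headers=headers, timeout=5)
        return resp.json() if resp.status_code == 200 else {}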
Other things that are needed at some point:
1. Add some caching of the response using the Cache-Control header.
2. Add a configuration option to toggle whether or not the server
create should fail if the vendordata response is not a 200. Today if we
get a non-200 response we log a warning and return {} to the caller.
Some vendordata scenarios require that the metadata get into the guest
as soon as it's created, otherwise it becomes essentially a zombie and
cleaning it up later is painful. So provide an option to fail the
server create if we can't get the necessary data into the guest at
build time (both of these items are sketched after this list). Note
that this would only fail the server build when using config drive,
since nova is the caller. When cloud-init is making the request from
within the guest, nova has lost control at that point and any failures
will have to be cleaned up separately.
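As a hedged sketch of those two follow-ups, combining the Cache-Control
caching with a hard-failure toggle (the fail_on_error flag and the
caching plumbing below are hypothetical, not existing nova options):

    # Hedged sketch of both follow-ups: honour Cache-Control max-age on
    # vendordata responses, and optionally fail hard instead of returning {}.
    # The fail_on_error flag stands in for a not-yet-existing config option.
    import time
    import requests

    _cache = {}  # url -> (expires_at, data)

    class VendordataUnavailable(Exception):
        """Raised instead of returning {} when hard failure is requested."""

    def fetch_vendordata(url, payload, fail_on_error=False):
        now = time.time()
        cached = _cache.get(url)
        if cached and cached[0] > now:
            return cached[1]

        resp = requests.post(url, json=payload, timeout=5)
        if resp.status_code != 200:
            if fail_on_error:
                # Proposed behaviour: abort the build rather than booting a
                # guest that can never get its vendor metadata.
                raise VendordataUnavailable(url)
            return {}  # current behaviour: log a warning and carry on

        data = resp.json()
        for directive in resp.headers.get('Cache-Control', '').split(','):
            directive = directive.strip()
            if directive.startswith('max-age='):
                _cache[url] = (now + int(directive.split('=', 1)[1]), data)
                break
        return data

The point of the exception path is that failing the build early is
cheaper than cleaning up a zombie guest later.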
--
Thanks,
Matt Riedemann