[openstack-dev] [Nova][Glance] Support of v1 and v2 glance APIs in Nova
Joe Gordon
joe.gordon0 at gmail.com
Sun Nov 3 11:22:13 UTC 2013
On Nov 1, 2013 6:46 PM, "John Garbutt" <john at johngarbutt.com> wrote:
>
> On 29 October 2013 16:11, Eddie Sheffield <eddie.sheffield at rackspace.com>
wrote:
> >
> > "John Garbutt" <john at johngarbutt.com> said:
> >
> >> Going back to Joe's comment:
> >>> Can both of these cases be covered by configuring the keystone
catalog?
> >> +1
> >>
> >> If both v1 and v2 are present, pick v2, otherwise just pick what is in
> >> the catalogue. That seems cool. Not quite sure how the multiple glance
> >> endpoints works in the keystone catalog, but should work I assume.
> >>
> >> We hard code nova right now, and so we probably want to keep that
route too?
> >
> > Nova doesn't use the catalog from Keystone when talking to Glance.
There is a config value "glance_api_servers" which defines a list of Glance
servers that gets randomized and cycled through. I assume that's what
you're referring to with "we hard code nova." But currently there's nowhere
in this path (internal nova to glance) where the keystone catalog is
available.
>
> Yes. I was not very clear. I am proposing we change that. We could try
> shoehorn the multiple glance nodes in the keystone catalog, then cache
> that in the context, but maybe that doesn't make sense. This is a
> separate change really.
FYI: We cache the cinder endpoints from keystone catalog in the context
already. So doing something like that with glance won't be without
president.
>
> But clearly, we can't drop the direct configuration of glance servers
> for some time either.
>
> > I think some of the confusion may be that Glanceclient at the
programmatic client level doesn't talk to keystone. That happens happens
higher in the CLI level which doesn't come into play here.
> >
> >> From: "Russell Bryant" <rbryant at redhat.com>
> >>> On 10/17/2013 03:12 PM, Eddie Sheffield wrote:
> >>>> Might I propose a compromise?
> >>>>
> >>>> 1) For the VERY short term, keep the config value and get the change
otherwise
> >>>> reviewed and hopefully accepted.
> >>>>
> >>>> 2) Immediately file two blueprints:
> >>>> - python-glanceclient - expose a way to discover available
versions
> >>>> - nova - depends on the glanceclient bp and allowing
autodiscovery of glance
> >>>> version
> >>>> and making the config value optional (tho not deprecated
/ removed)
> >>>
> >>> Supporting both seems reasonable. At least then *most* people don't
> >>> need to worry about it and it "just works", but the override is there
if
> >>> necessary, since multiple people seem to be expressing a desire to
have
> >>> it available.
> >>
> >> +1
> >>
> >>> Can we just do this all at once? Adding this to glanceclient doesn't
> >>> seem like a huge task.
> >>
> >> I worry about us never getting the full solution, but it seems to have
> >> got complicated.
> >
> > The glanceclient side is done, as far as allowing access to the list of
available API versions on a given server. It's getting Nova to use this
info that's a bit sticky.
>
> Hmm, OK. Could we not just cache the detected version, to reduce the
> impact of that decision.
>
> >> On 28 October 2013 15:13, Eddie Sheffield <
eddie.sheffield at rackspace.com> wrote:
> >>> So...I've been working on this some more and hit a bit of a snag. The
> >>> Glanceclient change was easy, but I see now that doing this in nova
will require
> >>> a pretty huge change in the way things work. Currently, the API
version is
> >>> grabbed from the config value, the appropriate driver is
instantiated, and calls
> >>> go through that. The problem comes in that the actually glance server
isn't
> >>> communicated with until very late in the process. Nothing "sees" the
servers at
> >>> the level where the driver is determined. Also there isn't a single
glance server
> >>> but a list of them, and in the even of certain communication failures
the list is
> >>> cycled through until success or a number of retries has passed.
> >>>
> >>> So to change this to auto configuring will require turning this
upside down,
> >>> cycling through the servers at a higher level, choosing the
appropriate driver
> >>> for that server, and handling retries at that same level.
> >>>
> >>> Doable, but a much larger task than I first was thinking.
> >>>
> >>> Also, I don't really want the added overhead of getting the api
versions before
> >>> every call, so I'm thinking that going through the list of servers at
startup and
> >>> discovering the versions then and caching that somehow would be
helpful as well.
> >>>
> >>> Thoughts?
> >>
> >> I do worry about that overhead. But with Joe's comment, does it not
> >> just boil down to caching the keystone catalog in the context?
> >>
> >> I am not a fan of all the specific talk to glance code we have in
> >> nova, moving more of that into glanceclient can only be a good thing.
> >> For the XenServer itegration, for efficiency reasons, we need glance
> >> to talk from dom0, so it has dom0 making the final HTTP call. So we
> >> would need a way of extracting that info from the glance client. But
> >> that seems better than having that code in nova.
> >
> > I know in Glance we've largely taken the view that the client should be
as thin and lightweight as possible so users of the client can make use of
it however they best see fit. There was an earlier patch that would have
moved the whole image service layer into glanceclient that was rejected. So
I think there is a division in philosophies here as well
>
> Hmm, I would be a fan of supporting both use cases, "nova style" and
> more complex. Just seems better for glance to own as much as possible
> of the glance client-like code. But I am a nova guy, I would say that!
> Anyway, that's a different conversation.
>
> John
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20131103/a8a1b655/attachment-0001.html>
More information about the OpenStack-dev
mailing list