[openstack-dev] [ironic] why is glance image service code so complex?

Pavlo Shchelokovskyy pshchelokovskyy at mirantis.com
Wed May 24 10:08:04 UTC 2017


Hi,

regarding #1: there are actually four methods there that are not used
anywhere in ironic - the ones to list images and to create/update/delete an
image in Glance.
The question is, do we consider those classes to be part of ironic's public
Python API? Are we safe to remove them right away, or should we go through
the standard deprecation process - log runtime warnings when they are used
in Pike (unfortunately it seems it won't be possible to issue a single
warning on conductor start) and remove them in Queens?
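
If we do go through deprecation, I imagine a per-call warning roughly like
this would do (a rough sketch only, with hypothetical method names, not the
actual ironic code):

    import warnings

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    _DEPRECATION_MSG = ('%s is unused by ironic, deprecated, and will be '
                        'removed in Queens.')

    def _warn_deprecated(name):
        # Emitted on every call, as there is no single good place to
        # warn just once at conductor start.
        warnings.warn(_DEPRECATION_MSG % name, DeprecationWarning)
        LOG.warning(_DEPRECATION_MSG, name)

    # then, inside each of the affected (hypothetical) methods:
    #     def create(self, image_meta, data=None):
    #         _warn_deprecated('GlanceImageService.create')
    #         ...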

I'd also like to add question #4:

In the image-related code we have special handling for the "glance://" URL
scheme. Is anyone still using that? Do we really have to support it, or can
we deprecate it as a recognized URL scheme for image_source?
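
As far as I can tell, that special handling mostly amounts to pulling the
image UUID out of such a URL before talking to Glance - something along
these lines (a simplified illustration with a hypothetical helper name,
not the actual ironic code):

    from urllib import parse as urlparse

    def image_id_from_source(image_source):
        # Accept both a bare image UUID and a glance://<uuid> URL
        # (simplified; the real code would also validate the UUID).
        url = urlparse.urlparse(image_source)
        if url.scheme == 'glance':
            # glance://<uuid> carries the UUID in the netloc,
            # glance:///<uuid> would carry it in the path.
            return url.netloc or url.path.lstrip('/')
        return image_source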

Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com

On Tue, May 23, 2017 at 7:11 PM, Dmitry Tantsur <dtantsur at redhat.com> wrote:

> On 05/23/2017 05:52 PM, Pavlo Shchelokovskyy wrote:
>
>> Hi all,
>>
>> I've started to dig through the part of the ironic code that deals with
>> glance, and I am confused by some things:
>>
>> 1) Glance image service classes have methods to create, update and delete
>> images. What's the use case behind them? Is ironic supposed to actively
>> manage images? Besides, these do not seem to be used anywhere else in the
>> ironic code.
>>
>
> Yeah, I don't think we upload anything to glance. We may upload stuff to
> Swift, but that's another story.
>
>
>> 2) Some parts of the code (and quite a handful of options in the [glance]
>> config section) AFAIU target a situation where both ironic and glance are
>> deployed standalone, possibly with multiple glance API services, so there
>> is no keystone catalog to discover the (load-balanced) glance endpoint
>> from. We even have our own round-robin implementation for those multiple
>> glance hosts o_0
>>
>> 3) Glance's direct_url handling - AFAIU this will only work if a single
>> conductor service and a single glance registry service with the simple
>> file backend are deployed on the same host (with appropriate file access
>> permissions between ironic and glance), and glance is configured to
>> actually expose direct_url for the image - very much a DevStack scenario
>> (and even then with non-standard settings).
>>
>> Do we actually have to support such narrow deployment scenarios as 2) and
>> 3)? While for 2) we probably should continue to support standalone Glance,
>> keeping our own implementation of round-robin load-balancing and retries
>> seems out of ironic's scope.
>>
>
> Yeah, I'd expect people to deploy HAProxy or something similar for
> load balancing. Not sure what you mean by retries though.
>
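
By retries I mean the logic that, when a request to one glance host fails,
moves on to the next configured host and tries again - conceptually
something like this (a minimal sketch, not the actual ironic code):

    import itertools

    def call_glance(hosts, do_request, num_retries=2):
        # Minimal sketch of client-side round-robin with retries:
        # cycle over the configured glance hosts and retry the request
        # on the next host whenever one of them fails.
        rotation = itertools.cycle(hosts)
        last_error = None
        for _ in range(num_retries + 1):
            host = next(rotation)
            try:
                return do_request(host)
            except IOError as exc:
                last_error = exc
        raise last_error

That is exactly the kind of thing a load balancer in front of Glance would
make redundant.
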
> Number 3, I suspect, is for simple all-in-one deployments. I don't
> remember the whole background, so I can't comment more.
>
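
For the record, my reading of the direct_url path is: if Glance exposes a
file:// direct_url and that file happens to be readable on the conductor
host, use it directly instead of downloading the image over the API.
Roughly (a simplified sketch with a hypothetical helper name, not the
actual ironic code):

    import os
    from urllib import parse as urlparse

    def local_path_from_direct_url(direct_url):
        # Only a file:// direct_url is usable, and only if the image
        # file is actually readable on the conductor host.
        url = urlparse.urlparse(direct_url)
        if url.scheme == 'file' and os.access(url.path, os.R_OK):
            return url.path
        return None
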
>
>> Most of those do seem to be legacy code crust from the nova-baremetal era,
>> but I might be missing something. I'm eager to hear your comments.
>>
>
> #1 and #2 probably. I'm fine with getting rid of them.
>
>
>> Cheers,
>>
>> Dr. Pavlo Shchelokovskyy
>> Senior Software Engineer
>> Mirantis Inc
>> www.mirantis.com
>>