[openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

Erno Kuvaja ekuvaja at redhat.com
Thu Aug 4 19:01:57 UTC 2016

On Thu, Aug 4, 2016 at 7:13 PM, Tim Bell <Tim.Bell at cern.ch> wrote:
>> On 04 Aug 2016, at 19:34, Erno Kuvaja <ekuvaja at redhat.com> wrote:
>> On Thu, Aug 4, 2016 at 5:20 PM, Clint Byrum <clint at fewbar.com> wrote:
>>> Excerpts from Tim Bell's message of 2016-08-04 15:55:48 +0000:
>>>> On 04 Aug 2016, at 17:27, Mikhail Fedosin <mfedosin at mirantis.com> wrote:
>>>>> Hi all,
>>>>> after 6 months of Glare v1 API development we have decided to continue
>>>>> our work in a separate project in the "openstack" namespace with its own
>>>>> core team (me, Kairat Kushaev, Darja Shkhray and the original creator -
>>>>> Alexander Tivelkov). We want to thank Glance community for their support
>>>>> during the incubation period, valuable advice and suggestions - this time
>>>>> was really productive for us. I believe that this step will allow the
>>>>> Glare project to concentrate on feature development and move forward
>>>>> faster. Having the independent service also removes inconsistencies
>>>>> in understanding what Glance project is: it seems that a single project
>>>>> cannot own two different APIs with partially overlapping functionality. So
>>>>> with the separation of Glare into a new project, Glance may continue its
>>>>> work on the OpenStack Images API, while Glare will become the reference
>>>>> implementation of the new OpenStack Artifacts API.
>>>> I would suggest looking at more than just the development process when
>>>> reflecting on this choice.
>>>> While it may allow more rapid development, doing it on your own will increase
>>>> costs for end users and operators in areas like packaging, configuration,
>>>> monitoring, quota … gaining critical mass in production for Glare will
>>>> be much more difficult if you are not building on the Glance install base.
>>> I have to agree with Tim here. I respect that it's difficult to build on
>>> top of Glance's API, rather than just start fresh. But, for operators,
>>> it's more services, more APIs to audit, and more complexity. For users,
>>> they'll now have two ways to upload software to their clouds, which is
>>> likely to result in a large portion just ignoring Glare even when it
>>> would be useful for them.
>>> What I'd hoped when Glare and Glance combined, was that there would be
>>> a single API that could be used for any software upload and listing. Is
>>> there any kind of retrospective or documentation somewhere that explains
>>> why that wasn't possible?
>> I was planning to leave this branch on its own, but I have to correct
>> something here. This split is not introducing a new API; it's moving
>> the new Artifacts API under its own project. There was no shared API
>> in the first place: Glare was already going to be its own service
>> within the Glance project. The Artifacts API also turned out to be
>> fundamentally incompatible with the Images APIs v1 & v2 due to
>> totally different requirements. And even though the option was
>> discussed in the community, I personally think replicating the
>> Images API, and carrying the cost of it living in two fundamentally
>> different services, would have been a huge mistake we would have
>> paid for for a long time. I'm not saying it would have been
>> impossible, but there is a lot of burden in the Images APIs that
>> Glare really does not need to carry yet we just can't get rid of,
>> and likely no one would have been happy to see an Images API v3
>> around the time when we are working super hard to get the v1 users
>> moving to v2.
>> Packaging glance-api, glance-registry and glare-api from the glance
>> repo would not change the effort much compared to two repos either.
>> If anything, it is easier when the logical split is clear from the
>> beginning.
>> As for Tim's statement, I do not see how Glare, as its own service
>> with its own API, could ride on the Glance install base apart from
>> the quite false mental image of these two things being the same and
>> based on the same code.
> To give a concrete use case, CERN have Glance deployed for images.  We are interested in the ecosystem
> around Murano and are actively using Heat.  We deploy using RDO with RPM packages, Puppet-OpenStack
> for configuration, a set of machines serving Glance in an HA setup across multiple data centres and various open source monitoring tools.
> The multitude of projects and the day-two maintenance scenarios with 11 independent projects is already a cost, and adding further to this cost for production deployments of OpenStack should not be ignored.
> By Glare choosing to go their own way, does this mean that

Let me give you concrete answers. Where the answer would differ if
Glare stayed as a Glance service, I will put it in parentheses;
otherwise the answer applies to both cases.
> - Can the existing RPM packaging for Glance be used to deploy Glare ? If there needs to be new packages defined, this is additional cost for the RDO team (and the equivalent .deb teams) or will the Glare team provide this ?

No (afaik we already ship glance-api and glance-registry in different
packages; I don't see why glare-api would be embedded into either of
those).

> - Can we use our existing templates for Glance for configuration management ? If there need to be new ones defined, this is additional work for the Chef and Ansible teams or will the Glare team provide this ?

No, glare-api.conf is an independent config file.
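For illustration, a standalone config could look roughly like the
sketch below; the option names, port and paths are assumptions based
on typical oslo.config-style OpenStack services, not authoritative
Glare defaults.

```ini
# /etc/glare/glare-api.conf -- hypothetical sketch; option names and
# values here are illustrative assumptions, not authoritative defaults.
[DEFAULT]
bind_host = 0.0.0.0
bind_port = 9494
debug = false

[glance_store]
# Backend store configuration, assuming Glare reuses glance_store.
stores = file
filesystem_store_datadir = /var/lib/glare/artifacts
```

Either way, it is a new file for the configuration management
templates to generate, separate from glance-api.conf.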

> - Log consolidation and parsing using the various OsOps tools for Glance is in place … a new project would need maintenance

glare-api logs as its own service, just like glance-api and
glance-registry do; the new service will need maintenance.

> - If new endpoints need to be defined, this is additional work for the operators to allocate appropriate endpoints and HAProxy tweaks

Glare operates on its own endpoint; firewall and/or any load balancer
work needs to be done.
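As a concrete illustration, fronting the new endpoint in HAProxy could
look roughly like this; the stanza name, port 9494 and server
addresses are assumptions for the sketch, not prescribed values.

```
listen glare_api
    # Hypothetical HAProxy stanza for the new Glare endpoint;
    # bind address, port and backend servers are illustrative only.
    bind 10.0.0.1:9494
    balance source
    option httpchk
    server glare01 192.0.2.11:9494 check inter 2000 rise 2 fall 5
    server glare02 192.0.2.12:9494 check inter 2000 rise 2 fall 5
```

A matching service and endpoint would also need to be registered in
the Keystone catalog.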

> - Additional database endpoints need to be defined, backed up, configured etc.

Yes they do. The requirement for Glare to use its own database was
brought up to the team a week or two ago {I couldn't find the logs};
the original plan was to utilize the glance database with dedicated
tables.
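In practice that means another connection string to define, back up
and monitor; a hedged sketch of the change, assuming standard oslo.db
options (names and credentials are illustrative):

```ini
# Original plan: Glare tables living inside the glance database.
# [database]
# connection = mysql+pymysql://glance:SECRET@db.example.org/glance

# After the split: a dedicated database, configured and backed up
# separately from Glance.
[database]
connection = mysql+pymysql://glare:SECRET@db.example.org/glare
max_retries = -1
```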

> - Install / Config guides are an effort, adding plugins to the OpenStack CLI, Horizon panels …

They absolutely are.

> I do not feel that the cost of the split for the consumers of OpenStack has been sufficiently considered. It may be the right decision, but basing it purely on development speed and effort ignores a significant set of stakeholders.

As a reiteration of the message I was trying to deliver in my
previous response, you might notice a common theme in my answers
above: all of that work is either under way or needs to be done
regardless. The one thing that actually might affect operators as a
result of this split is that the package name will likely be something
like "openstack-glare" rather than "openstack-glance-glare-api".

So the cost to the consumers is really not changing that much. This,
btw, is one of the reasons for what Mike was bringing up in his
initial e-mail about the inconsistencies in understanding what the
Glance project is.

Hope that helps,

> Tim
>> - Erno
>>> _________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
