[ptg][glance] PTG summary

Abhishek Kekane akekane at redhat.com
Wed Nov 13 05:57:31 UTC 2019


Hi All,

I attended the Open Infrastructure Summit and PTG in Shanghai last week. It
was an interesting event with lots of discussion happening around different
OpenStack projects. I was mostly involved with Glance and cross-project
work related to Glance. There were also other topics around Edge, UI and QA.

During the summit, Erno and I gave a Glance project update where we discussed
what we achieved in the Train cycle and what we are going to do in the Ussuri
cycle.

As the multiple stores feature was stabilized during Train, the main focus of
Glance in Ussuri is on enhancing the /v2/import API to import a single image
into multiple stores and to copy existing images to multiple stores, which
avoids the manual effort currently required from operators to copy an image
across stores. A new delete API will also be added to delete an image from a
single store. The cinder driver of glance_store also needs refactoring so
that it can use multiple backends configured by Cinder. Efforts towards
cluster awareness of the Glance API will continue during this cycle as well.
Apart from this edge-related work, the Glance team will also work on removing
the deprecated registry and its related functional tests, removing the
sheepdog driver from glance_store, adding an s3 driver with multiple stores
support to glance_store, and some urgent bug fixes.
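
For those who have not followed the import work, here is a minimal sketch of
how the enhanced import call might look from a client's point of view. This
is illustrative only: the "stores" field, the store ids, and the single-store
delete URL are assumptions until the specs are approved.

    import requests

    GLANCE_ENDPOINT = "http://controller:9292"  # assumption: local glance-api
    HEADERS = {
        "X-Auth-Token": "<keystone-token>",
        "Content-Type": "application/json",
    }
    image_id = "8f2c3a1e-0000-0000-0000-000000000000"

    # Stage the image data first via the existing v2 flow
    # (PUT /v2/images/{image_id}/stage), then trigger the import.
    # The "stores" list is the Ussuri enhancement discussed above; the
    # exact field name is illustrative until the spec lands.
    body = {
        "method": {"name": "glance-direct"},
        "stores": ["ceph-fast", "file-slow"],  # hypothetical store ids
    }
    resp = requests.post(
        "%s/v2/images/%s/import" % (GLANCE_ENDPOINT, image_id),
        json=body, headers=HEADERS)
    resp.raise_for_status()

The planned delete-from-single-store API would address one store in the same
spirit, e.g. something like DELETE /v2/stores/<store_id>/<image_id>
(illustrative URL, pending the spec).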

Cross-Project work:
At this PTG we had discussions with Nova and Cinder regarding the adoption of
the multiple stores feature of Glance. Based on those discussions we have
finalized the design, and the Glance team will work together with Nova and
Cinder towards adding multiple store support during the Ussuri cycle.

Support for Glance multiple stores in Cinder:
As per the discussion, the volume-type will be used to specify which store
the image will be uploaded to during the upload-to-image operation. Cinder
will also send the base image id to Glance as a header, using which Glance
will upload the image created from the volume to all those stores in which
the base image is present.
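
A rough sketch of that flow from the Cinder side, assuming a volume-type
extra-spec names the target store and assuming hypothetical header names
(both are placeholders until the specs settle them):

    import requests

    GLANCE_ENDPOINT = "http://controller:9292"

    def upload_volume_to_image(token, image_id, volume_data, volume_type,
                               base_image_id):
        # The volume-type carries the target store; the extra-spec key
        # here is illustrative, not the agreed name.
        store = volume_type.get("extra_specs", {}).get("glance_store_id")
        headers = {
            "X-Auth-Token": token,
            "Content-Type": "application/octet-stream",
            # Hypothetical header: Glance would use it to find every store
            # that already holds the base image and mirror the new image
            # into those stores as well.
            "x-openstack-base-image-ref": base_image_id,
        }
        if store:
            # Hypothetical store-selector header for the initial upload.
            headers["X-Image-Meta-Store"] = store
        resp = requests.put(
            "%s/v2/images/%s/file" % (GLANCE_ENDPOINT, image_id),
            data=volume_data, headers=headers)
        resp.raise_for_status()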

Nova snapshots to dedicated store:
The agreement is that Nova will send the base image id to Glance as a header,
using which Glance will upload the instance snapshot to all those stores in
which the base image is present.
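
On the Glance side, the handling for both the Nova and Cinder cases would
boil down to the same lookup. A minimal sketch, assuming the 'stores' image
property (which lists the backends an image resides in on a multi-store
deployment) is the source of truth; the helper name is illustrative:

    def target_stores(base_image, default_store):
        # Return the stores a snapshot should be uploaded to: every store
        # that holds the base image, or the default store when the base
        # image is unknown or carries no store information.
        stores = (base_image or {}).get("stores")
        if stores:
            # 'stores' may arrive as a comma-separated string or a list.
            if isinstance(stores, str):
                return stores.split(",")
            return list(stores)
        return [default_store]

    # Example: a base image that lives in two stores.
    print(target_stores({"stores": "ceph-fast,file-slow"}, "file-slow"))
    # -> ['ceph-fast', 'file-slow']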

Talk with QA team:
The Glance team also talked with the QA team about adding tempest coverage
for the features added in the last couple of cycles. The Glance team will
work with tempest to add the new tests listed below.
1. New import workflow (image conversion, metadata injection, etc.) - depends
on the https://review.opendev.org/#/c/545483/ devstack patch
2. Hide old images
3. Multiple stores: https://review.opendev.org/#/c/689104/ in devstack
   3.1 Devstack patch + zuul job to set up multiple stores; the job will run
on the Glance gate and execute Glance API and scenario tests
4. Delete barbican secrets from glance images
   4.1 Add the tests in barbican-tempest-plugin
   4.2 Run as part of the barbican gate using their job
   4.3 Run those tests with the new (multiple stores) job on the Glance
gate; do not run the barbican job on Glance.

Below are the Ussuri cycle milestones and deadlines for Glance.

Ussuri milestone planning:

Ussuri U1 - December 09-13:
1. Import image into multiple stores (specs + implementation)
2. Copy existing image into multiple stores (specs + implementation)
3. S3 driver for glance_store
4. Remove sheepdog driver from glance_store
5. Fix subunit parser error
6. Modify existing Nova and Cinder specs

Ussuri U2 - February 10-14:
1. Cluster awareness of Glance API nodes
2. Remove registry code
3. Delete image from a single store
4. Nova and Cinder upload snapshot/volume to Glance
5. Fix image-import.conf parsing issue with uwsgi

Ussuri U3 - April 06-10:
1. Multiple cinder store support in glance_store (specs + implementation)
2. Creating image from volume using ceph (slow upload issue)
3. Image encryption
4. Tempest work

Glance PTG planning etherpad:
https://etherpad.openstack.org/p/Glance-Ussuri-PTG-planning

Let me know if you guys need more details on this.
Thanks & Best Regards,

Abhishek Kekane