[Glance] Zed PTG Summary

Abhishek Kekane akekane at redhat.com
Mon Apr 11 06:15:01 UTC 2022


Hi All,

We had our fifth virtual PTG from 4th April to 8th April 2022. Thanks to
everyone who joined the virtual PTG sessions. Using the BlueJeans app we had
lots of discussions around different topics for Glance, Glance + Cinder,
FIPS and Secure RBAC.

I have created an etherpad [1] with notes from the sessions, which also
includes the recordings of each discussion. Here is a short summary of the
discussions.

Tuesday, April 5th 2022

# Yoga Retrospective
On a positive note, we managed to complete all the work we targeted for
the Yoga cycle. In addition to that, we organized the first ever Glance
review party, where group reviews helped us cover our review load in the
final milestone. On the other side, we need to reorganize our bi-weekly bug
meeting and also improve our documentation and API references.

Recordings: https://bluejeans.com/s/kVeDPbu6kX2 - Chapter 1

# Cache API - New API to trigger periodic job
In Yoga we managed to put together new endpoints for the cache; the original
idea for this cycle was to add a new API to trigger the periodic job that
caches the images. Instead, we have decided to get rid of the periodic job
and add a new API to cache (pre-cache) the specified image(s) instantly,
rather than waiting for the next periodic run to pre-cache those images.
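
For illustration, a minimal sketch of how a client might trigger an
immediate pre-cache once such an API exists; the endpoint path below
follows the Yoga-style /v2/cache/{image_id} endpoints, and the exact path
and semantics are still to be decided, so treat this as an assumption:

    # Hypothetical sketch: ask glance-api to (pre-)cache an image right away.
    # The path and semantics are assumptions, not the final design.
    import requests

    GLANCE = "http://glance.example.com:9292"   # assumed endpoint
    TOKEN = "gAAAA..."                          # Keystone token placeholder

    def precache_image(image_id):
        resp = requests.put(
            f"{GLANCE}/v2/cache/{image_id}",
            headers={"X-Auth-Token": TOKEN},
        )
        resp.raise_for_status()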

Recordings: https://bluejeans.com/s/kVeDPbu6kX2 - Chapter 2

# Glance Cache improvements, restrict duplicate downloads
We discussed how to avoid downloading the same image into the cache multiple
times when several requests hit an uncached image at once.
Final design - https://review.opendev.org/c/openstack/glance-specs/+/734683
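
As a rough illustration of the general idea (not the final design in the
spec above), concurrent requests for an uncached image can be serialized
with a per-image lock so that only the first request downloads the image
while the others wait and then read it from the cache:

    # Illustrative only: serialize cache fills per image so concurrent
    # requests do not all download the same image from the backend store.
    import threading
    from collections import defaultdict

    _image_locks = defaultdict(threading.Lock)
    _cache = {}  # image_id -> image bytes (stand-in for the on-disk cache)

    def get_image(image_id, download_from_store):
        if image_id in _cache:              # fast path: already cached
            return _cache[image_id]
        with _image_locks[image_id]:        # only one downloader per image
            if image_id not in _cache:      # re-check after acquiring lock
                _cache[image_id] = download_from_store(image_id)
        return _cache[image_id]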

Recordings: https://bluejeans.com/s/kVeDPbu6kX2 - Chapter 3

# Distributed responsibilities among cores/team
From this cycle we have decided to follow a distributed leadership model
internally, which will help us train team members to take up the PTL
responsibilities in upcoming cycles. We have decided to distribute the
responsibilities below among ourselves this cycle.

Release management: pranali/abhishek
Bug management: cyril/abhishek
Meetings: pranali
Stable branch management: jokke
Cross project communication: abhishekk
Mailing lists: pranali/abhishekk
PTG/summit preparation: pranali/abhishekk
Vulnerability management: jokke
Infra management: abhishekk

Recordings: https://bluejeans.com/s/kVeDPbu6kX2 - Chapter 4

# Secure RBAC - System Scope
Unfortunately, due to the time crunch we were not able to find answers to a
few of our queries during this PTG, so we have decided to attend the open
office hours and policy pop-up meetings to get them sorted out. As per the
community goal, we should be enforcing the new RBAC policies from this cycle
and supporting system-admin. Once we get our doubts sorted out, we will
share more information.

Recordings: https://bluejeans.com/s/kVeDPbu6kX2 - Chapter 5

# Policy refactoring - Part 2
In Xena we managed to move all policy checks to the API layer. This cycle we
need to work on removing the dead code in the policy and authorization
layers, so we are going to ensure that the policy and authorization layers
are not used anywhere before removing them from the code base.

Recordings: https://bluejeans.com/s/kVeDPbu6kX2 - Chapter 6


Wednesday, April 6th 2022

# Proposal for moving away from onion architecture
This is a long-term goal; from this cycle we should start doing homework on
how we can squash two or more layers together, move away from the onion
architecture and arrive at a simpler, well-known MVC architecture in
upcoming cycles. This cycle we will mostly be working on finalizing the
detailed spec for this work.

Recordings: https://bluejeans.com/s/ZmURX2kOeJy - Chapter 1

# Import image from another region
When dealing with a multi-region cloud, operators or customers often need to
copy images from one region to another (public images or even remote backups
of instance snapshots). This is currently complicated to implement, as
customers need to save the image locally and then upload it to the new
region. We propose to rely mostly on the web-download code of Glance to
directly download an image from a remote Glance, calling this method
"glance-download". Note that this first version will require a federated
Keystone between all the Glance deployments in order to avoid authentication
problems (we will rely on the context token of the target Glance to
authenticate to the remote Glance).
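
To make the proposal a bit more concrete, the request could look roughly
like an image-import call with a new method; the method and field names
below ("glance-download", "glance_region", "glance_image_id") are only a
sketch of the idea, not an agreed API:

    # Hypothetical shape of an import call using the proposed method.
    # Field names are illustrative; the spec will define the real ones.
    import requests

    GLANCE = "http://region-two.example.com:9292"   # target (local) glance
    TOKEN = "gAAAA..."                              # federated Keystone token

    def import_from_remote_region(image_id, remote_region, remote_image_id):
        body = {
            "method": {
                "name": "glance-download",       # proposed import method
                "glance_region": remote_region,  # region to fetch from
                "glance_image_id": remote_image_id,
            }
        }
        resp = requests.post(
            f"{GLANCE}/v2/images/{image_id}/import",
            json=body,
            headers={"X-Auth-Token": TOKEN},
        )
        resp.raise_for_status()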

Recordings: https://bluejeans.com/s/ZmURX2kOeJy - Chapter 2

# Expanding stores-info detail for other stores
In Yoga we added a new API, ``stores-detail``, to expose the properties of
stores, but currently it only exposes details of the rbd store. We are
planning to extend it to expose properties of the other stores as well.
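
For reference, a small sketch of querying the detail endpoint; the
/v2/info/stores/detail path and the response fields below are my
recollection of the Yoga work, so treat them as assumptions:

    # Sketch: list configured stores and whatever per-store properties the
    # deployment exposes. Path and field names assumed from the Yoga API.
    import requests

    GLANCE = "http://glance.example.com:9292"   # assumed endpoint
    TOKEN = "gAAAA..."                          # Keystone token placeholder

    def list_store_details():
        resp = requests.get(
            f"{GLANCE}/v2/info/stores/detail",
            headers={"X-Auth-Token": TOKEN},
        )
        resp.raise_for_status()
        for store in resp.json().get("stores", []):
            print(store.get("id"), store.get("type"), store.get("properties"))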

Recordings: https://bluejeans.com/s/ZmURX2kOeJy - Chapter 3

# Discussion of property injection coherency between image import and
possible implementation in upload
Glance does support injecting certain properties into images created by
non-admin users via the inject-metadata import plugin, but the same is not
possible when we do not use the import workflow and instead create the image
the traditional way. This cycle we will be working on supporting metadata
injection while creating images using the upload workflow.
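
For context, the existing import-flow injection is driven by the
inject_image_metadata plugin in glance-image-import.conf, roughly along
these lines (option names recalled from the Glance docs, so verify them
against your release); the upload-path work would aim for the same effect
without going through the import flow:

    # glance-image-import.conf -- existing import-flow injection (roughly)
    [image_import_opts]
    image_import_plugins = ['inject_image_metadata']

    [inject_metadata_properties]
    ignore_user_roles = admin
    inject = "property1":"value1","property2":"value2"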

Recordings: https://bluejeans.com/s/ZmURX2kOeJy - Chapter 4


Thursday, April 7th 2022

# Refactor glance cinder store
Currently we have one file for the cinder store where the logic for handling
all cinder backend types (on the basis of protocol) lives. This makes the
code less readable and prone to mistakes, and even unit testing for code
coverage is difficult due to a lot of nested code. The idea of this proposal
is to divide the one big file into backend-specific files on the basis of
operations sharing the same interface. For example, the connect_volume call
is the same for most of the backends, except for remotefs-type drivers,
where it is handled in a custom way.
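
A rough sketch of the kind of split being proposed; the class and method
names are illustrative, not the actual glance_store cinder code:

    # Illustrative only: split the monolithic cinder store logic into
    # per-backend connector classes that share one interface.
    class BaseBackendConnector:
        """Default connect_volume path shared by most protocols."""

        def connect_volume(self, connection_info):
            # common path: attach via os-brick and return the device info
            # (the actual brick calls are omitted in this sketch)
            return {"path": connection_info.get("device_path", "/dev/sdX")}

    class RemoteFSConnector(BaseBackendConnector):
        """remotefs-type drivers attach in a custom, mount-based way."""

        def connect_volume(self, connection_info):
            export = connection_info["export"]       # e.g. an NFS share
            return {"path": "/mnt/glance/" + export.replace("/", "_")}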

Recordings: https://bluejeans.com/s/oiS49m8Pj_o - Chapter 1

# Native Image Encryption
Barbican updates:
- Microversion support is done
- Secret consumers will be implemented by milestone 1
- Luzi is interested in working on image encryption
- Glance will keep a watch and review the work if it is posted this cycle

# Default Glance to configure multiple stores
Glance has deprecated the single store configuration since the Stein cycle
and will now start putting in effort to deploy Glance using multistore by
default, and then remove single store support from Glance.
This is likely to take a couple of cycles, so this cycle we are going to
migrate our internal unit and functional tests to use the multistore config
and also modify devstack to deploy Glance using the multistore configuration
for swift and Ceph (for file and cinder it is already supported).
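
For deployers reading along, a multistore layout looks roughly like the
following in glance-api.conf (the store names and drivers here are just
examples):

    [DEFAULT]
    enabled_backends = fast:rbd, cheap:file

    [glance_store]
    default_backend = fast

    [fast]
    rbd_store_pool = images
    rbd_store_ceph_conf = /etc/ceph/ceph.conf

    [cheap]
    filesystem_store_datadir = /var/lib/glance/images/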

We need to notify the respective deployment teams (ansible/tripleo/ceph-admin)
about this work and the fact that we are moving away from the single store
configuration.

Recordings: https://bluejeans.com/s/oiS49m8Pj_o - Chapter 2

# FIPS overview
Path forward:
- Add an experimental/periodic FIPS job on CentOS 8 (it will run on master)
- CentOS 9 dependency fixes (tempest and swift changes)
- Once the dependencies merge, run the experimental/periodic job for CentOS
9 (giving enough time to verify that it is now stable)
- Once it is stable, move it to the check/gate queue
- Then backport the swift/tempest dependencies to stable branches
- Run the CentOS 9 FIPS job as periodic on stable branches
- Once stable, move those to the check/gate queues for the stable branches

Recordings: https://bluejeans.com/s/oiS49m8Pj_o - Chapter 3

# Cross project meet with Cinder

Discussion 1: New API to expose location information
We have OSSN-0065 describing the security risk of enabling the
``show_multiple_locations`` option, but this option is required for cinder
to perform certain optimizations when creating a volume from an image (in
the case of the cinder and RBD stores). The proposal is to create a new
admin-only API to provide the location of the image and avoid the
dependency on the config option.

We decided to write a spec describing the API design for the new locations
API (alternative: nova's approach of using an alternative endpoint and a
service role/token).

Discussion 2: Clone v2: RBD deferred deletion
Recently cinder started utilizing Ceph clone v2 support for its RBD backend.
Since then, if you attempt to delete an image from Glance that has a
dependent volume, all future uses of that image will fail in an error state,
despite the fact that the image itself is still inside Ceph/Glance. This
issue is reproducible if you are using a ceph client version newer than
'luminous'.

We decided to fix things on the cinder side and see how we can fix Glance
using the same techniques (and also document this, since customers face
these issues all the time).

Recordings: https://bluejeans.com/s/efAqf0e5RDQ - Chapter 2

Friday, April 8th 2022

# Image Export with metadata
Especially if we implement the glance-download discussed on Wednesday, it
might be worth exploring my old idea of image export, which would bundle
the image metadata together with the image payload itself for easier import
into another Glance environment. This will need bits of work on both sides
of the process: the source will need to be able to embed the metadata at
the end of the data stream sent to the client, and the receiving end will
need to understand how to pick up and parse that data. To make transferring
the image easier (especially RAW images from all of our Ceph deployments),
the original image payload should be compressed on the fly when sent to the
client, and the metadata can be added after the compression stream is
closed. This way the image can be brought into older Glance deployments
too, as long as they support the decompression; if a deployment does not
know to look for the metadata, that part will simply be ignored.
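
A toy sketch of the packing idea; the marker and trailer layout here are
purely illustrative, not a proposed format:

    # Toy illustration of the export idea: gzip the image payload, then
    # append a metadata trailer after the compressed stream. A receiver
    # that knows the format strips and parses the trailer before
    # decompressing the image part.
    import gzip, json, struct

    MARKER = b"GLANCE-EXPORT-META"   # hypothetical trailer marker

    def export_bundle(image_bytes, metadata):
        payload = gzip.compress(image_bytes)
        meta = json.dumps(metadata).encode("utf-8")
        return payload + MARKER + struct.pack(">I", len(meta)) + meta

    def import_bundle(blob):
        idx = blob.rfind(MARKER)   # naive scan; a real format would be smarter
        if idx == -1:
            return gzip.decompress(blob), None    # no trailer: plain image
        offset = idx + len(MARKER)
        (length,) = struct.unpack(">I", blob[offset:offset + 4])
        meta = json.loads(blob[offset + 4:offset + 4 + length])
        return gzip.decompress(blob[:idx]), meta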

Recordings: https://bluejeans.com/s/C7F5pDPR_v4 - Chapter 1


You will find detailed information about all of the above in the PTG
etherpad [1], along with the recordings of the sessions and the
milestone-wise priorities at the bottom of the etherpad. Kindly let me know
if you have any questions.

[1] https://etherpad.opendev.org/p/zed-glance-ptg

Thank you,

Abhishek