[Glance] Yoga PTG Summary

Rajat Dhasmana rdhasman at redhat.com
Mon Oct 25 04:59:10 UTC 2021


On Fri, Oct 22, 2021 at 8:27 PM Abhishek Kekane <akekane at redhat.com> wrote:

> Hi All,
>
> We had our fourth virtual PTG from 18th October to 22nd October 2021.
> Thanks to everyone who joined the virtual PTG sessions. Using bluejeans app
> we had lots of discussion around different topics for glance, glance +
> cinder and Secure RBAC.
>
> I have created an etherpad [1] with notes from the sessions, which also
> includes the recordings of each discussion. Here is a short summary of
> the discussions.
>
> Tuesday, October 19th 2021
> # Xena Retrospective
>
> On a positive note, we merged a number of useful features this cycle. We
> managed to implement project-scoped secure RBAC for the metadef APIs,
> implemented quotas using unified limits, and moved policy enforcement
> closer to the API layer.
> We also managed to wipe out many bugs from our bug backlog. On the other
> hand, we need to improve our documentation and API reference guide.
>
> Recording: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 1
>
> # Cache API
> During the Xena cycle we managed to start this work and implement the
> core functionality, but failed to merge it due to a lack of negative
> tests and tempest coverage. In the Yoga cycle we are going to focus on
> adding tempest coverage for it, along with a new API to cache a given
> image immediately rather than waiting for the periodic job to pre-cache
> it for us.
>
> Recordings: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 2
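
For anyone curious what "cache this image now" might look like from a
client, here is a minimal sketch; the endpoint path is my assumption, not
the final API:

    import requests

    glance = "http://controller:9292/v2"        # assumed glance endpoint
    headers = {"X-Auth-Token": "<keystone token>"}
    image_id = "<image uuid>"

    # Ask glance to cache this image right away instead of waiting for
    # the periodic pre-cache job (the path here is hypothetical).
    resp = requests.put(f"{glance}/cache/{image_id}", headers=headers)
    resp.raise_for_status()
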
>
> # Native Image Encryption
> Unfortunately this topic has been sitting on our agenda for the last
> couple of PTGs. The current status is that the core feature depends on
> the Barbican microversion work; once that is complete, the Consumer API
> can become functional again. For now we have decided to go ahead and
> implement the glance-side part, and instead of keeping the placeholder
> (barbican consumer API secret registration and deletion) code commented
> out, we can have a WIP patch for it that depends on the glance-side work.
>
> Recordings: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 3
>
> # Default Glance to configure multiple stores
> Glance has had the single-store configuration deprecated since the Stein
> cycle, and will now start putting effort into deploying glance with
> multistore by default and then removing single-store support from glance.
> This is likely to take a couple of cycles, so in Yoga we are going to
> migrate the internal unit and functional tests to use the multistore
> config, and also modify devstack to deploy glance using the multistore
> configuration for swift and Ceph (for file and cinder it is already
> supported).
>
> Recordings: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 4
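
For reference, a multistore glance-api.conf looks roughly like this (the
store names are arbitrary labels I picked; only the option names are real):

    [DEFAULT]
    enabled_backends = ceph1:rbd, swift1:swift

    [glance_store]
    default_backend = ceph1

    [ceph1]
    rbd_store_pool = images
    rbd_store_ceph_conf = /etc/ceph/ceph.conf

    [swift1]
    swift_store_container = glance
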
>
> # Quotas Usage API
> In Xena we implemented quotas for the images API using unified limits.
> This cycle we will add new APIs which will help end users get a clear
> picture of their quotas: the total quota, the used quota, and the
> remaining quota. We will first come up with a spec for the design and
> then the implementation.
>
> Recordings: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 5
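
Since the design is still at the spec stage, the shape below is only a
guess at what such a usage response could look like, with made-up field
names:

    {
        "image_size_total": {"limit": 1000, "usage": 250},
        "image_count_total": {"limit": 100, "usage": 12}
    }
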
>
>
> Wednesday, October 20th 2021
>
> # Policy Refactoring - Part 2
> In Xena we managed to move all policy checks to the API layer. This
> cycle we need to work on removing the now-dead code in the policy and
> authorization layers.
> Before removing both layers we need to make sure that property
> protection keeps working as expected; for that we need to add a job with
> a post script to verify that removing the auth and policy layers will
> not break property protection.
>
> Recordings: https://bluejeans.com/s/AMZzGObPhK4 - Chapter 1
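
For context, property protection is driven by a config file that maps
property-name patterns to the roles allowed to act on them, along these
lines (a minimal sketch; the pattern and roles are just examples):

    # /etc/glance/property-protections.conf
    # (with property_protection_rule_format = roles)
    [^x_owner_.*]
    create = admin,member
    read = admin,member
    update = admin
    delete = admin
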
>
> # Database read only checks
> We have added new RBAC policy checks which are equivalent to the
> read-only checks in our db layer, e.g. the image ownership check,
> visibility check, etc.
> To kick-start this work, dansmith (thanks for volunteering) will work on
> a PoC and abhishekk will work on a spec describing how we will
> modify/improve our db layer.
>
> # Secure RBAC - System Scope/Project Admin scope
> In Xena we managed to move all policy checks to the API layer and
> implemented project scope for the metadef APIs, so as of now we have
> project scope for all glance APIs. During this discussion the Security
> Team updated us that discussions are still ongoing about how the system
> scope should be used/implemented, and that they are planning to
> introduce a new role, 'manager', which will sit between the 'admin' and
> 'member' roles. We need to keep an eye on this new development.
>
> https://etherpad.opendev.org/p/tc-yoga-ptg - line #446
> https://etherpad.opendev.org/p/policy-popup-yoga-ptg - line #122
>
> Recordings: https://bluejeans.com/s/AMZzGObPhK4 - Chapter 3
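
To make the role discussion concrete, here is a sketch of how a 'manager'
role could slot between 'admin' and 'member' in an oslo.policy file; the
rule strings are illustrative only, since the role is still being designed:

    # policy.yaml (illustrative, not the real glance defaults)
    "publicize_image": "role:admin"
    "modify_image": "role:manager and project_id:%(project_id)s"
    "add_image": "role:member and project_id:%(project_id)s"
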
>
>
> Thursday, October 21st 2021
>
> # Glance- Interop interlock
> Due to confusion about the timing, this discussion didn't happen as
> planned. The Interop team later added some questions to the PTG etherpad
> (refer to line #270). @Glance team, please respond to those questions,
> as I will be out for the next couple of weeks.
>
> # Upload volume to image in RBD backend
> When we upload a volume as an image to glance's rbd backend, glance
> starts with a 0-sized rbd image and resizes it in chunks of 8 MB, which
> makes the operation very slow. The current idea is to pass the volume
> size as the image size to avoid these resize operations. Since this
> change requires using the locations API, which is not advisable in
> glance due to security concerns, and since the checksum and multi-hash
> will not be available, the cinder side will get a new config option
> (default False) to enable this optimization when set to True. This
> change also needs glance-side work to expose the ceph pool information,
> so Rajat (whoami-rajat) will coordinate with the glance team to write a
> spec and implement it.
>

Just wanted to update the topic summary with a few details.
Currently the RBD store uses a faster mechanism that doubles the resize
chunk each time, i.e. the image starts at 8 MB, then grows to 16 MB, 32
MB, 64 MB, 128 MB..., so it takes 7 resizes to reach a size of 1 GB. This
is an improvement, but the whole volume is still copied chunk by chunk.
The current discussion is for cinder to use RBD COW cloning instead,
which is significantly faster than the generic approach, as can be seen
in my performance testing here[1]. Further details can be found in the
spec.
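
To put numbers on the doubling behaviour, a quick back-of-the-envelope
sketch (plain Python, not the actual store code):

    def resizes_needed(final_mb, initial_mb=8):
        """Count resize calls when the image size doubles each round."""
        size, count = initial_mb, 0
        while size < final_mb:
            size *= 2
            count += 1
        return count

    print(resizes_needed(1024))    # -> 7 resizes to reach 1 GB
    print(resizes_needed(102400))  # -> 14 for a 100 GB volume

A COW clone, by contrast, needs no data copy at all, which is where the
speedup in [1] comes from.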

Thanks for the summary Abhishek!

[1]
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_131/810363/6/check/openstack-tox-docs/131e18d/docs/specs/yoga/optimize-upload-volume-to-rbd-store.html#performance-impact


>
> # Upload volume to image when glance backend is cinder
> Similar to the above topic, this has the same security concerns, and the
> same config option can be used to make this optimization available.
> Rajat will coordinate with the glance team to implement the glance-side
> changes.
>
> Note: as these two sessions were held jointly with the cinder team,
> Brian Rosmaita/Rajat will update/share the recording links for them in
> the glance PTG etherpad once they are available.
>
> # Adding multiple tags overrides existing tags
> We are going to modify the create-multiple-tags API to take one boolean
> parameter (defaulting to False) in the header, to maintain backward
> compatibility. If it is True, we will append the new tags to the
> existing list of tags rather than replacing them.
>
> Recordings: https://bluejeans.com/s/@ttsNs8vIFq - Chapter 1
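
Assuming this refers to the metadef tags endpoint, the request could end
up looking something like this; the header name is purely a placeholder
for whatever the spec settles on:

    import requests

    glance = "http://controller:9292/v2"        # assumed endpoint
    namespace = "some-namespace"                # hypothetical namespace
    headers = {"X-Auth-Token": "<keystone token>",
               "X-Openstack-Append": "true"}    # header name is made up

    # With the (hypothetical) append flag set, tag-3 and tag-4 would be
    # added to the namespace's existing tags; with the default (false),
    # the posted list replaces the existing tags, as it does today.
    resp = requests.post(f"{glance}/metadefs/namespaces/{namespace}/tags",
                         headers=headers,
                         json={"tags": [{"name": "tag-3"},
                                        {"name": "tag-4"}]})
    resp.raise_for_status()
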
>
> # Delete newly created metadef resource types from DB after deassociating
> To maintain consistency with the other metadef APIs, we are going to add
> two new APIs:
> 1. Create resource types
> 2. Delete a given resource type
>
> Recordings: https://bluejeans.com/s/@ttsNs8vIFq - Chapter 2
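
Going by the existing metadef URL style, the two new calls might look
roughly like this; both paths and the body are assumptions until the
change is actually written:

    import requests

    glance = "http://controller:9292/v2"        # assumed endpoint
    headers = {"X-Auth-Token": "<keystone token>"}

    # 1. Create a resource type (path and body are guesses)
    requests.post(f"{glance}/metadefs/resource_types",
                  headers=headers, json={"name": "OS::Custom::Widget"})

    # 2. Delete the given resource type (path is a guess)
    requests.delete(f"{glance}/metadefs/resource_types/OS::Custom::Widget",
                    headers=headers)
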
>
> You will find detailed information about all of the above in the PTG
> etherpad [1], along with the recordings of the sessions. Kindly let me
> know if you have any questions.
>
> [1] https://etherpad.opendev.org/p/yoga-glance-ptg
>
> Thank you,
>
> Abhishek
>

