# Enhance PoolWeigher to respect current share network
The PoolWeigher does not take the per-share-network allocation of share servers into consideration.
As a result, the weigher takes one of these approaches:
- Stack: keep creating shares in one pool until it is full, even if a newly added pool is available
- Spread: create shares in the pool with fewer shares
Instead of checking whether the pool already hosts any share server, the weigher should check whether the pool already hosts a share server belonging to the request's share network.
AIs:
- carthaca will create a launchpad blueprint with these notes for permanence.
- Enhance the poolweigher to consider the share network in the request.
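The proposed weighing logic could look roughly like the sketch below. The `host_state`/`share_request` structures and the weight values are illustrative assumptions, not Manila's actual scheduler API:

```python
# A minimal sketch of the proposed weighing logic: prefer pools that already
# host a share server for the request's share network, instead of preferring
# pools with any share server at all. Data structures are illustrative.
def pool_weight(host_state, share_request):
    requested_network = share_request["share_network_id"]
    for server in host_state["share_servers"]:
        if server["share_network_id"] == requested_network:
            # Reusing the existing share server avoids creating a new one.
            return 1.0
    # No matching share server on this pool; other pools weigh higher.
    return 0.0
```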
# NetApp Snaplock/WORM feature
An RFE has been filed for supporting WORM shares [2].
Retention time should be configurable; this could be implemented through extra specs or through new APIs for such shares.
AIs:
- NetApp/SAP will ask around whether other vendors have the same feature.
- If they don't, we should take the extra spec route and add NetApp-specific metadata keys.
# Cross-service-request-id + Enhancements to logging
We need to enable x-openstack-request-id, and use it when making cross service requests (to cinder/glance/neutron) for better tracking in the cross-service logs.
We could have an implementation in Manila similar to what is available in Nova and Cinder [3].
This could become an internship theme.
AIs:
- carthaca will report a bug to document this request
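The mechanics are roughly the following sketch. The `req-<uuid>` format and the `X-OpenStack-Request-ID` header are the real conventions used by oslo.middleware; the helper functions themselves are illustrative, not Manila's code:

```python
import uuid

# Illustrative sketch: generate a request ID in the "req-<uuid>" format and
# attach it to outgoing cross-service calls so that manila, cinder, glance
# and neutron logs can be correlated.
def make_request_id():
    return "req-" + str(uuid.uuid4())

def cross_service_headers(incoming_request_id=None):
    # Propagate the caller's request id if one was received; otherwise
    # generate a fresh one for this request chain.
    rid = incoming_request_id or make_request_id()
    return {"X-OpenStack-Request-ID": rid}
```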
# Backend driver operations based on resource metadata update API
Metadata is currently used as tags referring to database resources; when metadata is updated, we do not check whether the share backend needs to update anything on the share.
The backends should be notified of metadata modifications in case something needs to be updated on the shares.
Examples of metadata changes that should be reflected to backend drivers: snapshot policy, show-mount option.
AIs:
- kpdev will document the proposed new behavior into a specification
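A hypothetical sketch of the behavior the spec would describe: when metadata changes, the driver is notified and acts only on keys it cares about, while everything else stays a DB-only tag. The method and key names are illustrative, not Manila's actual driver interface:

```python
# Illustrative driver hook for metadata updates. DRIVER_HANDLED_KEYS and
# update_share_metadata are assumed names, not Manila's real interface.
class ShareDriver:
    # Keys this driver acts on; all other metadata remains DB-only tags.
    DRIVER_HANDLED_KEYS = {"snapshot_policy", "mount_options"}

    def update_share_metadata(self, share, metadata):
        handled = {k: v for k, v in metadata.items()
                   if k in self.DRIVER_HANDLED_KEYS}
        for key, value in handled.items():
            self._apply_backend_setting(share, key, value)
        return handled

    def _apply_backend_setting(self, share, key, value):
        pass  # backend-specific work would happen here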
# Technical Debt - SRBAC
gouthamr talked us through all of the work that is complete and the next steps for the coming cycles.
AIs:
- Increase test coverage.
- Set "[oslo_policy]/enforce_new_defaults=True" as a default and see if it works.
- Introduce the manager role for operations such as resetting state, promoting out-of-sync replicas, and force-deleting resources.
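The second AI amounts to flipping the oslo.policy toggle in manila.conf; a sketch of the configuration under evaluation (enforce_scope is shown alongside because deployments usually evaluate the two together, and the exact rollout is still to be decided):

```ini
[oslo_policy]
# Candidate default under evaluation: enforce the new RBAC defaults.
enforce_new_defaults = True
# Often evaluated together with the new defaults.
enforce_scope = True
```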
# Retrospective on internship projects
We discussed what we have been working on with the ~10 interns we had in Manila at the end of the Caracal cycle, as well as our plans for Dalmatian.
There are also some mentoring opportunities in case people are interested. Please reach out :)
# All things CephFS
ashrodri talked us through the updates from the previous cycle and all the testing she has been doing with CI and enhancing the CI jobs, as well as introducing the ingress daemon service and so on.
We are hitting some package issues for testing, and we are trying to figure those out.
On new features we are planning:
- Manage/unmanage implementation in the CephFS driver
- This is already available in other drivers, but we still need to implement it in the CephFS driver. We expect to be able to start on this work during the Dalmatian cycle.
- Impact of the new ensure shares API in the CephFS driver
- The ensure-shares work originates from a request from CERN: their Ceph Mon IPs changed, and they needed to restart the services to reconstruct the export locations.
- It ended up becoming a feature that will be useful to many people, and we are trying to make it as generic as possible, since it can also help with backend migrations.
- A spec is being worked on by carloss [4] and we expect to complete this work during the Dalmatian cycle.
- DHSS=True implementation with the CephFS driver.
- We received several requests to have DHSS=True available for the CephFS driver, and we are willing to start on this work now that another request has shown up [5].
- AI: gouthamr will propose a spec to document how it will work.
- NFSv3 known issue
- NFSv3 has a large number of deficiencies that NFS-Ganesha can't fill - there are external services like rpcbind, statd that handle portions of the NFS protocol that are natively supported in NFSv4.1+
- However, Microsoft Windows doesn't have a native NFSv4.1+ client, which means Windows guests can't mount CephFS/NFS shares - until now.
- With Ceph Reef, the NFS service can enable both v3 and v4 export rules, which means a share can be mounted with NFSv3 or NFSv4.1+.
- It isn't advised to mount with both protocols at the same time, nor to use NFSv3 outside of Windows, because there is no recovery if the NFS server fails over.
- Async mirroring
- This feature will use cephfs mirroring [6], which sounds like share replication but it has some differences.
- It works similarly to the way it was designed in Cinder [7].
- AI: ashrodri will work on a blueprint to track this effort.
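The Reef-era protocol enablement described above corresponds roughly to an NFS export spec like the following. This is a hedged sketch for `ceph nfs export apply`; the paths, names, and exact field set here are illustrative, not taken from a real deployment:

```json
{
  "pseudo": "/myshare",
  "path": "/volumes/_nogroup/myshare",
  "access_type": "RW",
  "protocols": [3, 4],
  "fsal": {"name": "CEPH", "fs_name": "cephfs"}
}
```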
# Backup APIs
Backups have been available for a few releases, and NetApp + SAP are planning some enhancements; they would like to introduce a new backup type entity in Manila, similar to share types.
We agreed that the backup type properties should reuse the metadata mechanism, for code and UX reuse, so we will also end up with backup type metadata.
Migration can be a concern, as backup types are currently configuration blocks in the manila.conf file; we will rely on administrators to migrate them properly, or we may even provide a script for that migration.
AIs:
- NetApp will work on a specification for the backup updates
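A hypothetical sketch of what a backup type entity modeled after share types might look like; the class, field, and key names here are illustrative guesses ahead of the spec, not a proposed design:

```python
# Hypothetical backup type entity: a name plus a metadata dict of
# driver-facing properties, reusing the generic metadata mechanism the
# way share types do. All names are illustrative.
class BackupType:
    def __init__(self, name, metadata=None):
        self.name = name
        self.metadata = dict(metadata or {})

    def update_metadata(self, updates):
        # Generic metadata mechanism: merge key/value updates in place.
        self.metadata.update(updates)
```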
# Backup driver implementation with CBACK
CERN is working on adding a driver to Manila that integrates with their backup solution (CBACK), which uses Ceph storage as the backup backend.
Zach showed us some findings during the implementation and talked about his plans and some challenges.
The plan is to introduce the new backup driver during the Dalmatian cycle and also start testing on CI with it.
# Bug management stats and cycle reports
vhari shared the bug stats after the previous cycle and how encouraging they are.
We reflected on the data and had a suggestion to expand our bugsquash events to two per cycle, which could help us reduce the bug backlog more quickly.
We had great stats on reducing doc bugs backlog with the help of a bugsquash late in the previous cycle.
We also talked about possible role rotation and discussed what it means to be the Manila bug representative.
# Provide support for efficiency policy on NetApp shares
NetApp is figuring out a way to determine efficiency policies individually during share creation.
They want to implement this by introducing new share type extra specs to configure such things.
DHSS=True is still being discussed, and this feature will be worked on for DHSS=False first. NetApp intends to implement this feature during the Dalmatian cycle.
# Technical Debt - sqlalchemy 2.0
Thanks to stephenfin and zzzeek, all changes for SQLAlchemy 2.0 were merged at the beginning of the Dalmatian cycle, as some issues showed up in the process that we could not figure out in time for Caracal.
We will be backporting the changes that didn't make it into Caracal, so SQLAlchemy 2.0 will also be supported there by Manila. There are no current plans to backport further.
We also need to evaluate moving from backref to back_populates.
AIs:
- Complete the backport of the changes to 2024.1 caracal
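The backref cleanup mentioned above looks like this in practice. A minimal, self-contained sketch in SQLAlchemy 2.0 style; `Share`/`ShareInstance` are simplified illustrations, not Manila's actual models:

```python
# backref -> back_populates: both sides of the relationship are spelled out
# explicitly, which SQLAlchemy 2.0 prefers. Models are illustrative only.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Share(Base):
    __tablename__ = "shares"
    id = Column(Integer, primary_key=True)
    name = Column(String(255))
    # Before: instances = relationship("ShareInstance", backref="share")
    instances = relationship("ShareInstance", back_populates="share")

class ShareInstance(Base):
    __tablename__ = "share_instances"
    id = Column(Integer, primary_key=True)
    share_id = Column(Integer, ForeignKey("shares.id"))
    share = relationship("Share", back_populates="instances")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Share(name="demo", instances=[ShareInstance()]))
    session.commit()
```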
# Add response schema validation
The SDK team would like to generate OpenAPI schemas stored in-tree, to ensure the schemas are complete and up to date, to avoid landing another large deliverable on the SDK team, and to allow service teams to fix their own issues.
OpenAPI 3.1 is a superset of JSONSchema, which means we can use the same tooling we currently use for this.
AIs:
- stephenfin will work on a spec and on bootstrapping the effort, so that more people can pick it up.
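Since OpenAPI 3.1 schemas are compatible with JSON Schema, a response schema can be validated with the same tooling already used for request validation. A sketch with simplified example fields, not Manila's real schema:

```python
# Illustrative response-schema validation using the jsonschema library.
# SHARE_RESPONSE_SCHEMA is a simplified example, not Manila's actual schema.
import jsonschema

SHARE_RESPONSE_SCHEMA = {
    "type": "object",
    "properties": {
        "id": {"type": "string"},
        "size": {"type": "integer", "minimum": 1},
        "status": {"type": "string"},
    },
    "required": ["id", "size", "status"],
    "additionalProperties": False,
}

def validate_response(body):
    # Raises jsonschema.exceptions.ValidationError if the response drifts
    # from the declared schema.
    jsonschema.validate(body, SHARE_RESPONSE_SCHEMA)
    return True
```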
# Shares encryption
We had a brief discussion about the design of this feature for Manila; the spec is already merged.
kpdev has already proposed two patches for this change [8].
AIs:
- Schedule a collaborative review early in the cycle so we can start reviewing the patches as early as possible.
# DB migration script squash
tkajinam brought to our attention during the previous cycle that there is room for improvement in our migration scripts: there are 70+ of them now, and we could squash them.
We are also considering automated ways to do this with sqlalchemy.
AIs:
- gouthamr/tkajinam will reach out to zzzeek and stephenfin to ask whether there is an automated way to make this process easier.
If you have questions, please feel free to reach out.
Looking forward to a very productive release cycle with you!
Regards,
carloss