Hello everyone, thanks for a very productive PTG for the Dalmatian
cycle. Below is a brief summary of the highlights. The raw etherpad
notes are here[1] and both the summary and transcribed etherpad notes
are captured in our wiki here[2].
# Improve driver documentation
Our driver interface can be confusing when proposing a new driver or
updating an existing one. We agreed on an effort to migrate class
documentation to the interface definitions, make clear which methods are
mandatory and which are optional for drivers, and maintain this
documentation going forward.
# User visible information in volume types
Volume type metadata and extra specs are not visible to users, making it
difficult to ascertain whether a volume type provides, for example,
encryption or replication. Agreement was reached on metadata fields similar to
Glance's metadefs, allowing drivers to report capabilities in a standard
way and allowing that metadata to become visible to admins and users.
# Optional backup driver dependencies
Although volume driver dependencies are optional, those of our backup
drivers are listed in requirements and are therefore always installed
irrespective of deployment configuration. Agreement was reached to move
these dependencies to driver-requirements. This should simplify the work
of both deployers and packagers.
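As a sketch of the idea (the file layout shown is illustrative; boto3
and google-api-python-client are examples of backup-driver
dependencies):

```text
# requirements.txt (before): backup driver deps installed for everyone
boto3                     # S3 backup driver
google-api-python-client  # GCS backup driver

# driver-requirements (after): installed only by deployments that
# actually configure the matching backup driver
boto3
google-api-python-client
```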
# Simplify deployments
A few ideas were proposed to improve our deployment:
* When cinder-volume is deployed in an active-active configuration, the
backend_host parameter must currently be set to the same value in every
backend section; we would like to remove this manual step.
* Use of predictable names to define tenants, instead of unique IDs.
* Have a dedicated quota section in our configuration for quota-related
options.
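To illustrate the first point, the duplication in an active-active
deployment currently looks roughly like this (backend section names are
hypothetical; backend_host is the real option that must be repeated):

```ini
# cinder.conf - hypothetical two-backend active-active sketch
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
backend_host = cinder-cluster-1

[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
backend_host = cinder-cluster-1   # must match every other backend
```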
# Response schema validation
A spec to add response schema validation in addition to our existing
request validation was proposed. With adequate coverage, clients in
various languages could (in theory) be autogenerated. No significant
objections were raised, and we agreed to review the related patches this
cycle.
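To illustrate the concept (a hand-rolled sketch only; Cinder's actual
validation is built on JSON Schema, and the volume-show schema below is
a simplified invention for this example), response schema validation
means checking what the API returns against a declared schema:

```python
# Minimal, illustrative sketch of response schema validation.

def validate(instance, schema, path="response"):
    """Recursively check `instance` against a tiny JSON-Schema-like dict."""
    expected = schema.get("type")
    if expected == "object":
        if not isinstance(instance, dict):
            raise TypeError(f"{path}: expected object")
        for key in schema.get("required", []):
            if key not in instance:
                raise ValueError(f"{path}: missing required field '{key}'")
        for key, subschema in schema.get("properties", {}).items():
            if key in instance:
                validate(instance[key], subschema, f"{path}.{key}")
    elif expected == "integer":
        # bool is a subclass of int in Python, so exclude it explicitly
        if not isinstance(instance, int) or isinstance(instance, bool):
            raise TypeError(f"{path}: expected integer")
    elif expected == "string":
        if not isinstance(instance, str):
            raise TypeError(f"{path}: expected string")

# Hypothetical, simplified schema for a volume-show response.
VOLUME_SHOW = {
    "type": "object",
    "required": ["volume"],
    "properties": {
        "volume": {
            "type": "object",
            "required": ["id", "size", "status"],
            "properties": {
                "id": {"type": "string"},
                "size": {"type": "integer"},
                "status": {"type": "string"},
            },
        }
    },
}

# A well-formed response passes silently; a malformed one raises.
validate({"volume": {"id": "v1", "size": 10, "status": "available"}},
         VOLUME_SHOW)
```

With coverage like this for every API response, a client generator can
consume the schemas instead of reverse-engineering the server.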
# Documentation of Ceph auth requirements
We do not provide comprehensive, easy-to-find documentation on exactly
what authentication/permission expectations exist between the different
services, leaving deployers to troubleshoot on their own rather than
rely on upstream best practices. There was agreement to improve this
situation in the current cycle.
# Cinder backup improvements
A number of bug fixes and performance improvements have stalled in the
review queue. We went through the major ones as a team to raise
awareness and unblock the remaining review requirements. See the wiki
notes[2] for specific details.
# Migrating backups between multiple backends
There is a desire to support multiple backup backends, where new backups
go to a new backend while backups in an old backend remain readable. We
want to avoid requiring a full backup in the new backend just to keep
incremental backups working (and the associated storage cost). A spec
will be proposed for review, and work towards this goal will proceed
during this cycle.
# Cross-project with Glance
In a cross-project collaboration with the Glance team, an improved
method for image migration was proposed. There was agreement to
introduce a new migration operation in Glance. Cinder context was
provided and consensus on a path forward was reached.
# Cross-project with Nova
In a cross-project collaboration with both Nova and Glance, the topic of
image encryption was discussed. Dan Smith from the Nova team provided
input from the Nova side. Cinder and Nova expect LUKS-formatted images,
but Glance currently supports GPG-encrypted images, requiring
re-encryption prior to use. It was noted that LUKS-encrypted images can
be created without root permission, and the Glance team is now looking
to drop GPG support and consolidate around LUKS as our unified format.
# Active-Active support for NetApp
A NetApp engineer raised questions about adding active-active support to
the driver. Questions were answered and that work should proceed in
this cycle.
# Performance of parallel clone operations
For the clone operation in Cinder, we use a single distributed lock to
prevent the source volume from being deleted mid-operation. This causes
multiple concurrent clone operations to block one another; under the
right conditions this can lead to significant performance degradation.
Multiple possible solutions were discussed (e.g. read-write locks), and
consensus was reached to track this state in the DB instead. Some
details remain unclear, so we are awaiting a spec before moving forward.
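For background on the read-write-lock idea that was discussed (a sketch
only, using an in-process lock for illustration; the agreed direction is
to track this state in the database, and a real fix would need a
distributed equivalent): clones of the same source only need shared
access, while delete needs exclusive access.

```python
import threading

class RWLock:
    """Minimal reader-writer lock sketch: many concurrent readers
    (clone operations sharing a source volume), one exclusive writer
    (a delete of that volume). Illustrative only."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writing = False

    def acquire_read(self):
        with self._cond:
            while self._writing:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writing or self._readers:
                self._cond.wait()
            self._writing = True

    def release_write(self):
        with self._cond:
            self._writing = False
            self._cond.notify_all()

# Two clones of the same source can hold the lock at the same time,
# where a single mutex would force them to run one after another.
lock = RWLock()
lock.acquire_read()
lock.acquire_read()   # second clone proceeds immediately
lock.release_read()
lock.release_read()
lock.acquire_write()  # delete now gets exclusive access
lock.release_write()
```

The DB-based approach the team settled on serves the same purpose
(delete must wait until no clone is using the source) without a
process-local or distributed lock object.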
# Volume encryption with user defined keys
Cinder does not currently support volume encryption with a user-provided
key. With such support, users could manage their own keys, and data
could be recovered even if keys were lost on the deployment side. There
are several technical challenges to supporting this; a number of these
hurdles were raised, and more thought and research are needed before we
have a spec that can be reviewed.
[1] https://etherpad.opendev.org/p/dalmatian-ptg-cinder
[2] https://wiki.openstack.org/wiki/CinderDalmatianPTGSummary
--
Jon