[manila] Summary of the Victoria Cycle Project Technical Gathering
Goutham Pacha Ravi
gouthampravi at gmail.com
Wed Jun 17 05:29:18 UTC 2020
Hello Zorillas and other friendly animals of the OpenStack universe,
The manila project community met virtually between 1st June and 5th June
and discussed project plans for the Victoria cycle. The detailed meeting
notes are in [1] and the recordings were published [2]. A short summary of
the discussions and the action items is below:
*== Ussuri Retrospective ==*
- We lauded the work that Vida Haririan and Jason Grosso have put into
making bug tracking and triaging a whole lot easier and more systematic. At
the beginning of the cycle, we had roughly 250 bugs, and this was brought
down to under 130 by the end of the cycle. As a community, we acted upon
many multi-release bugs and made backports as appropriate. We've now
automated the expiry of invalid and incomplete bugs, thereby reducing
noise. Vida is our current Bug Czar and is interested in mentoring anyone
who wants to contribute to this role. Please reach out to her if you're
interested.
- We also had two successful Outreachy internships (Soledad
Kuczala/solkz, Maari Tamm/maaritamm) thanks to Outreachy, their sponsors,
mentors (Sofia Enriquez/enriquetaso, Victoria Martinez de la Cruz/vkmc) and
the OpenStack Foundation; and a successful Google Summer of Code internship
(Robert Vasek/gman0) - many thanks to the mentor (Tomas Smetana/tsmetana),
Google, Red Hat and other sponsoring organizations. The team learned a lot,
and vkmc encouraged all of us to consider submitting mentorship
applications for upcoming cycles and increasing our involvement. Through
the interns' collective efforts in the Train and Ussuri development cycles:
  - the manila CSI driver was built [3]
  - manilaclient now provides a plugin to the OpenStackClient
  - manila-ui has support for newer microversions of the manila API, and
  - manila documentation has gotten a whole lot better!
- We made good additions to the core team and want to continue mentoring
new contributors to become maintainers; folks also felt their PTL was doing
a good job (:D)
- The community loved the idea of PTL docs (thanks Kendall
Nelson/diablo_rojo) - a lot of tribal knowledge was documented for the
first time!
- We felt that "low-hanging-fruit" bugs [4] were lingering too long in
some cases and should have a "resolve-by" date. These bugs are earmarked
for new contributors, and if they turn out to be annoying issues, the team
may set a resolve-by date and close them out. However, we'll continue to
make a concerted effort to triage "bugs that are tolerable" with
nice-to-have fixes and keep them handy for anyone looking to make an
initial contribution.
*== Optimize query speed for share snapshots ==*
Haixin (haixin) discovered that not all APIs take advantage of filtering
and pagination via sqlalchemy. He has compiled a list of such APIs and
would like to work on them; the team agreed that this is a valuable bug
fix, and that it can be backported to Ussuri once the fixes land in this
cycle.
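To illustrate the kind of change being discussed, here's a minimal sketch
of pushing filtering and pagination into the sqlalchemy query instead of
trimming results in Python. This is not manila's actual DB API; the model,
columns and function name are assumptions for illustration only:

    # Illustrative only; not manila's real DB layer.
    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class ShareSnapshot(Base):
        __tablename__ = 'share_snapshots'
        id = sa.Column(sa.String(36), primary_key=True)
        project_id = sa.Column(sa.String(255))
        status = sa.Column(sa.String(255))


    def snapshot_get_all(session, project_id, status=None, limit=None,
                         offset=None):
        # Filter and paginate in the database query itself, rather than
        # fetching every row and trimming the result set in Python.
        query = session.query(ShareSnapshot).filter_by(project_id=project_id)
        if status is not None:
            query = query.filter_by(status=status)
        if limit is not None:
            query = query.limit(limit)
        if offset is not None:
            query = query.offset(offset)
        return query.all()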
*== TC Goals for Victoria cycle ==*
- We discussed a long list of items that were proposed for inclusion as
TC goals [5]. The TC has officially picked two of them for this cycle [6].
- For gating manila project repos, we make heavy use of "legacy" DSVM
jobs. We hadn't invested time and effort in converting these jobs in past
cycles; however, we now have a plan [7] and have already started porting
jobs to the "native" zuulv3 format in the manila-tempest-plugin repository.
Once these jobs are complete there, we'll switch to using them on the main
branch of manila. Older branches will get opportunistic updates beyond
milestone-2.
- Luigi Toscano (tosky) joined us for this discussion and asked about the
status of third party CI systems. The team hasn't mandated that third party
CI systems move their testing to zuulv3-native in this cycle. However, the
OpenStack community may drop support for devstack-gate in the Victoria
release, and making things work with it will only get harder - so third
party vendor systems that use the community testing infrastructure projects
(zuul, nodepool and devstack-gate) are strongly encouraged to move away
from devstack-gate in this cycle. One option for adopting zuulv3 in third
party CI systems could be the Software Factory project [8]. The RDO
community runs some third party jobs that vote on upstream OpenStack
projects, so they've created a wealth of jobs and documentation that can be
of help. Maintainers of this project hang out in #softwarefactory on
Freenode.
- All of the new zuulv3-style tempest jobs inherit from the
devstack-tempest job in the tempest repository, and changing the nodeset in
that parent affects all of our jobs as well - this will make the transition
to Ubuntu 20.04 LTS (Focal Fossa) easier.
*== Secure default policies and granular policies ==*
- Raildo Mascena (raildo) joined us and presented an overview of this
cross-community effort [9]
- Manila makes many assumptions about what project roles should be - and
over time, we seem to have blended the idea of a deployer administrator
with that of a project administrator - so there are inconsistencies where
even project-level administration requires excessive permissions across the
cloud. This is undesirable - so, the precursors to supporting the new
scoped policies from Keystone seem to be to:
  - eliminate hard-coded checks in the code that require an "admin" role
and switch to performing policy checks
  - eliminate empty policy defaults that allow anyone to execute an API
(manila has very few of these)
  - support a "reader" role across the APIs
- We can then re-calibrate the defaults to ensure a separation between
cross-tenant administration (system scope) and per-tenant administration -
following the work in oslo.policy and in keystone
- gouthamr will be leading this in the Victoria cycle - other
contributors are welcome to join this effort! A minimal illustration of the
granular policy style is included below.
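To make the direction above concrete, here's a minimal sketch of the
granular policy style this work implies, using oslo.policy's
DocumentedRuleDefault with a project-scoped "reader" default. The rule name
and check string are illustrative assumptions, not manila's final defaults:

    # Illustrative only; not manila's actual policy defaults.
    from oslo_policy import policy

    share_policies = [
        policy.DocumentedRuleDefault(
            name='share:get_all',
            # A project-scoped reader can list shares; the API code does a
            # policy check instead of a hard-coded "is admin" check.
            check_str='role:reader and project_id:%(project_id)s',
            description='List shares.',
            operations=[{'path': '/shares', 'method': 'GET'}],
            scope_types=['project'],
        ),
    ]


    def list_rules():
        return share_policies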
*== Oslo.privsep and other manila TODOs ==*
- We discussed another cross-community effort around transitioning all
sudo actions from rootwrap to privsep; a short sketch of the pattern
follows this list
- Currently no one in the manila team has the bandwidth to investigate
and commit to this effort, so we're happy to ask for help!
- If you are interested, please join us during one of the team meetings,
or start submitting patches and we can discuss the work with you via code
reviews.
- The team also compiled a list of backlog items in an etherpad [10].
These are great areas for new project contributors to help manila, so
please get in touch with us if you would like to work on any of these
items.
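For anyone curious about the rootwrap-to-privsep transition mentioned
above, here's a minimal sketch of the pattern. The context name,
capabilities and command are placeholders, not manila's actual privsep
setup:

    # Illustrative only.
    from oslo_concurrency import processutils
    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # A privileged context that grants only the capabilities it needs,
    # instead of running commands through sudo and a rootwrap filter.
    sys_admin_pctxt = priv_context.PrivContext(
        'manila',
        cfg_section='manila_sys_admin',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )


    @sys_admin_pctxt.entrypoint
    def mount(device, mountpoint):
        # Executes inside the privsep daemon with elevated privileges.
        processutils.execute('mount', device, mountpoint)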
*== OSC Status/Completion ==*
- Victoria Martinez de la Cruz and Maari Tamm compiled the status for
the completion of the OSC plugin work in manilaclient [11]
- There's still a lot of ground to cover to get complete parity with the
manila command line client, and we need contributors; a minimal example of
an OSC plugin command is included after this list
- Maari Tamm (maaritamm) will continue to work on this as time permits.
Spyros Trigazis (strigazi) and his team at CERN are interested in working
on this as well. Thank you, Maari and Spyros!
- On Friday, we were joined by Artem Goncharov (gtema) to discuss the
issue of "common commands"
- quotas, services, availability zones and limits are common concepts
that apply to other projects as well
- OSC has support to show you these resources for compute, volume and
networking
- gtema suggested that we approach this via the OpenStackSDK rather than
the plugin, since plugin loading is slow as it is, and adding anything more
to that interface is not desirable at the moment
- There's planned work in the OpenStackClient project to improve the
plugin loading mechanisms and make things faster
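As a rough illustration of what porting a command involves, here's a
minimal, hypothetical OSC plugin command built on osc-lib. The argument,
columns and the "share" client attribute are assumptions, not the exact
manilaclient implementation:

    # Illustrative only.
    from osc_lib.command import command


    class ListShares(command.Lister):
        """List shares."""

        def get_parser(self, prog_name):
            parser = super().get_parser(prog_name)
            parser.add_argument('--name', metavar='<name>',
                                help='Filter shares by name.')
            return parser

        def take_action(self, parsed_args):
            # The plugin registers a "share" client with the client manager.
            share_client = self.app.client_manager.share
            shares = share_client.shares.list(
                search_opts={'name': parsed_args.name})
            columns = ('ID', 'Name', 'Size', 'Share Proto', 'Status')
            return (columns,
                    ((s.id, s.name, s.size, s.share_proto, s.status)
                     for s in shares))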
*== Graduation of Experimental Features ==*
- Last cycle, Carlos Eduardo (carloss) committed the work to graduate the
Share Groups APIs from their "Experimental API" status
- We have two more features behind experimental APIs: share replication
and share migration; experimental APIs require clients to opt in
explicitly, as illustrated in the example after this list
- This cycle, carloss will work on graduating the share replication APIs
to fully supported
- Generic Share Migration still needs some work, but we've fleshed out
the API and it has stayed pretty constant in the past few releases - we
might consider graduating the API for share migration in the Wallaby
release.
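For context, experimental manila APIs have to be opted into explicitly by
clients, and graduation removes that requirement. The request below is a
rough sketch; the endpoint, token and microversion are placeholders, and
only the experimental header is the point:

    # Placeholder endpoint, token and microversion.
    import requests

    headers = {
        'X-Auth-Token': '<keystone-token>',
        'X-OpenStack-Manila-API-Version': '2.55',
        # Experimental APIs reject requests without this opt-in header;
        # a graduated API no longer needs it.
        'X-OpenStack-Manila-API-Experimental': 'True',
    }

    resp = requests.get('http://controller:8786/v2/share-replicas',
                        headers=headers)
    print(resp.status_code, resp.json())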
*== CephFS Updates ==*
- Victoria (vkmc) took us through the updates planned for the Victoria
cycle (heh)
- Currently all dsvm/tempest based testing in OpenStack (cinder, nova,
manila) is happening on ceph luminous and older releases (hammer and jewel)
- Victoria has a patch up [12] to update the ceph cluster to using
Nautilus by default
- This patch moves to using the packages built by the ceph community via
their shaman build system [13]
- shaman does not support building nautilus on CentOS 8, or on Ubuntu
Xenial - so if older branches of the projects are tested with Ubuntu
Xenial, we'll fall back to testing with Luminous
- The Manila CephFS driver wants to take advantage of the "ceph-mgr"
daemon in the Nautilus release and beyond
- Maintaining support for "ceph-mgr" and "ceph-volume" clients in the
driver will make things messy - so, the manila driver will not support Ceph
versions prior to Nautilus in the Victoria cycle
- If you're using manila with cephfs, please upgrade your ceph clusters
to Nautilus or newer
- We're not opposed to supporting versions prior to nautilus, but the
community members cannot invest in maintaining support for these older
releases of ceph for future releases of manila
- With the ceph-mgr interface, we intend to support asynchronous
create-share-from-snapshot with manila; a rough sketch of the relevant
ceph-mgr commands follows this list
- Ramana Raja (rraja) provided an update regarding ceph-mgr and the
upcoming support for nfs-ganesha interactions via that interface (in the
ceph pacific release)
- Currently there's a ganesha interface driver in manila, and it can
switch to using the ceph-mgr interface
- Manila provides an "ensure_shares" mechanism to migrate share export
locations when the NAS host changes - we'll need to work on that if we want
to make it easier to switch ganesha hosts.
- We also briefly discussed supporting manage and unmanage operations
with the ceph drivers - that should greatly assist day 2 operations, and
migration of shared file systems from the native cephfs protocol to nfs and
vice-versa.
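As a rough sketch of the ceph-mgr "volumes" interface referenced above,
here is what driving it through the ceph CLI could look like. The volume,
subvolume and snapshot names are placeholders, and the exact commands the
driver will use may differ:

    # Illustrative only; assumes a Nautilus (or newer) cluster with a
    # CephFS volume named 'cephfs' and admin credentials available.
    import subprocess


    def ceph(*args):
        return subprocess.run(('ceph',) + args, check=True,
                              capture_output=True, text=True).stdout


    # Provision a share as a CephFS subvolume and fetch its export path.
    ceph('fs', 'subvolume', 'create', 'cephfs', 'share-42',
         '--size', str(10 * 2 ** 30))
    path = ceph('fs', 'subvolume', 'getpath', 'cephfs', 'share-42').strip()

    # Asynchronous create-share-from-snapshot maps to a subvolume
    # snapshot clone.
    ceph('fs', 'subvolume', 'snapshot', 'create', 'cephfs', 'share-42',
         'snap-1')
    ceph('fs', 'subvolume', 'snapshot', 'clone', 'cephfs', 'share-42',
         'snap-1', 'share-43')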
*== Add/Delete/Update security services for in-use share networks ==*
- Douglas Viroel (dviroel) discussed a change to manila to support
modifying share servers with respect to security services
- Security services are project visible and tenant driven - however,
share servers are hidden away from project users by virtue of default policy
- dviroel's idea is that, if a share network has multiple share servers,
the share manager will enumerate and communicate with all share servers on
the share network to update a security service
- We need to make sure that all conflicting operations (such as creating
new shares, changing access rules on existing shares) are fenced off when a
share server security service is being updated
- dviroel has a spec that he's working on - and would like feedback on
his proposal [14]
*== Create shares with two (or more) subnets ==*
- dviroel proposed a design allowing a share network to have multiple
subnets in a given AZ (currently, you can have at most one subnet per AZ
for a given share network)
- Allowing multiple NICs on a share server may be something most drivers
can easily support
- This change is identical to the one to update security services on
existing share networks - in terms of user experience and expectations
- The use cases here include dual IP support, share server network
maintenance and migration, simultaneous access from disparate subnets
- Feedback from this discussion was to treat this as two separate
concerns for easier implementation
- Supporting multiple subnets per AZ per share network
- Supporting adding/removing subnets to/from a share network that is
in-use
- Currently, there's no way to modify an in-use share server - so adding
that would be a precursor to allowing modification of share
networks/subnets and security services
*== Technical Committee Tags ==*
- In the last cycle, the manila team worked with the OpenStack VMT to
perform a vulnerability disclosure and coordinate a fix across
distributions that include manila.
- The experience was valuable in regaining control of our own "coresec"
team on Launchpad, which had gone wayward, and in learning about the VMT
- Jeremy Stanley (fungi) and other members of the VMT have been
supportive of having manila apply for the "vulnerability-managed" tag.
We'll follow up on this soon
- While we're on the subject, with Ghanshyam Mann (gmann) in the room,
we discussed other potential tags that we can assert as the project team:
- "supports-accessible-upgrade" - manila allows control plane upgrade
without disrupting the accessibility of shares, snapshots, ACLs, groups,
replicas, share servers, networks and all other resources [15]
- "supports-api-interoperability" - manila's API is microversioned and
we have hard tests that enforce backwards compatibility [16]
- We discussed the "tc:approved-release" tag a bit and felt that we
needed to bring it up in a TC meeting, which we did with Ghanshyam's help
- The view from the manila team is that we'd like to eliminate any
misconception that the project is not mature, not ready for production
use, or not part of a "core OpenStack"
- At the TC meeting, Thierry Carrez (ttx), Graham Hayes (mugsie) and
others provided historic context for this tag: it exists to satisfy a
section of the OpenStack Foundation bylaws that states that the Technical
Committee must define what an approved release is (Article IV, section
4.1(b)(i)) [17]
- The TC's view was that this tag has outlived its purpose, and
core-vs-non-core discussions have happened many times. Dropping the tag
might require speaking with the Foundation, amending the bylaws and
exploring what that implies. It's a good effort to get started on, though.
- For the moment, the TC was not opposed to the manila team requesting
this change to include manila in the list of projects in
"tc-approved-release".
*== Share and share size quotas/limits per share server ==*
- carloss shared his design for enforcing share server limits via the
quotas system: administrators could define per-project share server
quotas, and the share manager would enforce them by provisioning new share
servers when the quotas are hit
- the quotas system is ill-suited for this sort of enforcement, especially
given that the share manager allows the share drivers to control which
share server is used to provision a share
- it's possible to look for a global solution, like the one proposed for
the generic driver in [18], or to implement this at the backend level,
agnostic to the rest of manila
- another reason not to use quotas is that manila may eventually do away
with its home-grown quota system in favor of oslo.limit enforced via
keystone; a brief sketch of that model follows this list
- another alternative is to do this via share types; however, this really
fits as a per-share-network limit rather than a global one
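For reference, the oslo.limit model mentioned above looks roughly like the
sketch below. The resource name and usage callback are assumptions, and a
real deployment needs limits registered in keystone plus [oslo_limit]
credentials configured:

    # Illustrative only.
    from oslo_limit import limit


    def usage_callback(project_id, resource_names):
        # Report current usage for each requested resource; a real
        # implementation would count rows in the manila database.
        return {name: 0 for name in resource_names}


    enforcer = limit.Enforcer(usage_callback)
    # Raises if creating one more share server would exceed the limit
    # registered in keystone for this project.
    enforcer.enforce('a-project-id', {'share_servers': 1})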
*== Optimize the quota processing logic for 'manage share' ==*
- haixin ran into a bug where quota operations are incorrectly applied
during a share import/manage operation, such that a failed manage
operation would cause incorrect quota deductions
- we discussed possible solutions; the consensus was that this can
definitely be fixed
- he has opened a bug for this [19]
*== Share server migration ==*
- dviroel presented his thoughts around this new feature [20] which
would be helpful for day 2 operations
- he suggested that we not provide a generic mechanism to perform this
migration, given that users would not need it, especially if it is not
100% reliable
- though there is a generic framework for provisioning share servers, it
is only being used by the reference driver (Generic driver) and the Windows
SMB driver
- shooting for a generic solution would require us to solve the single
point of failure (SPOF) issues that we currently have with the reference
driver - and there is not much investment in doing so
- dviroel's solution involves a multi-step migration that relies on the
share drivers to perform an atomic migration of all the shares - with the
Generic driver, you can think of this as multi-attaching all the
underlying cinder volumes and deleting the older nova instance.
- manila's share migration is multi-phase, with a data copy phase and a
cutover phase, and it is cancelable throughout the data copy phase, before
the cutover is invoked - so there were some concerns about whether that
two-phased approach is required here, given that the operation may not
always be cancelable in a generic way
*== Manila Container Storage Interface ==*
- Tom Barron (tbarron) presented a summary and demo of using the Manila
CSI driver on OpenShift to provide RWX storage to containerized applications
- Robert Vasek (gman0) explained the core design and the reasoning
behind the architecture
- Mikhail Fedosin (mfedosin) spoke about the OpenShift operator for
Manila CSI and the ease of install and day two operations [21]
- aside from provisioning, access control and deprovisioning, the CSI
driver supports snapshots, cloning of snapshots (nfs only at the moment)
and topology
- the team is prioritizing support for cephfs snapshots, and for creating
shares from cephfs snapshots via subvolume clones, in the Victoria cycle
Thanks for reading this far! Should you have any questions, don't hesitate
to pop into #openstack-manila on freenode.net.
[1] https://etherpad.opendev.org/p/victoria-ptg-manila (Minutes of the
PTG)
[2] https://www.youtube.com/playlist?list=PLnpzT0InFrqBKkyIAQdA9RFJnx-geS3lp
(YouTube playlist of the PTG recordings)
[3]
https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-manila-csi-plugin.md
(Manila CSI driver docs)
[4] https://bugs.launchpad.net/manila/+bugs?field.tag=low-hanging-fruit
(Low hanging fruit bugs in Manila)
[5] https://etherpad.opendev.org/p/community-goals (Community goal
proposals)
[6]
http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015459.html
(Chosen TC Community Goals for Victoria cycle)
[7]
https://tree.taiga.io/project/gouthampacha-manila-ci-zuul-v3-migration/kanban
(Zuulv3-native CI migrations tracker)
[8] https://www.softwarefactory-project.io/ (Software Factory project)
[9]
https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team
(Policy effort across OpenStack)
[10] https://etherpad.opendev.org/p/manila-todos (ToDo list for manila)
[11] https://etherpad.opendev.org/p/manila-openstackclient-updates (OSC
CLI catchup tracker)
[12] https://review.opendev.org/#/c/676722/ (devstack-plugin-ceph
support for Ceph Nautilus)
[13] https://shaman.ceph.com (Shaman build system for Ceph)
[14] https://review.opendev.org/#/c/729292/ (Specification to allow
security service updates)
[15]
https://governance.openstack.org/tc/reference/tags/assert_supports-accessible-upgrade.html
(TC tag for accessible upgrades)
[16]
https://governance.openstack.org/tc/reference/tags/assert_supports-api-interoperability.html
(TC tag for API interoperability)
[17]
https://www.openstack.org/legal/bylaws-of-the-openstack-foundation#ARTICLE_IV._BOARD_OF_DIRECTORS
(TC bylaws requiring "approved release")
[18] https://review.opendev.org/#/c/510542/ (Limiting the number of shares
per share server)
[19] https://bugs.launchpad.net/manila/+bug/1883506 (delete manage_error
share will lead to quota error)
[20] https://review.opendev.org/#/c/735970/ (specification for share server
migration)
[21] https://github.com/openshift/csi-driver-manila-operator