<div dir="ltr">Thanks a lot for this summary. I enjoyed the reading.<div><br></div><div>Jordan</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Feb 2, 2016 at 10:14 PM, Sean McGinnis <span dir="ltr"><<a href="mailto:sean.mcginnis@gmx.com" target="_blank">sean.mcginnis@gmx.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">War and Peace<br>
or<br>
Notes from the Cinder Mitaka Midcycle Sprint<br>
January 26-29<br>
<br>
Etherpads from discussions:<br>
* <a href="https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-1" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-1</a><br>
* <a href="https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-2" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-2</a><br>
* <a href="https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-3" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-3</a><br>
* <a href="https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-4" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-4</a><br>
<br>
*Topics Covered*<br>
================<br>
In no particular order...<br>
<br>
Disable Old Volume Types<br>
========================<br>
There was a request from an end user to have a mechanism to disable<br>
a volume type as part of a workflow for progressing from a beta to<br>
production state.<br>
<br>
From what was known of the request, there was some confusion as to<br>
whether the desired use case could already be met with existing<br>
functionality. It was decided nothing would be done for this until<br>
more input is received explaining what is needed and why it cannot<br>
be done as it is today.<br>
<br>
User Provided Encryption Keys for Volume Encryption<br>
===================================================<br>
The question was raised as to whether we want to allow user specified<br>
keys. Google has something today where this key can be passed in<br>
headers.<br>
<br>
There was some concern with doing this, both from a security and an<br>
amount-of-work perspective. It was ultimately agreed this was a better<br>
fit for a cross-project discussion.<br>
<br>
Adding a Common cinder.conf Setting for Suppressing SSL Warnings<br>
================================================================<br>
Log files get a TON of warnings when using a driver that uses the<br>
requests library internally for communication and you do not have<br>
a signed valid certificate. Some drivers have gotten around this<br>
by implementing their own settings for disabling these warnings.<br>
<br>
The question was raised whether, although not all drivers use requests<br>
and therefore are not affected by this, we should still have a common<br>
config setting to disable these warnings for those drivers that do use<br>
it.<br>
<br>
Different approaches to disabling this will be explored. As long as<br>
it is clear what the option does, we were not opposed to this.<br>
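As an illustration, a common option could simply gate a warnings<br>
filter, much like what urllib3's own InsecureRequestWarning<br>
suppression does. The option name and warning class below are<br>
hypothetical stand-ins, not an agreed-on Cinder interface:<br>

```python
import warnings

# Hypothetical stand-in for urllib3.exceptions.InsecureRequestWarning.
class InsecureRequestWarning(Warning):
    pass

def apply_ssl_warning_filter(suppress_ssl_warnings):
    """Install an 'ignore' filter when the (hypothetical) common
    cinder.conf option suppress_ssl_warnings is enabled."""
    if suppress_ssl_warnings:
        warnings.simplefilter("ignore", InsecureRequestWarning)

def probe(suppress):
    """Emit one SSL warning and report how many got through."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always", InsecureRequestWarning)
        apply_ssl_warning_filter(suppress)
        warnings.warn("unverified HTTPS request", InsecureRequestWarning)
        return len(caught)
```

With the option enabled the warning never reaches the log; with it<br>
disabled the warning still gets through as before.<br>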
<br>
Nested Quotas<br>
=============<br>
The current nested quota enforcement is badly broken. There are many<br>
scenarios that just do not work as expected. There is also some<br>
confusion around how nested quotas should work. Things like setting<br>
-1 for a child quota do not work as expected, and quotas are not<br>
properly enforced during volume creation.<br>
<br>
Glance has also started to look at implementing nested quota support<br>
based on Cinder's implementation, so we don't want Cinder's broken<br>
implementation to be propagated to other projects.<br>
<br>
Ryan McNair is working with folks on other projects to find<br>
a better solution and to work through our current issues. This will<br>
be an ongoing effort for now.<br>
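To make the enforcement problem concrete, here is a minimal sketch of<br>
hierarchical quota checking in which -1 means "no explicit limit" and<br>
usage rolls up to ancestors; the data model is illustrative, not<br>
Cinder's actual implementation:<br>

```python
# Hypothetical model of nested quota enforcement. A child with -1 has
# no explicit limit of its own, but its usage still counts against
# every ancestor's limit.
quotas = {"parent": 10, "child": -1}
parents = {"child": "parent"}
usage = {"parent": 0, "child": 0}

def would_exceed(project, n):
    """Check a request for n volumes against the project's own limit
    and every ancestor's limit."""
    node = project
    while node is not None:
        limit = quotas.get(node, -1)
        if limit != -1 and usage[node] + n > limit:
            return True
        node = parents.get(node)
    return False

def reserve(project, n):
    """Reserve n volumes, rolling the usage up the tree."""
    if would_exceed(project, n):
        raise ValueError("quota exceeded")
    node = project
    while node is not None:
        usage[node] += n
        node = parents.get(node)
```

Under this model a child is not unlimited just because its own quota<br>
is -1; the parent's limit still applies to the rolled-up usage.<br>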
<br>
The Future of CLI for python-cinderclient<br>
=========================================<br>
A cross project spec has been approved to work toward removing<br>
individual project CLIs to center on the one common osc CLI. We<br>
discussed the feasibility of deprecating the cinder CLI in favor<br>
of focusing all CLI work on osc.<br>
<br>
There is also concern about delays getting new functionality<br>
deployed. First we need to make server side API changes, then get<br>
them added to the client library, then get them added to osc.<br>
<br>
There is currently no feature parity between the cinder and osc CLIs<br>
for cinder functionality. This needs to be addressed before we can<br>
consider removing or deprecating anything in the cinder<br>
client CLI. Once we have the same level of functionality with both,<br>
we can then decide at what point to only add new CLI commands to osc<br>
and start deprecating the cinder CLI.<br>
<br>
Ivan and Ryan will look into how to implement osc plugins.<br>
<br>
We will also look into using cliff and other osc patterns to see if<br>
we can bring the existing cinder client implementation closer to the<br>
osc implementation to make the switch over smoother.<br>
<br>
API Microversions<br>
=================<br>
Scott gave an update on the microversion work.<br>
<br>
Cinder patch: <a href="https://review.openstack.org/#/c/224910" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/224910</a><br>
cinderclient patch: <a href="https://review.openstack.org/#/c/248163/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/248163/</a><br>
spec: <a href="https://review.openstack.org/#/c/223803/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/223803/</a><br>
Test cases: <a href="https://github.com/scottdangelo/TestCinderAPImicroversions" rel="noreferrer" target="_blank">https://github.com/scottdangelo/TestCinderAPImicroversions</a><br>
<br>
Ben brought up the need to have a new unique URL endpoint for<br>
this to get around some backward compatibility problems. This new URL<br>
would be made v3 even though it will initially be the same as v2.<br>
<br>
We would like to get this in soon so it has some runtime. There were<br>
a lot of work items identified, though, that should get done before it<br>
lands. Scott is going to continue working through these issues.<br>
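The version negotiation itself is small. The sketch below assumes a<br>
single supported version, matching the idea of a v3 endpoint that<br>
starts out identical to v2; the header format and version numbers are<br>
illustrative, not the final API contract:<br>

```python
# Illustrative microversion negotiation; not Cinder's final contract.
MIN_VERSION = (3, 0)  # the new v3 endpoint starts identical to v2
MAX_VERSION = (3, 0)  # raised as new microversions are added

def parse(header_value):
    """Parse a 'major.minor' version string into a comparable tuple."""
    major, minor = header_value.split(".")
    return int(major), int(minor)

def negotiate(requested):
    """Return the version to serve, or None for 406 Not Acceptable."""
    if requested is None:
        return MIN_VERSION  # no header: oldest, backward-compatible behavior
    if requested == "latest":
        return MAX_VERSION
    version = parse(requested)
    if MIN_VERSION <= version <= MAX_VERSION:
        return version
    return None  # client asked for a version this server does not support
```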
<br>
Async User Reporting<br>
====================<br>
Alex Meade and Sheel have been working on ways to report back better<br>
information for async operations.<br>
<br>
<a href="https://etherpad.openstack.org/p/mitaka-cinder-midcycle-user-notifications" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/mitaka-cinder-midcycle-user-notifications</a><br>
<br>
We will store data in the database rather than the originally<br>
investigated Zaqar approach. There was general agreement that this<br>
work should move forward and would be beneficial.<br>
<br>
SDS Driver Proposals<br>
====================<br>
We've had a few requests in the past to add drivers for other SDS<br>
platforms such as ViPR, FalconStor, etc. We've rejected these on the<br>
basis that they duplicate much of what Cinder is already doing, so they<br>
could potentially leverage Cinder without providing any benefit to<br>
the project as a whole. It was also brought up in the past that third<br>
party CI should then be run against all supported backends under this<br>
other SDS to validate it.<br>
<br>
It was brought up that the IBM SVC driver could be classified as an SDS.<br>
Jay gave an overview of the system to explain how it works. The<br>
difference there is that although SVC can be configured to manage<br>
other storage, it is the only API for managing one of the IBM storage<br>
systems and is not being marketed as an SDS solution.<br>
<br>
In the end we decided that although our concerns for blocking these<br>
in the past are still valid, we will allow them in now if that is<br>
what end users would like to have. They must still have third party<br>
CI, but we will not require CI to be run against every supported<br>
backend under the SDS. We will assume the SDS product does enough<br>
of its own testing to ensure a level of quality; that testing is then<br>
outside the domain of Cinder.<br>
<br>
Status of Third Party CI<br>
========================<br>
There are several CIs that have been very unreliable or completely<br>
absent. We have not been strongly enforcing our policies around this.<br>
<br>
We will need to start disabling CIs and removing any third party<br>
drivers that are not in compliance. For now, identifying and enforcing<br>
this will need to be a manual task.<br>
<br>
Some scripts were written in the past to help gather this data for<br>
enforcement. There are also a few dashboards that show some useful, if<br>
incomplete, data. These scripts will be expanded to put a more<br>
automatic process in place for catching out-of-compliance systems.<br>
<br>
Multiattach<br>
===========<br>
Volume multiattach support has been in Cinder for a couple releases<br>
now, but more work is needed to make it usable with Nova. There<br>
have been some changes towards this, but it likely will not get<br>
resolved in Mitaka. This is ongoing and is actively being worked on<br>
by ildikov and hemna.<br>
<br>
Consistency Groups<br>
==================<br>
CG APIs are disabled by default by policy. It was brought up whether<br>
this should now change. Since not every backend supports CGs, it was<br>
decided we will not change this.<br>
<br>
There's no force flag for CG snapshots, unlike individual volume APIs.<br>
This led to a broader discussion on the need for the force flag in<br>
the first place. General agreement was it should probably just be<br>
removed.<br>
<br>
Quotas with CG snapshots: is a new quota needed? It was determined that<br>
existing volume quotas are all that's needed; nothing special is<br>
required for CGs.<br>
<br>
Extend Volume<br>
=============<br>
Code landed in os-brick to support extending a volume. This is too<br>
late to make it into Nova for this release, though. We should be able<br>
to get extend volume in for Newton.<br>
<br>
Cinder-Volume A/A HA<br>
====================<br>
Gorka has been working through a series of patches to support this.<br>
Several API race conditions have been fixed for this that will be<br>
good to have even if the full solution doesn't land in time for this<br>
release.<br>
The plan is to get as much useful stuff as possible merged in Mitaka,<br>
with the final implementation likely landing in Newton.<br>
<br>
Versioned Objects and Rolling Upgrades<br>
======================================<br>
Michal has several patches out there to implement this. It has been<br>
tested under at least one scenario and appears to be working. We want<br>
to land as many of these as needed to support this and get some<br>
runtime and testing in on it.<br>
<br>
Will look at adding a grenade test to get coverage.<br>
<br>
Scalable Backup<br>
===============<br>
<a href="https://etherpad.openstack.org/p/mitaka-cinder-midcycle-scaling-backup-service" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/mitaka-cinder-midcycle-scaling-backup-service</a><br>
The proposal is to break out the backup service to possibly have<br>
multiple backup services running, allowing some parallelism and<br>
distribution of load. Most backups are CPU bound and not I/O bound<br>
so having the ability to move this off of one host could allow<br>
for more scale.<br>
<br>
There was some concern that this would not work with backends that<br>
don't use local device paths (Ceph, Sheepdog). These have been<br>
tested and appear to work fine.<br>
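One simple way several backup services could share the CPU-bound work<br>
is a least-loaded placement policy; the service record fields here are<br>
hypothetical, not Cinder's actual scheduler interface:<br>

```python
# Pick the live cinder-backup service with the fewest in-flight jobs.
# The 'host'/'alive'/'active_backups' fields are hypothetical.
def pick_backup_host(services):
    alive = [s for s in services if s["alive"]]
    if not alive:
        raise RuntimeError("no backup service available")
    # Least-loaded wins, spreading CPU-bound backups across hosts.
    return min(alive, key=lambda s: s["active_backups"])["host"]
```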
<br>
Cinder without Nova<br>
===================<br>
In Tokyo we discussed the desire to extend Cinder to be<br>
usable outside of an OpenStack cloud as a more general SDS<br>
solution. John was able to do some testing using the minimum<br>
pieces: Cinder, Keystone, RabbitMQ, and MySQL.<br>
<br>
This is just a first exploratory step. Additional work will need<br>
to be done to make this a more attractive solution.<br>
<br>
As a tangent, the idea was raised that after some of the major<br>
changes are completed for versioned objects and HA, we should<br>
take a step back and review the Cinder architecture to see if<br>
things have changed enough that we should think about<br>
rearchitecting some things.<br>
<br>
Replication<br>
===========<br>
A large part of the third day was spent talking about replication.<br>
There was a lot of concern about the planned v2 implementation.<br>
Most vendors that have added support struggled with some of the<br>
same questions. The final v2 spec also grew beyond its initial<br>
plan of being a crawl, walk, run approach and added too many<br>
things that complicated the API and implementation.<br>
<br>
There was general agreement that things didn't end up quite like<br>
we wanted them to be for this go around. Rather than releasing<br>
this v2 and potentially needing to turn around and start working<br>
on a v3, it was decided that we would course correct and change<br>
what we are doing for replication in Mitaka.<br>
<br>
There is some (OK, a lot of) concern about doing this so far into<br>
the development cycle, especially as some vendors have already<br>
landed patches supporting v2 and there are several in flight.<br>
<br>
We agreed to accept this risk and go for a simpler case that<br>
clearly addresses one use case, rather than keeping what we had<br>
that unclearly addressed several use cases, maybe. For now we<br>
would just address the case of configuring one or more targets<br>
for a given backend. If there is a planned or unplanned outage<br>
for that primary backend, the administrator has the ability to<br>
fail over resources to one of the secondary locations.<br>
<br>
This is not a solution for ping ponging back and forth and<br>
keeping your instances up and running and happy. This is a<br>
solution for when something is on fire and you need to move to<br>
a safe location.<br>
<br>
Folks were getting hung up on the naming, as replication means<br>
different things to different people. To get around this, we<br>
used code names to talk about different options. The spec for<br>
cheesecake has much more detail about the proposed solution<br>
and valid use case for this iteration of our support.<br>
<br>
<a href="https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/cheesecake.html" rel="noreferrer" target="_blank">https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/cheesecake.html</a><br>
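As a rough sketch of that model (illustrative names, not the spec's<br>
exact API): a backend carries a list of configured replication<br>
targets, and an administrator can perform a one-way failover to one<br>
of them:<br>

```python
class Backend:
    """Sketch of a backend with cheesecake-style replication targets."""

    def __init__(self, name, replication_targets):
        self.name = name
        self.targets = list(replication_targets)
        self.active = name        # backend currently serving volumes
        self.failed_over = False

    def failover_host(self, secondary=None):
        """Admin-triggered, one-way move to a secondary backend,
        for when the primary is on fire."""
        if self.failed_over:
            raise RuntimeError("already failed over; failback is out of scope")
        if not self.targets:
            raise RuntimeError("replication is not configured")
        target = secondary or self.targets[0]
        if target not in self.targets:
            raise ValueError("unknown replication target: %s" % target)
        self.active = target
        self.failed_over = True
        return self.active
```

Failing back, or ping-ponging between sites, is deliberately out of<br>
scope for this iteration.<br>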
<br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</blockquote></div><br></div>