We are chuffed to announce the release of:

cinder 15.5.0: OpenStack Block Storage

This release is part of the train stable release series.

The source is available from:

    https://opendev.org/openstack/cinder

Download the package from:

    https://tarballs.openstack.org/cinder/

Please report issues through:

    https://bugs.launchpad.net/cinder/+bugs

For more details, please see below.

15.5.0
^^^^^^

Upgrade Notes
*************

* This release contains a fix for Bug #1908315
  (https://bugs.launchpad.net/cinder/+bug/1908315), which changes the
  default value of the policy governing the Block Storage API action
  Reset group snapshot status
  (https://docs.openstack.org/api-ref/block-storage/v3/#reset-group-snapshot-status)
  to make the action administrator-only. This policy was inadvertently
  changed to be admin-or-owner during the Queens development cycle.

  The policy is named "group:reset_group_snapshot_status".

  * If you have a custom value for this policy in your cinder policy
    configuration file, this change to the default value will not
    affect you.

  * If you are aware of this regression and prefer the current
    (incorrect) behavior, you may add the following line to your
    cinder policy configuration file to restore that behavior:

       "group:reset_group_snapshot_status": "rule:admin_or_owner"

    This setting is *not recommended* by the Cinder project team, as
    it may allow end users to put a group snapshot into an invalid
    status with indeterminate consequences.

  For more information about the cinder policy configuration file, see
  the policy.yaml
  (https://docs.openstack.org/cinder/latest/configuration/block-storage/samples/policy.yaml.html)
  section of the Cinder Configuration Guide.

* The default value of the configuration option "glance_num_retries"
  has been changed to 3 in this release. Its former value was 0. The
  option controls how many times to retry a Glance API call in
  response to an HTTP connection failure, timeout, or
  ServiceUnavailable status.
  With this change, Cinder is more resilient to temporary failures and
  can complete the request if a retry succeeds.

Bug Fixes
*********

* Bug #1888951 (https://bugs.launchpad.net/cinder/+bug/1888951): Fixed
  an issue with creating a backup from a snapshot with the NFS volume
  driver.

* RBD driver bug #1901241
  (https://bugs.launchpad.net/cinder/+bug/1901241): Fixed an issue
  where decreasing the "rbd_max_clone_depth" configuration option
  would prevent volumes that had already exceeded that depth from
  being cloned.

* Bug #1908315 (https://bugs.launchpad.net/cinder/+bug/1908315):
  Corrected the default checkstring for the
  "group:reset_group_snapshot_status" policy to make it admin-only.
  This policy governs the Block Storage API action Reset group
  snapshot status
  (https://docs.openstack.org/api-ref/block-storage/v3/#reset-group-snapshot-status),
  which by default is supposed to be an administrator-only action.

* Bug #1883490 (https://bugs.launchpad.net/cinder/+bug/1883490): Fixed
  incorrect responses when listing volumes with filters.

* Bug #1863806 (https://bugs.launchpad.net/cinder/+bug/1863806):
  "os-reset_status" notifications for volumes, snapshots, and backups
  were being sent to nonstandard publisher_ids relative to other
  cinder notifications for volumes, snapshots, and backups. Now they
  are also sent to the following *standard* publisher_ids, where most
  people would expect to find them:

  * 'volume' for volume status resets

  * 'snapshot' for snapshot status resets

  * 'backup' for backup status resets

* Bug #1898587 (https://bugs.launchpad.net/cinder/+bug/1898587):
  Addressed cloning and API request timeout issues users may hit in
  certain environments by allowing timeout values for these operations
  to be configured in the cinder configuration file.
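As a concrete illustration of the upgrade note above: the not-recommended override that preserves the old admin-or-owner behavior is a single entry in the operator's policy file. The file path shown is only an example; the actual name and location depend on your deployment's "policy_file" setting.

```yaml
# /etc/cinder/policy.yaml (example path)
#
# NOT recommended by the Cinder project team; shown for illustration
# only. Restores the pre-15.5.0 (incorrect) admin-or-owner behavior
# for the Reset group snapshot status API action:
"group:reset_group_snapshot_status": "rule:admin_or_owner"

# Omitting this entry entirely leaves the corrected,
# administrator-only default in effect.
```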
* NetApp SolidFire driver Bug #1896112
  (https://bugs.launchpad.net/cinder/+bug/1896112): Fixed an issue
  that could create duplicate volumes when the SolidFire backend
  successfully processes a create request but fails to deliver the
  result back to the driver (the response is lost). When this scenario
  occurs, the SolidFire driver retries the operation, which previously
  resulted in the creation of a duplicate volume. This fix adds the
  "sf_volume_create_timeout" configuration option (default value: 60
  seconds), which specifies an additional length of time that the
  driver will wait for the volume to become active on the backend
  before raising an exception.

* NetApp SolidFire driver Bug #1891914
  (https://bugs.launchpad.net/cinder/+bug/1891914): Fixed an error
  that could occur during cluster workload rebalancing or a system
  upgrade, when an operation is performed on a volume at the same time
  its connection is being moved to a secondary node.

Changes in cinder 15.4.1..15.5.0
--------------------------------

a75f8633b NetApp SolidFire: Fix error on cluster workload rebalancing
f70bfbf71 NetApp SolidFire: Fix duplicate volume when API response is lost
c2c098317 Pure: Add default value to pure_host_personality
f6d256cf1 Correct group:reset_group_snapshot_status policy
ddb88caad NetApp SolidFire: Fix clone and request timeout issues
c3759271c Log information about the Ceph v2 clone API
0a3851af3 RBD: Retry delete if VolumeIsBusy in _copy_image_to_volume
1b24dd6f4 API: os-reset_status notification fix
0c5406da7 Ensure pep8/fast8 run in python 3.6
2e7abe662 Fix: listing volumes with filters
04198bba8 Fixed an issue with creating a backup from snapshot with NFS volume driver.
d543201bd Adjust requirements and lower-constraints
07adabef0 Do not fail when depth is greater than rbd_max_clone_depth
d56bb6d6f Change default glance_num_retries to 3

Diffstat (except docs and test files)
-------------------------------------

 cinder/api/contrib/admin_actions.py                |  24 ++++
 cinder/backup/manager.py                           |   2 +
 cinder/common/config.py                            |   2 +-
 cinder/policies/group_snapshot_actions.py          |   2 +-
 .../volume/drivers/solidfire/test_solidfire.py     |  24 ++--
 cinder/volume/api.py                               |   9 +-
 cinder/volume/drivers/pure.py                      |   3 +-
 cinder/volume/drivers/rbd.py                       |  39 ++++--
 cinder/volume/drivers/remotefs.py                  |  10 +-
 cinder/volume/drivers/solidfire.py                 | 136 +++++++++++++++++----
 lower-constraints.txt                              |  28 +++--
 ...-backup-from-nfs-snapshot-2e06235eb318b852.yaml |   6 +
 .../notes/bug-1901241-361b1b361bfa5152.yaml        |   8 ++
 .../notes/bug-1908315-020fea3e244d49bb.yaml        |  38 ++++++
 ...fix-list-volume-filtering-3f2bf93ab9b98974.yaml |   5 +
 ...crease_glance_num_retries-66b455a0729c4535.yaml |   9 ++
 ...tatus-notification-update-4a80a8b5feb821ef.yaml |  13 ++
 ...nd-request-timeout-issues-56f7a7659c7ec775.yaml |   7 ++
 ...icate-volume-request-lost-adefacda1298dc62.yaml |  14 +++
 ...or-on-cluster-rebalancing-515bf41104cd181a.yaml |   8 ++
 requirements.txt                                   |  10 +-
 test-requirements.txt                              |  12 +-
 tox.ini                                            |   4 +-
 26 files changed, 434 insertions(+), 78 deletions(-)

Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index 3bd2ee40e..c90a8bf73 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -21 +21 @@ oslo.concurrency>=3.26.0 # Apache-2.0
-oslo.context>=2.19.2 # Apache-2.0
+oslo.context>=2.22.0 # Apache-2.0
@@ -31 +31 @@ oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
-oslo.service!=1.28.1,>=1.24.0 # Apache-2.0
+oslo.service>=1.31.0 # Apache-2.0
@@ -47 +47 @@ python-swiftclient>=3.2.0 # Apache-2.0
-pytz>=2013.6 # MIT
+pytz>=2015.7 # MIT
@@ -62 +62 @@ os-brick>=2.10.5 # Apache-2.0
-os-win>=3.0.0 # Apache-2.0
+os-win>=4.1.0 # Apache-2.0
@@ -66 +66 @@ castellan>=0.16.0 # Apache-2.0
-cryptography>=2.1 # BSD/Apache-2.0
+cryptography>=2.1.4 # BSD/Apache-2.0

diff --git a/test-requirements.txt b/test-requirements.txt
index a8514b563..32874d927 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -14 +14 @@ oslotest>=3.2.0 # Apache-2.0
-pycodestyle==2.5.0 # MIT License
+pycodestyle>=2.0.0,<2.6.0 # MIT License
@@ -17 +17 @@ psycopg2>=2.7 # LGPL/ZPL
-SQLAlchemy-Utils>=0.36.1 # BSD License
+SQLAlchemy-Utils>=0.33.11 # BSD License
@@ -26,0 +27,8 @@ doc8>=0.6.0 # Apache-2.0
+#
+# These are here to enable the resolver to work faster.
+# They are not directly used by cinder. Without these
+# dependency resolution was taking >6 hours.
+mox3>=0.28.0
+os-service-types>=1.6.0
+msgpack>=0.5.6
+Babel>=2.7.0
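For operators picking up this release, the two configuration options discussed in the release notes above could appear in cinder.conf roughly as follows. This is an illustrative sketch only: the backend section name is an example, and whether "sf_volume_create_timeout" applies depends on your deployment using the SolidFire driver.

```ini
[DEFAULT]
# New default as of this release (was 0); shown explicitly here for
# clarity. Number of retries for a Glance API call on HTTP connection
# failure, timeout, or ServiceUnavailable status.
glance_num_retries = 3

[solidfire-backend]
# Example backend section name; yours will differ.
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
# Added by the fix for Bug #1896112: additional time (in seconds) the
# driver waits for a volume to become active on the backend before
# raising an exception. 60 is the default.
sf_volume_create_timeout = 60
```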