[Cinder] User visible information in Volume Types
Hi,

as discussed in the Cinder midcycle meeting: I would like to add something to volume types so that users can see certain information about them. I am also working on a spec for this, but here are the options I have in mind / that we discussed yesterday:

Use Cases
---------

As a non-admin user creating a volume, I would like to see various information about the volume type. Some of it is already available (e.g. multiattach), while other information is either not available at all or not available to non-admin users. Two examples of such information are:

1. Whether a volume type can be used to create encrypted volumes
2. Whether a volume type creates replicated volumes, either through Cinder or through a configured backend like Ceph

The first piece of information comes from the encryption type within the volume type, which is only accessible to the admin role. The second can either be set and seen directly in the volume type's extra_specs, or it is indirectly part of the backend configuration.

There may be other use cases where an operator wants to expose certain information about a volume type. As more and more tools create IaaS resources automatically, this information should be accessible in a uniform way.

Possible changes
----------------

Here are a few ways to solve this problem:

1. Use the already existing user-facing "properties" field. Administrators can already set key/value pairs here, and user-visible extra_specs are shown in this field. This leads to a problem in the volume scheduler, however: EVERY input goes into the extra_specs table, and the scheduler will try to fulfill every key=value pair a volume type requests when looking for a fitting backend for the volume.

1.a) This could be solved by creating a whitelist or blacklist, so that not every extra_spec is checked in the scheduler: either all purely informational extra_specs would be ignored, or a list of the extra_specs that will be checked could be created (see the first sketch after this message).

1.b) Another option would be to create an additional database table, "metadata", and put every key=value pair that should not be used in the scheduling process there at creation time. The "properties" field would then be built in the API from a merge of the extra_specs and metadata tables (see the second sketch after this message). This may also prevent a volume type from becoming unusable when someone adds a key=value pair to "properties" that is not meant for the scheduler - right now this would break the volume creation process.

2. Create a new "metadata" field that contains such user-facing information. This would include API changes to the volume type view as well as a new API endpoint to set key/value pairs in this metadata field. A new database table for this metadata would also be necessary. The downside is that it might confuse people, as user-facing information would live in two different fields: "properties" for multiattach, replication_enabled and AZ, but "metadata" for things like encryption_enabled, backend_replication, etc. It would also not solve the problem that key/value pairs not meant for scheduling can still be put into properties/extra_specs, which will only lead to errors in the scheduling process later on.

3. Address the use cases individually:

3.a) Whether a volume type creates encrypted volumes could be calculated in the API calls for listing/showing volume types from the presence of an encryption type. The information would be shown in the "properties" field (see the third sketch after this message).
This would be a very minimal patch, but it would not solve the other use cases that would benefit from user-facing information.

3.b) Create an extra field for encryption in the volume type table that is set automatically when an encryption type is created or deleted. This would need a database change and a change to the volume type view.

3.c) Look into the different drivers, how they handle internally configured replication, and whether there are ways to let OpenStack know about it and propagate it to users. This is very hard to achieve, maybe even impossible, without input from an operator who configured the backends and volume types.

-----

If you have any other options in mind or want to state your opinion on this, please let me know.

greetings
Josephine (Luzi)
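To make option 1.a concrete, here is a minimal Python sketch of the ignore-list idea; the `info:` prefix and the helper name are assumptions for illustration, not existing Cinder code:

```python
# Hypothetical convention marking extra_specs as informational only
# (option 1.a); Cinder defines no such prefix today.
INFORMATIONAL_PREFIXES = ("info:",)

def specs_for_scheduling(extra_specs):
    """Drop purely informational keys before the scheduler matches the
    remaining extra_specs against backend capabilities."""
    return {key: value for key, value in extra_specs.items()
            if not key.startswith(INFORMATIONAL_PREFIXES)}

# The scheduler would only see the replication requirement here.
specs = {"replication_enabled": "<is> True", "info:encrypted": "true"}
print(specs_for_scheduling(specs))  # {'replication_enabled': '<is> True'}
```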
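The merge in option 1.b could look roughly like this; the `type_metadata` argument stands in for the proposed metadata table, and the precedence rule on conflicting keys is an assumption:

```python
def build_properties_view(extra_specs, type_metadata):
    """Build the user-facing 'properties' field by merging the
    scheduler-relevant extra_specs with the informational metadata
    table; only extra_specs are ever handed to the scheduler."""
    properties = dict(type_metadata)  # informational key/value pairs
    properties.update(extra_specs)    # scheduler-relevant pairs win on conflict
    return properties

print(build_properties_view({"multiattach": "<is> True"},
                            {"encrypted": "true"}))
```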
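And option 3.a, sketched under the assumption that the API layer already has the volume type dict and its encryption type (if any) at hand; the helper name and dict shapes are hypothetical:

```python
def add_encryption_flag(vol_type_view, encryption_type):
    """Derive the user-visible flag at list/show time instead of
    persisting it; no database change is needed (option 3.a)."""
    properties = dict(vol_type_view.get("extra_specs") or {})
    # The flag follows from the mere presence of an encryption type.
    properties["encrypted"] = encryption_type is not None
    vol_type_view["properties"] = properties
    return vol_type_view

# Example: a volume type that has an encryption type configured.
print(add_encryption_flag({"name": "luks-type", "extra_specs": {}},
                          encryption_type={"provider": "luks"}))
```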
With regard to information about Cinder backends' replication capabilities, a lot of this is already exposed by the `cinder get-pools --detail` command (there is no equivalent `openstack` command that I know of). Yes, this is admin-only, but it could provide some of the information you need.

For example, this Pure Storage backend is configured for multiple replication types to multiple different backends:

$ cinder get-pools --detail
+--------------------------------------+------------------------------+
| Property                             | Value                        |
+--------------------------------------+------------------------------+
| QoS_support                          | True                         |
| allocated_capacity_gb                | 10                           |
| cacheable                            | True                         |
| consistencygroup_support             | True                         |
| consistent_group_replication_enabled | True                         |
| consistent_group_snapshot_enabled    | True                         |
| driver_version                       | 18.0.iscsi                   |
| filter_function                      | None                         |
| free_capacity_gb                     | 72.54375000018626            |
| goodness_function                    | None                         |
| input_per_sec                        | 0                            |
| max_over_subscription_ratio          | 1.0                          |
| multiattach                          | True                         |
| name                                 | pure-cluster-1@fa-1#fa-1     |
| output_per_sec                       | 0                            |
| provisioned_capacity                 | 10.0                         |
| queue_depth                          | 0                            |
| reads_per_sec                        | 0                            |
| replication_capability               | trisync                      |
| replication_count                    | 2                            |
| replication_enabled                  | True                         |
| replication_targets                  | ['fa-2', 'fa-3']             |
| replication_type                     | ['async', 'sync', 'trisync'] |
| reserved_percentage                  | 0                            |
| storage_protocol                     | iSCSI                        |
| thin_provisioning_support            | True                         |
| timestamp                            | 2024-02-15T15:26:22.210497   |
| total_capacity_gb                    | 72.54375000018626            |
| total_hosts                          | 2                            |
| total_pgroups                        | 2                            |
| total_snapshots                      | 3                            |
| total_volumes                        | 1                            |
| usec_per_read_op                     | 0                            |
| usec_per_write_op                    | 0                            |
| vendor_name                          | Pure Storage                 |
| volume_backend_name                  | fa-1                         |
| writes_per_sec                       | 0                            |
+--------------------------------------+------------------------------+

If the backend isn't actually configured for replication, Pure also exposes whether the backend is even capable of supporting different replication types, depending on how the backend has been configured (outside of OpenStack)...

$ cinder get-pools --detail
+--------------------------------------+----------------------------+
| Property                             | Value                      |
+--------------------------------------+----------------------------+
| QoS_support                          | True                       |
| allocated_capacity_gb                | 10                         |
| cacheable                            | True                       |
| consistencygroup_support             | True                       |
| consistent_group_replication_enabled | True                       |
| consistent_group_snapshot_enabled    | True                       |
| driver_version                       | 18.0.iscsi                 |
| filter_function                      | None                       |
| free_capacity_gb                     | 72.54609375074506          |
| goodness_function                    | None                       |
| input_per_sec                        | 0                          |
| max_over_subscription_ratio          | 1.0                        |
| multiattach                          | True                       |
| name                                 | pure-cluster-1@fa-1#fa-1   |
| output_per_sec                       | 0                          |
| provisioned_capacity                 | 10.0                       |
| queue_depth                          | 0                          |
| reads_per_sec                        | 0                          |
| replication_capability               | trisync                    |
| replication_count                    | 0                          |
| replication_enabled                  | False                      |
| replication_targets                  | []                         |
| replication_type                     | []                         |
| reserved_percentage                  | 0                          |
| storage_protocol                     | iSCSI                      |
| thin_provisioning_support            | True                       |
| timestamp                            | 2024-02-15T15:35:59.133353 |
| total_capacity_gb                    | 72.54609375074506          |
| total_hosts                          | 2                          |
| total_pgroups                        | 2                          |
| total_snapshots                      | 3                          |
| total_volumes                        | 1                          |
| usec_per_read_op                     | 0                          |
| usec_per_write_op                    | 0                          |
| vendor_name                          | Pure Storage               |
| volume_backend_name                  | fa-1                       |
| writes_per_sec                       | 0                          |
+--------------------------------------+----------------------------+

Here there are no replication targets or types listed, but the replication capability is defined.

Simon
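For reference, the same pool details can also be fetched programmatically; a minimal sketch using python-cinderclient with an admin-scoped keystoneauth session (all credential values are placeholders):

```python
from keystoneauth1 import session
from keystoneauth1.identity import v3
from cinderclient import client as cinder_client

# Placeholder admin credentials; the get-pools API is admin-only.
auth = v3.Password(auth_url="https://keystone.example.com/v3",
                   username="admin", password="secret",
                   project_name="admin",
                   user_domain_name="Default",
                   project_domain_name="Default")
cinder = cinder_client.Client("3", session=session.Session(auth=auth))

# Equivalent to `cinder get-pools --detail`; the client exposes each
# pool's reported capabilities as attributes of the pool object.
for pool in cinder.pools.list(detailed=True):
    print(pool.name,
          getattr(pool, "replication_enabled", None),
          getattr(pool, "replication_type", None))
```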
Thank you Simon.

It seems that different backends expose different fields in the output of this command. I do not get the `replication_capability` field for Ceph; I only get the `replication_enabled` field. So my problem remains that Ceph uses internal replication, but this is completely transparent to Cinder.

greetings
Josephine (Luzi)