[openstack][stein][cinder] capacity filter is not working

Ignazio Cassano ignaziocassano at gmail.com
Thu Jan 13 11:32:02 UTC 2022


Hello, I am using the nfsgold volume type.
[root at tst-controller-01 ansible]# cinder type-show nfsgold
+---------------------------------+--------------------------------------+
| Property                        | Value                                |
+---------------------------------+--------------------------------------+
| description                     | None                                 |
| extra_specs                     | volume_backend_name : nfsgold        |
| id                              | fd8b1cc8-4c3a-490d-bc95-29e491f850cc |
| is_public                       | True                                 |
| name                            | nfsgold                              |
| os-volume-type-access:is_public | True                                 |
| qos_specs_id                    | None                                 |
+---------------------------------+--------------------------------------+
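Since both backends report volume_backend_name = nfsgold, this type matches both
pools and the choice between them should come down to the capacity weigher. As a
sanity check, a minimal sketch of the scheduler options involved (these are the
Stein defaults, so they only matter if something else in cinder.conf overrides them):

[DEFAULT]
# Stein defaults, shown here only for reference
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
scheduler_default_weighers = CapacityWeigher
# a positive multiplier (default 1.0) favours the pool with more free space
capacity_weight_multiplier = 1.0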

cinder get-pools
+----------+-----------------------------------------------------------------------+
| Property | Value                                                                 |
+----------+-----------------------------------------------------------------------+
| name     | cinder-cluster-1 at nfsgold2#10.102.189.156:/svm_tstcinder_cl2_volssd   |
+----------+-----------------------------------------------------------------------+
+----------+-----------------------------------------------------------------------+
| Property | Value                                                                 |
+----------+-----------------------------------------------------------------------+
| name     | cinder-cluster-1 at nfsgold1#10.102.189.155:/svm_tstcinder_cl1_volssd   |
+----------+-----------------------------------------------------------------------+
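The pool names alone do not show the numbers the weigher compares; a detailed
listing does. The field names below are what the scheduler consumes, while the
values are placeholders, not taken from this deployment:

cinder get-pools --detail
# relevant per-pool fields (illustrative values):
# | total_capacity_gb           | 1024.0 |
# | free_capacity_gb            | 512.0  |
# | reserved_percentage         | 0      |
# | max_over_subscription_ratio | 20.0   |
# | thin_provisioning_support   | True   |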

I noticed that nfsgold2 is also used when nfsgold1 is almost full.
I expected the volume to be created on the share with more available space.
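
As a rough illustration of what the CapacityWeigher should do (numbers invented,
assuming reserved_percentage = 0 and no over-subscription):

  free(nfsgold1) =  50 GB  -> lower weight
  free(nfsgold2) = 400 GB  -> higher weight

With the default capacity_weight_multiplier = 1.0 the backend with more free
space gets the higher weight, so nfsgold2 should be picked until the free space
on the two shares evens out.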
Ignazio


On Thu, 13 Jan 2022 at 12:03, Gorka Eguileor <geguileo at redhat.com> wrote:

> On 13/01, Ignazio Cassano wrote:
> > Hello,
> > I am using OpenStack Stein on CentOS 7 with the NetApp ONTAP driver.
> > It seems the capacity filter is not working and volumes are always created
> > on the first share, where less space is available.
> > My configuration is posted here:
> > enabled_backends = nfsgold1, nfsgold2
> >
> > [nfsgold1]
> > nas_secure_file_operations = false
> > nas_secure_file_permissions = false
> > volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
> > netapp_storage_family = ontap_cluster
> > netapp_storage_protocol = nfs
> > netapp_vserver = svm-tstcinder2-cl1
> > netapp_server_hostname = faspod2.csi.it
> > netapp_server_port = 80
> > netapp_login = apimanager
> > netapp_password = password
> > nfs_shares_config = /etc/cinder/nfsgold1_shares
> > volume_backend_name = nfsgold
> > #nfs_mount_options = lookupcache=pos
> > nfs_mount_options = lookupcache=pos
> >
> >
> > [nfsgold2]
> > nas_secure_file_operations = false
> > nas_secure_file_permissions = false
> > volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
> > netapp_storage_family = ontap_cluster
> > netapp_storage_protocol = nfs
> > netapp_vserver = svm-tstcinder2-cl2
> > netapp_server_hostname = faspod2.csi.it
> > netapp_server_port = 80
> > netapp_login = apimanager
> > netapp_password = password
> > nfs_shares_config = /etc/cinder/nfsgold2_shares
> > volume_backend_name = nfsgold
> > #nfs_mount_options = lookupcache=pos
> > nfs_mount_options = lookupcache=pos
> >
> >
> >
> > Volumes are always created on nfsgold1 even if it has less space available
> > than the nfsgold2 share.
> > Thanks
> > Ignazio
>
> Hi,
>
> What volume type are you using to create the volumes?  If you don't
> > define it, it will use the default from the cinder.conf file.
>
> What are the extra specs of the volume type?
>
> What pool info are the NetApp backends reporting?
>
> It's usually a good idea to enable debugging on the schedulers and look
> at the details of how they are making the filtering and weighting
> decisions.
>
> Cheers,
> Gorka.
>
>
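
For reference, a minimal sketch of the debug step suggested above (service name
and log path assume a CentOS 7 RDO-style install; adjust to the actual deployment):

# on the node running cinder-scheduler, in /etc/cinder/cinder.conf
[DEFAULT]
debug = True

# restart the scheduler, create a test volume, then check how the pools were
# filtered and weighed (exact messages vary by release, but the CapacityFilter
# debug lines show requested vs. available space per backend)
systemctl restart openstack-cinder-scheduler
grep -i capacity /var/log/cinder/scheduler.log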