Hello, I am using the nfsgold volume type.

[root@tst-controller-01 ansible]# cinder type-show nfsgold
+---------------------------------+--------------------------------------+
| Property                        | Value                                |
+---------------------------------+--------------------------------------+
| description                     | None                                 |
| extra_specs                     | volume_backend_name : nfsgold        |
| id                              | fd8b1cc8-4c3a-490d-bc95-29e491f850cc |
| is_public                       | True                                 |
| name                            | nfsgold                              |
| os-volume-type-access:is_public | True                                 |
| qos_specs_id                    | None                                 |
+---------------------------------+--------------------------------------+

cinder get-pools
+----------+--------------------------------------------------------------------+
| Property | Value                                                              |
+----------+--------------------------------------------------------------------+
| name     | cinder-cluster-1@nfsgold2#10.102.189.156:/svm_tstcinder_cl2_volssd |
+----------+--------------------------------------------------------------------+
+----------+--------------------------------------------------------------------+
| Property | Value                                                              |
+----------+--------------------------------------------------------------------+
| name     | cinder-cluster-1@nfsgold1#10.102.189.155:/svm_tstcinder_cl1_volssd |
+----------+--------------------------------------------------------------------+

I noticed that nfsgold2 is only used once nfsgold1 is almost full.
I expected the volume to be created on the share with more available space.
Ignazio

On Thu, Jan 13, 2022 at 12:03, Gorka Eguileor <geguileo@redhat.com> wrote:

On 13/01, Ignazio Cassano wrote:
> Hello,
> I am using OpenStack Stein on CentOS 7 with the NetApp ONTAP driver.
> The capacity filter seems not to be working: volumes are always created
> on the first share, even when it has less free space available.
> My configuration is posted here:
> enabled_backends = nfsgold1, nfsgold2
>
> [nfsgold1]
> nas_secure_file_operations = false
> nas_secure_file_permissions = false
> volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
> netapp_storage_family = ontap_cluster
> netapp_storage_protocol = nfs
> netapp_vserver = svm-tstcinder2-cl1
> netapp_server_hostname = faspod2.csi.it
> netapp_server_port = 80
> netapp_login = apimanager
> netapp_password = password
> nfs_shares_config = /etc/cinder/nfsgold1_shares
> volume_backend_name = nfsgold
> #nfs_mount_options = lookupcache=pos
> nfs_mount_options = lookupcache=pos
>
>
> [nfsgold2]
> nas_secure_file_operations = false
> nas_secure_file_permissions = false
> volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
> netapp_storage_family = ontap_cluster
> netapp_storage_protocol = nfs
> netapp_vserver = svm-tstcinder2-cl2
> netapp_server_hostname = faspod2.csi.it
> netapp_server_port = 80
> netapp_login = apimanager
> netapp_password = password
> nfs_shares_config = /etc/cinder/nfsgold2_shares
> volume_backend_name = nfsgold
> #nfs_mount_options = lookupcache=pos
> nfs_mount_options = lookupcache=pos
>
>
>
> Volumes are always created on nfsgold1, even when it has less free
> space available than the nfsgold2 share.
> Thanks
> Ignazio

Hi,

What volume type are you using to create the volumes? If you don't
define one, it will use the default type set in the cinder.conf file.
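For example, you can check where a default comes from and list the
existing types like this (the type name below is only illustrative;
default_volume_type and the cinder client commands are standard):

    # in /etc/cinder/cinder.conf on the API nodes
    [DEFAULT]
    default_volume_type = sometype

    $ cinder type-list
    $ cinder type-show sometype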

What are the extra specs of the volume type?
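They can be listed for all types at once with the cinder client, e.g.:

    $ cinder extra-specs-list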

What pool info are the NetApp backends reporting?
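The scheduler decides based on the stats each pool reports, and those
can be inspected with:

    $ cinder get-pools --detail

Look at total_capacity_gb, free_capacity_gb, and the thin provisioning
related values for each pool.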

It's usually a good idea to enable debugging on the schedulers and look
at the details of how they are making the filtering and weighing
decisions.
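A minimal sketch of that setup (the filter/weigher values shown are
Cinder's defaults, listed here just to make them explicit; the service
name and log path are from a CentOS/RDO install and may differ):

    # /etc/cinder/cinder.conf on the scheduler nodes
    [DEFAULT]
    debug = True
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
    scheduler_default_weighers = CapacityWeigher

    # restart the scheduler and check its log
    $ systemctl restart openstack-cinder-scheduler
    $ grep -i weigh /var/log/cinder/scheduler.log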

Cheers,
Gorka.