On 13/01, Ignazio Cassano wrote:
Hello, I am using OpenStack Stein on CentOS 7 with the NetApp ONTAP driver. The capacity filter does not seem to be working: volumes are always created on the first share, which has less free space available. My configuration is posted here:

enabled_backends = nfsgold1, nfsgold2
[nfsgold1]
nas_secure_file_operations = false
nas_secure_file_permissions = false
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = svm-tstcinder2-cl1
netapp_server_hostname = faspod2.csi.it
netapp_server_port = 80
netapp_login = apimanager
netapp_password = password
nfs_shares_config = /etc/cinder/nfsgold1_shares
volume_backend_name = nfsgold
#nfs_mount_options = lookupcache=pos
nfs_mount_options = lookupcache=pos
[nfsgold2]
nas_secure_file_operations = false
nas_secure_file_permissions = false
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = svm-tstcinder2-cl2
netapp_server_hostname = faspod2.csi.it
netapp_server_port = 80
netapp_login = apimanager
netapp_password = password
nfs_shares_config = /etc/cinder/nfsgold2_shares
volume_backend_name = nfsgold
#nfs_mount_options = lookupcache=pos
nfs_mount_options = lookupcache=pos
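
For reference, each nfs_shares_config file is expected to contain one NFS export per line in host:/export format. The entries below are placeholders, not the actual exports:

/etc/cinder/nfsgold1_shares (placeholder contents):
svm-tstcinder2-cl1.csi.it:/vol_gold1

/etc/cinder/nfsgold2_shares (placeholder contents):
svm-tstcinder2-cl2.csi.it:/vol_gold2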
Volumes are always created on nfsgold1, even though it has less free space than the nfsgold2 share.

Thanks
Ignazio
Hi,

What volume type are you using to create the volumes? If you don't define one, the default volume type from the cinder.conf file is used. What are the extra specs of that volume type? What pool info are the NetApp backends reporting?

It's usually a good idea to enable debugging on the schedulers and look at the details of how they make the filtering and weighting decisions.

Cheers,
Gorka.
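
P.S. A minimal sketch of what I mean. The commands are the standard cinder/openstack clients; the volume type name here is just a placeholder for whatever type you are using:

openstack volume type show <your-volume-type>
cinder get-pools --detail

The second command shows the capacity and allocation figures each backend pool is reporting to the scheduler. To see the scheduler's decisions, turn on debug logging in cinder.conf on the scheduler nodes; the filter and weigher options below are the stock defaults, shown only so you can check whether they were overridden in your deployment:

[DEFAULT]
debug = True
#scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
#scheduler_default_weighers = CapacityWeigher

With debug enabled, the scheduler log shows which pools pass each filter and the weight assigned to every remaining pool, which should make it clear why nfsgold1 keeps winning.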