[Openstack][cinder] multiple backend generic volume group issue
Ignazio Cassano
ignaziocassano at gmail.com
Fri Jan 27 20:12:47 UTC 2023
Hello All,
I created a cinder configuration with multiple backends based on NetApp
ONTAP NFS, all sharing the same backend name and the same volume type.
So my nfsgold1, nfsgold2 and nfsgold3 backends are all addressed by the
nfsgold volume backend name and the nfsgold volume type.
Each one has its own SVM on the NetApp.
This helps me distribute nfsgold volumes using scheduler filters based on
free capacity.
So when I create a volume with the nfsgold volume type, the scheduler
places it on nfsgold1, nfsgold2 or nfsgold3 using the capacity filter.
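A multi-backend layout like this can be sketched in cinder.conf. This is only an illustration: the SVM names and share-list paths below are placeholders I made up, not taken from the original setup.

```ini
[DEFAULT]
enabled_backends = nfsgold1,nfsgold2,nfsgold3

[nfsgold1]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = svm_gold1                      ; hypothetical SVM name
nfs_shares_config = /etc/cinder/nfs_shares_gold1
; Same backend name on all three sections, so a single volume type
; with extra spec volume_backend_name=nfsgold matches them all
volume_backend_name = nfsgold

[nfsgold2]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = svm_gold2                      ; hypothetical SVM name
nfs_shares_config = /etc/cinder/nfs_shares_gold2
volume_backend_name = nfsgold

; [nfsgold3] is configured the same way with its own SVM
```

With this, the nfsgold volume type only needs the extra spec `volume_backend_name=nfsgold`, and the scheduler is free to pick any of the three real backends.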
Since some virtual machines need their volumes on the same backend (for
example nfsgold1) because they belong to the same application, I use a
cinder scheduler hint.
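The scheduler-hint approach can be sketched as follows, assuming the `same_host` hint (handled by cinder's SameBackendFilter) is enabled in the scheduler; the volume names and the UUID are hypothetical:

```shell
# First volume lands wherever the capacity filter decides
openstack volume create --type nfsgold --size 10 app-vol1

# Pin the next volume to the same real backend as app-vol1
# (replace the UUID below with app-vol1's actual id)
openstack volume create --type nfsgold --size 10 \
    --hint same_host=6f05f2f4-0000-0000-0000-000000000000 app-vol2
```

This only constrains scheduling of the volumes themselves; as described below, it does not influence where the generic volume group is scheduled.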
Why do I need to store those volumes on the same backend?
Because they must belong to the same generic volume group, so that they
can be snapshotted at the same time.
For this reason I need to create a generic volume group.
Generic volume group creation requires the volume type, in my case nfsgold.
But, like a volume, when I create a generic volume group it is scheduled
on nfsgold1, nfsgold2 or nfsgold3, as is obvious when looking in the
cinder database. So if I want to group the volumes of an application I must:
1) check that they are all on the same backend (nfsgold1/nfsgold2/nfsgold3);
2) check on which backend the volume group is allocated
(nfsgold1/nfsgold2/nfsgold3), and this can be done only by looking in the
cinder database.
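For step 1, at least, an admin can read a volume's real backend through the API instead of the database, via the admin-only `os-vol-host-attr:host` attribute; the volume names here are placeholders:

```shell
# Admin-only: prints something like hostname@nfsgold1#pool,
# revealing which real backend each volume landed on
openstack volume show myvol1 -c os-vol-host-attr:host
openstack volume show myvol2 -c os-vol-host-attr:host
```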
Volumes and volume groups must stay on the same real backend.
If they do not, creating a group snapshot fails, because cinder checks
the host related to the real backend (nfsgold1/nfsgold2/nfsgold3) and
returns errors.
When I create a volume group via the API or the command line, I must
specify the volume type, but I cannot know which real backend it will be
associated with without looking in the cinder database.
I think this is a bug.
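For reference, the workflow in question looks roughly like this with the cinder CLI; the group type, group name, and volume UUIDs are placeholders:

```shell
# Create the group: a group type and the volume type must be given,
# but there is no way to choose the real backend it lands on
cinder group-create --name app-group my_group_type nfsgold

# Add the application's volumes (must be on the group's real backend)
cinder group-update app-group --add-volumes <uuid1>,<uuid2>

# Snapshot all volumes in the group at the same time
cinder group-snapshot-create --name app-snap app-group
```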
In the above situation, how can I obtain a consistent volume group snapshot?
Sorry for my bad English.
I hope who is reading can understand what I mean.
Ignazio