On 31/08, Eugen Block wrote:
I think I found my answers. Currently I only have a single control node in my test lab, but I'll redeploy it with three control nodes and test it with zookeeper. With a single control node the zookeeper and cinder cluster config seem to work.
Hi Eugen,

Besides deploying the DLM, configuring it in Cinder, and setting the `cluster` configuration option, you should also be careful with the `host`/`backend_host` configuration option. In Active-Passive deployments we set `backend_host` so it is preserved when cinder-volume fails over to a different controller host, but in Active-Active we actually want each host to run cinder-volume with a different value, so we usually leave it unset and it defaults to the controller's host name.

Changing a deployment from Active-Passive to Active-Active is a bit trickier, because you need to leave one of the cinder-volume services running (at least once) with the old `backend_host` so it can "move" the resources (volumes/snapshots) to the "cluster".

Cheers,
Gorka.
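For anyone following along later, a minimal cinder.conf sketch of what Gorka describes could look like the snippet below. This is only an illustration, not a tested config: the backend section name (lvm-1), the cluster name (mycluster) and the ZooKeeper hosts (controller1-3) are made-up placeholders, and the exact backend_url depends on which DLM you actually deploy.

  [DEFAULT]
  # Must be identical on every controller running cinder-volume;
  # this is what makes the services join the same Active-Active cluster.
  cluster = mycluster
  # Do not set host/backend_host for Active-Active; each controller
  # should register under its own host name.
  enabled_backends = lvm-1

  [coordination]
  # tooz DLM backend; here a ZooKeeper ensemble on the three controllers
  # (comma-separated hosts, assuming the kazoo-based zookeeper driver).
  backend_url = zookeeper://controller1:2181,controller2:2181,controller3:2181

  [lvm-1]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_group = cinder-volumes
  # In Active-Passive you would pin backend_host in this section;
  # leave it unset for Active-Active.
  #backend_host = old-active-passive-name

The same file (apart from any host-specific bits) would go on each controller; the deliberate asymmetry is that `cluster` is identical everywhere while `host`/`backend_host` is not.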
Zitat von Eugen Block <eblock@nde.ag>:
Hi,
I didn't mean to hijack the other thread, so I'll start a new one. There are some pages I found, including Gorka's article [1], but I don't really understand yet how to configure it.
We don't use any of the automated deployments like TripleO etc. (we created our own), so is there any guide showing how to set up cinder-volume active/active? I see in my lab environment that python3-tooz is already installed on the control node, but how do I use it? Besides the "cluster" config option in cinder.conf (is that defined when setting up the DLM?), what else is required? I also found this thread [2] pointing to the source code, but that doesn't really help me at this point. Any pointers to a how-to or deployment guide would be highly appreciated!
Thanks, Eugen
[1] https://gorka.eguileor.com/a-cinder-road-to-activeactive-ha/
[2] https://www.mail-archive.com/openstack@lists.openstack.org/msg18385.html