Exactly, for cinder A/A we would remove it from the pacemaker config (as recommended) and only define a cluster with zookeeper. I still haven't had the time to test it properly, but I definitely will.

Quoting Gorka Eguileor <geguileo@redhat.com>:
On 22/09, Eugen Block wrote:
Hi,
thanks for the clarification. We don't set "backend_host" at all at the moment, only "host" to the virtual hostname which has the virtual IP assigned by pacemaker. This seems to work fine.
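For context, that Active-Passive setup presumably looks something like this in cinder.conf (the virtual host name below is invented for illustration):

```ini
[DEFAULT]
# Virtual host name managed by pacemaker; the virtual IP follows it
# on failover, so the service identity stays stable.
host = cinder-vip   # example name
```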
Hi,
I assume that in an Active-Active deployment you won't be using pacemaker, and in that case I would not define "host" at all.
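A minimal sketch of what that could look like in cinder.conf (the cluster name is made up for illustration):

```ini
# /etc/cinder/cinder.conf on each controller (Active-Active, no pacemaker)
[DEFAULT]
# Do NOT set "host" or "backend_host"; each node then reports
# under its own host name.
cluster = mycluster   # example name, identical on every node
```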
Cheers, Gorka.
Changing a deployment from Active-Passive to Active-Active is a bit trickier, because you need to leave one of the cinder-volume services running (at least once) with the old `backend_host` so it can "move" resources (volumes/snapshots) to the "cluster".
I'm not sure if we'll even try that; I was thinking more about new deployments that would use active/active from the beginning. If we decided to switch, we'd test it heavily before changing anything. ;-)
Thanks again! Eugen
Quoting Gorka Eguileor <geguileo@redhat.com>:
On 31/08, Eugen Block wrote:
I think I found my answers. Currently I only have a single control node in my test lab, but I'll redeploy it with three control nodes and test it with zookeeper. With a single control node the zookeeper and cinder cluster config seem to work.
Hi Eugen,
Besides deploying the DLM, configuring it in Cinder, and setting the `cluster` configuration option, you should also be careful with the `host`/`backend_host` configuration options.
In Active-Passive deployments we set the `backend_host` configuration option so the service identity is preserved when cinder-volume fails over to a different controller host. In Active-Active we actually want each host to run cinder-volume with a different value, so we usually leave it unset and it defaults to the controller's host name.
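To illustrate the difference (the option names are real Cinder options, the values and backend section name are invented):

```ini
# Active-Passive: pin the per-backend identity so a failover node
# picks up the same resources.
[lvm-backend]
backend_host = cinder-vip   # example value

# Active-Active: leave host/backend_host unset so each node uses
# its own host name, and join all nodes with the "cluster" option.
[DEFAULT]
cluster = mycluster         # example name, identical on all nodes
```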
Changing a deployment from Active-Passive to Active-Active is a bit trickier, because you need to leave one of the cinder-volume services running (at least once) with the old `backend_host` so it can "move" resources (volumes/snapshots) to the "cluster".
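As a sketch of that transition (values invented): on one node you would temporarily run cinder-volume with both the old identity and the new cluster name, so that it can adopt the existing volumes/snapshots into the cluster:

```ini
# Temporary config on ONE node during the A/P -> A/A migration:
[DEFAULT]
cluster = mycluster          # the new cluster name (example)

[lvm-backend]
backend_host = cinder-vip    # the OLD value used while Active-Passive
```

Once that service has started and moved the resources to the cluster, `backend_host` can be removed from all nodes.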
Cheers, Gorka.
Quoting Eugen Block <eblock@nde.ag>:
Hi,
I didn't mean to hijack the other thread, so I'll start a new one. There are some pages I found, including Gorka's article [1], but I don't really understand yet how to configure it.
We don't use any of the automated deployment tools like TripleO (we created our own), so is there a guide showing how to set up cinder-volume active/active? I see in my lab environment that python3-tooz is already installed on the control node, but how do I use it? Besides the "cluster" config option in cinder.conf (is that defined when setting up the DLM?), what else is required? I also found this thread [2] pointing to the source code, but that doesn't really help me at this point. Any pointers to a how-to or deployment guide would be highly appreciated!
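For what it's worth, a hedged sketch of the pieces involved (the ZooKeeper addresses are made up; python3-tooz is not called directly, Cinder consumes it through the [coordination] section):

```ini
[DEFAULT]
# Same value on every cinder-volume node you want in the cluster.
cluster = mycluster          # example cluster name

[coordination]
# tooz backend URL pointing at the DLM, here ZooKeeper (example hosts).
backend_url = zk://zk1:2181,zk2:2181,zk3:2181
```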
Thanks, Eugen
[1] https://gorka.eguileor.com/a-cinder-road-to-activeactive-ha/ [2] https://www.mail-archive.com/openstack@lists.openstack.org/msg18385.html