[Openstack] Cinder-volume active/passive failover

Marco Marino marino.mrc at gmail.com
Fri Sep 2 19:09:31 UTC 2016


Hi, I'm trying to develop an active/passive cluster solution for openstack-cinder-volume.
Basically I have a SAN that exposes a LUN to 2 openstack-cinder-volume
hosts, and I used Pacemaker to create the following resources:

1 DRBD device + filesystem mounted on /etc/target
1 DRBD device + filesystem mounted on /var/lib/cinder
1 virtual IP address used as iscsi_ip_address in cinder.conf (yes, on both
nodes)
1 systemd:target resource that starts the target daemon in an active/passive
manner (only on the active node of the cluster)
1 systemd:openstack-cinder-volume resource that starts the cinder-volume daemon
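
For reference, this is more or less how I created the resources with pcs
(the IP address, netmask, DRBD resource names and device paths below are
placeholders, not my real values):

pcs resource create StorageVIP ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=24
pcs resource create targetd systemd:target
pcs resource create cinder-volume systemd:openstack-cinder-volume
pcs resource group add group1 StorageVIP targetd cinder-volume
pcs resource create res_drbd_1_2 ocf:linbit:drbd drbd_resource=r0 op monitor interval=30s
pcs resource master ms_drbd_1_2 res_drbd_1_2 master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs resource create res_Filesystem_1 ocf:heartbeat:Filesystem device=/dev/drbd0 directory=/etc/target fstype=ext4

(and the same pattern for res_drbd_2 / ms_drbd_2 / res_Filesystem_2 mounted
on /var/lib/cinder)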

I'm using LVM as the backend in cinder-volume, and cinder.conf has the same
content on all cinder-volume nodes.
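
To be precise, the backend section looks like this (the iscsi_ip_address is
the virtual IP managed by the cluster; the address shown here is just an
example):

[DEFAULT]
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_helper = lioadm
iscsi_protocol = iscsi
iscsi_ip_address = 192.168.1.100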

Status of the cluster:

[root@mitaka-cinder-volume1-env3 ~]# pcs status
Cluster name: cinder_iscsi_cluster
Last updated: Fri Sep  2 21:00:41 2016        Last change: Fri Sep  2 20:29:51 2016 by root via cibadmin on mitaka-cinder-volume2-env3
Stack: corosync
Current DC: mitaka-cinder-volume2-env3 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
2 nodes and 9 resources configured

Online: [ mitaka-cinder-volume1-env3 mitaka-cinder-volume2-env3 ]

Full list of resources:

 Resource Group: group1
     StorageVIP    (ocf::heartbeat:IPaddr2):    Started mitaka-cinder-volume2-env3
     targetd    (systemd:target):    Started mitaka-cinder-volume2-env3
     cinder-volume    (systemd:openstack-cinder-volume):    Started mitaka-cinder-volume2-env3
 Master/Slave Set: ms_drbd_1_2 [res_drbd_1_2]
     Masters: [ mitaka-cinder-volume2-env3 ]
     Slaves: [ mitaka-cinder-volume1-env3 ]
 res_Filesystem_1    (ocf::heartbeat:Filesystem):    Started mitaka-cinder-volume2-env3
 Master/Slave Set: ms_drbd_2 [res_drbd_2]
     Masters: [ mitaka-cinder-volume2-env3 ]
     Slaves: [ mitaka-cinder-volume1-env3 ]
 res_Filesystem_2    (ocf::heartbeat:Filesystem):    Started mitaka-cinder-volume2-env3

PCSD Status:
  mitaka-cinder-volume1-env3: Online
  mitaka-cinder-volume2-env3: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@mitaka-cinder-volume1-env3 ~]#


Status of constraints:

[root@mitaka-cinder-volume1-env3 ~]# pcs constraint
Location Constraints:
Ordering Constraints:
  promote ms_drbd_1_2 then start res_Filesystem_1 (score:INFINITY)
  res_Filesystem_1 then group1 (score:INFINITY)
  promote ms_drbd_2 then start res_Filesystem_2 (score:INFINITY)
  res_Filesystem_2 then group1 (score:INFINITY)
Colocation Constraints:
  res_Filesystem_1 with ms_drbd_1_2 (score:INFINITY) (with-rsc-role:Master)
  group1 with res_Filesystem_1 (score:INFINITY)
  res_Filesystem_2 with ms_drbd_2 (score:INFINITY) (with-rsc-role:Master)
  group1 with res_Filesystem_2 (score:INFINITY)
[root@mitaka-cinder-volume1-env3 ~]#
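
(For completeness, I created these constraints with commands like the
following; I show only the ones for the first DRBD device, the second is
identical:)

pcs constraint order promote ms_drbd_1_2 then start res_Filesystem_1
pcs constraint order res_Filesystem_1 then group1
pcs constraint colocation add res_Filesystem_1 with master ms_drbd_1_2 INFINITY
pcs constraint colocation add group1 with res_Filesystem_1 INFINITY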




I'd like to know if this is the right approach, because I have some doubts:
1) Should I prevent the LUN used by the VG cinder-volumes from being seen at
the same time by all nodes of the cluster? (Please let me know how I can
avoid this in a "clustered" manner; see the sketch after these questions for
the kind of thing I have in mind.)
2) When I create a new volume, I see the logical volume on the active node
but not on the passive node. Why does this happen? How can I rescan the
logical volumes on the passive node? (lvscan doesn't find the new volumes...)
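
For point 1, I was wondering if the "clustered" way is to let Pacemaker
activate the volume group exclusively on the active node, something like
this (only a sketch, I have not configured it yet; "rootvg" is just an
example of a local VG):

# in /etc/lvm/lvm.conf on both nodes, keep cinder-volumes out of automatic
# activation, e.g.:
#   volume_list = [ "rootvg" ]
# then let the cluster activate the VG only where the group runs:
pcs resource create cinder_vg ocf:heartbeat:LVM volgrpname=cinder-volumes exclusive=true
pcs resource group add group1 cinder_vg --before targetd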

Basically the failover works well: I have an instance booted from a volume
that continues to work when I shut down the active node. With netstat I can
observe that the iSCSI connection between the compute node and the (new)
active node is re-created when the failover happens.
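
(For example, on the compute node I check the iSCSI session towards the VIP
roughly like this:)

netstat -tnp | grep 3260
iscsiadm -m session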
However, please give me more suggestions on this topic. I have a SAN and 2
dedicated cinder-volume nodes, and I'm trying to build an active/passive
cluster. Any idea is welcome.
Thank you