Hi,

In my environment the OpenStack volume service runs on the compute nodes. When the openstack-cinder-volume service goes down, the Block Storage volumes that were created through it cannot be managed until the service comes back up.

I need help configuring HA for the cinder-volume service.

Thanks & B'Rgds,
Rony
Hi Rony,

You can configure Cinder API, Scheduler, and Backup in Active-Active, and Cinder Volume in Active-Passive using Pacemaker.

Please check the "Highly available Block Storage API" documentation [1] for a detailed guide.

Cheers,
Gorka.

[1]: https://docs.openstack.org/ha-guide/storage-ha-block.html
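For reference, the active-passive cinder-volume setup from the HA guide can be sketched roughly as below. The shared host name and monitor interval are illustrative; the key point is that every node must use the same `host` value in cinder.conf so they all manage the same set of volumes:

```shell
# In /etc/cinder/cinder.conf on every node that can run cinder-volume,
# set a shared host name (value below is illustrative):
#   [DEFAULT]
#   host = cinder-cluster-hostgroup

# Stop the service from being managed outside Pacemaker:
systemctl disable --now openstack-cinder-volume

# Create an active-passive Pacemaker resource; Pacemaker starts the
# service on exactly one node and fails it over when that node dies:
pcs resource create openstack-cinder-volume systemd:openstack-cinder-volume \
    op monitor interval=30s

# Verify which node currently runs the resource:
pcs status
```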
Hi Gorka,

I configured it according to your link. My backend storage is LVM, but the created volumes are not synced to all compute nodes, and when the active cinder-volume node changes in Pacemaker, I am unable to do volume operations such as extend. Here is the error log. Could you please help me solve this?

[root@controller1 ~]# openstack volume service list
+------------------+------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                         | Zone | Status  | State | Updated At                 |
+------------------+------------------------------+------+---------+-------+----------------------------+
| cinder-volume    | cinder-cluster-hostgroup@lvm | nova | enabled | up    | 2019-06-22T05:07:15.000000 |
| cinder-scheduler | cinder-cluster-hostgroup     | nova | enabled | up    | 2019-06-22T05:07:18.000000 |
+------------------+------------------------------+------+---------+-------+----------------------------+

[root@compute1 ~]# pcs status
Cluster name: cindervolumecluster
Stack: corosync
Current DC: compute1 (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
Last updated: Sat Jun 22 11:08:35 2019
Last change: Fri Jun 21 16:00:08 2019 by root via cibadmin on compute1

3 nodes configured
1 resource configured

Online: [ compute1 compute2 compute3 ]

Full list of resources:

 openstack-cinder-volume       (systemd:openstack-cinder-volume):      Started compute2

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager [req-341fd5ce-dcdb-482e-9309-4c5bfe272137 b0ff2eb16d9b4af58e812d47e0bc753b fc78335beea842038579b36c5a3eef7d - default default] Extend volume failed.: ProcessExecutionError: Unexpected error while running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvdisplay --noheading -C -o Attr cinder-lvm-volumes/volume-5369a96c-0369-4bb2-9ea1-759359a418be
Exit code: 5
Stdout: u''
Stderr: u'File descriptor 20 (/dev/urandom) leaked on lvdisplay invocation. Parent PID 18064: /usr/bin/python2\n  Failed to find logical volume "cinder-lvm-volumes/volume-5369a96c-0369-4bb2-9ea1-759359a418be"\n'
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager Traceback (most recent call last):
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 2622, in extend_volume
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager     self.driver.extend_volume(volume, new_size)
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 576, in extend_volume
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager     self._sizestr(new_size))
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line 818, in extend_volume
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager     has_snapshot = self.lv_has_snapshot(lv_name)
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line 767, in lv_has_snapshot
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager     run_as_root=True)
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in _execute
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager     result = self.__execute(*args, **kwargs)
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 128, in execute
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager     return processutils.execute(*cmd, **kwargs)
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 424, in execute
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager     cmd=sanitized_cmd)
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager ProcessExecutionError: Unexpected error while running command.
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvdisplay --noheading -C -o Attr cinder-lvm-volumes/volume-5369a96c-0369-4bb2-9ea1-759359a418be
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager Exit code: 5
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager Stdout: u''
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager Stderr: u'File descriptor 20 (/dev/urandom) leaked on lvdisplay invocation. Parent PID 18064: /usr/bin/python2\n  Failed to find logical volume "cinder-lvm-volumes/volume-5369a96c-0369-4bb2-9ea1-759359a418be"\n'
2019-06-21 15:47:56.791 17754 ERROR cinder.volume.manager
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Thanks & B'Rgds,
Rony

-----Original Message-----
From: Gorka Eguileor [mailto:geguileo@redhat.com]
Sent: Friday, June 21, 2019 3:24 PM
To: Md. Farhad Hasan Khan
Cc: openstack-discuss@lists.openstack.org
Subject: Re: Openstack cinder HA
On 22/06, Md. Farhad Hasan Khan wrote:
Hi,

Unfortunately cinder-volume configured with LVM will not support any kind of HA deployment. The reason is that the actual volumes are local to a single node, so other nodes don't have access to the storage, and cinder-volume cannot manage something it doesn't have access to.

That is why LVM is not recommended for production environments: when a node goes down you lose both the data and control planes.

Regards,
Gorka.
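The node-local nature of the LVM backend can be checked directly on the nodes; this is a sketch assuming the volume group name from the traceback above, and the outputs shown in comments are illustrative:

```shell
# On the node where the volume was originally created, the logical
# volume exists in the local volume group:
lvs cinder-lvm-volumes
# (lists volume-5369a96c-0369-4bb2-9ea1-759359a418be)

# On the node Pacemaker failed over to, the volume group itself is
# absent, so every LVM operation fails -- which is why the traceback
# shows "Failed to find logical volume":
lvs cinder-lvm-volumes
# (reports the volume group was not found)
```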
Hi Gorka,

I checked with NFS instead of LVM. Now it's working. Thanks a lot for your quick suggestion.

Thanks & B'Rgds,
Rony

-----Original Message-----
From: Gorka Eguileor [mailto:geguileo@redhat.com]
Sent: Saturday, June 22, 2019 2:04 PM
To: Md. Farhad Hasan Khan
Cc: openstack-discuss@lists.openstack.org
Subject: Re: Openstack cinder HA
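For anyone following along: a shared-storage backend such as NFS is what makes the failover work, because every node in the cluster sees the same volume files. A minimal cinder.conf sketch for such a setup — the backend name, shares file path, and mount point below are illustrative, not taken from this thread:

```ini
# /etc/cinder/cinder.conf -- illustrative NFS backend configuration
[DEFAULT]
enabled_backends = nfs
# Shared host name so any node in the Pacemaker cluster manages the
# same volumes (must match on every node):
host = cinder-cluster-hostgroup

[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = nfs
# File listing the NFS shares, one per line, e.g. "nfsserver:/export/cinder":
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = /var/lib/cinder/mnt
```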
participants (2)

- Gorka Eguileor
- Md. Farhad Hasan Khan