[openstack-dev][cinder] question on cinder-volume A/A configuration
Bernd Bausch
berndbausch at gmail.com
Fri Sep 20 01:39:02 UTC 2019
On 2019/09/19 7:17 PM, Chen CH Ji wrote:
> compute node 1 and node 2 both use this backend and I can see only 1
> compute service
It's not quite clear to me what you mean by "backend" for compute nodes
1 and 2. But see my guess below.
> [root@controller ~]# cinder service-list
> +------------------+----------------+------+---------+-------+----------------------------+-----------------+
> | Binary           | Host           | Zone | Status  | State | Updated_at                 | Disabled Reason |
> +------------------+----------------+------+---------+-------+----------------------------+-----------------+
> | cinder-scheduler | controller     | nova | enabled | up    | 2019-09-19T09:16:21.000000 | -               |
> | cinder-volume    | FC@POWERMAX_FC | nova | enabled | up    | 2019-09-19T09:16:30.000000 | -               |
> +------------------+----------------+------+---------+-------+----------------------------+-----------------+
You say that you can see only one compute service, but here you are
listing Cinder services, not Nova services.
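If you want to double-check both, the unified client can list each set of
services separately. This is just a sketch; the exact output depends on
your deployment:

  # Nova compute services (should show nova-compute on both compute nodes)
  openstack compute service list

  # Cinder services, equivalent to "cinder service-list"
  openstack volume service list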
> and now I am creating 5 instances from nova at the same time (boot from
> volume). The scheduler will report errors like the following from time
> to time, but actually the 2 services on both compute nodes run fine:
I guess that you are running cinder-volume on both compute nodes, and
your problem is that only one of the cinder-volume services is up. Is
that correct?
If I am guessing correctly, one of the two cinder-volume services is
unable to reach cinder-api, is not running correctly, or is not running
at all. As a result, cinder-api is not aware of it and doesn't list it.
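To verify this, you could check the cinder-volume process and its log on
each compute node, for example (the service name and log path depend on
your distribution; openstack-cinder-volume and /var/log/cinder/volume.log
are the usual RDO defaults, while Ubuntu/Debian packages use
cinder-volume):

  # run on each compute node
  systemctl status openstack-cinder-volume
  tail -n 100 /var/log/cinder/volume.log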
> 2019-09-19 17:53:10.951 20916 WARNING cinder.scheduler.host_manager
> [req-19e722e8-1523-4121-8987-3cb450a8038e
> 071294a19fa8463788822565e0927fce f43175c07dc8415899d6b350dbede772 -
> default default] volume service is down. (host: FC@POWERMAX_FC)
Where do you see this warning message? It looks like this particular
service is not running correctly.
If my guess is correct, I would expect to see additional information in
the log of the problematic cinder-volume service.
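Another quick check, in case it helps, is cinder-manage on the node that
runs the Cinder services; it prints the heartbeat state of every
registered service straight from the database (a sketch, assuming
cinder.conf on that node points at your Cinder database):

  cinder-manage service list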
Bernd.