[Openstack] icehouse cinder on multiple nodes problem
Anatoly Oreshkin
Anatoly.Oreshkin at pnpi.spb.ru
Fri Jun 27 14:23:43 UTC 2014
I've found the reason for my problem: the time between the hosts was not synchronized.

cinder-manage service list

shows a time difference of about 2 minutes:
Binary             Host          Zone        Status    State   Updated At
cinder-scheduler   labosctrl     labosctrl   enabled   :-)     2014-06-24 15:24:30
cinder-backup      labosctrl     labosctrl   enabled   :-)     2014-06-24 15:24:32
cinder-volume      labosctrl     labosctrl   enabled   :-)     2014-06-24 15:24:32
cinder-scheduler   labos02       labos02     enabled   XXX     2014-06-24 15:22:46
cinder-backup      labos02       labos02     enabled   XXX     2014-06-24 15:22:47
cinder-volume      labos02@lvm   labos02     enabled   XXX     2014-06-24 15:22:47
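For anyone comparing their own output: the skew is visible straight from the Updated At column. A quick sketch (GNU date; the timestamps are taken from the listing above, and the commented ntpdate/ntpd lines are the usual fix, not commands from this thread):

```shell
# Compare the freshest heartbeat on each host (values from the listing above):
t_ctrl=$(date -u -d '2014-06-24 15:24:32' +%s)   # controller's cinder-volume
t_comp=$(date -u -d '2014-06-24 15:22:47' +%s)   # compute's cinder-volume
echo "skew: $(( t_ctrl - t_comp ))s"             # prints "skew: 105s" -- past the 60 s limit

# The usual fix: sync both nodes to the same NTP source, e.g. on labos02:
#   ntpdate 10.76.254.220    # one-off step against the controller
#   service ntpd start       # then keep ntpd running
```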
And service_down_time in cinder.conf was set to 60 seconds.
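That also answers the question below about utils.service_is_up: the command reads each service's updated_at heartbeat from the database and compares it to the current time; if the gap exceeds service_down_time (60 s here), the service is reported down. A minimal self-contained sketch of that logic (an illustration, not the actual cinder source):

```python
from datetime import datetime

SERVICE_DOWN_TIME = 60  # seconds; the service_down_time value from cinder.conf

def service_is_up(updated_at, now=None):
    """A service counts as up if its last DB heartbeat is recent enough."""
    now = now or datetime.utcnow()
    return abs((now - updated_at).total_seconds()) <= SERVICE_DOWN_TIME

# With the controller's clock at 15:24:47, the compute node's 15:22:47
# heartbeat looks 120 s old, so it is reported as XXX/down:
now = datetime(2014, 6, 24, 15, 24, 47)
print(service_is_up(datetime(2014, 6, 24, 15, 22, 47), now))  # False -> XXX
print(service_is_up(datetime(2014, 6, 24, 15, 24, 32), now))  # True  -> :-)
```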
>
> I am curious: how does the command "cinder-manage service list" learn whether a
> specific cinder service is up or down?
> From what sources? The database, or something else?
> Looking at the cinder-manage source I've found that the function utils.service_is_up
> checks cinder service availability, but I've not found the source of
> "utils.service_is_up".
>
> Can anybody help me?
>
>
>>
>> According to your advice I've set in cinder.conf:
>> my_ip=10.76.254.222
>>
>> The other parameters you mentioned were already specified as follows:
>>
>> iscsi_ip_address=10.76.254.222
>> iscsi_target_prefix=iqn.2010-10.org.openstack:
>> glance_host=10.76.254.220
>> glance_api_servers=$glance_host:$glance_port
>>
>> rabbit_hosts=10.76.254.220:5672
>> rpc_backend=cinder.openstack.common.rpc.impl_kombu
>>
>> sql_connection=mysql://cinder:29872c1151c04682@10.76.254.220/cinder
>>
>> After that I've restarted the cinder services on both the controller and the
>> compute host; however, that has not helped.
>>
>> What else should I do ?
>>
>>
>> Remark:
>> the compute node labos02 has no DNS record. Could that somehow influence
>> my problem?
>>
>>
>>
>>
>>
>>>
>>> The last time I set up something like that, these were the parameters in
>>> cinder.conf (on all nodes) that were vital to cinder's normal operation:
>>>
>>> - my_ip
>>> - iscsi_ip_prefix
>>> - iscsi_ip_address
>>> - glance_api_servers
>>>
>>>
>>> and the [keystone_authtoken] section. You will need to have some
>>> pointer to your AMQP service (qpid in my case):
>>>
>>> - qpid_host
>>> - rpc_backend
>>>
>>> and to the DB:
>>>
>>> [database]
>>> - connection
>>>
>>> From the looks of it, your configs are missing all of it.
>>>
>>> On 06/25/2014 08:44 AM, Anatoly Oreshkin wrote:
>>>>
>>>> Hello,
>>>>
>>>> I have OpenStack Icehouse running on 2 nodes under CentOS 6.5.
>>>> One node is the controller/network node; the other is a compute node.
>>>> Cinder is installed and running on both nodes. The controller/network
>>>> node has hostname labosctrl (10.76.254.220); the compute node has
>>>> hostname labos02 (10.76.254.222).
>>>>
>>>> The command 'cinder service-list' shows that cinder on labos02 is down,
>>>> although the cinder services are actually running on labos02.
>>>>
>>>>
>>>> +------------------+---------------------------+-----------+---------+-------+----------------------------+-----------------+
>>>> |      Binary      |           Host            |    Zone   |  Status | State |         Updated_at         | Disabled Reason |
>>>> +------------------+---------------------------+-----------+---------+-------+----------------------------+-----------------+
>>>> | cinder-backup    | labos02                   | labos02   | enabled | down  | 2014-06-24T15:23:27.000000 | None            |
>>>> | cinder-backup    | labosctrl.lss.emc.com     | labosctrl | enabled | up    | 2014-06-24T15:25:12.000000 | None            |
>>>> | cinder-scheduler | labos02                   | labos02   | enabled | down  | 2014-06-24T15:23:26.000000 | None            |
>>>> | cinder-scheduler | labosctrl.lss.emc.com     | labosctrl | enabled | up    | 2014-06-24T15:25:10.000000 | None            |
>>>> | cinder-volume    | labos02@lvm               | labos02   | enabled | down  | 2014-06-24T15:23:27.000000 | None            |
>>>> | cinder-volume    | labosctrl.lss.emc.com@lvm | labosctrl | enabled | up    | 2014-06-24T15:25:02.000000 | None            |
>>>> +------------------+---------------------------+-----------+---------+-------+----------------------------+-----------------+
>>>>
>>>> The command 'cinder-manage service list' shows:
>>>>
>>>> Binary             Host          Zone        Status    State   Updated At
>>>> cinder-scheduler   labosctrl     labosctrl   enabled   :-)     2014-06-24 15:24:30
>>>> cinder-backup      labosctrl     labosctrl   enabled   :-)     2014-06-24 15:24:32
>>>> cinder-volume      labosctrl     labosctrl   enabled   :-)     2014-06-24 15:24:32
>>>> cinder-scheduler   labos02       labos02     enabled   XXX     2014-06-24 15:22:46
>>>> cinder-backup      labos02       labos02     enabled   XXX     2014-06-24 15:22:47
>>>> cinder-volume      labos02@lvm   labos02     enabled   XXX     2014-06-24 15:22:47
>>>>
>>>> /etc/cinder/cinder.conf on labosctrl (10.76.254.220) has these parameters:
>>>> ---------------------------------------------------
>>>>
>>>> enabled_backends=lvm
>>>> host=labosctrl.lss.emc.com
>>>> storage_availability_zone=labosctrl
>>>> default_availability_zone=labosctrl
>>>> volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
>>>> iscsi_ip_address=10.76.254.220
>>>> iscsi_helper=tgtadm
>>>>
>>>> /etc/cinder/cinder.conf on labos02(10.76.254.222):
>>>> -------------------------------------------------
>>>>
>>>> enabled_backends=lvm
>>>> host=labos02
>>>> storage_availability_zone=labos02
>>>> default_availability_zone=labos02
>>>> volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
>>>> iscsi_ip_address=10.76.254.222
>>>> iscsi_helper=tgtadm
>>>>
>>>>
>>>> 'cinder endpoints' shows:
>>>>
>>>> +-------------+---------------------------------------------------------------+
>>>> |  cinder_v2  |                             Value                             |
>>>> +-------------+---------------------------------------------------------------+
>>>> | adminURL    | http://10.76.254.220:8776/v2/81b280570f994c3eb9d7bb563096b49a |
>>>> | id          | 2313e8b0c09249909f0f6c104afa364e                              |
>>>> | internalURL | http://10.76.254.220:8776/v2/81b280570f994c3eb9d7bb563096b49a |
>>>> | publicURL   | http://10.76.254.220:8776/v2/81b280570f994c3eb9d7bb563096b49a |
>>>> | region      | RegionOne                                                     |
>>>> +-------------+---------------------------------------------------------------+
>>>>
>>>> +-------------+---------------------------------------------------------------+
>>>> |   cinder    |                             Value                             |
>>>> +-------------+---------------------------------------------------------------+
>>>> | adminURL    | http://10.76.254.220:8776/v1/81b280570f994c3eb9d7bb563096b49a |
>>>> | id          | 02696554a0fd473292e93834f9269086                              |
>>>> | internalURL | http://10.76.254.220:8776/v1/81b280570f994c3eb9d7bb563096b49a |
>>>> | publicURL   | http://10.76.254.220:8776/v1/81b280570f994c3eb9d7bb563096b49a |
>>>> | region      | RegionOne                                                     |
>>>> +-------------+---------------------------------------------------------------+
>>>>
>>>> The tgt service is also running on both nodes.
>>>>
>>>> /etc/tgt/targets.conf has the parameters:
>>>>
>>>> include /etc/cinder/volumes/*
>>>> default-driver iscsi
>>>>
>>>> Should the nodes have DNS records, or are /etc/hosts records enough?
>>>> Why don't the cinder commands see the cinder services as being up on
>>>> labos02?
>>>>
>>>> Can anybody help me ?
>>>>
>>>>
>>>> _______________________________________________
>>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>> Post to     : openstack at lists.openstack.org
>>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>
>>>
>>>
>>
>
>
>