[kolla][nova][cinder] Got Gateway-Timeout error on VM evacuation if it has volume attached.

Radosław Piliszek radoslaw.piliszek at gmail.com
Fri Jul 26 16:36:35 UTC 2019


If time gets too far out of sync, you get auth errors because tokens
appear to come from the future or from a too-distant past.
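
To make that concrete, here is a minimal sketch of the kind of
timestamp check that trips (illustrative only, not Keystone's actual
code; the 60-second tolerance mirrors common Fernet implementations):

    import time

    MAX_CLOCK_SKEW = 60  # seconds of drift tolerated by the validator

    def validate_token(issued_at, ttl, now=None):
        # Mimic the timestamp checks a token validator performs.
        now = time.time() if now is None else now
        if issued_at + ttl < now:
            raise ValueError("token expired (too far in the past)")
        if now + MAX_CLOCK_SKEW < issued_at:
            raise ValueError("token from the future (issuer clock ahead)")

    # A node whose clock runs 5 minutes fast issues a token; a node
    # with the correct time rejects it as coming from the future.
    try:
        validate_token(issued_at=time.time() + 300, ttl=3600)
    except ValueError as exc:
        print(exc)  # -> token from the future (issuer clock ahead)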

Kind regards,
Radek

On Fri, 26 Jul 2019 at 17:12, Eddie Yen <missile0407 at gmail.com> wrote:

> Roger that, thanks for the explanation.
>
> I think there's another reason I hit this issue.
> The environment had no internet access and no local NTP server until
> the last test.
> Before that test, the nova and cinder services became unstable because
> they kept flapping up and down, and I found that the clocks were out of
> sync between the nodes.
> We gave one node a connection to the outside and pointed the NTP
> clients on the other nodes at it. That solved the problem, and the test
> then succeeded, of course.
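>
> In case it helps someone, that setup looks roughly like this (assuming
> chrony; the hostname and subnet are placeholders):
>
>     # chrony.conf on the node with outside access:
>     pool pool.ntp.org iburst
>     allow 192.168.0.0/24
>
>     # chrony.conf on every other node:
>     server ntp-node iburst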
>
> I'm not sure, but that's one possible cause, right?
>
> But I think I still need to tune the timeout value, since the API
> response is slow while a node is shutting down.
> I wonder why it becomes slow when a node goes down.
>
> I'll try raising rpc_response_timeout in Cinder and do more testing,
> roughly like this (the value is just a first guess):
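>
>     # cinder.conf -- rpc_response_timeout is an oslo.messaging option
>     # registered in [DEFAULT]; the default is 60 seconds.
>     [DEFAULT]
>     rpc_response_timeout = 300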
>
Matt Riedemann <mriedemos at gmail.com> wrote on Fri, Jul 26, 2019 at 9:42 PM:
>
>> On 7/25/2019 11:54 PM, Eddie Yen wrote:
>> > And I think I should raise rpc_response_timeout rather than
>> > long_rpc_timeout in nova.
>>
>> Since Cinder doesn't have the long_rpc_timeout option like Nova, your
>> only option is to bump up rpc_response_timeout in Cinder. But that
>> value is used by all RPC calls in Cinder, not just the
>> initialize/terminate connection calls for attachments. Maybe that's
>> not a problem, but long_rpc_timeout in Nova lets us pick which RPC
>> calls it applies to rather than applying it everywhere. The changes to
>> Cinder shouldn't be that hard if they follow the Nova patch [1]; a
>> rough sketch of the pattern is below the link.
>>
>> [1] https://review.opendev.org/#/c/566696/
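>>
>> As a sketch (illustrative names, not the actual Nova or Cinder code):
>>
>>     # In the RPC client, raise the timeout only for the known
>>     # long-running calls; everything else keeps rpc_response_timeout.
>>     cctxt = self.client.prepare(server=host, version=version,
>>                                 timeout=CONF.long_rpc_timeout)
>>     return cctxt.call(ctxt, 'initialize_connection',
>>                       volume_id=volume_id, connector=connector)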
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>>