<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">2015-12-02 13:11 GMT+03:00 Bogdan Dobrelya <span dir="ltr"><<a href="mailto:bdobrelia@mirantis.com" target="_blank">bdobrelia@mirantis.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span class="">On 01.12.2015 23:34, Peter Lemenkov wrote:<br>
> Hello All!<br>
><br>
> Well, side-effects (or any other effects) are quite obvious and<br>
> predictable - this will decrease availability of RPC queues a bit.<br>
> That's for sure.<br>
<br>
</span>And consistency. Without messages and queues being synced between all of<br>
the rabbit_hosts, how exactly dispatching rpc calls would work then<br>
workers connected to different AMQP urls?<br></blockquote><div><br></div><div><div>There will be no problem with consistency here. Since we will disable HA, queues will not be synced across the cluster and there will be exactly one node hosting messages for a queue.</div></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
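</blockquote><div><br></div><div>To make that concrete (a hypothetical sketch, not the patch itself: in RabbitMQ 3.x mirroring is governed by policies, and the policy name and queue pattern below are assumptions), disabling HA for RPC queues amounts to narrowing or clearing the mirroring policy:</div>

```shell
# Hypothetical sketch -- see the patch under review for the actual change.
# RabbitMQ >= 3.0 controls queue mirroring via policies; inspect them:
rabbitmqctl list_policies

# Drop a cluster-wide mirroring policy (the name 'ha-all' is an assumption):
rabbitmqctl clear_policy ha-all

# Or keep mirroring only for notification queues, leaving RPC queues
# unmirrored (the pattern is illustrative):
rabbitmqctl set_policy ha-notif '^notifications\.' '{"ha-mode":"all"}'
```

<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">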
Perhaps that change would only raise partition tolerance to a<br>
very high degree? But this should be clearly shown by load tests -<br>
network partitions with mirroring versus network partitions without<br>
mirroring. Rally could help here a lot.</blockquote><div><br></div><div>No, the change will not increase partition tolerance at all; what I expect is that it will not get worse. As for tests, we are certainly going to perform destructive testing to verify that there is no regression in recovery time.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span class=""><br>
><br>
> However, Dmitry's guess is that the overall messaging backplane<br>
> stability increase (RabbitMQ won't fail too often in some cases) would<br>
> compensate for this change. This issue is very much real - speaking of<br>
<br>
</span>Agreed; that should be proven by (Rally) tests for the specific case I<br>
described in the spec [0]. Please correct me if I understand things<br>
wrong, but here it is:<br>
- client 1 submits an RPC call request R to server 1, which is connected<br>
to AMQP host X<br>
- worker A listens on the jobs topic via AMQP host X<br>
- worker B listens on the jobs topic via AMQP host Y<br>
- the job for R was dispatched to worker B<br>
Q: would B never receive its job message because it simply cannot see<br>
messages at X?<br>
Q: a timeout failure as the result.<br>
<br>
And things may get even weirder in more complex scenarios.<br></blockquote><div><br></div><div>Yes, in the described scenario B will receive the job: node Y will proxy B's consumption to node X, which hosts the queue, so we will not experience a timeout. I have also replied in the review.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
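</blockquote><div><br></div><div>As a toy model of that proxying (my own sketch, not RabbitMQ code): with mirroring disabled, the jobs queue is mastered on a single node, but the cluster routes publishes and consumes from any node to that master, which is why B still gets its message:</div>

```python
from collections import deque

class Cluster:
    """Toy RabbitMQ-like cluster: a shared routing table mapping each
    queue to the single node that masters it (no mirroring)."""
    def __init__(self):
        self.masters = {}

class Node:
    def __init__(self, name, cluster):
        self.name = name
        self.cluster = cluster
        self.queues = {}  # queues mastered on this node

    def declare_queue(self, queue):
        self.queues[queue] = deque()
        self.cluster.masters[queue] = self

    def publish(self, queue, message):
        # Any node accepts a publish and forwards it to the queue's master.
        self.cluster.masters[queue].queues[queue].append(message)

    def consume(self, queue):
        # Likewise, a consumer connected to any node is transparently
        # proxied to the master's copy of the queue.
        q = self.cluster.masters[queue].queues[queue]
        return q.popleft() if q else None

cluster = Cluster()
node_x, node_y = Node("X", cluster), Node("Y", cluster)
node_x.declare_queue("jobs")          # queue is mastered on X only
node_x.publish("jobs", "RPC call R")  # server 1 publishes via X
print(node_y.consume("jobs"))         # worker B, connected to Y -> RPC call R
```

<div>In the sketch the consumer on Y receives the message even though the queue lives only on X; what happens to that routing under a partition between X and Y is exactly what the destructive tests need to cover.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">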
<br>
[0] <a href="https://review.openstack.org/247517" rel="noreferrer" target="_blank">https://review.openstack.org/247517</a><br>
<span class=""><br>
> me, I've seen awful cluster performance degradation when a failing<br>
> RabbitMQ node was killed by some watchdog application (or even worse<br>
> wasn't killed at all). One of these issues was quite recently, and I'd<br>
> love to see them less frequently.<br>
><br>
> That said I'm uncertain about the stability impact of this change, yet<br>
> I see a reasoning worth discussing behind it.<br>
<br>
</span>I would support this for 8.0 only if proven by load tests within the<br>
scenario I described, plus standard destructive tests.</blockquote><div><br></div><div>As I said in my initial email, I ran the <span style="font-size:12.8px">boot_and_delete_server_with_secgroups Rally scenario to verify my change. I should provide more details:</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">The scale team considers this test to be the worst case we have for RabbitMQ. I ran the test on a 200-node lab, and when I disabled HA the test time was cut in half. That clearly shows there is a test where our current messaging system is the bottleneck, and that just tuning it considerably improves the performance of OpenStack as a whole. </span><span style="font-size:12.8px">Also, while there was a small failure rate in HA mode (around 1-2%), in non-HA mode all tests always completed successfully.</span></div><div><br></div><div>Overall, I think the current results are already enough to consider the change useful. What is left is to confirm that it does not make failover worse.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div><div class="h5">
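</div></div></blockquote><div><br></div><div>For reference, the client-side knob this maps to is oslo.messaging's rabbit_ha_queues option (the snippet below is illustrative only; whether the patch flips this option or adjusts the RabbitMQ policy is detailed in the review):</div>

```ini
# Illustrative config fragment -- see the patch under review for the
# actual change. Liberty-era oslo.messaging exposes mirroring via:
[oslo_messaging_rabbit]
# When false, RPC queues are declared without HA, so each queue lives
# on exactly one RabbitMQ node.
rabbit_ha_queues = false
```

<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div><div class="h5">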
><br>
> 2015-12-01 20:53 GMT+01:00 Sergii Golovatiuk <<a href="mailto:sgolovatiuk@mirantis.com">sgolovatiuk@mirantis.com</a>>:<br>
>> Hi,<br>
>><br>
>> -1 for FFE for disabling HA for RPC queues, as we do not know all side effects<br>
>> in HA scenarios.<br>
>><br>
>> On Tue, Dec 1, 2015 at 7:34 PM, Dmitry Mescheryakov<br>
>> <<a href="mailto:dmescheryakov@mirantis.com">dmescheryakov@mirantis.com</a>> wrote:<br>
>>><br>
>>> Folks,<br>
>>><br>
>>> I would like to request feature freeze exception for disabling HA for RPC<br>
>>> queues in RabbitMQ [1].<br>
>>><br>
>>> As I already wrote in another thread [2], I've conducted tests which<br>
>>> clearly show benefit we will get from that change. The change itself is a<br>
>>> very small patch [3]. The only thing which I want to do before proposing to<br>
>>> merge this change is to conduct destructive tests against it in order to<br>
>>> make sure that we do not have a regression here. That should take just<br>
>>> several days, so if there are no other objections, we will be able to<br>
>>> merge the change within a week or two.<br>
>>><br>
>>> Thanks,<br>
>>><br>
>>> Dmitry<br>
>>><br>
>>> [1] <a href="https://review.openstack.org/247517" rel="noreferrer" target="_blank">https://review.openstack.org/247517</a><br>
>>> [2]<br>
>>> <a href="http://lists.openstack.org/pipermail/openstack-dev/2015-December/081006.html" rel="noreferrer" target="_blank">http://lists.openstack.org/pipermail/openstack-dev/2015-December/081006.html</a><br>
>>> [3] <a href="https://review.openstack.org/249180" rel="noreferrer" target="_blank">https://review.openstack.org/249180</a><br>
>>><br>
>>> __________________________________________________________________________<br>
>>> OpenStack Development Mailing List (not for usage questions)<br>
>>> Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
>>> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
>>><br>
>><br>
>><br>
>><br>
><br>
><br>
><br>
<br>
<br>
--<br>
</div></div>Best regards,<br>
Bogdan Dobrelya,<br>
Irc #bogdando<br>
<div class=""><div class="h5"><br>
</div></div></blockquote></div><br></div></div>