<div dir="ltr">Some more comments inline.<div><br></div><div>Salvatore<br><div class="gmail_extra"><br><div class="gmail_quote">On 16 June 2015 at 19:00, Carl Baldwin <span dir="ltr"><<a href="mailto:carl@ecbaldwin.net" target="_blank">carl@ecbaldwin.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span class="">On Tue, Jun 16, 2015 at 12:33 AM, Kevin Benton <<a href="mailto:blak111@gmail.com">blak111@gmail.com</a>> wrote:<br>
>>> Do these kinds of test even make sense? And are they feasible at all? I
>>> doubt we have any framework for injecting anything in neutron code under
>>> test.
>>
>> I was thinking about this in the context of a lot of the fixes we have for
>> other concurrency issues with the database. There are several exception
>> handlers that aren't exercised in normal functional, tempest, and API tests
>> because they require a very specific order of events between workers.
>>
>> I wonder if we could write a small shim DB driver that wraps the python one
>> for use in tests that just makes a desired set of queries take a long time
>> or fail in particular ways? That wouldn't require changes to the neutron
>> code, but it might not give us the right granularity of control.
>
> Might be worth a look.

It's a solution for pretty much mocking out the DB interactions. This would
work for fault injection in most neutron-server scenarios, both for the
RESTful and RPC interfaces, but we'll need something else to "mock" the
interactions with the data plane that are performed by agents. I think we
already have a mock for the AMQP bus, on which we would just need to install
hooks for injecting faults.

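As a very rough sketch of the idea (not a full shim DBAPI driver, just a
SQLAlchemy cursor-execute hook that a test fixture could install; the engine
and the statement fragments below are made up for illustration):

    import time

    import sqlalchemy as sa
    from sqlalchemy import event
    from sqlalchemy.exc import OperationalError

    engine = sa.create_engine("sqlite://")  # stand-in for the engine under test

    # map a statement fragment to the fault we want to inject
    FAULTS = {
        "FROM ipallocations": ("delay", 2.0),
        "INSERT INTO ipallocations": ("fail", None),
    }

    @event.listens_for(engine, "before_cursor_execute")
    def inject_fault(conn, cursor, statement, parameters, context, executemany):
        for fragment, (action, arg) in FAULTS.items():
            if fragment not in statement:
                continue
            if action == "delay":
                time.sleep(arg)  # make matching queries artificially slow
            elif action == "fail":
                # surface an error as if the driver itself had raised it
                raise OperationalError(statement, parameters,
                                       Exception("injected fault"))

It wouldn't exercise the real driver's failure modes, but it would let a
functional test force the "very specific order of events" Kevin mentions
without touching neutron code.
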
<span class=""><br>
>>Finally, please note I am using DB-level locks rather than non-locking<br>
>> algorithms for making reservations.<br>
><br>
> I thought these were effectively broken in Galera clusters. Is that not<br>
> correct?<br>
<br>
</span>As I understand it, if two writes to two different masters end up<br>
violating some db-level constraint then the operation will cause a<br>
failure regardless if there is a lock.<br></blockquote><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
> Basically, on Galera, instead of waiting for the lock, each will
> proceed with the transaction. Finally, on commit, a write
> certification will double check constraints with the rest of the
> cluster (with a write certification). It is at this point where
> Galera will fail one of them as a deadlock for violating the
> constraint. Hence the need to retry. To me, non-locking just means
> that you embrace the fact that the lock won't work and you don't
> bother to apply it in the first place.

This is correct.

DB-level locks are broken in Galera. As Carl says, write sets are sent out
for certification after a transaction is committed, so a write intent lock,
or even a primary key constraint violation, cannot be verified before the
transaction commits. As a result you incur a write set certification
failure, which is notably more expensive than an instance-level rollback and
manifests as a DBDeadlock exception to the OpenStack service.

Retrying a transaction is also a way of embracing this behaviour: you simply
accept that you will have to go through write set certification failures.
Non-locking approaches instead aim at avoiding those failures altogether.
The downside is that, especially in high-concurrency scenarios, the
operation may be retried many times, and this can become even more expensive
than dealing with the write set certification failure.

But zzzeek (Mike Bayer) is coming to our aid: as part of his DBFacade work,
we should be able to treat an active/active cluster as active/passive for
writes and active/active for reads. This means that the write set
certification issue simply won't show up, while the benefits of an
active/active cluster will still be attained for most operations (I don't
think there's any doubt that SELECT operations represent the majority of all
DB statements).

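To make the two strategies concrete (embracing the DBDeadlock and retrying,
versus a non-locking compare-and-swap), here is a minimal sketch. The table,
columns and retry counts are invented for illustration; this is not
neutron's actual reservation code.

    import sqlalchemy as sa
    from oslo_db import exception as db_exc

    metadata = sa.MetaData()
    quotausages = sa.Table(
        'quotausages', metadata,
        sa.Column('tenant_id', sa.String(36), primary_key=True),
        sa.Column('in_use', sa.Integer))

    # 1) DB-level lock: fine with a single writer, but on multi-writer Galera
    # the row lock is not replicated, so a conflicting commit on another node
    # only fails later, at write set certification, and shows up here as
    # DBDeadlock, hence the retry loop around the whole transaction.
    def reserve_with_lock(session, tenant_id, amount, max_retries=5):
        for _ in range(max_retries):
            try:
                with session.begin():  # assumes an oslo.db autocommit session
                    in_use = session.execute(
                        sa.select([quotausages.c.in_use])
                        .where(quotausages.c.tenant_id == tenant_id)
                        .with_for_update()).scalar()
                    session.execute(
                        quotausages.update()
                        .where(quotausages.c.tenant_id == tenant_id)
                        .values(in_use=in_use + amount))
                return
            except db_exc.DBDeadlock:
                continue  # certification failure; retry the transaction
        raise RuntimeError("reservation failed after %d attempts" % max_retries)

    # 2) Non-locking compare-and-swap: no lock and, in the common case, no
    # certification failure; instead we retry at the application level when
    # the row changed under us (rowcount == 0). Under high concurrency this
    # loop can spin many times, which is the trade-off described above.
    def reserve_cas(session, tenant_id, amount, max_retries=10):
        for _ in range(max_retries):
            in_use = session.execute(
                sa.select([quotausages.c.in_use])
                .where(quotausages.c.tenant_id == tenant_id)).scalar()
            result = session.execute(
                quotausages.update()
                .where(quotausages.c.tenant_id == tenant_id)
                .where(quotausages.c.in_use == in_use)  # compare...
                .values(in_use=in_use + amount))        # ...and swap
            if result.rowcount:
                return
        raise RuntimeError("reservation failed after %d attempts" % max_retries)

Neither version is free: which one wins depends on how contended the rows
actually are.
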
> If my understanding is incorrect, please set me straight.

You're already straight enough ;)

>> If you do go that route, I think you will have to contend with DBDeadlock
>> errors when we switch to the new SQL driver anyway. From what I've observed,
>> it seems that if someone is holding a lock on a table and you try to grab
>> it, pymysql immediately throws a deadlock exception.
>
> I'm not familiar with pymysql to know if this is true or not. But,
> I'm sure that it is possible not to detect the lock at all on galera.
> Someone else will have to chime in to set me straight on the details.

DBDeadlocks without multiple workers also suggest we should look closely at
what eventlet is doing before placing the blame on pymysql. I don't think
the switch to pymysql changes the behaviour of the database interface; I
think it changes the way in which neutron interacts with the database, thus
unveiling concurrency issues that we did not spot before because we were
relying on a sort of implicit locking, triggered by the fact that some parts
of MySQL-Python are implemented in C.

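A toy illustration of that implicit serialization (not neutron code; plain
eventlet primitives stand in for the two drivers):

    import eventlet
    eventlet.monkey_patch()

    # The unpatched, C-level sleep never yields to the hub, much like a
    # blocking call into MySQL-Python's C extension.
    blocking_sleep = eventlet.patcher.original('time').sleep

    def fake_db_call(name, sleep_fn):
        print('%s: start' % name)
        sleep_fn(0.2)  # stands in for a round trip to the database
        print('%s: end' % name)

    pool = eventlet.GreenPool()

    # Cooperative "driver" (PyMySQL-like): the greenthreads interleave, so
    # any race between the two requests is now possible.
    for name in ('worker-1', 'worker-2'):
        pool.spawn(fake_db_call, name, eventlet.sleep)
    pool.waitall()

    # Blocking "driver" (MySQL-Python-like): the calls run back to back,
    # giving us the accidental serialization we used to rely on.
    for name in ('worker-3', 'worker-4'):
        pool.spawn(fake_db_call, name, blocking_sleep)
    pool.waitall()
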
<span class=""><font color="#888888"><br>
Carl<br>
</font></span><div class=""><div class="h5"><br>