[openstack-dev] [Neutron] DB: transaction isolation and related questions

Mike Bayer mbayer at redhat.com
Wed Nov 19 20:10:18 UTC 2014


> On Nov 19, 2014, at 1:49 PM, Ryan Moats <rmoats at us.ibm.com> wrote:
> 
> I was waiting for this because I think I may have a slightly different (and outside of the box) view on how to approach a solution to this.
> 
> Conceptually (at least in my mind) there isn't a whole lot of difference between how the example below (i.e. updates from two concurrent threads) is handled
> and how/if neutron wants to support a multi-master database scenario (which in turn lurks in the background when one starts thinking/talking about multi-region support).
> 
> If neutron wants to eventually support multi-master database scenarios, I see two ways to go about it:
> 
> 1) Defer multi-master support to the database itself.
> 2) Take responsibility for managing the conflict resolution inherent in multi-master scenarios itself.
> 
> The first approach is certainly simpler in the near term, but it has the downside of restricting the choice of databases to those that have solved multi-master, and further, may lead to code bifurcation based on possibly different solutions to the conflict resolution scenarios inherent in multi-master.
> 
> The second approach is certainly more complex, as neutron assumes more responsibility for its own actions, but it has the advantage that (if done right) it would be transparent to the underlying databases (with all that implies).
> 
multi-master is a very advanced use case, so I don’t see why it would be unreasonable to require a vendor database that already supports multi-master.   Reinventing a complex system like that in the application layer is unnecessary.

As far as working across different conflict resolution scenarios: while there may be differences across backends, those differences will be much less significant than the differences against non-clustered backends for which we would be inventing our own multi-master solution.   I doubt a home-rolled solution would insulate us at all from “code bifurcation”, as that is already a fact of life in targeting different backends even without any implication of clustering.   Even with simple things like transaction isolation, we see that different databases have different behavior, and if you look at the logic in oslo.db inside of https://github.com/openstack/oslo.db/blob/master/oslo/db/sqlalchemy/exc_filters.py you can see an example of just how complex it is to do even the most rudimentary task of organizing exceptions into errors that mean the same thing.
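to make that concrete, here’s a rough sketch of the kind of translation exc_filters.py has to perform — this is illustrative only, not the actual oslo.db code; the exception class and the little table of error codes are made up for the example:

    from sqlalchemy import exc as sqla_exc


    class CommonDuplicateEntry(Exception):
        """Backend-agnostic 'unique constraint violated' error (hypothetical)."""


    # driver-specific error codes that all mean "duplicate key"
    _DUPLICATE_CODES = {
        "mysql": {1062},          # MySQL ER_DUP_ENTRY
        "postgresql": {"23505"},  # PostgreSQL unique_violation SQLSTATE
    }


    def translate_db_error(db_err, dialect_name):
        """Re-raise a raw DBAPI error as a common exception if we recognize it."""
        if not isinstance(db_err, sqla_exc.DBAPIError):
            raise db_err
        orig = db_err.orig  # the underlying driver-level exception
        code = getattr(orig, "pgcode", None) or (orig.args[0] if orig.args else None)
        if code in _DUPLICATE_CODES.get(dialect_name, set()):
            raise CommonDuplicateEntry(str(orig))
        raise db_err

and much of the real complexity comes from the fact that for some backends this kind of information is only available by matching on the error message text itself.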


> My reason for asking this question here is that if the community wants to consider #2, then these problems are the place to start crafting that solution - if we solve the conflicts inherent in the two concurrent thread scenarios, then I think we will find that we’ve solved the multi-master problem essentially “for free”.
> 

Maybe I’m missing something, but if we learn how to write out a row such that a concurrent transaction against the same row doesn’t throw us off, where is the part where that data is replicated atomically to databases running concurrently at other IP addresses, and how does that come out of the effort “for free”?   A home-rolled “multi-master” scenario would have to start with a system that has multiple create_engine() calls, since we need to communicate directly with multiple database servers. From there it gets really crazy.  Where’s all that?
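just to illustrate the scale of what would be reinvented, the very first step of such a system would look something like the sketch below — the URLs are placeholders and the double-write is deliberately naive; everything that actually makes multi-master hard (atomic replication across servers, conflict resolution, node failure) is exactly what this leaves out:

    from sqlalchemy import create_engine, text

    # one engine per database server; the application itself now owns the
    # problem of getting every write to all of them (URLs are placeholders)
    engines = [
        create_engine("mysql+pymysql://neutron:pw@10.0.0.1/neutron"),
        create_engine("mysql+pymysql://neutron:pw@10.0.0.2/neutron"),
    ]


    def naive_double_write(stmt, params):
        # each engine.begin() block commits locally to one server only; if the
        # second write fails after the first has committed, the two databases
        # have already diverged and nothing here reconciles them
        for engine in engines:
            with engine.begin() as conn:
                conn.execute(text(stmt), params)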




> 
> Ryan Moats
> 
> Mike Bayer <mbayer at redhat.com> wrote on 11/19/2014 12:05:35 PM:
> 
> > From: Mike Bayer <mbayer at redhat.com>
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> > <openstack-dev at lists.openstack.org>
> > Date: 11/19/2014 12:05 PM
> > Subject: Re: [openstack-dev] [Neutron] DB: transaction isolation and
> > related questions
> > 
> > On Nov 18, 2014, at 1:38 PM, Eugene Nikanorov <enikanorov at mirantis.com> wrote:
> > 
> > Hi neutron folks,
> > 
> > There is an ongoing effort to refactor some neutron DB logic to be 
> > compatible with Galera/MySQL, which doesn't support locking 
> > (with_lockmode('update')).
> > 
> > Some code paths that used locking in the past were rewritten to 
> > retry the operation if they detect that an object was modified concurrently.
> > The problem here is that all DB operations (CRUD) are performed in 
> > the scope of some transaction, so that complex operations are 
> > executed in an atomic manner.
> > For mysql the default transaction isolation level is 'REPEATABLE 
> > READ', which means that once the code issues a query within a 
> > transaction, that query will return the same result for the rest of 
> > the transaction (i.e. the snapshot is taken by the DB during the 
> > first query and then reused for the same query).
> > In other words, retry logic like the following will not work:
> > 
> > def allocate_obj():
> >     with session.begin(subtrans=True):
> >         for i in xrange(n_retries):
> >             # re-read a candidate row and try to atomically claim it
> >             obj = session.query(Model).filter_by(**filters).first()
> >             count = (session.query(Model)
> >                      .filter_by(id=obj.id, allocated=False)
> >                      .update({'allocated': True}))
> >             if count:
> >                 return obj
> > 
> > since methods like allocate_obj() are usually called from within 
> > another transaction, we can't simply put the transaction under the 
> > 'for' loop to fix the issue.
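for reference, the shape that restructuring would take — a brand new top-level transaction per attempt, reusing the hypothetical Model / filters / allocated names from the snippet above and assuming a plain sessionmaker as session_factory — is roughly this; the point above is that it’s hard to apply when the caller already owns an enclosing transaction:

    def allocate_obj_with_outer_retry(session_factory, n_retries=5):
        # each attempt gets its own transaction, so the query sees current
        # data on every retry instead of the snapshot from the first attempt
        for _ in xrange(n_retries):
            session = session_factory()
            try:
                obj = session.query(Model).filter_by(**filters).first()
                if obj is None:
                    return None
                count = (session.query(Model)
                         .filter_by(id=obj.id, allocated=False)
                         .update({'allocated': True}))
                session.commit()
                if count:
                    return obj
            except Exception:
                session.rollback()
                raise
            finally:
                session.close()
        return None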
> > 
> > has this been confirmed?  the point of systems like repeatable read 
> > is not just that you read the “old” data, it’s also to ensure that 
> > updates to that data either proceed or fail explicitly; locking is 
> > also used to prevent concurrent access that can’t be reconciled.  A 
> > lower isolation removes these advantages.  
> > 
> > I ran a simple test in two MySQL sessions as follows:
> > 
> > session 1:
> > 
> > mysql> create table some_table(data integer) engine=innodb;
> > Query OK, 0 rows affected (0.01 sec)
> > 
> > mysql> insert into some_table(data) values (1);
> > Query OK, 1 row affected (0.00 sec)
> > 
> > mysql> begin;
> > Query OK, 0 rows affected (0.00 sec)
> > 
> > mysql> select data from some_table;
> > +------+
> > | data |
> > +------+
> > |    1 |
> > +------+
> > 1 row in set (0.00 sec)
> > 
> > session 2:
> > 
> > mysql> begin;
> > Query OK, 0 rows affected (0.00 sec)
> > 
> > mysql> update some_table set data=2 where data=1;
> > Query OK, 1 row affected (0.00 sec)
> > Rows matched: 1  Changed: 1  Warnings: 0
> > 
> > then back in session 1, I ran:
> > 
> > mysql> update some_table set data=3 where data=1;
> > 
> > this query blocked;  that’s because session 2 has placed a write 
> > lock on the row it updated.  this is the effect of repeatable read isolation.
> > 
> > while it blocked, I went to session 2 and committed the in-progress 
> > transaction:
> > 
> > mysql> commit;
> > Query OK, 0 rows affected (0.00 sec)
> > 
> > then session 1 unblocked, and it reported, correctly, that zero rows
> > were affected:
> > 
> > Query OK, 0 rows affected (7.29 sec)
> > Rows matched: 0  Changed: 0  Warnings: 0
> > 
> > the update had not taken place, as reported by “Rows matched”:
> > 
> > mysql> select * from some_table;
> > +------+
> > | data |
> > +------+
> > |    1 |
> > +------+
> > 1 row in set (0.00 sec)
> > 
> > the code in question would do a retry at this point; it is checking 
> > the number of rows matched, and that number is accurate.
> > 
> > if our code did *not* block at the point of our UPDATE, then it 
> > would have proceeded, and the other transaction would have 
> > overwritten what we just did, when it committed.   I don’t know that
> > read committed is necessarily any better here.
> > 
> > now perhaps, with Galera, none of this works correctly.  That would 
> > be a different issue in which case sure, we should use whatever 
> > isolation is recommended for Galera.  But I’d want to potentially 
> > peg it to the fact that Galera is in use, or not.
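pegging it at the engine level would be straightforward if we go that way; a minimal sketch, where is_galera is just a placeholder for however deployment would signal it and READ COMMITTED stands in for whatever level actually turns out to be recommended for Galera:

    from sqlalchemy import create_engine

    def make_engine(url, is_galera=False):
        # "is_galera" is a hypothetical deployment flag, not an existing
        # option; READ COMMITTED is a placeholder for whatever level is
        # actually recommended for Galera
        kwargs = {}
        if is_galera:
            kwargs["isolation_level"] = "READ COMMITTED"
        return create_engine(url, **kwargs)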
> > 
> > would love also to hear from Jay Pipes on this since he literally 
> > wrote the book on MySQL ! :)
> > 
