[openstack-dev] [oslo.db][nova] Deprecating use_slave in Nova

Mike Bayer mbayer at redhat.com
Fri Jan 30 19:06:34 UTC 2015



Matthew Booth <mbooth at redhat.com> wrote:

> At some point in the near future, hopefully early in L, we're intending
> to update Nova to use the new database transaction management in
> oslo.db's enginefacade.
> 
> Spec:
> http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/kilo/make-enginefacade-a-facade.rst
> 
> Implementation:
> https://review.openstack.org/#/c/138215/
> 
> One of the effects of this is that we will always know when we are in a
> read-only transaction, or a transaction which includes writes. We intend
> to use this new contextual information to make greater use of read-only
> slave databases. We are currently proposing that if an admin has
> configured a slave database, we will use the slave for *all* read-only
> transactions. This would make the use_slave parameter passed to some
> Nova APIs redundant, as we would always use the slave where the context
> allows.
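
To make the proposal concrete, here is a rough sketch of what calling code
might look like under the proposed enginefacade API. The decorator names
follow the spec linked above; the exact spelling is subject to the review
still in progress, and models.Instance merely stands in for Nova's
SQLAlchemy model.

    from oslo_db.sqlalchemy import enginefacade

    from nova.db.sqlalchemy import models


    @enginefacade.reader
    def instance_get_by_uuid(context, instance_uuid):
        # Read-only transaction: with a slave configured, this could be
        # routed to the slave transparently, with no use_slave argument.
        query = context.session.query(models.Instance)
        return query.filter_by(uuid=instance_uuid).first()


    @enginefacade.writer
    def instance_update(context, instance_uuid, values):
        # Write transaction: always routed to the master / writer engine.
        query = context.session.query(models.Instance)
        query.filter_by(uuid=instance_uuid).update(values)
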
> 
> However, using a slave database has a potential pitfall when mixed with
> separate write transactions. A caller might currently:
> 
> 1. start a write transaction
> 2. update the database
> 3. commit the transaction
> 4. start a read transaction
> 5. read from the database
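
For illustration, here is the same sequence written against the sketch
functions from earlier; the point is simply that the write and the read end
up in two independent transactions.

    def handle_update(context, instance_uuid, values):
        # Steps 1-3: a write transaction, committed when instance_update
        # returns.
        instance_update(context, instance_uuid, values)
        # Steps 4-5: a separate read-only transaction. If reads are routed
        # to an asynchronous slave, it may not yet observe the write above.
        return instance_get_by_uuid(context, instance_uuid)
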
> 
> The client might expect data written in step 2 to be reflected in data
> read in step 5. I can think of 3 cases here:
> 
> 1. A short-lived RPC call is using multiple transactions
> 
> This is a bug which the new enginefacade will help us eliminate. We
> should not be using multiple transactions in this case. If the reads are
> in the same transaction as the write: they will be on the master, they
> will be consistent, and there is no problem. As a bonus, lots of these
> will be race conditions, and we'll fix at least some.
> 
> 2. A long-lived task is using multiple transactions between long-running
> sub-tasks
> 
> In this case, for example creating a new instance, we genuinely want
> multiple transactions: we don't want to hold a database transaction open
> while we copy images around. However, I can't immediately think of a
> situation where we'd write data, then subsequently want to read it back
> from the db in a read-only transaction. I think we will typically be
> updating state, meaning it's going to be a succession of write transactions.
> 
> 3. Separate RPC calls from a remote client
> 
> This seems potentially problematic to me. A client makes an RPC call to
> create a new object. The client subsequently tries to retrieve the
> created object, and gets a 404.
> 
> Summary: 1 is a class of bugs which we should be able to find fairly
> mechanically through unit testing. 2 probably isn't a problem in
> practise? 3 seems like a problem, unless consumers of cloud services are
> supposed to expect that sort of thing.
> 
> I understand that slave databases can occasionally get very behind. How
> behind is this in practice?
> 
> How do we use use_slave currently? Why do we need a use_slave parameter
> passed in via RPC, when it should be apparent to the developer whether a
> particular task is safe for out-of-date data?
> 
> Any chance slave databases have some kind of barrier mechanism? e.g. block until
> the current state contains transaction X.
> 
> General comments on the usefulness of slave databases, and the
> desirability of making maximum use of them?

Keep in mind that the big win we get from writer() / reader() is that writer() can remain pointing to one node in a Galera cluster, and reader() can point to the cluster as a whole. reader() by default should definitely refer to the cluster as a whole, that is, “use slave”.
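
For example, a deployment might be wired up roughly like this (the
connection strings are made up, the option names assume the connection /
slave_connection style of configuration oslo.db already exposes, and whether
the configure() call is spelled exactly like this depends on the
implementation under review):

    from oslo_db.sqlalchemy import enginefacade

    # The writer engine pins to one designated Galera node, while the
    # reader engine points at a load-balanced address in front of the
    # whole cluster.
    enginefacade.configure(
        connection='mysql+pymysql://nova:secret@galera-node-1/nova',
        slave_connection='mysql+pymysql://nova:secret@galera-vip/nova',
    )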

As for issue #3, Galera Cluster uses synchronous replication. Slaves don’t get “behind” at all. So to the degree that we need to transparently support some other kind of master/slave setup where slaves do get behind, perhaps there would be a reader(synchronous_required=True) kind of thing; based on configuration, it would be known that “synchronous” either means we don’t care (we’re using Galera) or that we should use the writer (an asynchronous replication scheme).
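
Purely as a sketch of that idea, since none of this exists today, the
factory’s engine selection might end up looking something like:

    def select_engine(is_writer, synchronous_required,
                      replication_is_synchronous, engines):
        # Hypothetical routing inside the transaction factory.
        if is_writer:
            return engines.writer
        if synchronous_required and not replication_is_synchronous:
            # Asynchronous slaves may lag, so a read that must observe the
            # latest committed data falls back to the writer.
            return engines.writer
        # Galera (synchronous), or a read that tolerates lag: use the
        # reader, i.e. the slave / cluster-wide endpoint.
        return engines.reader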

All of this points to the fact that I really don’t think the directives / flags should say anything about which specific database to use; whether or not a “slave” is used depends on the backend implementation and configuration. The purpose of reader() / writer() is to ensure that we are only flagging the *intent* of the call, not the implementation.
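
In other words, the difference is roughly this (the use_slave call is only
indicative of the current Nova pattern, and exact signatures vary per
object):

    from oslo_db.sqlalchemy import enginefacade

    from nova import objects
    from nova.db.sqlalchemy import models


    def current_style(ctxt, host):
        # Today: the caller threads an implementation detail through the
        # call chain.
        return objects.InstanceList.get_by_host(ctxt, host, use_slave=True)


    # With enginefacade: the DB API function declares only its intent;
    # whether a slave, a Galera cluster endpoint, or the writer itself
    # services the read is decided by configuration, not by the caller.
    @enginefacade.reader
    def instance_get_all_by_host(context, host):
        query = context.session.query(models.Instance)
        return query.filter_by(host=host).all()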







