[openstack-dev] [oslo.db] A proposal for DB read/write separation

Amrith Kumar amrith at tesora.com
Sun Aug 10 13:59:14 UTC 2014


Li Ma, Mike [Wilson | Bayer], and Roman Podoliaka,

 

A similar topic came up in Atlanta at a database panel I participated in. Jay Pipes had organized it as part of the ops track, and Peter Boros (of Percona) and I were on the panel. The topic was what to do about the database under OpenStack in the face of high load from components such as, for example, Ceilometer.

 

Splitting reads and writes is a solution fraught with challenges: it requires the application to know where it wrote, where it should read from, what the replication latency is, and so on. At the heart of the issue is that you want to scale the database.

 

I suggested at this panel that those who want to solve this problem should try the Database Virtualization Engine [1] product from Tesora. In the interest of full disclosure, I work for Tesora.

 

The solution is a simple way to horizontally scale a MySQL (or Percona or MariaDB) database across a collection of database servers. It exposes a MySQL-compatible interface and takes care of all the minutiae of where to store data, partitioning it across the various database servers, and executing queries on behalf of an application irrespective of the location of the data. It natively provides such capabilities as distributed joins, aggregation, and sorting. Architecturally, it is a traditional parallel database built from a collection of unmodified MySQL (or variant) databases.

 

It is open source and available for free download [2].

 

Percona recently tested [3] the DVE product and confirmed that the solution provided horizontal scalability and linear (and in some cases better-than-linear) performance improvements [4] with scale. You can get a copy of their full test report here [5].

 

Ingesting data at very high volume is often an area of considerable pain for large systems. In one demonstration of our product, we were required to ingest 1 million CDR-style records per second. We demonstrated that with just 15 Amazon RDS servers (m1.xlarge, standard EBS volumes, no provisioned IOPS) and two c1.xlarge servers to run the Tesora DVE software, we could in fact ingest a sustained stream of over 1 million CDRs per second! [6]

 

To Mike Wilson's and Roman's point, the solution I'm proposing would be entirely transparent to the developer, and would be both elastic and scalable with the workload placed on it. In addition, standard SQL queries continue to work unmodified, irrespective of which database server physically holds a row of data.

 

To Mike Bayer's point about data distribution and transaction management: yes, we handle all the details of maintaining data consistency and providing atomic transactions during INSERT/UPDATE/DELETE operations.

 

As a company, we at Tesora are committed to OpenStack and are significant participants in Trove (the database-as-a-service project for OpenStack). You can verify this yourself on Stackalytics [7][8]. If you would like to consider DVE as part of your solution for oslo.db, we'd be thrilled to work with the OpenStack community to make it happen, both from a technical and a business/licensing perspective. You can catch most of our dev team on either #openstack-trove or #tesora.

 

Some of us from Tesora, Percona, and Mirantis are planning an ops panel similar to the one in Atlanta for the Summit in Paris. I would definitely like to meet with more of you in Paris and discuss how we address issues of scale in the database that powers an OpenStack deployment.

 

Thanks,

 

-amrith

 

--

 

Amrith Kumar, CTO Tesora (www.tesora.com)

 

Twitter: @amrithkumar  

IRC: amrith @freenode 

 

 

[1] http://www.tesora.com/solutions/database-virtualization-engine

[2] http://www.tesora.com/solutions/downloads/products

[3] http://www.mysqlperformanceblog.com/2014/06/24/benchmarking-tesoras-database-virtualisation-engine-sysbench/ 

[4] http://www.tesora.com/blog/perconas-evaluation-our-database-virtualization-engine

[5] http://resources.tesora.com/site/download/percona-benchmark-whitepaper 

[6] http://www.tesora.com/blog/ingesting-over-1000000-rows-second-mysql-aws-cloud 

[7] http://stackalytics.com/?module=trove-group&metric=commits

[8] http://stackalytics.com/?module=trove-group&metric=marks


From: Mike Wilson [mailto:geekinutah at gmail.com] 
Sent: Friday, August 08, 2014 7:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo.db] A proposal for DB read/write separation

 

Li Ma,

 

This is interesting. In general, I am in favor of expanding the scope of any read/write separation capabilities that we have, but I'm not clear on what exactly you are proposing; hopefully you can answer some of my questions inline. The thing I thought of immediately was detecting whether an operation is a read or a write and integrating that into oslo.db or SQLAlchemy. Mike Bayer has some thoughts on that [2], and there are other approaches around that can be copied or learned from (I sketch the routing idea below, after the links). These sorts of things are clear to me, and while they move towards more transparency for the developer, they still require context. Please share more details on your proposal with us.

 

-Mike

 

[1] http://www.percona.com/doc/percona-xtradb-cluster/5.5/wsrep-system-index.html

[2] http://techspot.zzzeek.org/2012/01/11/django-style-database-routers-in-sqlalchemy/
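
For concreteness, here is a minimal sketch of the router idea from [2]: a SQLAlchemy Session subclass whose get_bind() sends flushes (writes) to the master engine and everything else to a slave. The engine URLs are placeholders, and this is an illustration, not a finished design:

    from sqlalchemy import create_engine
    from sqlalchemy.orm import Session, sessionmaker

    master = create_engine("mysql://user:secret@master-host/nova")
    slave = create_engine("mysql://user:secret@slave-host/nova")

    class RoutingSession(Session):
        """Route INSERT/UPDATE/DELETE (issued during flush) to the
        master; let plain SELECTs go to the slave."""
        def get_bind(self, mapper=None, clause=None):
            if self._flushing:
                return master
            return slave

    DBSession = sessionmaker(class_=RoutingSession)

The catch, which I get into below, is that a SELECT issued right after a commit may hit the slave before replication has caught up.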

 

On Thu, Aug 7, 2014 at 10:03 PM, Li Ma <skywalker.nick at gmail.com> wrote:

Getting a massive amount of information out of data storage to be
displayed is where most of the activity happens in OpenStack. The two
activities of reading data and writing (creating, updating and
deleting) data are fundamentally different.

These two opposite database activities can be optimized by physically
separating the databases that service them. All writes go to the
master database server(s), which then replicate the written data to
the database server(s) dedicated to servicing reads.


Currently, AFAIK, many OpenStack deployments in production take
advantage of a MySQL (including Percona or MariaDB) multi-master
Galera cluster. It is possible to design and implement a read/write
separation schema for such a DB cluster.

 

I just want to clarify here: are you suggesting that _all_ reads and _all_ writes would hit different databases? It would be interesting to see a relational schema design that would allow that to work. That seems like something you wouldn't try in a relational database at all.

 


Actually, OpenStack has a method for read scalability via defining
master_connection and slave_connection in the configuration, but this
method lacks flexibility because the choice of master or slave is
made in the logical context (code). It's not transparent to the
application developer. As a result, it is not widely used across the
OpenStack projects.
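
For illustration, the existing mechanism looks roughly like this in a
service's configuration (in oslo.db the options are actually named
connection and slave_connection; the URLs are placeholders):

    [database]
    connection = mysql://nova:secret@master-host/nova
    slave_connection = mysql://nova:secret@slave-host/nova

and the code must still opt in per call, e.g.
get_session(use_slave=True), which is exactly the non-transparent
part.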

So, I'd like to propose a transparent read/write separation method
for oslo.db that every project can happily take advantage of without
any code modification.

 

The problem with making it transparent to the developer is that, well, you can't, unless your application is tolerant of old data in an asynchronous replication world. If you are in a fully synchronous world you could fully separate writes and reads, but what would be the point, since your database performance is now trash anyway? Please note that although Galera is considered a synchronous model, it's not actually all the way there. You can break the certification, of course, but there are also things that are done to keep the performance at an acceptable level. Take for example the wsrep_causal_reads configuration parameter [1]. Without this sucker being turned on you can't make read/write separation transparent to the developer. Turning it on causes a significant performance degradation, unfortunately.
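
To illustrate, causal reads is just a session variable in Galera (newer versions spell it wsrep_sync_wait):

    -- make reads on this node wait until it has applied all
    -- cluster writes that preceded the query
    SET SESSION wsrep_causal_reads = ON;

That wait on every read is exactly where the performance goes.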

 

I feel like this is a problem fundamental to a consistent relational dataset. If you are okay with eventual consistency, you can make things transparent to the developer. But by its very nature a relational dataset is, well, relational: it needs all the other pieces, and those pieces need to be consistent. I guess what I am saying is that your proposal needs more details. Please respond with specifics and examples to move the discussion forward.

 


Moreover, I'd like to put this on the mailing list in advance to make
sure it is acceptable for oslo.db.

I'd appreciate any comments.

br.
Li Ma



 
