[openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

Ihar Hrachyshka ihrachys at redhat.com
Sun Jul 13 16:26:35 UTC 2014



On 12/07/14 00:30, Vishvananda Ishaya wrote:
> I have tried using pymysql in place of mysqldb and in real world
> concurrency tests against cinder and nova it performs slower. I was
> inspired by the mention of mysql-connector so I just tried that
> option instead. Mysql-connector seems to be slightly slower as
> well, which leads me to believe that the blocking inside of 
> sqlalchemy is not the main bottleneck across projects.

I wonder what your setup and library versions are; it would also be
great to see the script you use for testing.

In my tests, mysql-connector showed performance similar to mysqldb in
serial testing. The real benefit only shows up once requests are
executed in parallel. Did you run your tests with parallel requests in
mind?

I now realise that I should have posted the benchmark I used in the
first place. So here it is, as a gist:
https://gist.github.com/booxter/c4f3e743a2573ba7809f
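
For reference, here is a minimal sketch of the kind of parallel test I
have in mind (illustrative only, not the gist itself; the connection
URL, worker count and query are placeholders):

    import time

    import eventlet
    eventlet.monkey_patch()  # pure-python drivers then yield on socket I/O

    import sqlalchemy

    # Placeholder URL; swap the dialect part to compare drivers.
    DB_URL = 'mysql+mysqlconnector://user:secret@127.0.0.1/test'
    WORKERS = 10
    REQUESTS = 200

    engine = sqlalchemy.create_engine(DB_URL)

    def one_request(_):
        # 'SELECT 1' is a stand-in for a real query.
        with engine.connect() as conn:
            conn.execute(sqlalchemy.text('SELECT 1'))

    start = time.time()
    pool = eventlet.GreenPool(WORKERS)
    list(pool.imap(one_request, range(REQUESTS)))
    print('%d requests, %d workers: %.2f sec'
          % (REQUESTS, WORKERS, time.time() - start))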

> 
> Vish
> 
> P.S. The performance in all cases was abysmal, so performance work
> definitely needs to be done, but the guess that replacing our
> mysql library is going to solve all of our performance problems
> appears to be incorrect at first blush.

All? Not at all. Some of them? Probably.

That said, the primary reason to switch the library is to avoid
database deadlocks. The additional performance boost is just a nice
thing to get with little effort.
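
For anyone who hasn't followed the eventlet angle, my understanding of
the mechanism is: a pure-python driver does its I/O through the
standard socket module, which eventlet monkey-patches, so a green
thread waiting on the database yields to other green threads; mysqldb
does its I/O in C, never touches the patched socket, and therefore
blocks the whole process. A tiny sketch of the patching part:

    import eventlet
    eventlet.monkey_patch()

    import socket

    # With the patch in place, the socket module is eventlet's
    # cooperative version, which is what lets a pure-python driver
    # (PyMySQL, MySQL Connector) yield while waiting for the server.
    print(socket.socket)                                 # green socket class
    print(eventlet.patcher.is_monkey_patched('socket'))  # True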

> 
> On Jul 11, 2014, at 10:20 AM, Clark Boylan <clark.boylan at gmail.com>
> wrote:
> 
>> Before we get too far ahead of ourselves, mysql-connector is not
>> hosted on PyPI; instead it is an external package link. We
>> recently managed to remove all packages that are hosted as
>> external package links from openstack and will not add new ones.
>> Before we can use mysql-connector in the gate, Oracle will need
>> to publish mysql-connector on PyPI properly.
>> 
>> That said, there is at least one other pure-python alternative,
>> PyMySQL. PyMySQL supports py3k and pypy. We should look at using
>> PyMySQL instead if we want a reasonable path to getting this into
>> the gate.
>> 
>> Clark
>> 
>> On Fri, Jul 11, 2014 at 10:07 AM, Miguel Angel Ajo Pelayo 
>> <mangelajo at redhat.com> wrote:
>>> +1 here too,
>>> 
>>> Amazed by the performance gains; x2.4 seems like a lot, and we'd
>>> get rid of deadlocks.
>>> 
>>> 
>>> 
>>> ----- Original Message -----
>>>> +1
>>>> 
>>>> I'm pretty excited about the possibilities here.  I've had
>>>> this mysqldb/eventlet contention in the back of my mind for
>>>> some time now. I'm glad to see some work being done in this
>>>> area.
>>>> 
>>>> Carl
>>>> 
>>>> On Fri, Jul 11, 2014 at 7:04 AM, Ihar Hrachyshka
>>>> <ihrachys at redhat.com> wrote:
> On 09/07/14 13:17, Ihar Hrachyshka wrote:
>>>>>>> Hi all,
>>>>>>> 
>>>>>>> Multiple projects are suffering from db lock timeouts
>>>>>>> due to deadlocks deep in the mysqldb library that we use
>>>>>>> to interact with mysql servers. In essence, the problem
>>>>>>> is the missing eventlet support in the mysqldb module:
>>>>>>> when a db lock is encountered, the library does not yield
>>>>>>> to the next green thread, which would allow other threads
>>>>>>> to eventually release the grabbed lock; instead it blocks
>>>>>>> the main thread until a timeout exception
>>>>>>> (OperationalError) is raised.
>>>>>>> 
>>>>>>> The failed operation is not retried, leaving the failing
>>>>>>> request unserved. In Nova, there is a special retry
>>>>>>> mechanism for deadlocks, though I think it's more of a
>>>>>>> hack than a proper fix.
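
(To make the 'retry mechanism' remark concrete: the idea is a wrapper
along the following lines. This is an illustrative sketch, not Nova's
actual code; the error matching and retry counts are made up.)

    import functools
    import time

    from sqlalchemy import exc


    def retry_on_deadlock(retries=3, delay=0.5):
        """Re-run a db operation a few times when a deadlock is reported."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                for attempt in range(retries):
                    try:
                        return func(*args, **kwargs)
                    except exc.OperationalError as e:
                        # MySQL reports deadlocks as error 1213,
                        # "Deadlock found when trying to get lock".
                        if ('Deadlock found' not in str(e)
                                or attempt == retries - 1):
                            raise
                        time.sleep(delay)
            return wrapper
        return decorator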
>>>>>>> 
>>>>>>> Neutron is one of the projects that suffer from those
>>>>>>> timeout errors a lot. Partly that's due to a lack of
>>>>>>> discipline in how we do nested calls in the l3_db and
>>>>>>> ml2_plugin code, but that's not something we can change
>>>>>>> in the foreseeable future, so we need to find another
>>>>>>> solution that is applicable for Juno. Ideally, the
>>>>>>> solution should be applicable for Icehouse too, to allow
>>>>>>> distributors to resolve existing deadlocks without
>>>>>>> waiting for Juno.
>>>>>>> 
>>>>>>> We've had several discussions and attempts to introduce
>>>>>>> a solution to the problem. Thanks to the oslo.db guys, we
>>>>>>> now have a more or less clear view of the cause of the
>>>>>>> failures and how to easily fix them. The solution is to
>>>>>>> switch from mysqldb to something eventlet aware. The best
>>>>>>> candidate is probably the MySQL Connector module, which
>>>>>>> is the official MySQL client for Python and shows good
>>>>>>> (preliminary) results in terms of performance.
> 
> I've done additional testing, creating 2000 networks in parallel
> (10 thread workers) with both drivers and comparing the results.
> 
> With mysqldb: 215.81 sec
> With mysql-connector: 88.66 sec
> 
> That's a ~2.4x performance boost, ok? ;)
> 
> I think we should switch to that library *even* if we forget about
> all the nasty deadlocks we experience now.
> 
>>>>>>> 
>>>>>>> I've posted a Neutron spec for the switch to the new
>>>>>>> client in Juno at [1]. Ideally, the switch is just a
>>>>>>> matter of several fixes to oslo.db to enable full
>>>>>>> support for the new driver (already supported by
>>>>>>> SQLAlchemy), plus the 'connection' string modified in
>>>>>>> service configuration files, plus documentation updates
>>>>>>> to refer to the new official way to configure services
>>>>>>> for MySQL. The database code, ideally, won't require
>>>>>>> any major changes, though some adaptation for the new
>>>>>>> client library may be needed. That said, Neutron does
>>>>>>> not seem to require any changes, though it was revealed
>>>>>>> that there are some alembic migration rules in Keystone
>>>>>>> or Glance that need (trivial) modifications.
>>>>>>> 
>>>>>>> You can see how trivially the switch can be achieved for
>>>>>>> a service in the example for Neutron [2].
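
(To spell out the 'connection string' part: the change amounts to
naming the mysqlconnector dialect in the SQLAlchemy URL, i.e. the
value of the 'connection' option in a service's database section. A
sketch, with placeholder credentials:)

    import sqlalchemy

    # Today a plain mysql:// URL picks SQLAlchemy's default MySQL
    # driver, which is MySQLdb:
    old = sqlalchemy.create_engine(
        'mysql://neutron:secret@127.0.0.1/neutron?charset=utf8')

    # With the switch, the dialect is named explicitly and SQLAlchemy
    # loads MySQL Connector/Python instead:
    new = sqlalchemy.create_engine(
        'mysql+mysqlconnector://neutron:secret@127.0.0.1/neutron?charset=utf8')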
>>>>>>> 
>>>>>>> While this is a Neutron-specific proposal, there is an
>>>>>>> obvious wish to switch to the new library globally
>>>>>>> throughout all the projects, to reduce the devops burden,
>>>>>>> among other things. My vision is that, ideally, we
>>>>>>> switch all projects to the new library in Juno, though
>>>>>>> we may still leave several projects for K in case any
>>>>>>> issues arise, similar to the way projects switched to
>>>>>>> oslo.messaging over two cycles instead of one. Though
>>>>>>> looking at how easily Neutron can be switched to the new
>>>>>>> library, I wouldn't expect any issues that would
>>>>>>> postpone the switch till K.
>>>>>>> 
>>>>>>> It was mentioned in the comments on the spec proposal
>>>>>>> that there were some discussions at the latest summit
>>>>>>> around a possible switch in the context of Nova that
>>>>>>> revealed some concerns, though they do not seem to be
>>>>>>> documented anywhere. So if you know anything about it,
>>>>>>> please comment.
>>>>>>> 
>>>>>>> So, we'd like to hear from other projects what your
>>>>>>> take on this move is, and whether you see any issues or
>>>>>>> have concerns about it.
>>>>>>> 
>>>>>>> Thanks for your comments, /Ihar
>>>>>>> 
>>>>>>> [1]: https://review.openstack.org/#/c/104905/
>>>>>>> [2]: https://review.openstack.org/#/c/105209/
>>>>>>> 