[openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

Ihar Hrachyshka ihrachys at redhat.com
Mon Jul 21 08:37:09 UTC 2014





On 21/07/14 04:53, Angus Lees wrote:
> Status, as I understand it:
> 
> * oslo.db changes to support other mysql drivers:
> 
> https://review.openstack.org/#/c/104425/  (merged)
> https://review.openstack.org/#/c/106928/  (awaiting oslo.db review)
> https://review.openstack.org/#/c/107221/  (awaiting oslo.db review)

For that last one, the idea is correct, but the implementation is
wrong; see my comments in the review.

> 
> (These are sufficient to allow operators to switch connection
> strings and use mysqlconnector.  The rest is all for our testing
> environment)
> 
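
Right. For anyone who wants to try this already: the driver is
selected from the scheme of the SQLAlchemy URL that oslo.db's
[database] connection option is fed, so the operator-visible change
really is just the URL. A minimal sketch, with placeholder
credentials, assuming the respective driver packages are installed:

    from sqlalchemy import create_engine

    # current default, backed by the MySQLdb C extension:
    engine = create_engine('mysql://user:secret@127.0.0.1/neutron')

    # the same database through MySQL Connector/Python (pure Python,
    # eventlet friendly); PyMySQL would be 'mysql+pymysql://...':
    engine = create_engine(
        'mysql+mysqlconnector://user:secret@127.0.0.1/neutron')

In a deployed service that is the same string, just set as the
'connection' option in the [database] section of the config file.
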
> * oslo.db change to allow testing with other mysql drivers:
> 
> https://review.openstack.org/#/c/104428/  (awaiting oslo.db review)
> https://review.openstack.org/#/c/104447/  (awaiting oslo.db review.
> Ongoing discussion towards a larger rewrite of oslo.db testing instead)
> 
> * Integration into jenkins environment:
> 
> Blocked on getting Oracle to distribute mysql-connector via pypi. 
> Ihar and others are having conversations with the upstream author.
> 
> * Devstack change to switch to mysqlconnector for neutron:
> 
> https://review.openstack.org/#/c/105209/  (marked wip)
> Ihar: do you want me to pick this up, or are you going to continue
> it once some of the above has settled?

This is in WIP because it's not yet clear whether the switch is
expected to be global or local to neutron. I'll make sure it's covered
if/when the spec is approved.

> 
> * oslo.db gate test that reproduces the deadlock with eventlet:
> 
> https://review.openstack.org/#/c/104436/  (In review. Can't be
> submitted until the gate environment is switched to mysqlconnector)
> 

+ Performance is yet to be benchmarked for the different projects.

> 
> Overall I'm not happy with the rate of change - but we're getting
> there.

That's OpenStack! Changes take time here.

> I look forward to getting this fixed :/
> 

Thanks for tracking the oslo.db part of this; I really appreciate it.

> 
> On 18 July 2014 21:45, Ihar Hrachyshka <ihrachys at redhat.com> wrote:
> 
> On 14/07/14 17:03, Ihar Hrachyshka wrote:
>> On 14/07/14 15:54, Clark Boylan wrote:
>>> On Sun, Jul 13, 2014 at 9:20 AM, Ihar Hrachyshka
>>> <ihrachys at redhat.com> wrote:
>>>>> On 11/07/14 19:20, Clark Boylan wrote:
>>>>>> Before we get too far ahead of ourselves mysql-connector 
>>>>>> is not hosted on pypi. Instead it is an external package 
>>>>>> link. We recently managed to remove all packages that
>>>>>> are hosted as external package links from openstack and
>>>>>> will not add new ones in. Before we can use
>>>>>> mysql-connector in the gate, Oracle will need to publish
>>>>>> mysql-connector on pypi properly.
> 
>>> There is a misunderstanding in our community on how we deploy db
>>> client modules. No project actually depends on any of them. We
>>> assume deployers will install the proper one and configure the
>>> 'connection' string to use it. In the case of devstack, we install
>>> the appropriate package from distribution packages, not pip.
> 
>>>> Correct, but for all of the other test suites (unittests) we
>>>> install the db clients via pip because tox runs them and
>>>> virtualenvs allowing site packages cause too many problems. See
>>>> https://git.openstack.org/cgit/openstack/nova/tree/test-requirements.txt#n8.
>>>> So we do actually depend on these things being pip installable.
>>>> Basically this allows devs to run `tox` and it works.
> 
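
So for the unit test jobs the new driver would also have to become an
entry in each project's test-requirements.txt once it's on pypi.
Roughly along these lines (illustrative only, not the exact nova
entries):

    # test-requirements.txt (illustrative)
    MySQL-python              # the MySQLdb driver used today
    mysql-connector-python    # possible only once Oracle publishes it on pypi
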
>> Roger that, and thanks for the clarification. I'm trying to reach
>> the author and the maintainer of mysqlconnector-python to see
>> whether I'll be able to convince him to publish the packages on
>> pypi.python.org.
> 
> 
> I've reached the maintainer of the module; he told me he is
> currently working on uploading releases directly to
> pypi.python.org.
> 
> 
>>>> I would argue that we should have devstack install via pip
>>>> too for consistency, but that is a different issue (it is
>>>> already installing all of the other python dependencies this
>>>> way so why special case?).
> 
>>> What we do is recommend a module for our users in our
>>> documentation.
> 
>>> That said, I assume the gate is a non-issue. Correct?
> 
>>>>>> 
>>>>>> That said, there is at least one other pure python
>>>>>> alternative, PyMySQL. PyMySQL supports py3k and pypy. We 
>>>>>> should look at using PyMySQL instead if we want to start 
>>>>>> with a reasonable path to getting this in the gate.
> 
>>> MySQL Connector supports py3k too (not sure about pypy
>>> though).
> 
>>>>>> 
>>>>>> Clark
>>>>>> 
>>>>>> On Fri, Jul 11, 2014 at 10:07 AM, Miguel Angel Ajo
>>>>>> Pelayo <mangelajo at redhat.com> wrote:
>>>>>>> +1 here too,
>>>>>>> 
>>>>>>> Amazed by the performance gains; x2.4 seems like a lot,
>>>>>>> and we'd get rid of the deadlocks.
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> ----- Original Message -----
>>>>>>>> +1
>>>>>>>> 
>>>>>>>> I'm pretty excited about the possibilities here.
>>>>>>>> I've had this mysqldb/eventlet contention in the back
>>>>>>>> of my mind for some time now. I'm glad to see some
>>>>>>>> work being done in this area.
>>>>>>>> 
>>>>>>>> Carl
>>>>>>>> 
>>>>>>>> On Fri, Jul 11, 2014 at 7:04 AM, Ihar Hrachyshka
>>>>>>>> <ihrachys at redhat.com> wrote:
>>>>>> On 09/07/14 13:17, Ihar Hrachyshka wrote:
>>>>>>>>>>> Hi all,
>>>>>>>>>>> 
>>>>>>>>>>> Multiple projects are suffering from db lock
>>>>>>>>>>> timeouts due to deadlocks deep in the mysqldb
>>>>>>>>>>> library that we use to interact with mysql
>>>>>>>>>>> servers. In essence, the problem is the missing
>>>>>>>>>>> eventlet support in the mysqldb module: when a
>>>>>>>>>>> db lock is encountered, the library does not
>>>>>>>>>>> yield to the next green thread (which would let
>>>>>>>>>>> other threads eventually release the grabbed
>>>>>>>>>>> lock); instead it blocks the main thread, which
>>>>>>>>>>> eventually raises a timeout exception
>>>>>>>>>>> (OperationalError).
>>>>>>>>>>> 
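
To make the failure mode concrete: a toy sketch, not real driver
code, of how a single call that blocks below eventlet's reach stalls
every other greenthread in the process:

    import eventlet
    eventlet.monkey_patch(time=False)   # leave time.sleep unpatched so
                                        # it can stand in for a blocking
                                        # call inside a C extension
    import time

    def blocking_db_call():
        # mysqldb waits for the row lock inside its C extension;
        # eventlet cannot switch away, so the whole process stalls here
        time.sleep(3)
        print('blocking call finished')

    def other_request():
        print('other greenthread served')

    t1 = eventlet.spawn(blocking_db_call)
    t2 = eventlet.spawn(other_request)
    t1.wait()
    t2.wait()

'other greenthread served' is only printed after the full three
seconds; a pure Python driver doing its I/O through the patched
socket module would have yielded immediately, letting the other
greenthread, including the one holding the lock, make progress.
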
>>>>>>>>>>> The failed operation is not retried, leaving the
>>>>>>>>>>> failing request unserved. In Nova, there is
>>>>>>>>>>> a special retry mechanism for deadlocks, though
>>>>>>>>>>> I think it's more of a hack than a proper fix.
>>>>>>>>>>> 
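
For reference, the Nova mechanism is essentially a decorator that
catches the deadlock error and re-runs the DB call. Roughly this
shape (a sketch, not Nova's actual code; the real thing catches
oslo.db's deadlock exception):

    import functools
    import time

    class DBDeadlock(Exception):
        """Stand-in for the deadlock exception raised by oslo.db."""

    def retry_on_deadlock(func, retries=5, interval=0.5):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == retries - 1:
                        raise
                    time.sleep(interval)
        return wrapper

It retries the symptom at each decorated call site instead of
removing the blocking behaviour, which is why I call it a hack.
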
>>>>>>>>>>> Neutron is one of the projects that suffer
>>>>>>>>>>> from those timeout errors a lot. Partly it's
>>>>>>>>>>> due to a lack of discipline in how we do nested
>>>>>>>>>>> calls in the l3_db and ml2_plugin code, but
>>>>>>>>>>> that's not something to change in the
>>>>>>>>>>> foreseeable future, so we need to find another
>>>>>>>>>>> solution that is applicable for Juno. Ideally,
>>>>>>>>>>> the solution should be applicable for Icehouse
>>>>>>>>>>> too, to allow distributors to resolve existing
>>>>>>>>>>> deadlocks without waiting for Juno.
>>>>>>>>>>> 
>>>>>>>>>>> We've had several discussions and attempts to
>>>>>>>>>>> introduce a solution to the problem. Thanks to
>>>>>>>>>>> the oslo.db guys, we now have a more or less
>>>>>>>>>>> clear view of the cause of the failures and how
>>>>>>>>>>> to easily fix them. The solution is to switch
>>>>>>>>>>> from mysqldb to something eventlet aware. The
>>>>>>>>>>> best candidate is probably the MySQL Connector
>>>>>>>>>>> module, which is an official MySQL client for
>>>>>>>>>>> Python and shows some good (preliminary) results
>>>>>>>>>>> in terms of performance.
>>>>>> 
>>>>>> I've done additional testing, creating 2000 networks in
>>>>>> parallel (10 thread workers) with both drivers and
>>>>>> comparing the results.
>>>>>> 
>>>>>> With mysqldb: 215.81 sec
>>>>>> With mysql-connector: 88.66 sec
>>>>>> 
>>>>>> ~2.4 times performance boost, ok? ;)
>>>>>> 
>>>>>> I think we should switch to that library *even* if we 
>>>>>> forget about all the nasty deadlocks we experience now.
>>>>>> 
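
For the curious, a rough sketch of how such a run can be driven
(just the shape, with placeholder credentials, not the actual
script): 10 workers creating 2000 networks through
python-neutronclient against a neutron-server configured with the
driver under test.

    import time
    from concurrent.futures import ThreadPoolExecutor
    from neutronclient.v2_0 import client as neutron_client

    # placeholder credentials; a real run may want one client per worker
    neutron = neutron_client.Client(
        username='admin', password='secret', tenant_name='admin',
        auth_url='http://127.0.0.1:5000/v2.0')

    def create(i):
        neutron.create_network({'network': {'name': 'perf-net-%d' % i}})

    start = time.time()
    with ThreadPoolExecutor(max_workers=10) as pool:
        list(pool.map(create, range(2000)))
    print('elapsed: %.2f sec' % (time.time() - start))

The interesting part is all on the server side: the same
neutron-server, pointed once at connection = mysql://... and once at
mysql+mysqlconnector://..., is what produces the two numbers above.
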
>>>>>>>>>>> 
>>>>>>>>>>> I've posted a Neutron spec for the switch to
>>>>>>>>>>> the new client in Juno at [1]. Ideally, the
>>>>>>>>>>> switch is just a matter of several fixes to
>>>>>>>>>>> oslo.db that enable full support for the new
>>>>>>>>>>> driver (already supported by SQLAlchemy), plus
>>>>>>>>>>> the 'connection' string being modified in service
>>>>>>>>>>> configuration files, plus documentation updates
>>>>>>>>>>> referring to the new official way to configure
>>>>>>>>>>> services for MySQL. The database code won't,
>>>>>>>>>>> ideally, require any major changes, though some
>>>>>>>>>>> adaptation for the new client library may be
>>>>>>>>>>> needed. That said, Neutron does not seem to
>>>>>>>>>>> require any changes, though it was revealed that
>>>>>>>>>>> there are some alembic migration rules in
>>>>>>>>>>> Keystone or Glance that need (trivial)
>>>>>>>>>>> modifications.
>>>>>>>>>>> 
>>>>>>>>>>> You can see how trivially the switch can be
>>>>>>>>>>> achieved for a service in the example for
>>>>>>>>>>> Neutron [2].
>>>>>>>>>>> 
>>>>>>>>>>> While this is a Neutron-specific proposal,
>>>>>>>>>>> there is an obvious wish to switch to the new
>>>>>>>>>>> library globally throughout all the projects,
>>>>>>>>>>> to reduce the devops burden, among other things.
>>>>>>>>>>> My vision is that, ideally, we switch all projects
>>>>>>>>>>> to the new library in Juno, though we may still
>>>>>>>>>>> leave several projects for K in case any issues
>>>>>>>>>>> arise, similar to the way projects switched to
>>>>>>>>>>> oslo.messaging over two cycles instead of
>>>>>>>>>>> one. Though looking at how easily Neutron can be
>>>>>>>>>>> switched to the new library, I wouldn't expect
>>>>>>>>>>> any issues that would postpone the switch till
>>>>>>>>>>> K.
>>>>>>>>>>> 
>>>>>>>>>>> It was mentioned in comments on the spec
>>>>>>>>>>> proposal that there were some discussions at
>>>>>>>>>>> the latest summit around a possible switch in
>>>>>>>>>>> the context of Nova that revealed some concerns,
>>>>>>>>>>> though they do not seem to be documented
>>>>>>>>>>> anywhere. So if you know anything about it,
>>>>>>>>>>> please comment.
>>>>>>>>>>> 
>>>>>>>>>>> So, we'd like to hear from other projects:
>>>>>>>>>>> what's your take on this move? Do you see
>>>>>>>>>>> any issues or have concerns about it?
>>>>>>>>>>> 
>>>>>>>>>>> Thanks for your comments, /Ihar
>>>>>>>>>>> 
>>>>>>>>>>> [1]: https://review.openstack.org/#/c/104905/ 
>>>>>>>>>>> [2]: https://review.openstack.org/#/c/105209/
>>>>>>>>>>> 
> -- - Gus


