[openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client
Ihar Hrachyshka
ihrachys at redhat.com
Mon Jul 14 15:03:02 UTC 2014
On 14/07/14 15:54, Clark Boylan wrote:
> On Sun, Jul 13, 2014 at 9:20 AM, Ihar Hrachyshka
> <ihrachys at redhat.com> wrote:
> On 11/07/14 19:20, Clark Boylan wrote:
>>>> Before we get too far ahead of ourselves, mysql-connector is
>>>> not hosted on pypi. Instead, it is an external package link.
>>>> We recently managed to remove all packages that are hosted as
>>>> external package links from openstack and will not add new
>>>> ones in. Before we can use mysql-connector in the gate, Oracle
>>>> will need to publish mysql-connector on pypi properly.
>
> There is a misunderstanding in our community about how we deploy db
> client modules. No project actually depends on any of them. We
> assume deployers will install the proper one and configure the
> 'connection' string to use it. In the case of devstack, we install
> the appropriate package from distribution packages, not pip.
>
>> Correct, but for all of the other test suites (unittests) we
>> install the db clients via pip, because tox runs them and
>> virtualenvs that allow site packages cause too many problems. See
>> https://git.openstack.org/cgit/openstack/nova/tree/test-requirements.txt#n8.
>>
>>
>> So we do actually depend on these things being pip installable.
>> Basically this allows devs to run `tox` and it works.
Roger that, and thanks for the clarification. I'm trying to reach the
author and maintainer of mysql-connector-python to see whether I can
convince him to publish the packages on pypi.python.org.
>
>> I would argue that we should have devstack install via pip too
>> for consistency, but that is a different issue (it is already
>> installing all of the other python dependencies this way so why
>> special case?).
>
> What we do is recommend a module for our users in our
> documentation.
>
> That said, I assume the gate is a non-issue. Correct?
>
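To make the pip/tox point above concrete, here is a small sketch (not
part of any review, just an illustration) that checks which MySQL
drivers are actually importable inside a tox virtualenv; note that the
import names differ from the pypi package names:

    # which MySQL drivers are pip-installed in this (tox) virtualenv?
    # import names: MySQL-python -> MySQLdb,
    # mysql-connector-python -> mysql.connector, PyMySQL -> pymysql
    import importlib

    for name in ("MySQLdb", "mysql.connector", "pymysql"):
        try:
            importlib.import_module(name)
            print("%s: importable" % name)
        except ImportError:
            print("%s: missing" % name)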
>>>>
>>>> That said, there is at least one other pure-Python
>>>> alternative, PyMySQL. PyMySQL supports py3k and pypy. We
>>>> should look at using PyMySQL instead if we want a reasonable
>>>> path to getting this into the gate.
>
> MySQL Connector supports py3k too (not sure about pypy though).
>
>>>>
>>>> Clark
>>>>
>>>> On Fri, Jul 11, 2014 at 10:07 AM, Miguel Angel Ajo Pelayo
>>>> <mangelajo at redhat.com> wrote:
>>>>> +1 here too,
>>>>>
>>>>> Amazed by the performance gains: x2.4 seems like a lot, and
>>>>> we'd get rid of deadlocks.
>>>>>
>>>>>
>>>>>
>>>>> ----- Original Message -----
>>>>>> +1
>>>>>>
>>>>>> I'm pretty excited about the possibilities here. I've
>>>>>> had this mysqldb/eventlet contention in the back of my
>>>>>> mind for some time now. I'm glad to see some work being
>>>>>> done in this area.
>>>>>>
>>>>>> Carl
>>>>>>
>>>>>> On Fri, Jul 11, 2014 at 7:04 AM, Ihar Hrachyshka
>>>>>> <ihrachys at redhat.com> wrote:
>>>> On 09/07/14 13:17, Ihar Hrachyshka wrote:
>>>>>>>>> Hi all,
>>>>>>>>>
>>>>>>>>> Multiple projects are suffering from db lock
>>>>>>>>> timeouts due to deadlocks deep in the mysqldb library
>>>>>>>>> that we use to interact with mysql servers. In
>>>>>>>>> essence, the problem is the missing eventlet
>>>>>>>>> support in the mysqldb module: when a db lock is
>>>>>>>>> encountered, the library does not yield to the next
>>>>>>>>> green thread, which would allow other threads to
>>>>>>>>> eventually release the grabbed lock. Instead, it
>>>>>>>>> blocks the main thread until a timeout exception
>>>>>>>>> (OperationalError) is eventually raised.
>>>>>>>>>
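As a side note for readers unfamiliar with eventlet: the sketch below is
not the real drivers, just a minimal illustration of the principle
described above. A call that blocks inside a C extension (stood in for
here by time.sleep) stalls the whole hub, while a cooperative call
(stood in for by eventlet.sleep) lets other greenthreads keep running:

    import time
    import eventlet

    def blocking_call(name):
        # stands in for MySQLdb waiting on a row lock inside its C
        # extension; eventlet cannot intercept it, so nothing else runs
        time.sleep(2)
        print("%s finished" % name)

    def cooperative_call(name):
        # stands in for a pure-Python driver whose socket I/O yields
        # back to the eventlet hub while waiting
        eventlet.sleep(2)
        print("%s finished" % name)

    pool = eventlet.GreenPool()
    start = time.time()
    for i in range(3):
        pool.spawn(blocking_call, "blocking-%d" % i)
    pool.waitall()
    print("blocking: %.1f sec (serialized)" % (time.time() - start))

    start = time.time()
    for i in range(3):
        pool.spawn(cooperative_call, "green-%d" % i)
    pool.waitall()
    print("cooperative: %.1f sec (overlapped)" % (time.time() - start))
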
>>>>>>>>> The failed operation is not retried, leaving the
>>>>>>>>> failing request unserved. In Nova, there is a
>>>>>>>>> special retry mechanism for deadlocks, though I
>>>>>>>>> think it's more of a hack than a proper fix.
>>>>>>>>>
>>>>>>>>> Neutron is one of the projects that suffer from
>>>>>>>>> those timeout errors a lot. Partly that's due to a
>>>>>>>>> lack of discipline in how we do nested calls in the
>>>>>>>>> l3_db and ml2_plugin code, but that's not something
>>>>>>>>> we can change in the foreseeable future, so we need
>>>>>>>>> to find another solution that is applicable to Juno.
>>>>>>>>> Ideally, the solution should be applicable to
>>>>>>>>> Icehouse too, to allow distributors to resolve
>>>>>>>>> existing deadlocks without waiting for Juno.
>>>>>>>>>
>>>>>>>>> We've had several discussions and attempts to
>>>>>>>>> introduce a solution to the problem. Thanks to the
>>>>>>>>> oslo.db folks, we now have a more or less clear view
>>>>>>>>> of the cause of the failures and how to easily fix
>>>>>>>>> them. The solution is to switch from mysqldb to
>>>>>>>>> something eventlet aware. The best candidate is
>>>>>>>>> probably the MySQL Connector module, which is the
>>>>>>>>> official MySQL client for Python and shows
>>>>>>>>> some (preliminary) good results in terms of
>>>>>>>>> performance.
>>>>
>>>> I've done additional testing, creating 2000 networks in
>>>> parallel (10 thread workers) with both drivers and comparing
>>>> the results.
>>>>
>>>> With mysqldb: 215.81 sec
>>>> With mysql-connector: 88.66 sec
>>>>
>>>> ~2.4x performance boost, ok? ;)
>>>>
>>>> I think we should switch to that library *even* if we forget
>>>> about all the nasty deadlocks we experience now.
>>>>
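(For reference only: the snippet below is not the harness that produced
the numbers above, just a generic sketch of that kind of test, with a
stub standing in for the real create_network call being exercised.)

    # hypothetical timing harness: N workers creating networks in parallel
    import time
    from concurrent.futures import ThreadPoolExecutor

    TOTAL_NETWORKS = 2000
    WORKERS = 10

    def create_network(index):
        # placeholder for the real API/plugin call being timed
        time.sleep(0.01)
        return index

    start = time.time()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        list(pool.map(create_network, range(TOTAL_NETWORKS)))
    print("created %d networks with %d workers in %.2f sec"
          % (TOTAL_NETWORKS, WORKERS, time.time() - start))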
>>>>>>>>>
>>>>>>>>> I've posted a Neutron spec for the switch to the
>>>>>>>>> new client in Juno at [1]. Ideally, the switch is
>>>>>>>>> just a matter of several fixes to oslo.db that would
>>>>>>>>> enable full support for the new driver (already
>>>>>>>>> supported by SQLAlchemy), plus a modified
>>>>>>>>> 'connection' string in service configuration files,
>>>>>>>>> plus documentation updates to refer to the new
>>>>>>>>> official way to configure services for MySQL. The
>>>>>>>>> database code shouldn't, ideally, require any major
>>>>>>>>> changes, though some adaptation for the new client
>>>>>>>>> library may be needed. That said, Neutron does not
>>>>>>>>> seem to require any changes, though it was revealed
>>>>>>>>> that there are some alembic migration rules in
>>>>>>>>> Keystone or Glance that need (trivial) modifications.
>>>>>>>>>
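Since the switch is mostly a matter of the 'connection' string, here is
a minimal sketch of what it looks like at the SQLAlchemy level. Host,
user and password are made up; only the dialect part of the URL
changes, both dialects ship with SQLAlchemy, and the DBAPI driver
itself still has to be installed for each line to work:

    from sqlalchemy import create_engine

    # today: MySQLdb, which is also what a plain "mysql://" URL
    # resolves to by default
    engine = create_engine(
        "mysql+mysqldb://neutron:secret@127.0.0.1/neutron")

    # proposed: MySQL Connector/Python via the mysqlconnector dialect
    engine = create_engine(
        "mysql+mysqlconnector://neutron:secret@127.0.0.1/neutron")
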
>>>>>>>>> You can see how trivially the switch can be achieved
>>>>>>>>> for a service in the example for Neutron [2].
>>>>>>>>>
>>>>>>>>> While this is a Neutron-specific proposal, there is
>>>>>>>>> an obvious wish to switch to the new library
>>>>>>>>> globally throughout all the projects, to reduce the
>>>>>>>>> devops burden, among other things. My vision is
>>>>>>>>> that, ideally, we switch all projects to the new
>>>>>>>>> library in Juno, though we may still leave several
>>>>>>>>> projects for K in case any issues arise, similar to
>>>>>>>>> the way projects switched to oslo.messaging over
>>>>>>>>> two cycles instead of one. Though looking at how
>>>>>>>>> easily Neutron can be switched to the new library, I
>>>>>>>>> wouldn't expect any issues that would postpone the
>>>>>>>>> switch till K.
>>>>>>>>>
>>>>>>>>> It was mentioned in comments on the spec proposal
>>>>>>>>> that there were some discussions at the latest
>>>>>>>>> summit around a possible switch in the context of
>>>>>>>>> Nova that revealed some concerns, though they do not
>>>>>>>>> seem to be documented anywhere. So if you know
>>>>>>>>> anything about it, please comment.
>>>>>>>>>
>>>>>>>>> So, we'd like to hear from other projects: what's
>>>>>>>>> your take on this move, and do you see any issues
>>>>>>>>> or have concerns about it?
>>>>>>>>>
>>>>>>>>> Thanks for your comments, /Ihar
>>>>>>>>>
>>>>>>>>> [1]: https://review.openstack.org/#/c/104905/ [2]:
>>>>>>>>> https://review.openstack.org/#/c/105209/