[openstack-dev] [goals][python3] mixed versions?

Ghanshyam Mann gmann at ghanshyammann.com
Thu Sep 13 13:48:08 UTC 2018




 ---- On Thu, 13 Sep 2018 22:10:48 +0900 Doug Hellmann <doug at doughellmann.com> wrote ---- 
 > Excerpts from Thomas Goirand's message of 2018-09-13 12:23:32 +0200:
 > > On 09/13/2018 12:52 AM, Chris Friesen wrote:
 > > > On 9/12/2018 12:04 PM, Doug Hellmann wrote:
 > > > 
 > > >>> This came up in a Vancouver summit session (the python3 one I think).
 > > >>> General consensus there seemed to be that we should have grenade jobs
 > > >>> that run python2 on the old side and python3 on the new side and test
 > > >>> the update from one to another through a release that way.
 > > >>> Additionally there was thought that the nova partial job (and similar
 > > >>> grenade jobs) could hold the non upgraded node on python2 and that
 > > >>> would talk to a python3 control plane.
 > > >>>
 > > >>> I haven't seen or heard of anyone working on this yet though.
 > > >>>
 > > >>> Clark
 > > >>>
 > > >>
 > > >> IIRC, we also talked about not supporting multiple versions of
 > > >> python on a given node, so all of the services on a node would need
 > > >> to be upgraded together.
 > > > 
 > > > As I understand it, the various services talk to each other using
 > > > over-the-wire protocols.  Assuming this is correct, why would we need to
 > > > ensure they are using the same python version?
 > > > 
 > > > Chris
 > > 
 > > There are indeed a few cases where things can break, especially with
 > > character encoding. If you want an example of what may go wrong, here's
 > > one with Cinder and Ceph:
 > > 
 > > https://review.openstack.org/568813
 > > 
 > > Without the encodeutils.safe_decode() call, Cinder over Ceph was just
 > > crashing for me on Debian (Debian is fully Python 3 now...). In this
 > > example, we're just going over the wire, and the behaviour was supposed
 > > to be the same. Yet only an integration test could have detected it
 > > (and I discovered it running puppet-openstack on Debian).
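
To spell out the failure mode Thomas hit: under python2 the client library
hands the driver str, under python3 the same call hands back bytes, and any
string handling on top of it then blows up. A minimal sketch of that pattern
(not the actual Cinder RBD code; the helper name and sample data are made up
for illustration):

    from oslo_utils import encodeutils

    def parse_mon_addrs(raw_output):
        # raw_output is str on python2 but bytes on python3; without
        # safe_decode(), the str separator below would raise TypeError
        # on python3.
        text = encodeutils.safe_decode(raw_output)
        return [addr.strip() for addr in text.split(',')]

    # Works on both interpreters:
    print(parse_mon_addrs(b'10.0.0.1:6789, 10.0.0.2:6789'))

A py2-only gate never exercises the bytes path, which is why only an
integration run on a py3 host caught it.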

I think that should be detected by the py3 ceph job "legacy-tempest-dsvm-py35-full-devstack-plugin-ceph". Was that failing, or did anyone check its status during the failure? This job is experimental in the cinder gate[1], so I could not get its failure data from the health dashboard.
Maybe we should move it to the check pipeline to cover cinder+ceph for py3?

[1] https://github.com/openstack-infra/project-config/blob/4eeec4cc6e18dd8933b16a2ddda75b469b893437/zuul.d/projects.yaml#L3471
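For reference, moving it would just mean relocating the job entry from the
experimental to the check pipeline in that projects.yaml, something like this
(a sketch assuming the usual zuul v3 project stanza; probably non-voting at
first):

    - project:
        name: openstack/cinder
        check:
          jobs:
            - legacy-tempest-dsvm-py35-full-devstack-plugin-ceph:
                voting: false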

-gmann
 > 
 > Was that caused (or found) by first running cinder under python 2
 > and then upgrading to python 3 on the same host? That's the test
 > case Jim originally suggested and I'm trying to understand if we
 > actually need it.
 > 
 > Doug
 > 




