[openstack-dev] [grenade][keystone] Keystone multinode grenade

Grasza, Grzegorz grzegorz.grasza at intel.com
Mon Feb 8 13:20:09 UTC 2016

> From: Sean Dague [mailto:sean at dague.net]
> On 02/05/2016 04:44 AM, Grasza, Grzegorz wrote:
> >
> >> From: Sean Dague [mailto:sean at dague.net]
> >>
> >> On 02/04/2016 10:25 AM, Grasza, Grzegorz wrote:
> >>>
> >>> Keystone is just one service, but we want to run a test, in which it
> >>> is set up in HA – two services running at different versions, using
> >>> the same DB.
> >>
> >> Let me understand the scenario correctly.
> >>
> >> There would be Keystone Liberty and Keystone Mitaka, both talking to
> >> a Liberty DB?
> >>
> >
> > The DB would be upgraded to Mitaka. From Mitaka onwards, we are
> > making only additive schema changes, so that both versions can work
> > simultaneously.
> >
> > Here are the specifics:
> > http://docs.openstack.org/developer/keystone/developing.html#online-migration
> Breaking this down, it seems like there is a simpler test setup here.
> Master keystone is already tested with master db all over the place: in unit
> tests and in all the dsvm jobs. So we can assume pretty hard that that works.
> Keystone doesn't cross-talk to itself (as there are no workers), so I don't think
> there is anything to test there.
> Keystone stable working with master db seems like an interesting bit; are
> there already tests for that?

Not yet. Right now there is only a unit test that checks for obvious incompatibilities.
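As a rough illustration of what such a check can cover, here is a minimal sketch that verifies a new schema only makes additive changes relative to the old one. The schema dicts and the helper are entirely made up for the example; keystone's real check works against its actual models, not this.

```python
# Hypothetical sketch: assert that the "new" schema only adds to the "old"
# one, never drops or retypes anything the old release still depends on.
OLD_SCHEMA = {
    "user": {"id": "varchar(64)", "name": "varchar(255)"},
}
NEW_SCHEMA = {
    "user": {"id": "varchar(64)", "name": "varchar(255)",
             "created_at": "datetime"},  # new nullable column: additive, OK
}

def additive_only(old, new):
    """Return True if every old table and column survives unchanged in new."""
    for table, columns in old.items():
        if table not in new:
            return False  # a dropped table breaks the old release
        for column, col_type in columns.items():
            if new[table].get(column) != col_type:
                return False  # a dropped or retyped column breaks it too
    return True

print(additive_only(OLD_SCHEMA, NEW_SCHEMA))  # True: only a column was added
```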

> Also, is there any time where you'd get data from Keystone new, use it in a
> server, and then send it back to Keystone old, and have a validation issue?
> Those seem like edge cases that are easier to trigger at a lower level. For
> example, an extra attribute appears in a payload from Keystone new, and
> Keystone old faceplants on it.

In the case of keystone, the data that can cause compatibility issues lives in the DB.
There can be issues when data stored or modified by the new keystone
is read by the old service, or the other way around. These issues may show up
only in certain scenarios, for example:

row created by old keystone ->
row modified by new keystone ->
failure reading by old keystone
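The scenario above can be sketched with sqlite3 standing in for keystone's database. The token table, the 'fernet' value, and the old release's hard-coded set are all invented for the illustration; they are not keystone's real schema.

```python
import sqlite3

# Hypothetical demo of: row created by old keystone -> row modified by
# new keystone -> failure reading by old keystone.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE token (id TEXT, kind TEXT)")

# 1. Row created by the "old" keystone.
db.execute("INSERT INTO token VALUES ('t1', 'uuid')")

# 2. Row modified by the "new" keystone, which knows a new token kind.
db.execute("UPDATE token SET kind = 'fernet' WHERE id = 't1'")

# 3. The "old" keystone reads the row back and validates strictly
#    against the set of kinds it was released with.
OLD_KNOWN_KINDS = {"uuid", "pki"}

kind, = db.execute("SELECT kind FROM token WHERE id = 't1'").fetchone()
if kind not in OLD_KNOWN_KINDS:
    print("old keystone fails: unknown token kind %r" % kind)
```

The point is that the failure only triggers on this particular create/modify/read interleaving, which is why a single-scenario test can miss it.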

I think a CI test in which we have more than one keystone version accessible
at the same time is preferable to testing only one scenario. My proposed
solution with HAProxy probably wouldn't trigger all of them, but it may catch
some instances where there is no full lower-level test coverage. I think testing
in HA would be helpful, especially at the beginning, when we are only starting to
evaluate rolling upgrades and discovering new types of issues that we should
test for.

> The reality is that standing up an HAProxy Keystone multinode environment
> is going to be a pretty extensive amount of work. And when things fail, digging
> out why is kind of hard. However, it feels like most of the interesting edges
> can be tested well at a lower level, and it is at least worth getting those sorted
> before biting off the bigger thing.

I only proposed multinode grenade because I thought it was the most complete
solution for what I want to achieve, but maybe there is a simpler way, such as
running two keystone instances on the same node?
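As a rough sketch of that simpler single-node setup, here are two toy HTTP services on different ports of one host (standing in for the two keystone versions), queried round-robin the way a load balancer would. The ports, the /version path, and the payloads are all invented for the example; a real job would run two keystone WSGI processes instead.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_handler(version):
    """Build a handler that reports which 'release' is answering."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"version": version}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep the demo output quiet
            pass
    return Handler

# Two "keystones" on one node, different ports (hypothetical ports).
servers = []
for port, version in ((8791, "liberty"), (8792, "mitaka")):
    srv = HTTPServer(("127.0.0.1", port), make_handler(version))
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    servers.append(srv)

# Alternate between both backends, as HAProxy round-robin would.
seen = []
for port in (8791, 8792, 8791, 8792):
    with urllib.request.urlopen("http://127.0.0.1:%d/version" % port) as resp:
        seen.append(json.load(resp)["version"])

print(seen)  # both versions reachable at the same time
for srv in servers:
    srv.shutdown()
```

This keeps the "two versions serving simultaneously" property of the HA setup while avoiding the multinode orchestration that makes failures hard to dig out.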

/ Greg
