[openstack-dev] [grenade][keystone] Keystone multinode grenade

Morgan Fainberg morgan.fainberg at gmail.com
Mon Feb 8 16:59:26 UTC 2016


On Mon, Feb 8, 2016 at 5:20 AM, Grasza, Grzegorz <grzegorz.grasza at intel.com>
wrote:

>
> > From: Sean Dague [mailto:sean at dague.net]
> >
> > On 02/05/2016 04:44 AM, Grasza, Grzegorz wrote:
> > >
> > >> From: Sean Dague [mailto:sean at dague.net]
> > >>
> > >> On 02/04/2016 10:25 AM, Grasza, Grzegorz wrote:
> > >>>
> > >>> Keystone is just one service, but we want to run a test, in which it
> > >>> is setup in HA – two services running at different versions, using
> > >>> the same
> > >> DB.
> > >>
> > >> Let me understand the scenario correctly.
> > >>
> > >> There would be Keystone Liberty and Keystone Mitaka, both talking to
> > >> a Liberty DB?
> > >>
> > >
> > > The DB would be upgraded to Mitaka. From Mitaka onwards, we are
> > making only additive schema changes, so that both versions can work
> > simultaneously.
> > >
> > > Here are the specifics:
> > > http://docs.openstack.org/developer/keystone/developing.html#online-migration
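
To make "additive" concrete, here is a minimal sketch of such a change
in the sqlalchemy-migrate style keystone uses; the column name is
hypothetical, and the point is that we add a nullable column rather
than rename or drop one:

    # Minimal sketch of an "additive only" schema change, in the
    # sqlalchemy-migrate style (runs under the migrate framework).
    # The column name is hypothetical. Old keystone simply ignores
    # the new column; new keystone must treat it as optional.
    import sqlalchemy as sql

    def upgrade(migrate_engine):
        meta = sql.MetaData()
        meta.bind = migrate_engine
        user_table = sql.Table('user', meta, autoload=True)
        # Additive: a new nullable column. Dropping or renaming a
        # column here would break the older release that is still
        # reading this schema.
        extra_flag = sql.Column('extra_flag', sql.Boolean,
                                nullable=True)
        user_table.create_column(extra_flag)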
> >
> > Breaking this down, it seems like there is a simpler test setup here.
> >
> > Master keystone is already tested with master db, all over the place:
> > in unit tests and all the dsvm jobs. So we can assume pretty hard that
> > that works.
> >
> > Keystone doesn't cross-talk to itself (as there are no workers), so I
> > don't think there is anything to test there.
> >
> > Keystone stable working with master db seems like an interesting bit;
> > are there already tests for that?
>
> Not yet. Right now there is only a unit test checking obvious
> incompatibilities.
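
For illustration, a check of that shape might look something like the
following sketch; the directory layout, revision naming, and
banned-operation list are all assumptions, not the actual keystone
test:

    # Hypothetical sketch of an "obvious incompatibilities" check:
    # scan migration scripts newer than the last release for
    # destructive DDL that would break the older release.
    import os
    import re

    BANNED = re.compile(r'drop_column|alter_column|drop table|rename',
                        re.IGNORECASE)

    def find_destructive_migrations(migration_dir, first_new_revision):
        offenders = []
        for name in sorted(os.listdir(migration_dir)):
            revision = name.split('_', 1)[0]
            if not revision.isdigit() or int(revision) < first_new_revision:
                continue  # only check migrations added since the release
            with open(os.path.join(migration_dir, name)) as f:
                if BANNED.search(f.read()):
                    offenders.append(name)
        return offenders  # a non-empty list would fail the test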
>
>
As an FYI, that unit test was reverted, as we spent significant time
covering it at the midcycle: it was going to require us to
significantly rework in-flight code, and it was developed / agreed upon
before the new DB restrictions landed. We will be revisiting this first
thing in Newton, with a now better understanding of the scope and of
how to handle the "limited downtime" upgrade.


> >
> > Also, is there any time where you'd get data from Keystone new, use it
> > in a server, and then send it back to Keystone old, and have a
> > validation issue? Edge cases like that seem easier to trigger at a
> > lower level. Like an extra attribute is in a payload in Keystone new,
> > and Keystone old faceplants on it.
>
> In the case of keystone, the data that can cause compatibility issues
> is in the DB. There can be issues when data stored or modified by the
> new keystone is read by the old service, or the other way around. The
> issues may happen only in certain scenarios, like:
>
> row created by old keystone ->
> row modified by new keystone ->
> failure reading by old keystone
>
> I think a CI test in which we have more than one keystone version
> accessible at the same time is preferable to testing only one scenario.
> My proposed solution with HAProxy probably wouldn't trigger all of
> these issues, but it may catch some instances in which there is no full
> lower-level test coverage. I think testing in HA would be helpful,
> especially at the beginning, when we are only starting to evaluate
> rolling upgrades and discovering new types of issues that we should
> test for.
>
>
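To make the quoted failure mode concrete, here is a minimal sketch,
with a made-up credential-type whitelist standing in for whatever
validation the old release performs on read:

    # Sketch of the scenario above: new keystone writes a value the
    # old release's strict validation does not know about. The
    # whitelist and field names are hypothetical.
    OLD_KNOWN_TYPES = ('password', 'totp')  # old release's whitelist

    def old_keystone_read(row):
        # Old keystone validates on read; unknown values are an error.
        if row['type'] not in OLD_KNOWN_TYPES:
            raise ValueError('unknown credential type: %s' % row['type'])
        return row

    row = {'id': '42', 'type': 'password'}  # created by old keystone
    row['type'] = 'x509'                    # modified by new keystone
    try:
        old_keystone_read(row)              # read by old keystone
    except ValueError as exc:
        print('old keystone read failed: %s' % exc)
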
This is something we need to work on. We came to the conclusion that it
is going to be very hard (tm) to run multiple versions of keystone on
the same DB; the volume of complexity added is fairly large. I also
want to better understand the proposed upgrade paths - we ran through
many scenarios and came up with a ton of edge cases / issues.

Thus this is likely something we will need to target for Newton, but
this shouldn't stop us from standing up the basic test scaffolding so
we can move more quickly next cycle.

When we have the gate job, I would like to see us run a battery of
tests against both keystones in isolation rather than through HAProxy.
The HAProxy test is a different type of test, one that confirms random
subsets of reads/writes don't break (aren't wildly different) across
the two different code bases. Testing each API in isolation is also
important.
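
As a sketch of what one test in that battery could look like (the
ports, the token handling, and the choice of API call are assumptions
about the eventual job, not a worked-out design):

    # Issue the same call to each keystone directly (no HAProxy in
    # between) and compare the responses.
    import requests

    ENDPOINTS = {
        'old': 'http://127.0.0.1:5000/v3',  # stable keystone (assumed)
        'new': 'http://127.0.0.1:5001/v3',  # master keystone (assumed)
    }

    def list_domains(endpoint, token):
        resp = requests.get(endpoint + '/domains',
                            headers={'X-Auth-Token': token})
        resp.raise_for_status()
        return resp.json()

    def check_domain_list_parity(token):
        results = {name: list_domains(url, token)
                   for name, url in ENDPOINTS.items()}
        assert results['old'] == results['new'], 'responses diverged'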


> >
> > The reality is that standing up an HAProxy Keystone multinode
> > environment is going to be a pretty extensive amount of work. And when
> > things fail, digging out why is kind of hard. However, it feels like
> > most of the interesting edges can be tested well at a lower level, and
> > it is at least worth getting those sorted before biting off the bigger
> > thing.
>
> I only proposed multinode grenade because I thought it was the most
> complete solution for what I want to achieve, but maybe there is a
> simpler way, like running two keystone instances on the same node?
>
>
It wouldn't be hard to run two instances of keystone on different
ports. However, it is likely to also be a chunk of work to make
devstack able to handle that (but less work than multinode, I'm 90%
sure).
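
As a sketch of what the two-port setup would let us check: a token
issued by either version should validate against the other, since both
share the upgraded DB. The ports, credentials, and the policy
assumption that admin may validate tokens are all hypothetical:

    import requests

    OLD = 'http://127.0.0.1:5000/v3'  # stable keystone (assumed port)
    NEW = 'http://127.0.0.1:5001/v3'  # master keystone (assumed port)

    AUTH = {'auth': {
        'identity': {'methods': ['password'], 'password': {'user': {
            'name': 'admin', 'domain': {'id': 'default'},
            'password': 'secret'}}},  # assumed test credentials
        'scope': {'project': {'name': 'admin',
                              'domain': {'id': 'default'}}}}}

    def issue_token(endpoint):
        resp = requests.post(endpoint + '/auth/tokens', json=AUTH)
        resp.raise_for_status()
        return resp.headers['X-Subject-Token']

    def validates(endpoint, admin_token, subject_token):
        resp = requests.get(endpoint + '/auth/tokens', headers={
            'X-Auth-Token': admin_token,
            'X-Subject-Token': subject_token})
        return resp.status_code == 200

    # Cross-validate in both directions across the shared DB.
    assert validates(NEW, issue_token(NEW), issue_token(OLD))
    assert validates(OLD, issue_token(OLD), issue_token(NEW))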


> / Greg
>

