[openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream

Georg Kunz georg.kunz at ericsson.com
Wed Mar 7 20:59:46 UTC 2018


Hi Adam,

> Raoul Scarazzini <rasca at redhat.com> wrote:
> >On 06/03/2018 13:27, Adam Spiers wrote:
> >> Hi Raoul and all,
> >> Sorry for joining this discussion late!
> >[...]
> >> I do not work on TripleO, but I'm part of the wider OpenStack
> >> sub-communities which focus on HA[0] and, more recently,
> >> self-healing[1].  With that hat on, I'd like to suggest that maybe
> >> it's possible to collaborate on this in a manner which is agnostic to
> >> the deployment mechanism.  There is an open spec on this:
> >> https://review.openstack.org/#/c/443504/
> >> which was mentioned in the Denver PTG session on destructive testing
> >> which you referenced[2].
> >[...]
> >>    https://www.opnfv.org/community/projects/yardstick
> >[...]
> >> Currently each sub-community and vendor seems to be reinventing HA
> >> testing by itself to some extent, which is easier to accomplish in
> >> the short-term, but obviously less efficient in the long-term.  It
> >> would be awesome if we could break these silos down and join efforts!
> >> :-)
> >
> >Hi Adam,
> >First of all, thanks for your detailed answer. And let me be honest:
> >I didn't know about yardstick.
> 
> Neither did I until Sydney, despite being involved with OpenStack HA for
> many years ;-)  I think this shows that either a) there is room for improved
> communication between the OpenStack and OPNFV communities, or b) I
> need to take my head out of the sand more often ;-)
> 
> >I need to start from scratch
> >here to understand what this project is. In any case, the whole point
> >of this thread is to involve people and take a more comprehensive look
> >at what's out there.
> >The point here is that, as you can see from the tripleo-ha-utils spec
> >[1] I've created, the project is meant specifically for TripleO. On
> >one hand this is a significant limitation, but on the other, thanks to
> >the pluggable nature of the project, I think that integrating with
> >other software, as you propose, is not impossible.
> 
> Yep.  I totally sympathise with the tension between the need to get
> something working quickly and the need to collaborate with the
> community in the most efficient way.
> 
> >Feel free to add your comments to the review.
> 
> The spec looks great to me; I don't really have anything to add, and I
> don't feel comfortable voting on a project which I know very little
> about.
> 
> >In the meantime, I'll check yardstick to see which kind of bridge we
> >can build to avoid reinventing the wheel.
> 
> Great, thanks!  I wish I could immediately help with this, but I haven't had the
> chance to learn yardstick myself yet.  We should probably try to recruit
> someone from OPNFV to provide advice.  I've cc'd Georg who IIRC was the
> person who originally told me about yardstick :-)  He is an NFV expert and is
> also very interested in automated testing efforts:
> 
>     http://lists.openstack.org/pipermail/openstack-dev/2017-November/124942.html
> 
> so he may be able to help with this architectural challenge.

Thank you for bringing this up here. Better collaboration and sharing of knowledge, methodologies and tools across the communities is really what I'd like to see and facilitate. Hence, I am happy to help.

I have already started advertising the newly proposed QA SIG in the OPNFV test WG, and I'll happily do the same for the self-healing SIG and any HA testing efforts in general. There is certainly some overlap in testing interests between the QA SIG and the self-healing SIG, and hence collaboration between the two SIGs is crucial.

One remark regarding tools and frameworks: I consider the true value of a SIG to lie in being a place for discussing methodologies and best practices: What do we need to test? What are the challenges? How can we approach this across communities? The tools and frameworks are important, and we should investigate which tools are available, how good they are, and how well they fit a given purpose, but at the end of the day they are tools meant to enable well-designed testing methodologies.
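
To make that concrete: stripped of any particular framework, most destructive HA tests boil down to the same inject-fault / observe-recovery pattern. Below is a minimal, purely illustrative sketch in Python. This is not the yardstick or Eris API; the BMC host name, the health check, and the five-minute recovery SLA are assumptions of mine:

    import subprocess
    import time

    def wait_until(predicate, timeout=300, interval=5):
        """Poll a health predicate until it holds or the timeout expires."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            if predicate():
                return True
            time.sleep(interval)
        return False

    def api_is_healthy():
        # Illustrative health check: can we still obtain a Keystone token?
        # (Assumes openstackclient auth is configured in the environment.)
        result = subprocess.run(["openstack", "token", "issue"],
                                capture_output=True)
        return result.returncode == 0

    def test_controller_failover():
        # 1. Establish a healthy baseline before injecting any fault.
        assert api_is_healthy(), "cloud unhealthy before fault injection"

        # 2. Inject the fault: hard power-off of one controller node via
        #    its (hypothetical) BMC; auth flags omitted for brevity.
        subprocess.run(["ipmitool", "-H", "ctrl-1-bmc", "power", "off"],
                       check=True)

        # 3. Assert the recovery SLA: the API must be reachable again
        #    within the assumed five-minute window.
        assert wait_until(api_is_healthy, timeout=300), \
            "API did not recover within 5 minutes of controller loss"

Whatever framework we converge on essentially has to parameterize those three steps (baseline check, fault injection, recovery assertion) in a deployment-agnostic way, which is exactly where a shared methodology would pay off.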

> Also you should be aware that work has already started on Eris, the extreme
> testing framework proposed in this user story:
> 
>     http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/openstack_extreme_testing.html
> 
> and in the spec you already saw:
> 
>     https://review.openstack.org/#/c/443504/
> 
> You can see ongoing work here:
> 
>     https://github.com/LCOO/eris
>     https://openstack-lcoo.atlassian.net/wiki/spaces/LCOO/pages/13393034/Eris+-+Extreme+Testing+Framework+for+OpenStack
> 
> It looks like there is a plan to propose a new SIG for this, although personally I
> would be very happy to see it adopted by the self-healing SIG, since this
> framework is exactly what is needed for testing any self-healing mechanism.
> 
> I'm hoping that Sampath and/or Gautum will chip in here, since I think they're
> currently the main drivers for Eris.
> 
> I'm beginning to think that maybe we should organise a video conference call
> to coordinate efforts between the various interested parties.  If there is
> appetite for that, the first question is: who wants to be involved?  To answer
> that, I have created an etherpad where interested people can sign up:
> 
>     https://etherpad.openstack.org/p/extreme-testing-contacts
> 
> and I've cc'd people who I think would probably be interested.  Does this
> sound like a good approach?

We discussed a very similar idea in Dublin in the context of the QA SIG. I very much like the prospect of a cross-community, cross-team, and apparently even cross-SIG approach.

Cheers
Georg



