[openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream
aspiers at suse.com
Wed Mar 7 10:20:58 UTC 2018
Raoul Scarazzini <rasca at redhat.com> wrote:
>On 06/03/2018 13:27, Adam Spiers wrote:
>> Hi Raoul and all,
>> Sorry for joining this discussion late!
>> I do not work on TripleO, but I'm part of the wider OpenStack
>> sub-communities which focus on HA and, more recently,
>> self-healing. With that hat on, I'd like to suggest that maybe
>> it's possible to collaborate on this in a manner which is agnostic to
>> the deployment mechanism. There is an open spec on this:
>> https://review.openstack.org/#/c/443504/
>> which was mentioned in the Denver PTG session on destructive testing
>> which you referenced.
>> Currently each sub-community and vendor seems to be reinventing HA
>> testing by itself to some extent, which is easier to accomplish in the
>> short-term, but obviously less efficient in the long-term. It would
>> be awesome if we could break these silos down and join efforts! :-)
>First of all, thanks for your detailed answer. Then let me be honest
>and admit that I didn't know about yardstick.
Neither did I until Sydney, despite being involved with OpenStack HA
for many years ;-) I think this shows that either a) there is room
for improved communication between the OpenStack and OPNFV
communities, or b) I need to take my head out of the sand more often ;-)
>I need to start from scratch
>here to understand what this project is. In any case, the whole point
>of this thread is to involve people and take a more comprehensive look
>at what's around.
>The point here is that, as you can see from the tripleo-ha-utils spec
>I've created, the project is meant specifically for TripleO. On one
>hand this is a significant limitation, but on the other, due to the
>pluggable nature of the project, I think that an integration with other
>software, like the one you are proposing, is not impossible.
Yep. I totally sympathise with the tension between the need to get
something working quickly, vs. the need to collaborate with the
community in the most efficient way.
>Feel free to add your comments to the review.
The spec looks great to me; I don't really have anything to add, and I
don't feel comfortable voting on a project about which I know very
little.
>In the meantime, I'll check yardstick to see which kind of bridge we
>can build to avoid reinventing the wheel.
Great, thanks! I wish I could immediately help with this, but I
haven't had the chance to learn yardstick myself yet. We should
probably try to recruit someone from OPNFV to provide advice. I've
cc'd Georg who IIRC was the person who originally told me about
yardstick :-) He is an NFV expert and is also very interested in
automated testing efforts, so he may be able to help with this
architectural challenge.
Also you should be aware that work has already started on Eris, the
extreme testing framework proposed in this user story:
and in the spec you already saw:
You can see ongoing work here:
It looks like there is a plan to propose a new SIG for this, although
personally I would be very happy to see it adopted by the self-healing
SIG, since this framework is exactly what is needed for testing any
self-healing mechanism.
I'm hoping that Sampath and/or Gautum will chip in here, since I think
they're currently the main drivers for Eris.
I'm beginning to think that maybe we should organise a video
conference call to coordinate efforts between the various interested
parties. If there is appetite for that, the first question is: who
wants to be involved? To answer that, I have created an etherpad
where interested people can sign up:
and I've cc'd people who I think would probably be interested. Does
this sound like a good approach?