On Thu, Jan 17, 2019 at 10:37 AM Ben Nemec <openstack@nemebean.com> wrote:
This thread got a bit sidetracked with potential use-cases for etcd3
(which seems to happen a lot with this topic...), but we still need to
decide how we're going to actually communicate with etcd from OpenStack
services. Does anyone have input on that?

I have successfully tested the cinder-volume service using etcd3-gateway [1] to access etcd3 via tooz.coordination. It works great, although I haven't stress-tested the setup.

[1] https://github.com/dims/etcd3-gateway

Alan

Thanks.

-Ben

On 12/3/18 4:48 PM, Ben Nemec wrote:
> Hi,
>
> I wanted to revisit this topic because it has come up in some downstream
> discussions around Cinder A/A HA and the last time we talked about it
> upstream was a year and a half ago[1]. There have certainly been changes
> since then so I think it's worth another look. For context, the
> conclusion of that session was:
>
> "Let's use etcd 3.x in the devstack CI, projects that are eventlet based
> can use the etcd v3 http experimental API and those that don't can use
> the etcd v3 gRPC API. Dims will submit a patch to tooz for the new
> driver with v3 http experimental API. Projects should feel free to use
> the DLM based on tooz+etcd3 from now on. Other projects can figure out
> other use cases for etcd3."
>
> The main question that has come up is whether this is still the best
> practice or if we should revisit the preferred drivers for etcd. Gorka
> has gotten the grpc-based driver working in a Cinder driver that needs
> etcd[2], so there's a question as to whether we still need the HTTP
> etcd-gateway or if everything should use grpc. I will admit I'm nervous
> about trying to juggle eventlet and grpc, but if it works then my only
> argument is general misgivings about doing anything clever that involves
> eventlet. :-)
>
> It looks like the HTTP API for etcd has moved out of experimental
> status[3] at this point, so that's no longer an issue. There was some
> vague concern from a downstream packaging perspective that the grpc
> library might use a funky build system, whereas the etcd3-gateway
> library only depends on existing OpenStack requirements.
>
> On the other hand, I don't know how much of a hassle it is to deploy and
> manage a grpc-gateway. I'm kind of hoping someone has already been down
> this road and can advise about what they found.
>
> Thanks.
>
> -Ben
>
> 1: https://etherpad.openstack.org/p/BOS-etcd-base-service
> 2:
> https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a659c19047/ember_csi/ember_csi.py#L1106-L1111
>
> 3: https://github.com/grpc-ecosystem/grpc-gateway