[all] Etcd as DLM

Julia Kreger juliaashleykreger at gmail.com
Tue Dec 4 02:47:25 UTC 2018


Indeed it is considered a base service, but I'm unaware of why it was
decided not to have any abstraction layer on top. That sort of defeats
the adoption of tooz as a standard in the community. Plus, across the
rest of our code bases we have a number of similar or identical
patterns, and it would be ideal to have a single library providing the
overall interface for the sake of consistency. Could you provide some
more background on that decision?
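
To make the consistency point concrete, this is roughly the pattern tooz
already gives every project (a minimal sketch, assuming the etcd3 driver
and a local endpoint; the member id and lock name are illustrative):

    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'etcd3://127.0.0.1:2379', b'ironic-conductor-1')
    coordinator.start()

    # The locking pattern stays the same no matter which backend
    # operators choose to deploy underneath.
    with coordinator.get_lock(b'node-1a2b3c'):
        pass  # critical section

    coordinator.stop()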

I guess what I'd really like to see is an oslo.db interface into etcd3.
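
Purely hypothetical, but something along these lines is what I mean (the
module and call names below are invented; nothing like this exists today):

    from oslo_etcd import api  # imaginary oslo.db-style library

    client = api.get_client(CONF)  # configured like oslo.db's enginefacade
    client.put('/ironic/nodes/1a2b3c/power_state', 'power on')
    state = client.get('/ironic/nodes/1a2b3c/power_state')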

-Julia

On Mon, Dec 3, 2018 at 4:55 PM Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:

> It is a full base service already:
> https://governance.openstack.org/tc/reference/base-services.html
>
> Projects have been free to use it for quite some time. I'm not sure if any
> actually are yet though.
>
> It was decided not to put an abstraction layer on top, as it's pretty
> simple and commonly deployed.
>
> Thanks,
> Kevin
> ------------------------------
> *From:* Julia Kreger [juliaashleykreger at gmail.com]
> *Sent:* Monday, December 03, 2018 3:53 PM
> *To:* Ben Nemec
> *Cc:* Davanum Srinivas; geguileo at redhat.com;
> openstack-discuss at lists.openstack.org
> *Subject:* Re: [all] Etcd as DLM
>
> I would like to slightly interrupt this train of thought for an
> unscheduled vision of the future!
>
> What if we could allow a component to store data in etcd3's key-value
> store, much as we presently use oslo_db/sqlalchemy?
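>
> Concretely, something like this (a rough sketch using the python-etcd3
> client; the key layout is invented for illustration):
>
>     import etcd3
>
>     client = etcd3.client(host='127.0.0.1', port=2379)
>     # Instead of an INSERT/SELECT through sqlalchemy, read and write
>     # key/value pairs.
>     client.put('/nodes/1a2b3c/provision_state', 'active')
>     value, metadata = client.get('/nodes/1a2b3c/provision_state')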
>
> While I personally hope to have etcd3 as a DLM for ironic one day, review
> bandwidth permitting, it occurs to me that etcd3 could be leveraged for
>> more than just DLM. If we have a common vision for data storage, I
> suspect it might help provide overall guidance as to how we want to
> interact with the service moving forward.
>
> -Julia
>
> On Mon, Dec 3, 2018 at 2:52 PM Ben Nemec <openstack at nemebean.com> wrote:
>
>> Hi,
>>
>> I wanted to revisit this topic because it has come up in some downstream
>> discussions around Cinder A/A HA and the last time we talked about it
>> upstream was a year and a half ago[1]. There have certainly been changes
>> since then so I think it's worth another look. For context, the
>> conclusion of that session was:
>>
>> "Let's use etcd 3.x in the devstack CI, projects that are eventlet based
>> can use the etcd v3 http experimental API and those that don't can use
>> the etcd v3 gRPC API. Dims will submit a patch to tooz for the new
>> driver with v3 http experimental API. Projects should feel free to use
>> the DLM based on tooz+etcd3 from now on. Other projects can figure out
>> other use cases for etcd3."
>>
>> The main question that has come up is whether this is still the best
>> practice or if we should revisit the preferred drivers for etcd. Gorka
>> has gotten the grpc-based driver working in a Cinder driver that needs
>> etcd[2], so there's a question as to whether we still need the HTTP
>> etcd-gateway or if everything should use grpc. I will admit I'm nervous
>> about trying to juggle eventlet and grpc, but if it works then my only
>> argument is general misgivings about doing anything clever that involves
>> eventlet. :-)
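>>
>> For reference, the grpc-based usage is roughly the following (a sketch
>> with the python-etcd3 client, not Gorka's exact code):
>>
>>     import etcd3
>>
>>     client = etcd3.client(host='127.0.0.1', port=2379)
>>     # The lock is acquired over gRPC and kept alive by a lease with
>>     # the given TTL.
>>     with client.lock('cinder-backend-a', ttl=30):
>>         pass  # critical section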
>>
>> It looks like the HTTP API for etcd has moved out of experimental
>> status[3] at this point, so that's no longer an issue. There was some
>> vague concern from a downstream packaging perspective that the grpc
>> library might use a funky build system, whereas the etcd3-gateway
>> library only depends on existing OpenStack requirements.
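>>
>> For comparison, the etcd3-gateway library speaks plain HTTP to etcd's
>> grpc-gateway, so the same pattern looks something like this (again a
>> sketch; no grpc library is involved):
>>
>>     import etcd3gw
>>
>>     client = etcd3gw.client(host='127.0.0.1', port=2379)
>>     with client.lock('cinder-backend-a', ttl=30):
>>         pass  # critical section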
>>
>> On the other hand, I don't know how much of a hassle it is to deploy and
>> manage a grpc-gateway. I'm kind of hoping someone has already been down
>> this road and can advise about what they found.
>>
>> Thanks.
>>
>> -Ben
>>
>> 1: https://etherpad.openstack.org/p/BOS-etcd-base-service
>> 2:
>>
>> https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a659c19047/ember_csi/ember_csi.py#L1106-L1111
>> 3: https://github.com/grpc-ecosystem/grpc-gateway
>>
>>