Hi,

I wanted to revisit this topic because it has come up in some downstream discussions around Cinder A/A HA, and the last time we talked about it upstream was a year and a half ago[1]. There have certainly been changes since then, so I think it's worth another look. For context, the conclusion of that session was:

"Let's use etcd 3.x in the devstack CI, projects that are eventlet based can use the etcd v3 http experimental API and those that don't can use the etcd v3 gRPC API. Dims will submit a patch to tooz for the new driver with v3 http experimental API. Projects should feel free to use the DLM based on tooz+etcd3 from now on. Other projects can figure out other use cases for etcd3."

The main question that has come up is whether this is still the best practice or whether we should revisit the preferred drivers for etcd. Gorka has gotten the grpc-based driver working in a Cinder driver that needs etcd[2], so there's a question as to whether we still need the HTTP etcd-gateway or whether everything should use grpc. I will admit I'm nervous about trying to juggle eventlet and grpc, but if it works then my only argument is general misgivings about doing anything clever that involves eventlet. :-)

It looks like the HTTP API for etcd has moved out of experimental status[3] at this point, so that's no longer an issue. There was some vague concern from a downstream packaging perspective that the grpc library might use a funky build system, whereas the etcd3-gateway library only depends on existing OpenStack requirements.

On the other hand, I don't know how much of a hassle it is to deploy and manage a grpc-gateway. I'm kind of hoping someone has already been down this road and can advise about what they found.

Thanks.

-Ben

1: https://etherpad.openstack.org/p/BOS-etcd-base-service
2: https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a65...
3: https://github.com/grpc-ecosystem/grpc-gateway
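For reference, the HTTP gateway in question is etcd's grpc-gateway translation of the v3 gRPC API into JSON over HTTP, and one quirk worth knowing when comparing it to the gRPC path is that keys and values travel base64-encoded inside the JSON bodies. A minimal sketch of building such payloads follows; note the URL path prefix for the gateway has varied across etcd releases (/v3alpha, /v3beta, /v3), and the key names used here are made up:

```python
import base64
import json

# The etcd v3 HTTP gateway speaks JSON, but keys and values must be
# base64-encoded in request bodies. Treat endpoint paths such as
# <prefix>/kv/put and <prefix>/kv/range as assumptions to verify
# against your etcd version's gateway prefix.

def b64(data: bytes) -> str:
    """etcd's JSON API carries byte strings as base64 text."""
    return base64.b64encode(data).decode("ascii")

def put_payload(key: bytes, value: bytes) -> str:
    """JSON body for a put request (POST <prefix>/kv/put)."""
    return json.dumps({"key": b64(key), "value": b64(value)})

def range_payload(key: bytes) -> str:
    """JSON body for a single-key get (POST <prefix>/kv/range)."""
    return json.dumps({"key": b64(key)})

# Round-trip check: the key survives the base64 encoding.
payload = put_payload(b"/locks/cinder-backup", b"host-1")
decoded = json.loads(payload)
assert base64.b64decode(decoded["key"]) == b"/locks/cinder-backup"
```

This encoding detail is part of why a pure-HTTP client like etcd3-gateway gets by with only ordinary HTTP/JSON dependencies, while the gRPC path pulls in the grpc library and its build system.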
I would like to slightly interrupt this train of thought for an unscheduled vision of the future!

What if we could allow a component to store data in etcd3's key-value store the way we presently use oslo_db/sqlalchemy?

While I personally hope to have etcd3 as a DLM for ironic one day, review bandwidth permitting, it occurs to me that etcd3 could be leveraged for more than just DLM. If we have a common vision to enable data storage, I suspect it might help provide overall guidance as to how we want to interact with the service moving forward.

-Julia
It is a full base service already: https://governance.openstack.org/tc/reference/base-services.html

Projects have been free to use it for quite some time. I'm not sure if any actually are yet though.

It was decided not to put an abstraction layer on top, as it's pretty simple and commonly deployed.

Thanks,
Kevin
Indeed it is considered a base service, but I'm unaware of why it was decided not to have any abstraction layer on top. That sort of defeats the adoption of tooz as a standard in the community. Plus, across the rest of our code bases we have a number of similar or identical patterns, and it would be ideal to have a single library providing the overall interface for the purposes of consistency. Could you provide some more background on that decision?

I guess what I'd really like to see is an oslo.db interface into etcd3.

-Julia
Hi,

I think that some projects won't bother with the etcd interface, since it would require some major rework of the whole service to get it working.

Take Cinder for example. We do complex conditional updates that, as far as I know, cannot be satisfied with etcd's compare-and-swap functionality. We could modify all our code to make it support both relational databases and key-value stores, but I'm not convinced it would be worthwhile considering the huge effort it would require.

I believe there are other OpenStack projects that have procedural code stored in the database, which would probably be hard to make compatible with key-value stores.

Cheers,
Gorka.
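To make the compare-and-swap point concrete, here is a toy in-memory stand-in for etcd-style CAS (this is not the real etcd3 client API, just a sketch of the optimistic-update pattern): the update succeeds only if the key still holds the value that was read, which is a much narrower primitive than a SQL conditional UPDATE that can test several columns and subquery results in one statement.

```python
import threading

class ToyKV:
    """In-memory key/value store with etcd-style compare-and-swap."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def get(self, key):
        return self._data.get(key)

    def replace(self, key, expected, new):
        """Atomically set key to new only if it currently equals expected."""
        with self._lock:
            if self._data.get(key) != expected:
                return False
            self._data[key] = new
            return True

kv = ToyKV()
kv.replace("volume/42/status", None, "available")

# Optimistic conditional update: read, then swap only if unchanged.
current = kv.get("volume/42/status")
if current == "available":
    assert kv.replace("volume/42/status", "available", "deleting")

# A second writer racing on the stale value loses the swap.
assert kv.replace("volume/42/status", "available", "error") is False
```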
Copying Mike Bayer since he's our resident DB expert. One more comment inline.

On 12/4/18 4:08 AM, Gorka Eguileor wrote:
I think that some projects won't bother with the etcd interface since it would require some major rework of the whole service to get it working.
I don't think Julia was suggesting that every project move to etcd, just that we make it available for projects that want to use it this way.
On Tue, Dec 4, 2018 at 11:42 AM Ben Nemec <openstack@nemebean.com> wrote:
Copying Mike Bayer since he's our resident DB expert. One more comment inline.
So the level of abstraction oslo.db itself provides is fairly light: it steps in for the initial configuration of the database engine, for the job of reworking exceptions into something more localized, and then for supplying a basic transactional begin/commit pattern that includes concepts that OpenStack uses a lot. It also has some helpers for things like special datatypes, test frameworks, and stuff like that.

That is, oslo.db is not a full-blown "abstraction" layer; it exposes the SQLAlchemy API, which is then where you have the major level of abstraction.

Given that, making oslo.db do for etcd3 what it does for SQLAlchemy would be an appropriate place for such a thing. It would be all new code and not really have much overlap with anything that's there right now, but it would still be feasible, at least at the level of "get a handle to etcd3, here's the basic persistence/query pattern we use with it, here's a test framework that will allow test suites to use it".

At the level of actually reading and writing data to etcd3, as well as querying, that's a bigger task, and certainly that is not a SQLAlchemy thing either. If etcd3's interface is a simple enough "get" / "put" / "query" plus some occasional special operations, those kinds of abstraction APIs are often not too terrible to write.

Also note that we have a key/value database interface right now in oslo.cache, which uses dogpile.cache against both memcached and redis. If you really only needed put/get with etcd3, it could do that also, but I would assume we need a more fine-grained interface than that. I haven't studied etcd3 as of yet, but I'd be interested in supporting it in oslo somewhere.
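A rough sketch of the kind of thin get/put/query driver interface described above, with a hypothetical in-memory backend. None of these class or method names are an existing oslo API; a prefix scan stands in for "query" since that is the natural primitive for etcd-style stores.

```python
import abc
from typing import Dict, Optional

class KVDriver(abc.ABC):
    """Hypothetical minimal key/value driver contract."""

    @abc.abstractmethod
    def put(self, key: str, value: bytes) -> None:
        ...

    @abc.abstractmethod
    def get(self, key: str) -> Optional[bytes]:
        ...

    @abc.abstractmethod
    def query(self, prefix: str) -> Dict[str, bytes]:
        ...

class InMemoryDriver(KVDriver):
    """Test-friendly backend; an etcd3 driver would implement the same ABC."""

    def __init__(self):
        self._data: Dict[str, bytes] = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def query(self, prefix):
        # Prefix scan: the "query" primitive of etcd-style stores.
        return {k: v for k, v in self._data.items() if k.startswith(prefix)}

driver = InMemoryDriver()
driver.put("nodes/n1", b"up")
driver.put("nodes/n2", b"down")
assert driver.query("nodes/") == {"nodes/n1": b"up", "nodes/n2": b"down"}
```

The in-memory implementation doubles as the "test framework that will allow test suites to use it" piece: code written against the ABC never knows which backend it got.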
Mike Bayer <mike_mp@zzzcomputing.com> writes:
Given that, making oslo.db do for etcd3 what it does for SQLAlchemy would be an appropriate place for such a thing. It would be all new code and not really have much overlap with anything that's there right now, but still would be feasible at least at the level of, "get a handle to etcd3, here's the basic persistence / query pattern we use with it, here's a test framework that will allow test suites to use it".
If there's no real overlap, it sounds like maybe a new (or at least different, see below) library would be more appropriate. That would let the authors/reviewers focus on whatever configuration abstraction we need for etcd3, and not worry about the relational database stuff in oslo.db now.
At the level of actually reading and writing data to etcd3 as well as querying, that's a bigger task, and certainly that is not a SQLAlchemy thing either. If etcd3's interface is a simple enough "get" / "put" / "query" and then some occasional special operations, those kinds of abstraction APIs are often not too terrible to write.
There are a zillion client libraries for etcd already. Let's see which one has the most momentum, and use that.
Also note that we have a key/value database interface right now in oslo.cache which uses dogpile.cache against both memcached and redis right now. If you really only needed put/get with etcd3, it could do that also, but I would assume we have the need for more of a fine grained interface than that. Haven't studied etcd3 as of yet. But I'd be interested in supporting it in oslo somewhere.
Using oslo.cache might make sense, too.

Doug
On Wed, Dec 5, 2018 at 9:18 AM Doug Hellmann <doug@doughellmann.com> wrote:
If there's no real overlap, it sounds like maybe a new (or at least different, see below) library would be more appropriate. That would let the authors/reviewers focus on whatever configuration abstraction we need for etcd3, and not worry about the relational database stuff in oslo.db now.
OK, my opinion on that is informed by how oslo.db is organized, in that it has no relational database concepts in its base; those are instead local to oslo_db.sqlalchemy. It was originally intended to be an abstraction for "databases" in general. There may be some value in sharing some concepts across relational and key/value databases, to the extent they are used as the primary data storage service for an application and not just a cache, although this may not be practical right now, and we might consider oslo_db to just be slightly mis-named.
There are a zillion client libraries for etcd already. Let's see which one has the most momentum, and use that.
Right, but I'm not talking about client libraries; I'm talking about an abstraction layer, so that an OpenStack app that talks to etcd3 today and might want to talk to FoundationDB tomorrow wouldn't have to rip all the code out entirely. Or, more immediately, for when the library that has the "most momentum" no longer does and we need to switch. OpenStack's switch from MySQL-python to pymysql is a great example of this, as is the switch of memcached drivers from python-memcached to pymemcache. Consumers of oslo libraries should only have to change a configuration string for changes like this, not any imports or calling conventions.

Googling around, I'm not seeing much that does this other than dogpile.cache and a few small projects that don't look very polished. This is probably because it's sort of trivial to make a basic one and then sort of hard to expose vendor-specific features once you've done so, but it still seems worthwhile to me.
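The "only change a configuration string" idea can be sketched as URL-scheme dispatch, the same shape tooz uses for its backend URLs. The scheme names and driver classes below are illustrative only, not tooz's or oslo's actual registry:

```python
from urllib.parse import urlparse

# Registry mapping URL schemes to driver classes. Swapping backends
# means editing a config value, never an import or a call site.
_DRIVERS = {}

def register(scheme):
    def wrap(cls):
        _DRIVERS[scheme] = cls
        return cls
    return wrap

@register("etcd3")
class GrpcDriver:
    """Hypothetical driver using the gRPC API."""
    def __init__(self, url):
        self.endpoint = url.netloc

@register("etcd3+http")
class GatewayDriver:
    """Hypothetical driver using the HTTP gateway."""
    def __init__(self, url):
        self.endpoint = url.netloc

def get_driver(conf_url):
    """Instantiate the backend named by the configuration string."""
    url = urlparse(conf_url)
    return _DRIVERS[url.scheme](url)

driver = get_driver("etcd3+http://127.0.0.1:2379")
assert isinstance(driver, GatewayDriver)
assert driver.endpoint == "127.0.0.1:2379"
```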
Also note that we have a key/value database interface right now in oslo.cache which uses dogpile.cache against both memcached and redis right now. If you really only needed put/get with etcd3, it could do that also, but I would assume we have the need for more of a fine grained interface than that. Haven't studied etcd3 as of yet. But I'd be interested in supporting it in oslo somewhere.
Using oslo.cache might make sense, too.
I think the problems of caching are different from those of a primary data store. Caching assumes data is impermanent, that it expires after a given length of time, and that the thing being stored is opaque and can't be queried directly or in the aggregate, at least as far as the caching API is concerned (e.g. no "fetch by 'field'", for starters). A database abstraction API, by contrast, would include support for querying, and it would treat the data as permanent and critical rather than a transitory, stale copy of something. So while I'm neutral on whether to use oslo.db for this, I'm -1 on using oslo.cache.
Doug
On 04/12, Ben Nemec wrote:
Copying Mike Bayer since he's our resident DB expert. One more comment inline.
On 12/4/18 4:08 AM, Gorka Eguileor wrote:
On 03/12, Julia Kreger wrote:
Indeed it is considered a base service, but I'm unaware of why it was decided not to have any abstraction layer on top. That sort of defeats the adoption of tooz as a standard in the community. Plus, across the rest of our code bases we have a number of similar or identical patterns, and it would be ideal to have a single library providing the overall interface for the purposes of consistency. Could you provide some more background on that decision?
I guess what I'd really like to see is an oslo.db interface into etcd3.
-Julia
Hi,
I think that some projects won't bother with the etcd interface since it would require some major rework of the whole service to get it working.
I don't think Julia was suggesting that every project move to etcd, just that we make it available for projects that want to use it this way.
Hi, My bad, I assumed that was the intention. Otherwise, shouldn't we first ask how many projects would start using this key-value interface if it did exist? I mean, if there's only going to be one project using it, then it may be better to go with the standard pattern of implementing it in that project, and only extract it into a common library once there is a need for one. Cheers, Gorka.
Take Cinder for example. We do complex conditional updates that, as far as I know, cannot be satisfied with etcd's Compare-and-Swap functionality. We could modify all our code to make it support both relational databases and key-value stores, but I'm not convinced it would be worthwhile considering the huge effort it would require.
I believe there are other OpenStack projects that have procedural code stored on the database, which would probably be hard to make compatible with key-value stores.
Cheers, Gorka.
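To make Gorka's point concrete: etcd's transaction can only compare keys it knows in advance, so a SQL-style conditional update ("change status only if it is currently X") has to become a read / check client-side / compare-and-swap / retry loop. A toy sketch, with an etcd-like store modeled over a dict (names and key layout invented for illustration):

```python
import json


class MiniEtcd:
    """Toy model of etcd3's transaction primitive: a put is applied
    only if a key is still at the revision the caller last saw."""

    def __init__(self):
        self._data = {}   # key -> (value, revision)
        self._rev = 0

    def get(self, key):
        return self._data.get(key, (None, 0))

    def txn(self, key, expected_rev, new_value):
        """Single compare-and-swap step."""
        _, rev = self.get(key)
        if rev != expected_rev:
            return False          # someone else won the race
        self._rev += 1
        self._data[key] = (new_value, self._rev)
        return True


def conditional_status_change(store, key, allowed, new_status, retries=3):
    """A 'UPDATE ... SET status=... WHERE status IN (...)' emulated
    over CAS: read, evaluate the condition client-side, then swap on
    the revision, retrying on conflict."""
    for _ in range(retries):
        raw, rev = store.get(key)
        vol = json.loads(raw) if raw else None
        if vol is None or vol["status"] not in allowed:
            return False          # condition failed, like rowcount == 0
        vol["status"] = new_status
        if store.txn(key, rev, json.dumps(vol)):
            return True
    return False                  # gave up after repeated conflicts
```

Even this single-key case needs a retry loop; Cinder's conditions that span multiple rows and joined tables would be far messier, which is the effort Gorka is pointing at.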
On Mon, Dec 3, 2018 at 4:55 PM Fox, Kevin M <Kevin.Fox@pnnl.gov> wrote:
It is a full base service already: https://governance.openstack.org/tc/reference/base-services.html
Projects have been free to use it for quite some time. I'm not sure if any actually are yet though.
It was decided not to put an abstraction layer on top as it's pretty simple and commonly deployed.
Thanks, Kevin ------------------------------ *From:* Julia Kreger [juliaashleykreger@gmail.com] *Sent:* Monday, December 03, 2018 3:53 PM *To:* Ben Nemec *Cc:* Davanum Srinivas; geguileo@redhat.com; openstack-discuss@lists.openstack.org *Subject:* Re: [all] Etcd as DLM
I would like to slightly interrupt this train of thought for an unscheduled vision of the future!
What if we could allow a component to store data in etcd3's key value store like how we presently use oslo_db/sqlalchemy?
While I personally hope to have etcd3 as a DLM for ironic one day, review bandwidth permitting, it occurs to me that etcd3 could be leveraged for more than just DLM. If we have a common vision to enable data storage, I suspect it might help provide overall guidance as to how we want to interact with the service moving forward.
-Julia
On Mon, Dec 3, 2018 at 2:52 PM Ben Nemec <openstack@nemebean.com> wrote:
Hi,
I wanted to revisit this topic because it has come up in some downstream discussions around Cinder A/A HA and the last time we talked about it upstream was a year and a half ago[1]. There have certainly been changes since then so I think it's worth another look. For context, the conclusion of that session was:
"Let's use etcd 3.x in the devstack CI, projects that are eventlet based can use the etcd v3 http experimental API and those that don't can use the etcd v3 gRPC API. Dims will submit a patch to tooz for the new driver with v3 http experimental API. Projects should feel free to use the DLM based on tooz+etcd3 from now on. Other projects can figure out other use cases for etcd3."
The main question that has come up is whether this is still the best practice or if we should revisit the preferred drivers for etcd. Gorka has gotten the grpc-based driver working in a Cinder driver that needs etcd[2], so there's a question as to whether we still need the HTTP etcd-gateway or if everything should use grpc. I will admit I'm nervous about trying to juggle eventlet and grpc, but if it works then my only argument is general misgivings about doing anything clever that involves eventlet. :-)
It looks like the HTTP API for etcd has moved out of experimental status[3] at this point, so that's no longer an issue. There was some vague concern from a downstream packaging perspective that the grpc library might use a funky build system, whereas the etcd3-gateway library only depends on existing OpenStack requirements.
On the other hand, I don't know how much of a hassle it is to deploy and manage a grpc-gateway. I'm kind of hoping someone has already been down this road and can advise about what they found.
Thanks.
-Ben
1: https://etherpad.openstack.org/p/BOS-etcd-base-service 2:
https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a65... 3: https://github.com/grpc-ecosystem/grpc-gateway
Julia Kreger wrote:
Indeed it is considered a base service, but I'm unaware of why it was decided not to have any abstraction layer on top. That sort of defeats the adoption of tooz as a standard in the community. Plus, across the rest of our code bases we have a number of similar or identical patterns, and it would be ideal to have a single library providing the overall interface for the purposes of consistency. Could you provide some more background on that decision?
Dims can probably summarize it better than I can do. When we were discussing adding a DLM as a base service, we had a lot of discussion at several events and on several threads weighing that option (a "tooz-compatible DLM" vs. "etcd"). IIRC the final decision had to do with leveraging specific etcd features vs. using the smallest common denominator, while we expect everyone to be deploying etcd.
I guess what I'd really like to see is an oslo.db interface into etcd3.
Not sure that is what you're looking for, but the concept of an oslo.db interface to a key-value store was explored by a research team and the FEMDC WG (Fog/Edge/Massively-distributed Clouds), in the context of distributing Nova data around. Their ROME oslo.db driver PoC was using Redis, but I think it could be adapted to use etcd quite easily. Some pointers: https://github.com/beyondtheclouds/rome https://www.openstack.org/videos/austin-2016/a-ring-to-rule-them-all-revisin... -- Thierry Carrez (ttx)
On 12/04/2018 08:15 AM, Thierry Carrez wrote:
Julia Kreger wrote:
Indeed it is considered a base service, but I'm unaware of why it was decided not to have any abstraction layer on top. That sort of defeats the adoption of tooz as a standard in the community. Plus, across the rest of our code bases we have a number of similar or identical patterns, and it would be ideal to have a single library providing the overall interface for the purposes of consistency. Could you provide some more background on that decision?
Dims can probably summarize it better than I can do.
When we were discussing adding a DLM as a base service, we had a lot of discussion at several events and on several threads weighing that option (a "tooz-compatible DLM" vs. "etcd"). IIRC the final decision had to do with leveraging specific etcd features vs. using the smallest common denominator, while we expect everyone to be deploying etcd.
I guess what I'd really like to see is an oslo.db interface into etcd3.
Not sure that is what you're looking for, but the concept of an oslo.db interface to a key-value store was explored by a research team and the FEMDC WG (Fog/Edge/Massively-distributed Clouds), in the context of distributing Nova data around. Their ROME oslo.db driver PoC was using Redis, but I think it could be adapted to use etcd quite easily.
Note that it's not appropriate to replace *all* use of an RDBMS in OpenStack-land with etcd. I hope I wasn't misunderstood in my statement earlier. Just *some* use cases are better served by a key/value store, and etcd3's transactions and watches are a great tool for solving *some* use cases -- but definitely not all :) Anyway, just making sure nobody's going to accuse me of saying OpenStack should abandon all RDBMS use for a KVS. :) Best, -jay
Some pointers:
https://github.com/beyondtheclouds/rome
https://www.openstack.org/videos/austin-2016/a-ring-to-rule-them-all-revisin...
On Tue, Dec 4, 2018 at 5:53 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 12/04/2018 08:15 AM, Thierry Carrez wrote:
Julia Kreger wrote:
Indeed it is considered a base service, but I'm unaware of why it was decided not to have any abstraction layer on top. That sort of defeats the adoption of tooz as a standard in the community. Plus, across the rest of our code bases we have a number of similar or identical patterns, and it would be ideal to have a single library providing the overall interface for the purposes of consistency. Could you provide some more background on that decision?
Dims can probably summarize it better than I can do.
When we were discussing adding a DLM as a base service, we had a lot of discussion at several events and on several threads weighing that option (a "tooz-compatible DLM" vs. "etcd"). IIRC the final decision had to do with leveraging specific etcd features vs. using the smallest common denominator, while we expect everyone to be deploying etcd.
I guess what I'd really like to see is an oslo.db interface into etcd3.
Not sure that is what you're looking for, but the concept of an oslo.db interface to a key-value store was explored by a research team and the FEMDC WG (Fog/Edge/Massively-distributed Clouds), in the context of distributing Nova data around. Their ROME oslo.db driver PoC was using Redis, but I think it could be adapted to use etcd quite easily.
Note that it's not appropriate to replace *all* use of an RDBMS in OpenStack-land with etcd. I hope I wasn't misunderstood in my statement earlier.
Just *some* use cases are better served by a key/value store, and etcd3's transactions and watches are a great tool for solving *some* use cases -- but definitely not all :)
Anyway, just making sure nobody's going to accuse me of saying OpenStack should abandon all RDBMS use for a KVS. :)
Best, -jay
Definitely not interpreted that way, and not what I was thinking either. I definitely see there is value, and your thoughts do greatly confirm that at least I'm not the only crazy person thinking it could be a good idea™.
Some pointers:
https://www.openstack.org/videos/austin-2016/a-ring-to-rule-them-all-revisin...
On 12/4/18 2:15 PM, Thierry Carrez wrote:
Not sure that is what you're looking for, but the concept of an oslo.db interface to a key-value store was explored by a research team and the FEMDC WG (Fog/Edge/Massively-distributed Clouds), in the context of distributing Nova data around. Their ROME oslo.db driver PoC was using Redis, but I think it could be adapted to use etcd quite easily.
Some pointers:
https://github.com/beyondtheclouds/rome
https://www.openstack.org/videos/austin-2016/a-ring-to-rule-them-all-revisin...
That's interesting, thank you! I'd like to note, though, that Edge/Fog cases assume high latency, which is not the best fit for strongly consistent oslo.db data backends like etcd or Galera. Technically, it has been shown over the past few years that causal consistency, which is like eventual consistency but works much better for end users [0], is the way to go for Edge clouds. Except that no decent implementation of a causally consistent KVS exists *yet*! So my take is, if we ever want to redesign ORM transactions et al into CAS operations and a KVS, it should be done not with etcd in mind, but with a future causally consistent solution. [0] https://www.usenix.org/system/files/login/articles/08_lloyd_41-43_online.pdf -- Best regards, Bogdan Dobrelya, Irc #bogdando
This thread is shifting a bit... I'd like to throw in another related idea we've been talking about. There is storing data in key/value stores, and there is also storing data in document stores. Kubernetes uses a key/value store and builds a document store out of it. All its API then runs through a document store, not a key/value store. This model has proven to be quite powerful. I wonder if an abstraction over document stores would be useful? Wrapping around k8s CRDs would be interesting. A lightweight OpenStack without MySQL would have some interesting benefits. Thanks, Kevin ________________________________________ From: Bogdan Dobrelya [bdobreli@redhat.com] Sent: Tuesday, December 04, 2018 6:29 AM To: openstack-discuss@lists.openstack.org Subject: Re: [all][FEMDC] Etcd as DLM On 12/4/18 2:15 PM, Thierry Carrez wrote:
Not sure that is what you're looking for, but the concept of an oslo.db interface to a key-value store was explored by a research team and the FEMDC WG (Fog/Edge/Massively-distributed Clouds), in the context of distributing Nova data around. Their ROME oslo.db driver PoC was using Redis, but I think it could be adapted to use etcd quite easily.
Some pointers:
https://github.com/beyondtheclouds/rome
https://www.openstack.org/videos/austin-2016/a-ring-to-rule-them-all-revisin...
That's interesting, thank you! I'd like to note, though, that Edge/Fog cases assume high latency, which is not the best fit for strongly consistent oslo.db data backends like etcd or Galera. Technically, it has been shown over the past few years that causal consistency, which is like eventual consistency but works much better for end users [0], is the way to go for Edge clouds. Except that no decent implementation of a causally consistent KVS exists *yet*! So my take is, if we ever want to redesign ORM transactions et al into CAS operations and a KVS, it should be done not with etcd in mind, but with a future causally consistent solution. [0] https://www.usenix.org/system/files/login/articles/08_lloyd_41-43_online.pdf -- Best regards, Bogdan Dobrelya, Irc #bogdando
I like where this is going! Comment in-line. On Tue, Dec 4, 2018 at 8:53 AM Fox, Kevin M <Kevin.Fox@pnnl.gov> wrote:
This thread is shifting a bit...
I'd like to throw in another related idea we're talking about it...
There is storing data in key/value stores and there is also storing data in document stores.
Kubernetes uses a key/value store and builds a document store out of it. All its api then runs through a document store, not a key/value store.
This model has proven to be quite powerful. I wonder if an abstraction over document stores would be useful? Wrapping around k8s CRDs would be interesting. A lightweight OpenStack without MySQL would have some interesting benefits.
I suspect the code would largely be the same; if we support key/value, then I would hope we could leverage all of that with just the opening/connecting of the document store. Perhaps this is worth some further investigation, but ultimately what I've been thinking is that if we _could_ allow some, not all, services to operate in a completely decoupled fashion, we'd better enable them to support OpenStack and neighboring technologies. Ironic is kind of the obvious starting point of sorts, since everyone needs to start with some baremetal somewhere if they are building their own infrastructure up.
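In the spirit of the Kubernetes pattern Kevin describes — JSON documents kept under structured keys in a flat key/value store — a toy sketch of a document layer over KV. The key layout mimics k8s's `/registry/<kind>/<name>` convention; all names are illustrative, not a real API:

```python
import json


class DocumentStore:
    """Toy document layer over a flat key/value mapping, the way
    Kubernetes keeps JSON objects under per-kind key prefixes in etcd."""

    def __init__(self, kv=None):
        self._kv = kv if kv is not None else {}

    def _key(self, kind, name):
        return "/registry/%s/%s" % (kind, name)

    def put(self, kind, name, doc):
        self._kv[self._key(kind, name)] = json.dumps(doc)

    def get(self, kind, name):
        raw = self._kv.get(self._key(kind, name))
        return json.loads(raw) if raw is not None else None

    def list(self, kind):
        """List all documents of a kind via a prefix scan."""
        prefix = "/registry/%s/" % kind
        return [json.loads(v) for k, v in sorted(self._kv.items())
                if k.startswith(prefix)]
```

Because everything above the `_kv` mapping is backend-agnostic, swapping the dict for a real etcd (or CRD-backed) store is largely a connection concern, which is Julia's point.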
Thanks, Kevin ________________________________________ From: Bogdan Dobrelya [bdobreli@redhat.com] Sent: Tuesday, December 04, 2018 6:29 AM To: openstack-discuss@lists.openstack.org Subject: Re: [all][FEMDC] Etcd as DLM
On 12/4/18 2:15 PM, Thierry Carrez wrote:
Not sure that is what you're looking for, but the concept of an oslo.db interface to a key-value store was explored by a research team and the FEMDC WG (Fog/Edge/Massively-distributed Clouds), in the context of distributing Nova data around. Their ROME oslo.db driver PoC was using Redis, but I think it could be adapted to use etcd quite easily.
Some pointers:
https://www.openstack.org/videos/austin-2016/a-ring-to-rule-them-all-revisin...
That's interesting, thank you! I'd like to note, though, that Edge/Fog cases assume high latency, which is not the best fit for strongly consistent oslo.db data backends like etcd or Galera. Technically, it has been shown over the past few years that causal consistency, which is like eventual consistency but works much better for end users [0], is the way to go for Edge clouds. Except that no decent implementation of a causally consistent KVS exists *yet*!
So my take is, if we ever want to redesign ORM transactions et al into CAS operations and a KVS, it should be done not with etcd in mind, but with a future causally consistent solution.
[0]
https://www.usenix.org/system/files/login/articles/08_lloyd_41-43_online.pdf
-- Best regards, Bogdan Dobrelya, Irc #bogdando
On 12/03/2018 06:53 PM, Julia Kreger wrote:
I would like to slightly interrupt this train of thought for an unscheduled vision of the future!
What if we could allow a component to store data in etcd3's key value store like how we presently use oslo_db/sqlalchemy?
While I personally hope to have etcd3 as a DLM for ironic one day, review bandwidth permitting, it occurs to me that etcd3 could be leveraged for more than just DLM. If we have a common vision to enable data storage, I suspect it might help provide overall guidance as to how we want to interact with the service moving forward.
Considering Ironic doesn't have a database schema that really uses the relational database properly, I think this is an excellent idea. [1] Ironic's database schema is mostly a bunch of giant JSON BLOB fields that are (ab)used by callers to add unstructured data pointing at a node's UUID. Which is pretty much what a KVS like etcd was made for, so I say, go for it. Best, -jay [1] The same can be said for quite a few tables in Nova's cell DB, namely compute_nodes, instance_info_caches, instance_metadata, instance_system_metadata, instance_extra, instance_actions, instance_action_events and pci_devices. And Nova's API DB has the aggregate_metadata, flavor_extra_specs, request_specs, build_requests and key_pairs tables, all of which are good candidates for non-relational storage.
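To make Jay's point concrete: a row that is mostly "a UUID plus JSON blob columns" maps naturally onto per-field keys in a KVS, where each blob can be read, CAS-updated, or watched independently. A hedged sketch — the field names echo Ironic's schema, but the key layout and helper are invented:

```python
import json
import uuid


def node_to_kv(node):
    """Flatten a node record (UUID plus JSON blob fields) into
    per-field keys of the shape a KVS like etcd favors."""
    base = "/nodes/%s" % node["uuid"]
    return {
        "%s/%s" % (base, field): json.dumps(value)
        for field, value in node.items()
        if field != "uuid"
    }


node = {
    "uuid": str(uuid.uuid4()),
    "driver_info": {"ipmi_address": "10.0.0.5"},     # example values
    "properties": {"cpus": 8, "memory_mb": 16384},
}
kv = node_to_kv(node)
```

Nothing relational is lost in the translation precisely because nothing relational was being expressed in the first place.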
I asked for this a while ago, but will ask again since the subject came back up. :) If Ironic could target k8s CRDs for storage, it would be significantly easier to deploy an undercloud. Between the k8s API, the k8s cluster API, and Ironic's API, a truly self-hosting k8s could be possible. Thanks, Kevin ________________________________________ From: Jay Pipes [jaypipes@gmail.com] Sent: Tuesday, December 04, 2018 4:52 AM To: openstack-discuss@lists.openstack.org Subject: Re: [all] Etcd as DLM On 12/03/2018 06:53 PM, Julia Kreger wrote:
I would like to slightly interrupt this train of thought for an unscheduled vision of the future!
What if we could allow a component to store data in etcd3's key value store like how we presently use oslo_db/sqlalchemy?
While I personally hope to have etcd3 as a DLM for ironic one day, review bandwidth permitting, it occurs to me that etcd3 could be leveraged for more than just DLM. If we have a common vision to enable data storage, I suspect it might help provide overall guidance as to how we want to interact with the service moving forward.
Considering Ironic doesn't have a database schema that really uses the relational database properly, I think this is an excellent idea. [1] Ironic's database schema is mostly a bunch of giant JSON BLOB fields that are (ab)used by callers to add unstructured data pointing at a node's UUID. Which is pretty much what a KVS like etcd was made for, so I say, go for it. Best, -jay [1] The same can be said for quite a few tables in Nova's cell DB, namely compute_nodes, instance_info_caches, instance_metadata, instance_system_metadata, instance_extra, instance_actions, instance_action_events and pci_devices. And Nova's API DB has the aggregate_metadata, flavor_extra_specs, request_specs, build_requests and key_pairs tables, all of which are good candidates for non-relational storage.
This thread got a bit sidetracked with potential use-cases for etcd3 (which seems to happen a lot with this topic...), but we still need to decide how we're going to actually communicate with etcd from OpenStack services. Does anyone have input on that? Thanks. -Ben On 12/3/18 4:48 PM, Ben Nemec wrote:
Hi,
I wanted to revisit this topic because it has come up in some downstream discussions around Cinder A/A HA and the last time we talked about it upstream was a year and a half ago[1]. There have certainly been changes since then so I think it's worth another look. For context, the conclusion of that session was:
"Let's use etcd 3.x in the devstack CI, projects that are eventlet based can use the etcd v3 http experimental API and those that don't can use the etcd v3 gRPC API. Dims will submit a patch to tooz for the new driver with v3 http experimental API. Projects should feel free to use the DLM based on tooz+etcd3 from now on. Other projects can figure out other use cases for etcd3."
The main question that has come up is whether this is still the best practice or if we should revisit the preferred drivers for etcd. Gorka has gotten the grpc-based driver working in a Cinder driver that needs etcd[2], so there's a question as to whether we still need the HTTP etcd-gateway or if everything should use grpc. I will admit I'm nervous about trying to juggle eventlet and grpc, but if it works then my only argument is general misgivings about doing anything clever that involves eventlet. :-)
It looks like the HTTP API for etcd has moved out of experimental status[3] at this point, so that's no longer an issue. There was some vague concern from a downstream packaging perspective that the grpc library might use a funky build system, whereas the etcd3-gateway library only depends on existing OpenStack requirements.
On the other hand, I don't know how much of a hassle it is to deploy and manage a grpc-gateway. I'm kind of hoping someone has already been down this road and can advise about what they found.
Thanks.
-Ben
1: https://etherpad.openstack.org/p/BOS-etcd-base-service 2: https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a65...
On Thu, Jan 17, 2019 at 10:37 AM Ben Nemec <openstack@nemebean.com> wrote:
This thread got a bit sidetracked with potential use-cases for etcd3 (which seems to happen a lot with this topic...), but we still need to decide how we're going to actually communicate with etcd from OpenStack services. Does anyone have input on that?
I have been successful testing the cinder-volume service using etcd3-gateway [1] to access etcd3 via tooz.coordination. Works great, although I haven't stress tested the setup.
[1] https://github.com/dims/etcd3-gateway
Alan
On 12/3/18 4:48 PM, Ben Nemec wrote:
Hi,
I wanted to revisit this topic because it has come up in some downstream discussions around Cinder A/A HA and the last time we talked about it upstream was a year and a half ago[1]. There have certainly been changes since then so I think it's worth another look. For context, the conclusion of that session was:
"Let's use etcd 3.x in the devstack CI, projects that are eventlet based can use the etcd v3 http experimental API and those that don't can use the etcd v3 gRPC API. Dims will submit a patch to tooz for the new driver with v3 http experimental API. Projects should feel free to use the DLM based on tooz+etcd3 from now on. Other projects can figure out other use cases for etcd3."
The main question that has come up is whether this is still the best practice or if we should revisit the preferred drivers for etcd. Gorka has gotten the grpc-based driver working in a Cinder driver that needs etcd[2], so there's a question as to whether we still need the HTTP etcd-gateway or if everything should use grpc. I will admit I'm nervous about trying to juggle eventlet and grpc, but if it works then my only argument is general misgivings about doing anything clever that involves eventlet. :-)
It looks like the HTTP API for etcd has moved out of experimental status[3] at this point, so that's no longer an issue. There was some vague concern from a downstream packaging perspective that the grpc library might use a funky build system, whereas the etcd3-gateway library only depends on existing OpenStack requirements.
On the other hand, I don't know how much of a hassle it is to deploy and manage a grpc-gateway. I'm kind of hoping someone has already been down this road and can advise about what they found.
Thanks.
-Ben
1: https://etherpad.openstack.org/p/BOS-etcd-base-service 2:
https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a65...
On Thu, Jan 17, 2019 at 11:21:45AM -0500, Alan Bishop wrote:
On Thu, Jan 17, 2019 at 10:37 AM Ben Nemec <openstack@nemebean.com> wrote:
This thread got a bit sidetracked with potential use-cases for etcd3 (which seems to happen a lot with this topic...), but we still need to decide how we're going to actually communicate with etcd from OpenStack services. Does anyone have input on that?
I have been successful testing the cinder-volume service using etcd3-gateway [1] to access etcd3 via tooz.coordination. Works great, although I haven't stress tested the setup.
[1] https://github.com/dims/etcd3-gateway
Alan
Devstack by default (and therefore most gate testing by default) has been using etcd3 via tooz for Cinder lock coordination for over a year now. Oh, almost two years apparently: https://review.openstack.org/#/c/466298/14/lib/cinder Other than some intermittent etcd availability issues that Matt Riedemann has noticed recently, it appears to be working fine. Fine enough that I wasn't even aware of it until Matt brought it up, at least. Sean
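For reference, the tooz pattern Alan and Sean describe looks roughly like the sketch below: the `etcd3+http://` scheme selects the HTTP etcd3-gateway driver, while `etcd3://` selects the gRPC-based one. The coordinator calls follow tooz's documented coordination API, but since they need a live etcd they sit in a function that is shown, not run:

```python
def etcd3gw_url(host="127.0.0.1", port=2379):
    """Backend URL selecting tooz's HTTP etcd3 driver (etcd3-gateway);
    the gRPC-based driver would use the 'etcd3://' scheme instead."""
    return "etcd3+http://%s:%d" % (host, port)


def run_with_lock(backend_url, member_id, lock_name, work):
    """Start a coordinator, take a distributed lock, do the work,
    release. Requires tooz and a reachable etcd, so treat this as an
    illustration of the calling pattern rather than a tested recipe."""
    from tooz import coordination  # deferred: needs tooz installed

    coordinator = coordination.get_coordinator(backend_url, member_id)
    coordinator.start(start_heart=True)
    try:
        with coordinator.get_lock(lock_name):
            return work()
    finally:
        coordinator.stop()


# e.g. run_with_lock(etcd3gw_url(), b"cinder-volume-host1",
#                    b"volume-abc123", do_the_update)
```

Switching between the HTTP and gRPC drivers is then just a change to the backend URL, which is exactly what makes this thread's driver question a deployment decision rather than a code one.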
participants (11)
-
Alan Bishop
-
Ben Nemec
-
Bogdan Dobrelya
-
Doug Hellmann
-
Fox, Kevin M
-
Gorka Eguileor
-
Jay Pipes
-
Julia Kreger
-
Mike Bayer
-
Sean McGinnis
-
Thierry Carrez