[openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

Sean Dague sean at dague.net
Wed Mar 15 12:54:55 UTC 2017

On 03/15/2017 02:16 AM, Clint Byrum wrote:
> Excerpts from Monty Taylor's message of 2017-03-15 04:36:24 +0100:
>> On 03/14/2017 06:04 PM, Davanum Srinivas wrote:
>>> Team,
>>> So one more thing popped up again on IRC:
>>> https://etherpad.openstack.org/p/oslo.config_etcd_backend
>>> What do you think? interested in this work?
>>> Thanks,
>>> Dims
>>> PS: Between this thread and the other one about Tooz/DLM and
>>> os-lively, we can probably make a good case to add etcd as a base
>>> always-on service.
>> As I mentioned in the other thread, there was specific and strong
>> anti-etcd sentiment in Tokyo which is why we decided to use an
>> abstraction. I continue to be in favor of us having one known service in
>> this space, but I do think that it's important to revisit that decision
>> fully and in context of the concerns that were raised when we tried to
>> pick one last time.
>> It's worth noting that there is nothing particularly etcd-ish about
>> storing config that couldn't also be done with zk and thus just be an
>> additional api call or two added to Tooz with etcd and zk drivers for it.
> Combine that thought with the "please have an ingest/export" thought,
> and I think you have a pretty operator-friendly transition path. Would
> be pretty great to have a release of OpenStack that just lets you add
> an '[etcd]', or '[config-service]' section maybe, to your config files,
> and then once you've fully migrated everything, lets you delete all the
> other sections. Then the admin nodes still have the full configs and
> one can just edit configs in git and roll them out by ingesting.
> (Then the magical rainbow fairy ponies teach our services to watch their
> config service for changes and restart themselves).
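The ingest/export idea quoted above can be sketched roughly like this. A plain dict stands in for etcd here (a real version would go through the etcd or Tooz client), and the /config/&lt;service&gt;/ key layout is just an assumption for illustration, not anything oslo.config actually defines:

```python
import configparser
import io

def ingest(ini_text, service, store):
    """Flatten [section] option=value pairs from an INI file into the store."""
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    for section in parser.sections():
        for option, value in parser.items(section):
            store[f"/config/{service}/{section}/{option}"] = value
    return store

def export(service, store):
    """Rebuild an INI document from the store for a given service."""
    parser = configparser.ConfigParser()
    prefix = f"/config/{service}/"
    for key, value in store.items():
        if not key.startswith(prefix):
            continue
        section, option = key[len(prefix):].split("/", 1)
        if not parser.has_section(section):
            parser.add_section(section)
        parser.set(section, option, value)
    out = io.StringIO()
    parser.write(out)
    return out.getvalue()

sample = """\
[database]
connection = mysql+pymysql://nova@db/nova

[keystone_authtoken]
auth_url = http://keystone:5000/v3
"""
```

The point of the round trip is exactly the operator workflow Clint describes: keep editing configs in git on the admin nodes, ingest them into the registry to roll them out, and export when you need the flat file back.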

Make sure to add:

... (after fully quiescing, when they are not processing any in-flight
work, when they are part of a pool so that they can be rolling-restarted
without impacting other services trying to connect to them, and with a
rollback to the previous config should the new config cause a crash).
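The rollback part of that caveat could look something like the following sketch. ConfigApplier, its apply_fn callback, and the crash-on-bad-config behavior are all hypothetical stand-ins for illustration, not any real oslo interface:

```python
class ConfigApplier:
    """Keep the last-known-good config and fall back to it on failure."""

    def __init__(self, apply_fn):
        # apply_fn is whatever actually reconfigures/restarts the service;
        # here it is just a callable that may raise if the config is bad.
        self._apply = apply_fn
        self._last_good = None

    def push(self, config):
        try:
            self._apply(config)
        except Exception:
            # The new config crashed the service: roll back to the
            # previous version instead of leaving the node down.
            if self._last_good is not None:
                self._apply(self._last_good)
            return False
        self._last_good = config
        return True
```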

There are a ton of really interesting things about a network registry
that make many things easier. However, from an operational point of
view I would be concerned about the idea of services restarting
themselves in a non-orchestrated manner, or about a single key set in
the registry triggering a complete reboot of the cluster. It's
definitely harder to trace the linkage between the action that took
down your cloud and its cause when the operator isn't being explicit
about "and restart this service".

Sean Dague
