[openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

Tim Bell Tim.Bell at cern.ch
Wed Mar 22 06:23:49 UTC 2017


> On 22 Mar 2017, at 00:53, Alex Schultz <aschultz at redhat.com> wrote:
> 
> On Tue, Mar 21, 2017 at 5:35 PM, John Dickinson <me at not.mn> wrote:
>> 
>> 
>> On 21 Mar 2017, at 15:34, Alex Schultz wrote:
>> 
>>> On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson <me at not.mn> wrote:
>>>> I've been following this thread, but I must admit I seem to have missed something.
>>>> 
>>>> What problem is being solved by storing per-server service configuration options in an external distributed CP (consistent/partition-tolerant) system that is not currently possible with the existing pattern of using local text files?
>>>> 
>>> 
>>> This effort is partly about easing the path to containerization, where
>>> we deliver the service code via a container but don't necessarily want
>>> to deliver the configuration in the same fashion.  It's about ease of
>>> configuration: moving from service -> config files (on many
>>> hosts/containers) to service -> config via etcd (a single source
>>> cluster).  It's also about an alternative to configuration management,
>>> where today we have many tools handling the files in various ways
>>> (templates, from a repo, via code providers), and about trying to come
>>> to a more unified way of representing the configuration such that the
>>> end result is the same for every deployment tool.  All tools load
>>> configs into $place, and services can be configured to talk to $place.
>>> It should be noted that configuration files won't go away, because many
>>> of the companion services (rabbit/mysql/apache/etc.) still rely on
>>> them, so we're really talking about the services that currently use
>>> oslo.
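>>>
>>> As a purely illustrative sketch (the etcd key layout here is made up,
>>> nothing is decided yet), the same option could live in either $place:
>>>
>>>   # today: /etc/nova/nova.conf
>>>   [DEFAULT]
>>>   state_path = /var/lib/nova
>>>
>>>   # hypothetical etcd layout, one key per option:
>>>   /config/nova/DEFAULT/state_path = "/var/lib/nova"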
>> 
>> Thanks for the explanation!
>> 
>> So in the future, you expect a node in a clustered OpenStack service to be deployed and run as a container, and then that node queries a centralized etcd (or other) k/v store to load config options. And other services running in the (container? cluster?) will load config from local text files managed in some other way.
> 
> No, the goal is that in the etcd mode it may not be necessary to load
> config files locally at all.  That being said, there would still be
> support for taking some configuration from a file while optionally
> pointing at a kv store as another config source, e.g. 'service
> --config-file /etc/service/service.conf --config-etcd
> proto://ip:port/slug'
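> 
> As a rough sketch of what that could look like on the service side
> (note: '--config-etcd' does not exist today, and the etcd key layout
> below is made up; python-etcd3 and oslo.config are the only real
> pieces, and each key is assumed to match a registered option):
> 
>   import etcd3
>   from oslo_config import cfg
> 
>   CONF = cfg.ConfigOpts()
>   CONF.register_opts([cfg.StrOpt('state_path',
>                                  default='/var/lib/service')])
> 
>   def load(argv):
>       # Normal file-based load first (--config-file).
>       CONF(argv, project='service')
>       # Then overlay anything published under this service's etcd
>       # prefix, which is where a --config-etcd URL would point.
>       client = etcd3.client(host='11.1.1.1', port=2379)
>       for value, meta in client.get_prefix('/config/service/'):
>           # keys look like /config/service/DEFAULT/state_path
>           _, _, _, group, name = meta.key.decode().split('/')
>           CONF.set_override(name, value.decode(),
>                             group=None if group == 'DEFAULT' else group)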
> 
>> 
>> No wait. It's not the *services* that will load the config from a kv store--it's the config management system? So in the process of deploying a new container instance of a particular service, the deployment tool will pull the right values out of the kv system and inject those into the container, I'm guessing as a local text file that the service loads as normal?
>> 
> 
> No, the thought is to have the services pull their configs from the kv
> store via oslo.config.  The point is, hopefully, to not require
> configuration files at all for containers.  The container would be
> told where to pull its configs from (i.e. http://11.1.1.1:2730/magic/
> or /etc/myconfigs/), and at that point the kv store just becomes
> another place oslo.config can load configuration from.  Configuration
> management comes in as the way to load the configs, either as a file
> or into etcd.  Many operators (and deployment tools) are already using
> some form of configuration management, so if we can integrate a kv
> store output option there, adoption becomes much easier than making
> everyone start from scratch.
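> 
> The "kv store output" side could be as small as this (sketch only:
> python-etcd3 and configparser are real, but the /config/<service>/...
> key layout is made up):
> 
>   import configparser
>   import etcd3
> 
>   def publish(service, conf_file, host='11.1.1.1', port=2379):
>       # Take an already-rendered ini file and push one etcd key per
>       # option, so services can read the same values from etcd.
>       client = etcd3.client(host=host, port=port)
>       ini = configparser.ConfigParser(interpolation=None)
>       ini.read(conf_file)
>       for group in ['DEFAULT'] + ini.sections():
>           for name, value in ini.items(group):
>               client.put('/config/%s/%s/%s' % (service, group, name),
>                          value)
> 
>   publish('nova', '/etc/nova/nova.conf')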
> 
>> This means you could have some (OpenStack?) service for inventory management (like Karbor) that is seeding the kv store, the cloud infrastructure software itself is "cloud aware" and queries the central distributed kv system for the correct-right-now config options, and the cloud service itself gets all the benefits of dynamic scaling of available hardware resources. That's pretty cool. Add hardware to the inventory, the cloud infra itself expands to make it available. Hardware fails, and the cloud infra resizes to adjust. Apps running on the infra keep doing their thing consuming the resources. It's clouds all the way down :-)
>> 
>> Despite sounding pretty interesting, it also sounds like a lot of extra complexity. Maybe it's worth it. I don't know.
>> 
> 
> Yeah, there's extra complexity, at least in the
> deployment/management/monitoring of the new service; or maybe not.
> Keeping configuration files synced across 1000s of nodes (or
> containers) can be just as hard, however.
> 

Would there be a mechanism to stage configuration changes (such as promoting them from a QA to a production environment) or to have different configurations for different hypervisors?

We have some of our hypervisors set up for high performance, which needs a slightly different nova.conf (such as CPU passthrough).

Tim

>> Thanks again for the explanation.
>> 
>> 
>> --John
>> 
>> 
>> 
>> 
>>> 
>>> Thanks,
>>> -Alex
>>> 
>>>> 
>>>> --John
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On 21 Mar 2017, at 14:26, Davanum Srinivas wrote:
>>>> 
>>>>> Jay,
>>>>> 
>>>>> the /v3alpha HTTP API  (grpc-gateway) supports watch
>>>>> https://coreos.com/etcd/docs/latest/dev-guide/apispec/swagger/rpc.swagger.json
>>>>> 
>>>>> -- Dims
>>>>> 
>>>>> On Tue, Mar 21, 2017 at 5:22 PM, Jay Pipes <jaypipes at gmail.com> wrote:
>>>>>> On 03/21/2017 04:29 PM, Clint Byrum wrote:
>>>>>>> 
>>>>>>> Excerpts from Doug Hellmann's message of 2017-03-15 15:35:13 -0400:
>>>>>>>> 
>>>>>>>> Excerpts from Thomas Herve's message of 2017-03-15 09:41:16 +0100:
>>>>>>>>> 
>>>>>>>>> On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow <harlowja at fastmail.com>
>>>>>>>>> wrote:
>>>>>>>>> 
>>>>>>>>>> * How does reloading work (does it)?
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> No. There is nothing we can do in oslo that will make services
>>>>>>>>> magically reload configuration. It's also unclear to me whether
>>>>>>>>> that's something we should do. In a containerized environment,
>>>>>>>>> wouldn't it be simpler to deploy new services? Otherwise,
>>>>>>>>> supporting signal-based reload as we do today should be trivial.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Reloading works today with files, that's why the question is important
>>>>>>>> to think through. There is a special flag to set on options that are
>>>>>>>> "mutable" and then there are functions within oslo.config to reload.
>>>>>>>> Those are usually triggered when a service gets a SIGHUP or something
>>>>>>>> similar.
>>>>>>>> 
>>>>>>>> We need to decide what happens to a service's config when that API
>>>>>>>> is used and the backend is etcd. Maybe nothing, because every time
>>>>>>>> any config option is accessed the read goes all the way through to
>>>>>>>> etcd? Maybe a warning is logged because we don't support reloads?
>>>>>>>> Maybe an error is logged? Or maybe we flush the local cache and start
>>>>>>>> reading from etcd on future accesses?
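>>>>>>>>
>>>>>>>> For reference, the file-based path today is roughly this (sketch;
>>>>>>>> the option is made up, but mutable= and mutate_config_files() are
>>>>>>>> the actual oslo.config hooks):
>>>>>>>>
>>>>>>>>   import signal
>>>>>>>>   from oslo_config import cfg
>>>>>>>>
>>>>>>>>   CONF = cfg.CONF
>>>>>>>>   # Only options marked mutable may change on reload.
>>>>>>>>   CONF.register_opts(
>>>>>>>>       [cfg.StrOpt('log_level', default='INFO', mutable=True)])
>>>>>>>>
>>>>>>>>   def _reload(signum, frame):
>>>>>>>>       # Re-reads the --config-file(s); the open question above is
>>>>>>>>       # what the equivalent should do when the backend is etcd.
>>>>>>>>       CONF.mutate_config_files()
>>>>>>>>
>>>>>>>>   signal.signal(signal.SIGHUP, _reload)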
>>>>>>>> 
>>>>>>> 
>>>>>>> etcd provides the ability to "watch" keys. So one would start a thread
>>>>>>> that just watches the keys you want to reload on, and when they change
>>>>>>> that thread will see a response and can reload appropriately.
>>>>>>> 
>>>>>>> https://coreos.com/etcd/docs/latest/dev-guide/api_reference_v3.html
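>>>>>>>
>>>>>>> With the python etcd3 bindings that thread could look roughly like
>>>>>>> this (sketch; the key prefix is made up, and a real service would
>>>>>>> plug in its reload hook where the print stands in):
>>>>>>>
>>>>>>>   import threading
>>>>>>>   import etcd3
>>>>>>>
>>>>>>>   def watch_config(prefix, on_change, host='11.1.1.1', port=2379):
>>>>>>>       client = etcd3.client(host=host, port=port)
>>>>>>>
>>>>>>>       def _run():
>>>>>>>           events, cancel = client.watch_prefix(prefix)
>>>>>>>           for event in events:
>>>>>>>               # Every put/delete under the prefix shows up here.
>>>>>>>               on_change(event.key.decode(), event.value)
>>>>>>>
>>>>>>>       threading.Thread(target=_run, daemon=True).start()
>>>>>>>
>>>>>>>   watch_config('/config/nova/',
>>>>>>>                lambda key, value: print(key, value))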
>>>>>> 
>>>>>> 
>>>>>> Yep. Unfortunately, you won't be able to start an eventlet greenthread to
>>>>>> watch an etcd3/gRPC key. The python grpc library is incompatible with
>>>>>> eventlet/gevent's monkeypatching technique and causes a complete program
>>>>>> hang if you try to communicate with the etcd3 server from a greenlet. Fun!
>>>>>> 
>>>>>> So, either use etcd2 (the no-longer-being-worked-on HTTP API) or don't use
>>>>>> eventlet in your client service.
>>>>>> 
>>>>>> Best,
>>>>>> -jay
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Davanum Srinivas :: https://twitter.com/dims
>>>>> 
>>>> 
>>>> 
>>> 
>> 
>> 
> 



