[openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

Alex Schultz aschultz at redhat.com
Wed Mar 22 14:24:52 UTC 2017


On Wed, Mar 22, 2017 at 7:58 AM, Paul Belanger <pabelanger at redhat.com> wrote:
> On Tue, Mar 21, 2017 at 05:53:35PM -0600, Alex Schultz wrote:
>> On Tue, Mar 21, 2017 at 5:35 PM, John Dickinson <me at not.mn> wrote:
>> >
>> >
>> > On 21 Mar 2017, at 15:34, Alex Schultz wrote:
>> >
>> >> On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson <me at not.mn> wrote:
>> >>> I've been following this thread, but I must admit I seem to have missed something.
>> >>>
>> >>> What problem is being solved by storing per-server service configuration options in an external distributed CP system that is currently not possible with the existing pattern of using local text files?
>> >>>
>> >>
>> >> This effort is partially to help the path to containerization where we
>> >> are delivering the service code via container but don't want to
>> >> necessarily deliver the configuration in the same fashion.  It's about
>> >> ease of configuration: moving from service -> config files (on many
>> >> hosts/containers) to service -> config via etcd (a single source
>> >> cluster).  It's also about an alternative to configuration management,
>> >> where today we have many tools handling the files in various ways
>> >> (templates, from repo, via code providers), and trying to come to a
>> >> more unified way of representing the configuration such that the end
>> >> result is the same for every deployment tool.  All tools load configs
>> >> into $place and services can be configured to talk to $place.  It
>> >> should be noted that configuration files won't go away, because many of
>> >> the companion services still rely on them (rabbit/mysql/apache/etc), so
>> >> we're really talking about services that currently use oslo.
>> >
>> > Thanks for the explanation!
>> >
>> > So in the future, you expect a node in a clustered OpenStack service to be deployed and run as a container, and then that node queries a centralized etcd (or other) k/v store to load config options. And other services running in the (container? cluster?) will load config from local text files managed in some other way.
>>
>> No, the goal is that in the etcd mode it may not be necessary to load
>> the config files locally at all.  That being said, there would still be
>> support for having some configuration from a file and optionally
>> providing a kv store as another config point: 'service --config-file
>> /etc/service/service.conf --config-etcd proto://ip:port/slug'
>>
> Hmm, not sure I like this.  Having a service magically read from 2 different
> configuration sources at run time, merge them, and reload, seems overly
> complicated. And even harder to debug.
>

That's something inherently supported by oslo.config today. We even do
it for dist-provided packaging (I also don't like it, but it's an
established pattern).

>> >
>> > No wait. It's not the *services* that will load the config from a kv store--it's the config management system? So in the process of deploying a new container instance of a particular service, the deployment tool will pull the right values out of the kv system and inject those into the container, I'm guessing as a local text file that the service loads as normal?
>> >
>>
>> No, the thought is to have the services pull their configs from the kv
>> store via oslo.config.  The point is hopefully to not require
>> configuration files at all for containers.  The container would be told
>> where to pull its configs from (e.g. http://11.1.1.1:2730/magic/ or
>> /etc/myconfigs/).  At that point it just becomes another place to load
>> configurations from via oslo.config.  Configuration management comes
>> in as the way to load the configs either as a file or into etcd.  Many
>> operators (and deployment tools) are already using some form of
>> configuration management, so if we can integrate a kv store output
>> option, adoption becomes much easier than making everyone start from
>> scratch.
>>
>> > This means you could have some (OpenStack?) service for inventory management (like Karbor) that is seeding the kv store, the cloud infrastructure software itself is "cloud aware" and queries the central distributed kv system for the correct-right-now config options, and the cloud service itself gets all the benefits of dynamic scaling of available hardware resources. That's pretty cool. Add hardware to the inventory, the cloud infra itself expands to make it available. Hardware fails, and the cloud infra resizes to adjust. Apps running on the infra keep doing their thing consuming the resources. It's clouds all the way down :-)
>> >
>> > Despite sounding pretty interesting, it also sounds like a lot of extra complexity. Maybe it's worth it. I don't know.
>> >
>>
>> Yeah, there's extra complexity, at least in the
>> deployment/management/monitoring of the new service. Or maybe not:
>> keeping configuration files synced across 1000s of nodes (or
>> containers) can be just as hard.
>>
> Please correct me if I am wrong, because I still have my container training
> wheels on. I understand the need for etcd, and for operators to write their
> configuration into it.  What I am still struggling with is why you need
> oslo.config to support it.  There is nothing stopping an operator today from
> using etcd / confd in a container, right?  I can only imagine countless other
> services that run in containers are using them.
>

We want oslo.config to support it as a source for configuration.
Dealing with files in containers is complicated. If we can remove the
requirement to munge configurations for containers,
deploying/updating containers becomes easier.  The service container
becomes a single artifact to be deployed with fewer moving parts,
which helps reduce complexity and errors.  The process for moving a
single container artifact is a lot easier than moving a container and
updating configurations based on where it lands.
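
To make that concrete, here's a purely illustrative sketch of a
service resolving an option straight from etcd.  The key layout and
the idea of wiring this in as an oslo.config source are assumptions
for discussion, not an existing API; the client calls are from the
python etcd3 library:

    # Hypothetical: a service reads an option directly from etcd
    # instead of a rendered local file. The /<service>/<group>/<option>
    # key layout is an assumption, not a settled design.
    import etcd3

    client = etcd3.client(host='11.1.1.1', port=2379)

    def get_option(service, group, option, default=None):
        # e.g. key '/keystone/DEFAULT/debug'
        value, _meta = client.get('/%s/%s/%s' % (service, group, option))
        return value.decode('utf-8') if value is not None else default

    debug = get_option('keystone', 'DEFAULT', 'debug', default='false')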

> Why do we, openstack, need to write our own custom thing and be different in
> this regard?  Why do we need our services to talk directly to etcd, when things
> like apache2, nginx, and other non-openstack services just use etcd / confd?
>

OpenStack service configuration is inherently complex.  Since we
cannot control how supporting services do their configuration
(apache/mysql/nginx/etc), we aren't trying to solve for that today.
What we're looking to focus on is how we can simplify the
containerization of openstack services and the related
configurations that we need to apply.  Today we have several different
deployment solutions that attempt to solve for configuring openstack
services (via templates, static files, providers, etc).  What we're
trying to discuss is how to continue to support the existing methods
while also brainstorming about moving to etcd as the destination for
configuration management and the consumption point for openstack services.

> From reading the thread, it seems the issue is more about keeping the files a
> container uses out of the container itself, which makes them easier to maintain.
>
> And this is the part I need help with. If I were to do a POC this afternoon,
> inside a container, I would do the following:
>
> Create container with keystone
> Create etcd service
> Add confd into container
>  - mkdir -p /etc/confd/{conf.d,templates}
>  - vi /etc/confd/conf.d/myconfig.toml
>  - vi /etc/confd/templates/myconfig.conf.tmpl
>  - start confd
>

This is one possible deployment strategy, but it does not support the
existing deployment tools.  It's a valid solution, but why introduce
confd into the container if you don't have to, when the service can
read directly from etcd?  Additionally, what does debugging look like
when you need to go look at the configurations that get written
out?  Do you have to inspect each container, and what happens when
the confd sync process breaks down?  The thought behind having etcd be
the source of truth is that operators look in etcd for
live/staged configurations. We'd probably want to help with
tooling to support this, but I think it would be easier to have a tool
that operates on a single etcd cluster than to try and poll X number
of containers on Z hosts.
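
As a sketch of what that tooling might look like (same hypothetical
key layout as above), dumping a service's live configuration becomes
a single prefix query against the etcd cluster rather than an
inspection of every container:

    # Sketch: list a service's live config with one prefix query,
    # instead of exec'ing into N containers to read rendered files.
    import etcd3

    client = etcd3.client(host='11.1.1.1', port=2379)
    for value, meta in client.get_prefix('/keystone/'):
        print('%s = %s' % (meta.key.decode('utf-8'),
                           value.decode('utf-8')))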

> I would use ansible or puppet or a Dockerfile to properly set up myconfig.toml
> for keystone. Same with myconfig.conf.tmpl; it would be a "for d in dict: for
> key, value in d" thing, to glob down anything from etcd into ini format.
>
> This does mean you still need to manage confd templates however, but this is
> okay, because if I deployed any other application with etcd, this is how I
> would do it.

We're not talking about not supporting such a configuration. In fact,
the integration of etcd with oslo.config means this would still be a
valid deployment method.  You'd still need something that would know
how to load oslo.config items into etcd, which is part of this
discussion.
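
That loader could be fairly small. As a rough sketch (hypothetical
key layout again), it's essentially the inverse of the confd template
above: walk an existing ini-style config and publish each option into
etcd:

    # Sketch: push an existing oslo.config-style ini file into etcd
    # under /<service>/<section>/<option>. DEFAULT handling and type
    # information are glossed over here.
    import configparser

    import etcd3

    client = etcd3.client(host='11.1.1.1', port=2379)
    parser = configparser.ConfigParser()
    parser.read('/etc/keystone/keystone.conf')

    for section in parser.sections():
        for option, value in parser.items(section):
            client.put('/keystone/%s/%s' % (section, option), value)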

>
> Which gets me to my final question. How are you proposing we configure apache2
> or nginx with etcd? I can only assume it is using something like the process
> above?

We're not proposing that.  Since there are already legacy ways of
configuring those services (and who knows if the operator actually
wants to containerize them), we're only scoping the discussion around
openstack services at the moment to see what that looks like.


>
>> > Thanks again for the explanation.
>> >
>> >
>> > --John
>> >
>> >
>> >
>> >
>> >>
>> >> Thanks,
>> >> -Alex
>> >>
>> >>>
>> >>> --John
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> On 21 Mar 2017, at 14:26, Davanum Srinivas wrote:
>> >>>
>> >>>> Jay,
>> >>>>
>> >>>> the /v3alpha HTTP API  (grpc-gateway) supports watch
>> >>>> https://coreos.com/etcd/docs/latest/dev-guide/apispec/swagger/rpc.swagger.json
>> >>>>
>> >>>> -- Dims
>> >>>>
>> >>>> On Tue, Mar 21, 2017 at 5:22 PM, Jay Pipes <jaypipes at gmail.com> wrote:
>> >>>>> On 03/21/2017 04:29 PM, Clint Byrum wrote:
>> >>>>>>
>> >>>>>> Excerpts from Doug Hellmann's message of 2017-03-15 15:35:13 -0400:
>> >>>>>>>
>> >>>>>>> Excerpts from Thomas Herve's message of 2017-03-15 09:41:16 +0100:
>> >>>>>>>>
>> >>>>>>>> On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow <harlowja at fastmail.com>
>> >>>>>>>> wrote:
>> >>>>>>>>
>> >>>>>>>>> * How does reloading work (does it)?
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>> No. There is nothing that we can do in oslo that will make services
>> >>>>>>>> magically reload configuration. It's also unclear to me whether that's
>> >>>>>>>> something we should do. In a containerized environment, wouldn't it be
>> >>>>>>>> simpler to deploy new services? Otherwise, supporting signal-based
>> >>>>>>>> reload as we do today should be trivial.
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> Reloading works today with files, that's why the question is important
>> >>>>>>> to think through. There is a special flag to set on options that are
>> >>>>>>> "mutable" and then there are functions within oslo.config to reload.
>> >>>>>>> Those are usually triggered when a service gets a SIGHUP or something
>> >>>>>>> similar.
>> >>>>>>>
>> >>>>>>> We need to decide what happens to a service's config when that API
>> >>>>>>> is used and the backend is etcd. Maybe nothing, because every time
>> >>>>>>> any config option is accessed the read goes all the way through to
>> >>>>>>> etcd? Maybe a warning is logged because we don't support reloads?
>> >>>>>>> Maybe an error is logged? Or maybe we flush the local cache and start
>> >>>>>>> reading from etcd on future accesses?
>> >>>>>>>
>> >>>>>>
>> >>>>>> etcd provides the ability to "watch" keys. So one would start a thread
>> >>>>>> that just watches the keys you want to reload on, and when they change
>> >>>>>> that thread will see a response and can reload appropriately.
>> >>>>>>
>> >>>>>> https://coreos.com/etcd/docs/latest/dev-guide/api_reference_v3.html
>> >>>>>
>> >>>>>
>> >>>>> Yep. Unfortunately, you won't be able to start an eventlet greenthread to
>> >>>>> watch an etcd3/gRPC key. The python grpc library is incompatible with
>> >>>>> eventlet/gevent's monkeypatching technique and causes a complete program
>> >>>>> hang if you try to communicate with the etcd3 server from a greenlet. Fun!
>> >>>>>
>> >>>>> So, either use etcd2 (the no-longer-being-worked-on HTTP API) or don't use
>> >>>>> eventlet in your client service.
>> >>>>>
>> >>>>> Best,
>> >>>>> -jay
>> >>>>>
>> >>>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> --
>> >>>> Davanum Srinivas :: https://twitter.com/dims
>> >>>>
>> >>>
>> >>>
>> >>
>> >
>> >
>>
>
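
As a footnote on the reload question quoted above: a rough sketch of
the watch-based approach Clint describes, using the python etcd3
client from a plain native thread (sidestepping the gRPC/eventlet
problem Jay mentions). The key prefix and callback are made up for
the example:

    # Sketch: a native (non-eventlet) thread watches a key prefix and
    # invokes a reload callback whenever an option changes.
    import threading

    import etcd3

    client = etcd3.client(host='11.1.1.1', port=2379)

    def watch_for_changes(reload_cb):
        events, cancel = client.watch_prefix('/keystone/')
        for event in events:
            # event.key / event.value describe the changed option
            reload_cb(event.key.decode('utf-8'))

    t = threading.Thread(target=watch_for_changes, args=(print,))
    t.daemon = True
    t.start()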


