[openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

Doug Hellmann doug at doughellmann.com
Thu Mar 30 18:55:51 UTC 2017


Excerpts from Paul Belanger's message of 2017-03-22 09:58:46 -0400:
> On Tue, Mar 21, 2017 at 05:53:35PM -0600, Alex Schultz wrote:
> > On Tue, Mar 21, 2017 at 5:35 PM, John Dickinson <me at not.mn> wrote:
> > >
> > >
> > > On 21 Mar 2017, at 15:34, Alex Schultz wrote:
> > >
> > >> On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson <me at not.mn> wrote:
> > >>> I've been following this thread, but I must admit I seem to have missed something.
> > >>>
> > >>> What problem is being solved by storing per-server service configuration options in an external distributed CP system that is currently not possible with the existing pattern of using local text files?
> > >>>
> > >>
> > >> This effort is partially to help the path to containerization where we
> > >> are delivering the service code via container but don't want to
> > >> necessarily deliver the configuration in the same fashion.  It's about
> > >> ease of configuration where moving service -> config files (on many
> > >> hosts/containers) to service -> config via etcd (single source
> > >> cluster).  It's also about an alternative to configuration management
> > >> where today we have many tools handling the files in various ways
> > >> (templates, from repo, via code providers) and trying to come to a
> > >> more unified way of representing the configuration such that the end
> > >> result is the same for every deployment tool.  All tools load configs
> > >> into $place and services can be configured to talk to $place.  It
> > >> should be noted that configuration files won't go away because many of
> > >> the companion services still rely on them (rabbit/mysql/apache/etc) so
> > >> we're really talking about services that currently use oslo.
> > >
> > > Thanks for the explanation!
> > >
> > > So in the future, you expect a node in a clustered OpenStack service to be deployed and run as a container, and then that node queries a centralized etcd (or other) k/v store to load config options. And other services running in the (container? cluster?) will load config from local text files managed in some other way.
> > 
> > No, the goal is that in the etcd mode it may not be necessary to load
> > config files locally at all.  That being said, there would still be
> > support for taking some configuration from a file and optionally
> > providing a kv store as another config source:  'service --config-file
> > /etc/service/service.conf --config-etcd proto://ip:port/slug'
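[A hedged sketch of the merge semantics being described. `KV_VALUES`, `load_merged`, and the "section/option" key layout are invented for illustration, and the precedence rule shown here (kv values override file values) is an assumption, not anything oslo.config has committed to:]

```python
import configparser
import io

# Hypothetical values fetched from a kv store such as etcd, keyed as
# "<section>/<option>".  A real driver would read these via an etcd
# client rather than a literal dict.
KV_VALUES = {
    "DEFAULT/debug": "true",
    "database/connection": "mysql+pymysql://db/keystone",
}

def load_merged(conf_text, kv_values):
    """Parse an ini-style config file, then overlay kv entries on top."""
    parser = configparser.ConfigParser()
    parser.read_file(io.StringIO(conf_text))
    merged = {section: dict(parser.items(section))
              for section in ["DEFAULT"] + parser.sections()}
    for key, value in kv_values.items():
        section, option = key.split("/", 1)
        merged.setdefault(section, {})[option] = value  # kv store wins
    return merged

merged = load_merged("[DEFAULT]\ndebug = false\nverbose = true\n", KV_VALUES)
```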
> > 
> Hmm, not sure I like this.  Having a service magically read from two different
> configuration sources at run time, merge them, and reload seems overly
> complicated, and even harder to debug.
> 
> > >
> > > No wait. It's not the *services* that will load the config from a kv store--it's the config management system? So in the process of deploying a new container instance of a particular service, the deployment tool will pull the right values out of the kv system and inject those into the container, I'm guessing as a local text file that the service loads as normal?
> > >
> > 
> > No, the thought is to have the services pull their configs from the kv
> > store via oslo.config.  The point is hopefully to not require
> > configuration files at all for containers.  The container would be told
> > where to pull its configs from (e.g. http://11.1.1.1:2730/magic/ or
> > /etc/myconfigs/).  At that point it just becomes another place to load
> > configuration from via oslo.config.  Configuration management comes
> > in as a way to load the configs either as a file or into etcd.  Many
> > operators (and deployment tools) are already using some form of
> > configuration management, so if we can integrate a kv store output
> > option, adoption becomes much easier than making everyone start from
> > scratch.
> > 
> > > This means you could have some (OpenStack?) service for inventory management (like Karbor) that is seeding the kv store, the cloud infrastructure software itself is "cloud aware" and queries the central distributed kv system for the correct-right-now config options, and the cloud service itself gets all the benefits of dynamic scaling of available hardware resources. That's pretty cool. Add hardware to the inventory, the cloud infra itself expands to make it available. Hardware fails, and the cloud infra resizes to adjust. Apps running on the infra keep doing their thing consuming the resources. It's clouds all the way down :-)
> > >
> > > Despite sounding pretty interesting, it also sounds like a lot of extra complexity. Maybe it's worth it. I don't know.
> > >
> > 
> > Yeah, there's extra complexity, at least in the
> > deployment/management/monitoring of the new service.  Then again,
> > keeping configuration files synced across thousands of nodes (or
> > containers) can be just as hard.
> > 
> Please correct me if I am wrong, because I still have my container training
> wheels on. I understand the need for etcd, and for operators to write their
> configuration into it.  What I am still struggling with is why you need
> oslo.config to support it.  There is nothing stopping an operator today from
> using etcd / confd in a container, right?  I can only imagine countless other
> services that run in containers are using them.
> 
> Why do we, OpenStack, need to write our own custom thing and be different in
> this regard?  Why do we need our services to talk directly to etcd, when things
> like apache2, nginx, and other non-OpenStack services just use etcd / confd?
> 
> From reading the thread, it seems the issue is more about keeping the config
> files that would otherwise live inside the container out of the container,
> which makes them easier to maintain.
> 
> And this is the part I need help with. If I were to do a POC this afternoon,
> inside a container, I would do the following:
> 
> Create container with keystone
> Create etcd service
> Add confd into container
>  - mkdir -p /etc/confd/{conf.d,templates}
>  - vi /etc/confd/conf.d/myconfig.toml
>  - vi /etc/confd/templates/myconfig.conf.tmpl
>  - start confd
> 
> I would use ansible or puppet or a Dockerfile to properly set up myconfig.toml
> for keystone. Same with myconfig.conf.tmpl: it would be a nested loop (for each
> dict, for each key/value pair in it) that globs everything down from etcd into
> ini format.
> 
> This does mean you still need to manage confd templates, but this is
> okay, because if I deployed any other application with etcd, this is how I
> would do it.
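[For concreteness, a hedged sketch of what those two confd files might contain. The /keystone key prefix, the file paths, and the reload command are invented for this example, not a tested Keystone setup:]

```toml
# /etc/confd/conf.d/myconfig.toml
[template]
src        = "myconfig.conf.tmpl"
dest       = "/etc/keystone/keystone.conf"
keys       = ["/keystone"]
reload_cmd = "kill -HUP $(cat /var/run/keystone.pid)"
```

```
# /etc/confd/templates/myconfig.conf.tmpl
# Flatten every key under /keystone/DEFAULT into ini-style options.
[DEFAULT]
{{range gets "/keystone/DEFAULT/*"}}{{base .Key}} = {{.Value}}
{{end}}
```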
> 
> Which gets me to my final question. How are you proposing we configure apache2
> or nginx with etcd? I can only assume it would use something like the process
> above?

This is a good point, and another reason I think we need a bit more
detail written down (spec?) before we start adding things to
oslo.config.

Doug

> 
> > > Thanks again for the explanation.
> > >
> > >
> > > --John
> > >
> > >
> > >
> > >
> > >>
> > >> Thanks,
> > >> -Alex
> > >>
> > >>>
> > >>> --John
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> On 21 Mar 2017, at 14:26, Davanum Srinivas wrote:
> > >>>
> > >>>> Jay,
> > >>>>
> > >>>> the /v3alpha HTTP API (grpc-gateway) supports watch:
> > >>>> https://coreos.com/etcd/docs/latest/dev-guide/apispec/swagger/rpc.swagger.json
> > >>>>
> > >>>> -- Dims
> > >>>>
> > >>>> On Tue, Mar 21, 2017 at 5:22 PM, Jay Pipes <jaypipes at gmail.com> wrote:
> > >>>>> On 03/21/2017 04:29 PM, Clint Byrum wrote:
> > >>>>>>
> > >>>>>> Excerpts from Doug Hellmann's message of 2017-03-15 15:35:13 -0400:
> > >>>>>>>
> > >>>>>>> Excerpts from Thomas Herve's message of 2017-03-15 09:41:16 +0100:
> > >>>>>>>>
> > >>>>>>>> On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow <harlowja at fastmail.com>
> > >>>>>>>> wrote:
> > >>>>>>>>
> > >>>>>>>>> * How does reloading work (does it)?
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>> No. There is nothing we can do in oslo that will make services
> > >>>>>>>> magically reload configuration. It's also unclear to me whether that's
> > >>>>>>>> something we should do. In a containerized environment, wouldn't it be
> > >>>>>>>> simpler to deploy new services? Otherwise, supporting signal-based
> > >>>>>>>> reload as we do today should be trivial.
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> Reloading works today with files, that's why the question is important
> > >>>>>>> to think through. There is a special flag to set on options that are
> > >>>>>>> "mutable" and then there are functions within oslo.config to reload.
> > >>>>>>> Those are usually triggered when a service gets a SIGHUP or something
> > >>>>>>> similar.
> > >>>>>>>
> > >>>>>>> We need to decide what happens to a service's config when that API
> > >>>>>>> is used and the backend is etcd. Maybe nothing, because every time
> > >>>>>>> any config option is accessed the read goes all the way through to
> > >>>>>>> etcd? Maybe a warning is logged because we don't support reloads?
> > >>>>>>> Maybe an error is logged? Or maybe we flush the local cache and start
> > >>>>>>> reading from etcd on future accesses?
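[To make those alternatives concrete, here is one way the "flush the local cache" option could behave. `KvConfigSource`, its methods, and the `fetch` callable are invented for illustration; none of this is actual oslo.config API:]

```python
import logging

LOG = logging.getLogger(__name__)

class KvConfigSource:
    """Illustrative sketch: cache kv lookups, and flush the cache on a
    mutate/SIGHUP request so that later reads hit the backend again."""

    def __init__(self, fetch):
        self._fetch = fetch      # callable: key -> value (e.g. an etcd get)
        self._cache = {}

    def get(self, key):
        # Serve from the local cache; only fall through on a miss.
        if key not in self._cache:
            self._cache[key] = self._fetch(key)
        return self._cache[key]

    def mutate(self):
        # One possible reload semantic: drop the cache, so the next access
        # re-reads from the backend.  A different choice would be to log a
        # warning that reloads are unsupported and keep the cached values.
        LOG.info("flushing %d cached options", len(self._cache))
        self._cache.clear()
```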
> > >>>>>>>
> > >>>>>>
> > >>>>>> etcd provides the ability to "watch" keys. So one would start a thread
> > >>>>>> that just watches the keys you want to reload on, and when they change
> > >>>>>> that thread will see a response and can reload appropriately.
> > >>>>>>
> > >>>>>> https://coreos.com/etcd/docs/latest/dev-guide/api_reference_v3.html
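[The watch pattern described above, sketched with a stand-in client: a real implementation would use an etcd client's watch API, while the queue-based `StubWatchClient` here just mimics its event stream so the threading shape is visible:]

```python
import queue
import threading

class StubWatchClient:
    """Minimal stand-in for an etcd watch; put() simulates a key change."""
    def __init__(self):
        self._events = queue.Queue()
    def put(self, key, value):
        self._events.put((key, value))
    def stop(self):
        self._events.put(None)   # sentinel to end the watch stream
    def watch(self):
        while True:
            event = self._events.get()
            if event is None:
                return
            yield event

def start_watcher(client, on_change):
    """Background thread that invokes on_change whenever a watched key changes."""
    def _run():
        for key, value in client.watch():
            on_change(key, value)
    t = threading.Thread(target=_run, daemon=True)
    t.start()
    return t
```

In a service, `on_change` would be the hook that reloads the affected config option.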
> > >>>>>
> > >>>>>
> > >>>>> Yep. Unfortunately, you won't be able to start an eventlet greenthread to
> > >>>>> watch an etcd3/gRPC key. The python grpc library is incompatible with
> > >>>>> eventlet/gevent's monkeypatching technique and causes a complete program
> > >>>>> hang if you try to communicate with the etcd3 server from a greenlet. Fun!
> > >>>>>
> > >>>>> So, either use etcd2 (the no-longer-being-worked-on HTTP API) or don't use
> > >>>>> eventlet in your client service.
> > >>>>>
> > >>>>> Best,
> > >>>>> -jay
> > >>>>>
> > >>>>>
> > >>>>> __________________________________________________________________________
> > >>>>> OpenStack Development Mailing List (not for usage questions)
> > >>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >>>>
> > >>>>
> > >>>>
> > >>>> --
> > >>>> Davanum Srinivas :: https://twitter.com/dims
> > >>>>
> > >>>
> > >>>
> > >>
> > >
> > >
> > 
> 


