[openstack-dev] [TripleO] easily identifying how services are configured
Dan Prince
dprince at redhat.com
Thu Oct 25 22:05:05 UTC 2018
On Wed, Oct 17, 2018 at 11:15 AM Alex Schultz <aschultz at redhat.com> wrote:
>
> Time to resurrect this thread.
>
> On Thu, Jul 5, 2018 at 12:14 PM James Slagle <james.slagle at gmail.com> wrote:
> >
> > On Thu, Jul 5, 2018 at 1:50 PM, Dan Prince <dprince at redhat.com> wrote:
> > > Last week I was tinkering with my docker configuration a bit and was a
> > > bit surprised that puppet/services/docker.yaml no longer used puppet to
> > > configure the docker daemon. It now uses Ansible [1] which is very cool
> > > but brings up the question of how should we clearly indicate to
> > > developers and users that we are using Ansible vs Puppet for
> > > configuration?
> > >
> > > TripleO has been around for a while now, has supported multiple
> > > configuration and service types over the years: os-apply-config,
> > > puppet, containers, and now Ansible. In the past we've used rigid
> > > directory structures to identify which "service type" was used. More
> > > recently we mixed things up a bit more even by extending one service
> > > type from another ("docker" services all initially extended the
> > > "puppet" services to generate config files and provide an easy upgrade
> > > path).
> > >
> > > Similarly, we now use Ansible all over the place in many of our
> > > docker and puppet services for things like upgrades. That is
> > > all good too. I guess the thing I'm getting at here is just a way to
> > > cleanly identify which services are configured via Puppet vs. Ansible.
> > > And how can we do that in the least destructive way possible so as not
> > > to confuse ourselves and our users in the process.
> > >
> > > Also, I think it's worth keeping in mind that TripleO was once a multi-
> > > vendor project with vendors that had different preferences on service
> > > configuration. Also having the ability to support multiple
> > > configuration mechanisms in the future could once again present itself
> > > (thinking of Kubernetes as an example). Keeping in mind there may be a
> > > conversion period that could well last more than a release or two.
> > >
> > > I suggested a 'services/ansible' directory, with mixed responses, in our
> > > #tripleo meeting this week. Any other thoughts on the matter?
> >
> > I would almost rather see us organize the directories by service
> > name/project instead of implementation.
> >
> > Instead of:
> >
> > puppet/services/nova-api.yaml
> > puppet/services/nova-conductor.yaml
> > docker/services/nova-api.yaml
> > docker/services/nova-conductor.yaml
> >
> > We'd have:
> >
> > services/nova/nova-api-puppet.yaml
> > services/nova/nova-conductor-puppet.yaml
> > services/nova/nova-api-docker.yaml
> > services/nova/nova-conductor-docker.yaml
> >
> > (or perhaps even another level of directories to indicate
> > puppet/docker/ansible?)
> >
> > Personally, such an organization is something I'm more used to. It
> > feels more similar to how most would expect a puppet module or ansible
> > role to be organized, where you have the abstraction (service
> > configuration) at a higher directory level than specific
> > implementations.
> >
> > It would also lend itself more easily to adding implementations only
> > for specific services, and address the question of if a new top level
> > implementation directory needs to be created. For example, adding a
> > services/nova/nova-api-chef.yaml seems a lot less contentious than
> > adding a top level chef/services/nova-api.yaml.
> >
> > It'd also be nice if we had a way to mark the default within a given
> > service's directory. Perhaps services/nova/nova-api-default.yaml,
> > which would be a new template that just consumes the default? Or
> > perhaps a symlink, although it was pointed out symlinks don't work in
> > swift containers. Still, that could possibly be addressed in our plan
> > upload workflows. Then the resource-registry would point at
> > nova-api-default.yaml. One could easily tell which is the default
> > without having to cross reference with the resource-registry.
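For illustration, the default mapping you describe might look something
like this in an environment file (paths and registry keys here are a
hypothetical sketch following the layout above, not from any merged
patch):

```yaml
# Hypothetical environment snippet. The registry points at the
# *-default.yaml template, and the directory listing itself then shows
# which implementation that default consumes -- no cross-referencing
# against the resource-registry needed.
resource_registry:
  OS::TripleO::Services::NovaApi: services/nova/nova-api-default.yaml
  OS::TripleO::Services::NovaConductor: services/nova/nova-conductor-default.yaml
```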
> >
>
> So since I'm adding a new ansible service, I thought I'd try and take
> a stab at this naming thing. I've taken James's idea and proposed an
> implementation here:
> https://review.openstack.org/#/c/588111/
>
> The idea would be that the THT code for the service deployment would
> end up in something like:
>
> deployment/<service>/<specific-service-element>-<implementation>.yaml
A matter of preference but I can live with this.
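One nice side effect of encoding the implementation in the filename is
that tooling can discover it trivially. A throwaway sketch (the helper
name and the convention parsing are my own, assuming the
deployment/<service>/<element>-<implementation>.yaml pattern above):

```python
# Hypothetical helper: split "<element>-<implementation>.yaml" names from
# the proposed deployment/<service>/ layout into their two parts.
def parse_template_name(filename):
    # Drop the .yaml suffix, then split on the last hyphen: everything
    # before it is the service element, everything after is the
    # implementation (puppet, docker, ansible, ...).
    stem = filename[:-len(".yaml")] if filename.endswith(".yaml") else filename
    element, _, implementation = stem.rpartition("-")
    return element, implementation

print(parse_template_name("nova-api-puppet.yaml"))  # -> ('nova-api', 'puppet')
print(parse_template_name("nova-api-docker.yaml"))  # -> ('nova-api', 'docker')
```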
>
> Additionally I took a stab at combining the puppet/docker service
> definitions for the aodh services in a similar structure to start
> reducing the overhead we've had from maintaining the docker/puppet
> implementations separately. You can see the patch
> https://review.openstack.org/#/c/611188/ for an additional example of
> this.
>
> Please let me know what you think.
I'm okay with it in that it consolidates some things (which we greatly
need to do). It does address my initial concern that people are now
putting Ansible services into the puppet/services directory, albeit in
a heavy-handed way in that it changes everything (rather than just the
new Ansible services). Understood that you are also eliminating the
"inheritance" from the docker/services to the puppet/services files...
by simply eliminating the Puppet variants. I had hoped to implement a
baremetal, Packstack-like installer (still somewhat popular) with
t-h-t, but there doesn't appear to be anyone in the Packstack camp
speaking up for the idea here. So much for consolidation. There doesn't
seem to be anyone else in TripleO (aside from myself) who wants
baremetal, so I guess be gone with it. An opportunity to test on
baremetal lost! As we discussed, we could have easily kept both and
used jinja templates to avoid the "inheritance" across the baremetal
(puppet) and docker/services, and thus also minimized our Heat
resources, which have grown. There *are* ways to implement this that
address all the concerns and keep both features. We are choosing not to
do this, however.
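To sketch what I mean: a single jinja-templated service file could emit
either variant from one source, e.g. (file name, variable name, and
elided sections are all hypothetical):

```yaml
{# deployment/nova/nova-api.j2.yaml -- hypothetical sketch #}
outputs:
  role_data:
    value:
      config_settings: {}   # shared by the baremetal and docker variants
{% if deployment_mode == 'docker' %}
      docker_config: {}     # rendered only for the containerized case
{% endif %}
```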
I see people are already +2'ing the patches, so it appears to have been
decided. But are we going to commit to moving all the existing services
to this new format within a release? There are 200 or so t-h-t patches
up for review that would likely need to be rebased. And yes, this will
make backports more difficult as well.

In all, we are changing our entire directory structure but not really
removing any dependencies. I suppose if we are going to do the work,
I'd rather see us drop some dependencies in the process too :/. Where
destructive meets productive is our best investment, I think.
Dan
>
> Thanks,
> -Alex
>
> >
> > --
> > -- James Slagle
> > --
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>