[openstack-dev] [TripleO] easily identifying how services are configured

Dan Prince dprince at redhat.com
Tue Aug 7 15:18:59 UTC 2018


On Thu, Aug 2, 2018 at 5:42 PM Steve Baker <sbaker at redhat.com> wrote:
>
>
>
> On 02/08/18 13:03, Alex Schultz wrote:
> > On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya <bdobreli at redhat.com> wrote:
> >> On 7/6/18 7:02 PM, Ben Nemec wrote:
> >>>
> >>>
> >>> On 07/05/2018 01:23 PM, Dan Prince wrote:
> >>>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:
> >>>>>
> >>>>> I would almost rather see us organize the directories by service
> >>>>> name/project instead of implementation.
> >>>>>
> >>>>> Instead of:
> >>>>>
> >>>>> puppet/services/nova-api.yaml
> >>>>> puppet/services/nova-conductor.yaml
> >>>>> docker/services/nova-api.yaml
> >>>>> docker/services/nova-conductor.yaml
> >>>>>
> >>>>> We'd have:
> >>>>>
> >>>>> services/nova/nova-api-puppet.yaml
> >>>>> services/nova/nova-conductor-puppet.yaml
> >>>>> services/nova/nova-api-docker.yaml
> >>>>> services/nova/nova-conductor-docker.yaml
> >>>>>
> >>>>> (or perhaps even another level of directories to indicate
> >>>>> puppet/docker/ansible?)
> >>>>
> >>>> I'd be open to this, but changes on this scale have a much larger
> >>>> developer and user impact than what I was thinking we would be
> >>>> willing to entertain for the issue that caused me to bring this up
> >>>> (i.e. how to identify services which get configured by Ansible).
> >>>>
> >>>> It's also worth noting that many projects keep these sorts of things
> >>>> in different repos too. Kolla, for example, fully separates
> >>>> kolla-ansible and kolla-kubernetes as they are quite divergent. We
> >>>> have been able to preserve some of our common service architectures,
> >>>> but as things move towards Kubernetes we may wish to change things
> >>>> structurally a bit too.
> >>>
> >>> True, but the current directory layout is from back when we intended
> >>> to support multiple deployment tools in parallel (originally
> >>> tripleo-image-elements and puppet).  Since I think it has become clear
> >>> that it's impractical to maintain two different technologies to do
> >>> essentially the same thing, I'm not sure there's a need for it now.
> >>> It's also worth noting that kolla-kubernetes basically died because
> >>> there weren't enough people to maintain both deployment methods, so
> >>> we're not the only ones who have found that to be true.  If/when we
> >>> move to kubernetes I would anticipate it going like the initial
> >>> containers work did - development for a couple of cycles, then a
> >>> switch to the new thing and deprecation of the old thing, then removal
> >>> of support for the old thing.
> >>>
> >>> That being said, because the service yamls are
> >>> essentially an API for TripleO - they're referenced in user
> >>
> >> this ^^
> >>
> >>> resource registries - I'm not sure it's worth the churn to move
> >>> everything either.  I think that's going to be an issue either way
> >>> though; it's just a question of scope.  _Something_ is going to move
> >>> around no matter how we reorganize, so it's a problem that needs to be
> >>> addressed anyway.
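> >>>
> >>> For illustration, a user environment might pin those paths directly,
> >>> something like this (an illustrative snippet, not any specific user's
> >>> file):
> >>>
> >>>   resource_registry:
> >>>     OS::TripleO::Services::NovaApi: /usr/share/openstack-tripleo-heat-templates/docker/services/nova-api.yaml
> >>>     OS::TripleO::Services::NovaConductor: /home/stack/custom/nova-conductor.yaml
> >>>
> >>> Any rename under docker/services/ breaks environments like that on
> >>> the next deploy.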
> >>
> >> [tl;dr] I can foresee that reorganizing that API becomes a nightmare
> >> for maintainers doing backports for queens (and the LTS downstream
> >> release based on it). Now imagine kubernetes support arriving within
> >> the next few years, before we can let the old API go...
> >>
> >> I have an example [0] that shows all the pain brought by a simple move
> >> of 'API defaults' from environments/services-docker to
> >> environments/services plus environments/services-baremetal. Each time
> >> a file changed contents at its old location, like here [1], I had to
> >> run a lot of sanity checks to rebase it properly: checking that the
> >> updated paths in resource registries were still valid or had been
> >> moved as well, then picking the source of truth for the diverged old
> >> vs. new locations - all that to lose nothing important in the process.
> >>
> >> So I'd say please let's *not* change services' paths/namespaces in the
> >> t-h-t "API" without a real need, i.e. not until there are no
> >> alternatives left.
> >>
> > Ok so it's time to dig this thread back up. I'm currently looking at
> > the chrony support, which will require a new service [0][1]. Rather
> > than add it under puppet, we'll likely want to leverage ansible. So I
> > guess the question is where do we put services going forward?
> > Additionally, as we look to truly remove the baremetal deployment
> > options and puppet service deployment, it seems like we need to
> > consolidate under a single structure.  Given that we don't want to
> > force too much churn, does this mean that we should align to the
> > docker/services/*.yaml structure, or should we be proposing a new
> > structure that we can try to align on?
> >
> > There is outstanding tech debt around the nested stacks and references
> > within these services from when we added the container deployments, so
> > it would be beneficial to start tackling it sooner rather than later.
> > Personally I think we're always going to have an issue when we rename
> > files that could have been referenced by custom templates, but I don't
> > think we can continue to carry the outstanding tech debt around these
> > static locations.  Should we be investing in some sort of mapping that
> > we can use to warn a user when we move files?
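> >
> > For example (a purely hypothetical sketch - no such file or tooling
> > exists today), a mapping the client could consult to warn on stale
> > registry entries:
> >
> >   # moved-files.yaml (hypothetical)
> >   deprecated_paths:
> >     docker/services/nova-api.yaml: services/nova/nova-api-docker.yaml
> >     puppet/services/nova-api.yaml: services/nova/nova-api-puppet.yaml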
>
> When Stein development starts, the puppet services will have been
> deprecated for an entire cycle. Can I suggest we use this reorganization
> as the time we delete the puppet services files? This would relieve us
> of the burden of maintaining a deployment method that we no longer use.
> Also, we'll gain a deployment speedup by removing a nested stack for
> each docker-based service.
>
> Then I'd suggest doing an "mv docker/services services" and moving any
> remaining files in the puppet directory into that. This is basically the
> naming that James suggested, except we wouldn't have to suffix the files
> with -puppet.yaml, -docker.yaml unless we still had more than one
> deployment method for that service.
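>
> Roughly, as a sketch from the repo root (assuming we also keep a
> compatibility alias for a cycle, per the symlink idea below):
>
>   git mv docker/services services
>   git mv puppet/services/<whatever survives the removal> services/
>   ln -s ../services docker/services  # temporary compatibility alias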

Refactoring the 'services' directories should be safer in the future
(if we support softlinking as you suggest below). But even now I think
we can do some level of reorganization within the 'services'
directories, since very few users utilize these files directly. It is
primarily via the t-h-t 'environments' files that users consume our
services. So long as we take care to keep our environments in order, I
think refactoring should be fine, right? My initial ask here was only
that we not confuse ourselves by keeping services that are now
configured entirely by Ansible in the puppet/services directory. We can
start shaping things however we'd like now and gradually update the
existing environments to consume the new ways, taking care that
upgrades are well supported along the way. And optimizations too!
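
To make that concrete: an environment file is the indirection layer, so
something like the following (an illustrative snippet with a
hypothetical path, not an existing file) is all most users ever touch:

  resource_registry:
    OS::TripleO::Services::Chronyd: ../services/time/chrony-ansible.yaml

If we later move the template, only the environment needs updating; the
OS::TripleO::Services::* name the user consumes stays stable.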

One really painful thing to do is moving the environments files
themselves. We've already done this in Rocky
(environments/services-docker is now called environments/services!), so
most of that pain is already absorbed. In hindsight I think we should
be more careful about how we add to and refactor the environments
directory in the future. The rest of the tree is more internal to
TripleO development, however, and I think we can refactor it more
freely.

>
> Finally, we could consider symlinking docker/services to services for a
> cycle. I'm not sure how a swift-stored plan would handle this, but this
> would be a great reason to land Ian's plan speedup patch [1], which
> stores tripleo-heat-templates in a tarball :)

Aside from the performance benefits of this patch there are lots of
hidden features there. Symlinking is one of them, and it should work
fine, as heatclient would send the files to Heat via the local
filesystem instead of relying on Swift. This does increase the load on
heat-api a bit, but we should be able to adjust our heat-api configs if
needed to account for the hit.
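
For example (illustrative values only, not a recommendation), the usual
knobs in heat.conf for larger template payloads arriving via the API:

  [DEFAULT]
  max_template_size = 5242880
  max_json_body_size = 8388608

  [heat_api]
  workers = 4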

As an aside, Swift as a storage backend for the templates was fine back
in the days before we had hundreds of .j2 templates. Now that things
are being rendered from templates all over the place, the Swift
solution and all the associated gets/updates to and from each Swift
object are a really inefficient way of dealing with large sets of
templates that need rendering. Local filesystem operations will beat
what we are doing now every time, and should provide some scalability
as we continue to adapt how we utilize t-h-t in the future.

Dan

>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2018-August/132768.html
>
> > Thanks,
> > -Alex
> >
> > [0] https://review.openstack.org/#/c/586679/
> > [1] https://review.openstack.org/#/c/588111/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


