[openstack-dev] [TripleO] easily identifying how services are configured

Alex Schultz aschultz at redhat.com
Fri Aug 3 14:50:29 UTC 2018


On Thu, Aug 2, 2018 at 11:32 PM, Cédric Jeanneret <cjeanner at redhat.com> wrote:
>
>
> On 08/02/2018 11:41 PM, Steve Baker wrote:
>>
>>
>> On 02/08/18 13:03, Alex Schultz wrote:
>>> On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya <bdobreli at redhat.com>
>>> wrote:
>>>> On 7/6/18 7:02 PM, Ben Nemec wrote:
>>>>>
>>>>>
>>>>> On 07/05/2018 01:23 PM, Dan Prince wrote:
>>>>>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:
>>>>>>>
>>>>>>> I would almost rather see us organize the directories by service
>>>>>>> name/project instead of implementation.
>>>>>>>
>>>>>>> Instead of:
>>>>>>>
>>>>>>> puppet/services/nova-api.yaml
>>>>>>> puppet/services/nova-conductor.yaml
>>>>>>> docker/services/nova-api.yaml
>>>>>>> docker/services/nova-conductor.yaml
>>>>>>>
>>>>>>> We'd have:
>>>>>>>
>>>>>>> services/nova/nova-api-puppet.yaml
>>>>>>> services/nova/nova-conductor-puppet.yaml
>>>>>>> services/nova/nova-api-docker.yaml
>>>>>>> services/nova/nova-conductor-docker.yaml
>>>>>>>
>>>>>>> (or perhaps even another level of directories to indicate
>>>>>>> puppet/docker/ansible?)
>>>>>>
>>>>>> I'd be open to this, but changes on this scale have a much larger
>>>>>> developer and user impact than what I was thinking we would be
>>>>>> willing to entertain for the issue that caused me to bring this up
>>>>>> (i.e. how to identify services which get configured by Ansible).
>>>>>>
>>>>>> It's also worth noting that many projects keep these sorts of things
>>>>>> in different repos too. Kolla, for example, fully separates
>>>>>> kolla-ansible and kolla-kubernetes as they are quite divergent. We
>>>>>> have been able to preserve some of our common service architectures,
>>>>>> but as things move towards kubernetes we may wish to change things
>>>>>> structurally a bit too.
>>>>>
>>>>> True, but the current directory layout dates from back when we
>>>>> intended to support multiple deployment tools in parallel (originally
>>>>> tripleo-image-elements and puppet).  Since I think it has become
>>>>> clear that it's impractical to maintain two different technologies to
>>>>> do essentially the same thing, I'm not sure there's a need for it
>>>>> now.  It's also worth noting that kolla-kubernetes basically died
>>>>> because there weren't enough people to maintain both deployment
>>>>> methods, so we're not the only ones who have found that to be true.
>>>>> If/when we move to kubernetes I would anticipate it going like the
>>>>> initial containers work did - development for a couple of cycles,
>>>>> then a switch to the new thing and deprecation of the old thing, then
>>>>> removal of support for the old thing.
>>>>>
>>>>> That being said, because the service yamls are essentially an API
>>>>> for TripleO, referenced directly in user
>>>>
>>>> this ^^
>>>>
>>>>> resource registries, I'm not sure it's worth the churn to move
>>>>> everything either.  I think that's going to be an issue either way
>>>>> though; it's just a question of scope.  _Something_ is going to move
>>>>> around no matter how we reorganize, so it's a problem that needs to
>>>>> be addressed anyway.
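>>>>>
>>>>> For example, a custom environment file typically pins a service
>>>>> implementation by template path, along these lines:
>>>>>
>>>>>   resource_registry:
>>>>>     OS::TripleO::Services::NovaApi: /usr/share/openstack-tripleo-heat-templates/docker/services/nova-api.yaml
>>>>>
>>>>> so any rename breaks overrides like that until the user updates them.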
>>>>
>>>> [tl;dr] I can foresee that reorganizing that API would become a
>>>> nightmare for maintainers doing backports for Queens (and the LTS
>>>> downstream release based on it). Now imagine kubernetes support
>>>> arriving within the next few years, before we can let the old API
>>>> just go...
>>>>
>>>> I have an example [0] to share of all the pain brought by a simple
>>>> move of 'API defaults' from environments/services-docker to
>>>> environments/services plus environments/services-baremetal. Each time
>>>> a file's contents changed at its old location, like here [1], I had
>>>> to run a lot of sanity checks to rebase it properly: checking that
>>>> the updated paths in resource registries were still valid or had been
>>>> moved as well, then picking the source of truth for the diverged old
>>>> vs. new locations - all that to lose nothing important in the process.
>>>>
>>>> So I'd say: please let's *not* change services' paths/namespaces in
>>>> the t-h-t "API" without a real need, i.e. unless there is no
>>>> alternative left.
>>>>
>>> Ok, so it's time to dig this thread back up. I'm currently looking at
>>> the chrony support, which will require a new service[0][1]. Rather
>>> than add it under puppet, we'll likely want to leverage ansible. So I
>>> guess the question is: where do we put services going forward?
>>> Additionally, as we look to truly remove the baremetal deployment
>>> options and puppet service deployment, it seems like we need to
>>> consolidate under a single structure.  Given that we don't want to
>>> force too much churn, does this mean we should align with the
>>> docker/services/*.yaml structure, or should we propose a new structure
>>> that we can try to align on?
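>>>
>>> As a very rough sketch (the standard service parameters and wiring are
>>> omitted, and the details are purely illustrative), an ansible-driven
>>> service template could expose its tasks through the usual role_data
>>> output, something like:
>>>
>>>   heat_template_version: queens
>>>   description: Chrony service configured via Ansible (sketch only)
>>>   outputs:
>>>     role_data:
>>>       description: Role data for the chrony service
>>>       value:
>>>         service_name: chrony
>>>         host_prep_tasks:
>>>           - name: Install chrony on the host
>>>             package:
>>>               name: chrony
>>>               state: present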
>>>
>>> There is outstanding tech debt around the nested stacks and references
>>> within these services from when we added the container deployments, so
>>> it would be beneficial to start tackling it sooner rather than later.
>>> Personally I think we're always going to have issues when we rename
>>> files that custom templates may have referenced, but I don't think we
>>> can continue to carry the outstanding tech debt around these static
>>> locations.  Should we invest in coming up with some sort of mapping
>>> that we can use to warn users when we move files?
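>>>
>>> Even something as simple as a mapping file (the name and format below
>>> are hypothetical) that plan creation could consult to warn about stale
>>> references might go a long way:
>>>
>>>   # deprecated-paths.yaml (hypothetical)
>>>   deprecated_paths:
>>>     puppet/services/nova-api.yaml: services/nova-api.yaml
>>>     docker/services/nova-api.yaml: services/nova-api.yaml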
>>
>> When Stein development starts, the puppet services will have been
>> deprecated for an entire cycle. Can I suggest we use this reorganization
>> as the time we delete the puppet services files? This would release us
>> from the burden of maintaining a deployment method that we no longer
>> use. We'd also gain a deployment speedup by removing a nested stack for
>> each docker-based service.
>>
>> Then I'd suggest doing an "mv docker/services services" and moving any
>> remaining files in the puppet directory into that. This is basically the
>> naming that James suggested, except we wouldn't have to suffix the files
>> with -puppet.yaml, -docker.yaml unless we still had more than one
>> deployment method for that service.
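>>
>> For users, the churn would then be a one-time path change in their
>> resource_registry overrides, roughly:
>>
>>   resource_registry:
>>     # before: OS::TripleO::Services::NovaApi: ../docker/services/nova-api.yaml
>>     OS::TripleO::Services::NovaApi: ../services/nova-api.yaml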
>
> We must be cautious, as a tree change might prevent backporting things
> when we need them in older releases. That was also discussed earlier in
> the thread regarding reorganization - although I'm all for simplifying
> the repository, it might become tricky in some cases :/.
>

Yes, there will be pain with backporting, but that shouldn't stop us
from addressing this long-standing tech debt.  Right now we have so
many performance-related problems due to the structure that it's
getting to the point where we have to address it.  Over the course of
the last 3-4 cycles, our deployment jobs have started to hit the 3-hour
mark where they used to be 2 hours.  As I mentioned earlier, some of
this is related to the nested stacks from this structure.  I'm less
concerned about the backports and more concerned about the user impact
on upgrades as we move files around. It's hard to know what users were
actually referencing, and we keep bumping into the same problem
whenever we move files around in THT.  If we completed the reorg in one
cycle, backports to Rocky would be harder, but any backports for <
Rocky would just use the Rocky version.  As folks backport changes,
they would only have to figure out this transition once. We already had
several of these backport-related complexities with puppet/docker
(Ocata/Pike) and the structure changes between Mitaka/Newton, so I'm
not completely sure it's that big of an issue.

>>
>> Finally, we could consider symlinking docker/services to services for a
>> cycle. I'm not sure how a swift-stored plan would handle this, but it
>> would be a great reason to land Ian's plan speedup patch[1], which
>> stores tripleo-heat-templates in a tarball :)
>
> Might be worth a try. It might also allow us to backport things, as the
> "original files" would stay in the "old" location, making the new tree
> compatible with older releases like Newton (hey, yes, LTS for Red Hat).
> I think the templates are aggregated and generated prior to the upload,
> meaning it should not create new issues. Hopefully.
>
> Maybe shardy can jump in and provide some more info?
>
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2018-August/132768.html
>>
>>> Thanks,
>>> -Alex
>>>
>>> [0] https://review.openstack.org/#/c/586679/
>>> [1] https://review.openstack.org/#/c/588111/
>>
>
> --
> Cédric Jeanneret
> Software Engineer
> DFG:DF
>
>
>
