[openstack-dev] [TripleO] Aodh upgrades - Request backport exception for stable/liberty

Jiří Stránský jistr at redhat.com
Wed May 18 10:13:07 UTC 2016


On 16.5.2016 23:54, Pradeep Kilambi wrote:
> On Mon, May 16, 2016 at 3:33 PM, James Slagle <james.slagle at gmail.com>
> wrote:
>
>> On Mon, May 16, 2016 at 10:34 AM, Pradeep Kilambi <prad at redhat.com> wrote:
>>> Hi Everyone:
>>>
>>> I wanted to start a discussion around considering backporting Aodh to
>>> stable/liberty to support upgrades. We have been discussing quite a
>>> bit about the best way for our users to upgrade ceilometer alarms to
>>> Aodh when moving from Liberty to Mitaka. A quick refresher on what
>>> changed: in Mitaka, ceilometer alarms were replaced by Aodh, so the
>>> only way to get alarm functionality is to use Aodh. Now when the user
>>> kicks off an upgrade from Liberty to Mitaka, we want to make sure
>>> alarms continue to function as expected during the process, which
>>> could take multiple days. To accomplish this I propose the following
>>> approach:
>>>
>>> * Backport Aodh functionality to stable/liberty. Note that Aodh is
>>> backwards compatible, so with Aodh running, the ceilometer API and
>>> client will redirect requests to the Aodh API. This should not impact
>>> existing users who are using the ceilometer API or client.
>>>
>>> * As part of Aodh being deployed via heat stack update, the
>>> ceilometer alarm services will be replaced by openstack-aodh-*. This
>>> will be done by the puppet apply as part of the stack convergence
>>> phase.
>>>
>>> * Add checks to the Mitaka pre-upgrade steps, when the overcloud
>>> install kicks off, to warn the user to update to Liberty + Aodh.
>>> This will ensure the heat stack update has been run and, if alarming
>>> is used, Aodh is running as expected.
>>>
>>> The upgrade scenarios between various releases would work as follows:
>>>
>>> Liberty -> Mitaka
>>>
>>> * Upgrade starts with ceilometer alarms running
>>> * A pre-flight check kicks in to make sure Liberty has been upgraded
>>> to Liberty + Aodh via stack update
>>> * Run heat stack update to upgrade to Aodh
>>> * Now the ceilometer alarm services should be removed and Aodh running
>>> * Proceed with the Mitaka upgrade
>>> * End result: Aodh continues to run as expected
>>>
>>> Liberty + Aodh -> Mitaka:
>>>
>>> * Upgrade starts with Aodh running
>>> * A pre-flight check kicks in to make sure Liberty has been upgraded
>>> to Aodh via stack update
>>> * After confirming Aodh is indeed running, proceed with the Mitaka
>>> upgrade with Aodh running
>>> * End result: Aodh continues to run as expected
>>>
>>>
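For illustration, the pre-flight check used in both scenarios could be
sketched along these lines (a hypothetical helper, not existing TripleO
code; the listing argument would come from e.g. `systemctl list-units`
on a controller, and the exact service names are assumptions):

```shell
# Hypothetical pre-flight check sketch: decide from a captured service
# listing whether the overcloud already runs Aodh or still runs the
# old ceilometer alarm services.
check_alarm_backend() {
  local listing=$1
  if grep -q 'openstack-aodh-evaluator' <<<"$listing"; then
    echo "aodh"
  elif grep -q 'openstack-ceilometer-alarm-evaluator' <<<"$listing"; then
    echo "ceilometer-alarms"
  else
    echo "none"
  fi
}

check_alarm_backend "openstack-aodh-evaluator.service  running"
# prints: aodh
```

If the result is "ceilometer-alarms", the upgrade tooling would warn
the user to run the Liberty + Aodh stack update first.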
>>> This seems to be a good way to get upgrades working for Aodh. Other,
>>> less effective options I can think of are:
>>>
>>> 1. Let the Mitaka upgrade kick off and do a "yum update", which
>>> replaces ceilometer alarms with Aodh during the migration; alarm
>>> functionality will be down until puppet converge runs and configures
>>> Aodh. This means alarms will be down during the upgrade, which is
>>> not ideal.
>>>
>>> 2. During the Mitaka upgrade, replace with Aodh and add a bash
>>> script that fully configures Aodh and ensures it is functioning.
>>> This will involve significant work and duplicates everything puppet
>>> does today.
>>
>> How much duplication would this really be? Why would it have to be in bash?
>>
>
> Well, pretty much the entire Aodh configuration will need to happen.
> Here is what we do in devstack, something along these lines [1]. In
> short, we'll need to install packages, create users, configure the db
> and coordination backends, and configure the API to run under
> mod_wsgi. Sure, it doesn't have to be bash; I assumed that would be
> easiest to invoke during upgrades.
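To give a feel for the surface area involved, the result of those steps
is roughly an aodh.conf like the following (an illustrative fragment
only; hostnames and passwords are placeholders, and the exact options
depend on the chosen database and coordination backends):

```ini
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[coordination]
# e.g. a Redis backend for coordinating multiple evaluators
backend_url = redis://controller:6379

[keystone_authtoken]
auth_uri = http://controller:5000
identity_uri = http://controller:35357
admin_user = aodh
admin_tenant_name = services
admin_password = AODH_PASS
```

On top of that come the keystone user/endpoint creation and the
mod_wsgi vhost, which is the duplication being discussed.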
>
>
>
>>
>> Could it be:
>>
>> Liberty -> Mitaka
>>
>> * Upgrade starts with ceilometer alarms running
>> * Add a new hook for the first step of Mitaka upgrade that does:
>>   ** sets up mitaka repos
>>   ** migrates from ceilometer alarms to aodh, can use puppet
>>   ** ensures aodh is running
>> * Proceed with rest of mitaka upgrade
>>
>> At most, it seems we'd have to surround the puppet apply with some
>> pacemaker commands to possibly set maintenance mode and migrate
>> constraints.
>>
>> The puppet manifest itself would just be the includes and classes for aodh.
>>
>
>
> Yeah, I guess we could do something like this. I'm not fully clear on
> the details of how and when this would be called. But the caveat you
> mentioned below still applies.
>

Yes, this is a possibility. It still doesn't fully utilize the Puppet 
we have for deployment; we'd need at least a custom manifest, but 
hopefully it wouldn't be too big.

If the Aodh classes include other Puppet classes in their code, we 
could end up applying more config changes than desired in this phase 
and breaking something. I'm hoping this is more of a theoretical 
concern than a practical one, but it probably deserves verification.
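For illustration, the custom manifest could stay close to the
deploy-time includes, something like the sketch below (class names as
they appear in the puppet-aodh module; all parameters are assumed to
come from hieradata, and whether these classes pull in further classes
is exactly the concern above):

```puppet
# Minimal upgrade-only manifest sketch, reusing the same puppet-aodh
# classes we would use at stack-create time.
include ::aodh
include ::aodh::api
include ::aodh::wsgi::apache
include ::aodh::auth
include ::aodh::evaluator
include ::aodh::notifier
include ::aodh::listener
```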

>
>
>>
>> One complication might be that the aodh packages from Mitaka might
>> pull in new deps that require updating other OpenStack services,
>> which we wouldn't yet want to do. That is probably worth confirming,
>> though.
>>
>
> Yeah, we will be pulling in at least some new oslo deps and client
> libraries for sure. But wouldn't yum update during the upgrade do
> that anyway? Or would the Aodh setup run before the yum update phase
> of the upgrade process?

Good question :) We could probably also do it in the middle of the 
controller update phase, between step 1 (stop services + package 
update) and step 2 (start services) [1]. If we don't make the Puppet 
code actually start the services (we'll let step 2 do that), I think 
it should be fine. We already upgrade block storage in a similar way, 
as this is an action that also needs to happen after controller 
services are stopped and before they're started (also visible at [1]).
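Suppressing the service start in the manifest could look like this
(a sketch; `manage_service` and `enabled` are the usual parameters in
the OpenStack Puppet modules, assumed here to behave the same way in
puppet-aodh):

```puppet
# Configure the Aodh evaluator during step 1 of the controller update,
# but leave starting the service to the existing step 2.
class { '::aodh::evaluator':
  manage_service => false,
  enabled        => false,
}
```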

------

Beyond Aodh, considering general (future) cases, which may also 
require changes on nodes other than just controllers:

Our best opportunity to make config changes is probably when we first 
apply the X+1 release t-h-t and puppet on the deployment. That's the 
last step of the upgrade, aka the converge step. The pros of this 
approach are:

* In many cases we probably wouldn't have to craft specific scripting or 
Puppet to do the upgrade. We could utilize the same Puppet as when doing 
stack-create. (Perhaps a small script before running puppet would be 
required to un-deploy/stop old stuff, but that should be a simpler 
problem than deploying and configuring new stuff. Also, this is 
something we have to do regardless of the approach we pick.)

* We run the Puppet cloud-wide at the same time, converging the config 
on all node types, not just on controllers.

* Packages are release X+1 at this point, and we can run the *full* 
Puppet manifests. We don't have to worry about accidentally pulling in 
more changes that might not work on release X.
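The small pre-Puppet cleanup script mentioned in the first point could
look roughly like this (a sketch only; the service names are the
Liberty ceilometer alarm units, and the runner is parameterized so the
sketch can be dry-run -- if pacemaker manages these services, pcs
resource commands would be needed instead):

```shell
# Hypothetical pre-converge cleanup: stop and disable the retired
# ceilometer alarm services before the full X+1 Puppet run takes over.
cleanup_ceilometer_alarms() {
  local run=$1            # pass "echo" for a dry run, "" for real
  local svc
  for svc in openstack-ceilometer-alarm-evaluator \
             openstack-ceilometer-alarm-notifier; do
    $run systemctl stop "$svc" || true
    $run systemctl disable "$svc" || true
  done
}

cleanup_ceilometer_alarms echo   # dry run: prints the commands
```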

The downside of this approach is that we require the X+1 packages to 
work with X configs/services until we get to the converge phase, where 
we'd bump the whole cloud to new configs/services. And this is the 
reason why we can't do it with Aodh: we've missed the deprecation 
period. This would have worked if done during the K->L upgrade, but 
won't work during L->M, as I've been told ceilometer-alarm isn't 
present in the Mitaka release at all.


Jirka


[1] 
https://github.com/openstack/tripleo-heat-templates/blob/d6574fa32a0954aea02684984d9a93c444ec34f0/extraconfig/tasks/major_upgrade_pacemaker.yaml

>
> ~ Prad
>
>
> [1] https://github.com/openstack/aodh/blob/master/devstack/plugin.sh
>
>
>
>>
>> --
>> -- James Slagle
>> --
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>



