[openstack-dev] [Heat] Instance naming in IG/ASG and problems related to UpdatePolicy
zbitter at redhat.com
Fri Aug 30 16:00:54 UTC 2013
On 30/08/13 16:23, Chan, Winson C wrote:
> Regarding the last set of comments on the UpdatePolicy, I want to bring your attention to a few items. I already submitted a new patch set and didn't want to reply on the old patch set, so that's why I'm emailing.
> As you are aware, IG/ASG currently create instances by appending the group name and a number. On resize, they identify the newest instances to remove by sorting on the name string and removing from the end of the list.
> Based on your comments, in the new patch set I changed the naming of the instances to just a number, without prefixing the group name (or self.name). I also removed the name-ranges logic. But we still have the following problems…
> 1. On a decrease in size where the oldest instances should be removed… Since the naming is still number-based, this means we'll have to remove instances starting from 0 (since 0 is the oldest). This leaves a gap at the beginning of the list. So on the next resize to increase, where do the new instances go? Continue the numbering from the end?
Yes, although I don't see it as a requirement for this patch that we
remove the oldest first. (In fact, it would probably be better to
minimise the changes in this patch, and switch from killing the newest
first to the oldest first in some future patch.) I mentioned it because
it's a likely future requirement, and therefore worth considering at
this stage.
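To make the gap concrete, here's a tiny plain-Python sketch (not Heat code) of oldest-first removal when the instance names are bare numbers:

```python
# Sketch: with numeric names, '0' is the oldest instance, so removing
# oldest-first always eats from the start of the list.
def instances_to_remove(names, count):
    """Pick the oldest instances, assuming lower numbers were created first."""
    return sorted(names, key=int)[:count]

names = ['0', '1', '2', '3']
removed = instances_to_remove(names, 2)            # ['0', '1']
remaining = [n for n in names if n not in removed]
# remaining == ['2', '3']: the numbering now has a gap at the start,
# so the next increase must either fill the gap or continue from the end.
```
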
> 2. On replace, I let UpdateReplace handle the batch replacement. However, for the use case where we need to support MinInstancesInService (min instances in service = 2, batch size = 2, current size = 2), this means we need to create the new instances first before deleting the old ones, instead of letting the instance update handle it. Also, with the naming restriction, this means I will have to create the 2 new replacements as '2' and '3'. After I delete the original '0' and '1', there's a gap in the numbering of the instances… Then this leads to the same question as above. What happens on a resize after that?
Urg, good point. I hadn't thought about that case :(
The new update algorithm will ensure that there are always two instances
in service, because it won't delete an old instance until its
replacement has been created. The problem here is what we discussed the other day -
we would need to update the Load Balancer at various points in the
middle of the stack update.
As we discussed then, one way to do this would be to override the
Instance class (as we used to do) and insert the LB update by
overriding some convenient trigger point. At
the end of Resource.create() would cover every case except where we're
swapping in an old resource during a rollback.
I *think* that doing it at the beginning of Resource.delete() as well
would ensure that one gets covered.
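A minimal sketch of that trigger-point idea (stub classes, not Heat's real Resource API; `_lb_members` is a hypothetical stand-in for the load balancer's member list):

```python
# Stand-in for Heat's Instance resource, heavily simplified.
class Instance:
    def __init__(self, name):
        self.name = name
        self.active = False

    def create(self):
        self.active = True

    def delete(self):
        self.active = False


class LoadBalancedInstance(Instance):
    """Override the trigger points to keep a (hypothetical) LB in sync."""

    def __init__(self, name, lb_members):
        super().__init__(name)
        self._lb_members = lb_members

    def create(self):
        super().create()
        # End of create(): the instance is up, so register it with the LB.
        self._lb_members.add(self.name)

    def delete(self):
        # Start of delete(): deregister before teardown, which also covers
        # the rollback case where an old resource is swapped back in.
        self._lb_members.discard(self.name)
        super().delete()
```
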
Alternatively, we could pass a callback to Stack.update() and let the
update algorithm notify us when something relevant is happening.
Unfortunately, both of these options involve fairly tight coupling
between the autoscaling group and the update algorithm, to the point it
would preclude us moving the autoscaling implementation to a separate
service and having it interact with the instances template only through
the ReST API. So unless anybody else has a bright idea, this is probably
something we'll have to live with.
> The ideal I think is to just use some random short id for the name of the instances and then store a creation timestamp somewhere with the resource and use the timestamp to determine the age of the instances for removal. Thoughts?
I really like this idea. We already store the creation time in each
resource, which should be pretty much exactly what you want to sort by here.
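Sorting on that stored creation time would look something like this (resources modelled here as plain dicts, not Heat's actual resource objects):

```python
# Sketch: choose removal candidates by the creation time Heat already
# stores, rather than by parsing an ordering out of the name.
from datetime import datetime

def oldest_first(resources, count):
    """Return the `count` resources with the earliest created_time."""
    return sorted(resources, key=lambda r: r['created_time'])[:count]

group = [
    {'name': 'x7f2', 'created_time': datetime(2013, 8, 30, 12, 0)},
    {'name': 'a91c', 'created_time': datetime(2013, 8, 30, 11, 0)},
    {'name': 'q3d8', 'created_time': datetime(2013, 8, 30, 13, 0)},
]
victims = oldest_first(group, 1)  # the 11:00 instance, 'a91c'
```
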
Making the (logical) resource names unique through randomness alone will
require them to be quite long though. It would be better to use the
lowest currently-unused integers... which is probably pretty close to
what you're already doing (the part I said seemed unnecessary).
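By "lowest currently-unused integers" I mean something like this sketch: names stay short, and gaps left by earlier removals get filled first.

```python
# Sketch: allocate the lowest integer names not currently in use.
def next_names(existing, count):
    """Return `count` new names, reusing the lowest free integers."""
    used = {int(n) for n in existing}
    names, i = [], 0
    while len(names) < count:
        if i not in used:
            names.append(str(i))
        i += 1
    return names

next_names(['2', '3'], 2)  # fills the gap: ['0', '1']
next_names(['0', '1'], 1)  # continues from the end: ['2']
```
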
So, in conclusion, uh... ignore me and just do the simplest thing you
think will work? ;)