[openstack-dev] Re: [heat] autoscaling across regions and availability zones

Mike Spreitzer mspreitz at us.ibm.com
Wed Jul 9 19:19:27 UTC 2014


Huangtianhua <huangtianhua at huawei.com> wrote on 07/04/2014 02:35:56 AM:

> I have registered a blueprint (bp) about this:
> https://blueprints.launchpad.net/heat/+spec/implement-autoscalinggroup-availabilityzones
>
> And I have been thinking recently about how to implement this.
>
> According to the AWS autoscaling implementation, it “attempts to
> distribute instances evenly between the Availability Zones that are
> enabled for your Auto Scaling group.
>
> Auto Scaling does this by attempting to launch new
> instances in the Availability Zone with the fewest instances. If the
> attempt fails, however, Auto Scaling will attempt to launch in other
> zones until it succeeds.”
>
> But I have a doubt about the “fewest instances” rule, e.g.:
>
> There are two AZs,
>    AZ1: has two instances
>    AZ2: has three instances
>
> And then to create an ASG with 4 instances, I think we should create
> two instances each in AZ1 and AZ2, right? Now if we need to extend the
> ASG to 5 instances, in which AZ should the new instance be launched?
>
> If you are interested in this bp, I think we can discuss this :)

The way AWS handles this is described in 
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html#arch-AutoScalingMultiAZ

That document leaves a lot of freedom to the cloud provider.  And 
rightfully so, IMO.  To answer your specific example, when spreading 5 
instances across 2 zones, the cloud provider gets to pick which zone gets 
3 and which zone gets 2.  As for what a Heat scaling group should do, that 
depends on what Nova can do for Heat.  I have been told that Nova's 
instance-creation operation takes an optional parameter that identifies 
one AZ and, if that parameter is not provided, then a configured default 
AZ is used.  Effectively, the client has to make the choice.  I would 
start out with Heat making a random choice; in subsequent development it 
might query or monitor Nova for some statistics to guide the choice.
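
For concreteness, here is a minimal sketch (Python) of the starting point 
described above: pick a zone at random and, if the launch fails, fall back 
to the remaining zones, in the spirit of the AWS behaviour quoted earlier. 
The launch_in_az callable is a hypothetical placeholder for whatever 
Nova-backed create call Heat would use; none of these names are real Heat 
or Nova APIs.

    import random

    def launch_one_member(availability_zones, launch_in_az):
        """Pick an AZ at random; if the launch fails, fall back to the others."""
        zones = list(availability_zones)
        random.shuffle(zones)            # random initial choice, as described above
        for az in zones:
            try:
                return launch_in_az(az)  # placeholder for the Nova-backed create call
            except Exception:
                continue                 # that zone refused the instance; try the next
        raise RuntimeError("no availability zone could accept the new instance")

A later refinement could replace the random shuffle with an ordering 
derived from whatever statistics Heat queries or monitors from Nova.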

An even more interesting issue is the question of choosing which member(s) 
to remove when scaling down.  The approach taken by AWS is documented at 
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/us-termination-policy.html
but the design there has some redundant complexity and the doc is not well 
written.  What follows is a short, sharp presentation of an isomorphic system.
A client that owns an ASG configures that ASG to have a series (possibly 
empty) of instance termination policies; the client can change the series 
during the ASG's lifetime.  Each policy is drawn from the following menu:
OldestLaunchConfiguration
ClosestToNextInstanceHour
OldestInstance
NewestInstance
(see the AWS doc for the exact meaning of each).  The signature of a 
policy is this: given a set of candidate instances for removal, return a 
subset (possibly the whole input set).
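To make that signature concrete, here is an illustrative Python sketch of 
two of the policies viewed as filters over a candidate set.  The Member 
record and its launched_at field are hypothetical stand-ins, not Heat's or 
AWS's actual data model.

    from collections import namedtuple

    # Hypothetical member record; Heat's real data model will differ.
    Member = namedtuple('Member', ['id', 'launched_at'])

    def oldest_instance(candidates):
        """OldestInstance as a filter: keep the earliest-launched members."""
        earliest = min(m.launched_at for m in candidates)
        return {m for m in candidates if m.launched_at == earliest}

    def newest_instance(candidates):
        """NewestInstance as a filter: keep the latest-launched members."""
        latest = max(m.launched_at for m in candidates)
        return {m for m in candidates if m.launched_at == latest}

Each of these returns its whole input when all candidates tie on launch 
time, and in particular returns a size-1 input unchanged, which the 
procedure below relies on.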
When it is time to remove instances from an ASG, they are chosen one by 
one.  AWS uses the following procedure to choose one instance to remove.
1. Choose the AZ from which the instance will be removed.  The choice is 
based primarily on balancing the number of group members in each AZ, and 
ties are broken randomly.
2. Starting with a "candidate set" consisting of all the ASG's members in 
the chosen AZ, run the configured series of policies to progressively 
narrow down the set of candidates.
3. Use OldestLaunchConfiguration and then ClosestToNextInstanceHour to 
further narrow the set of candidates.
4. Make a random choice among the final set of candidates.
Since each policy returns its input unchanged when that input has size 1, 
we do not need to talk about early exits when defining the procedure 
(although an implementation might make such optimizations).
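For clarity, that per-instance selection step could be sketched as follows 
(illustrative Python only; members_by_az, the policy callables, and the 
default-policy pairing are placeholders, not Heat's or AWS's actual 
interfaces, and step 1 is read here as removing from a most-populated 
zone to keep the zones balanced).

    import random

    def select_member_to_remove(members_by_az, configured_policies,
                                default_policies):
        """Choose one group member to terminate, following the procedure above."""
        # 1. Choose the AZ: remove from a zone with the most members so the
        #    zones stay balanced; ties are broken randomly.
        most = max(len(members) for members in members_by_az.values())
        az = random.choice([az for az, members in members_by_az.items()
                            if len(members) == most])
        candidates = set(members_by_az[az])

        # 2.-3. Run the client-configured policies, then the default pair
        #       (OldestLaunchConfiguration, ClosestToNextInstanceHour),
        #       each one narrowing the candidate set.
        for policy in list(configured_policies) + list(default_policies):
            candidates = policy(candidates)

        # 4. Make a random choice among whatever candidates remain.
        return random.choice(list(candidates))

Repeating select_member_to_remove once per instance to be removed gives 
the one-by-one behaviour described above.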
I plan to draft a spec.
Thanks,
Mike

