[openstack-dev] [heat] autoscaling across regions and availability zones
Zane Bitter
zbitter at redhat.com
Tue Jul 1 22:54:58 UTC 2014
On 01/07/14 16:23, Mike Spreitzer wrote:
> An AWS autoscaling group can span multiple availability zones in one
> region. What is the thinking about how to get analogous functionality
> in OpenStack?
Correct, you specify a list of availability zones (instead of just one),
and AWS distributes servers across them in some sort of round-robin
fashion. We should implement this.
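For concreteness, the round-robin spread could be as simple as the
following sketch (plain Python; the zone names are invented for
illustration):

    from collections import Counter
    from itertools import cycle

    def spread(num_members, azs):
        """Assign an AZ to each member index by cycling through the list."""
        az_cycle = cycle(azs)
        return [next(az_cycle) for _ in range(num_members)]

    assignment = spread(5, ['az-1', 'az-2', 'az-3'])
    print(assignment)           # ['az-1', 'az-2', 'az-3', 'az-1', 'az-2']
    print(Counter(assignment))  # 5 members over 3 zones -> 2/2/1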
> Warmup question: what is the thinking about how to get the levels of
> isolation seen between AWS regions when using OpenStack? What is the
> thinking about how to get the level of isolation seen between AWS AZs in
> the same AWS Region when using OpenStack? Do we use OpenStack Region
> and AZ, respectively? Do we believe that OpenStack AZs can really be as
> independent as we want them to be (note that this is phrased so as not
> to assume we only want as much isolation as AWS provides --- they have
> had high-profile outages due to lack of isolation between AZs in a region)?
That seems like a question for individual operators, rather than for
OpenStack. OpenStack allows you, as an operator, to create AZs and
Regions... how good a job you do is up to you.
> I am going to assume that the answer to the question about ASG spanning
> involves spanning OpenStack regions and/or AZs. In the case of spanning
> AZs, Heat has already got one critical piece: the
> OS::Heat::InstanceGroup and AWS::AutoScaling::AutoScalingGroup types of
> resources take a list of AZs as an optional parameter.
That's technically true, but we don't read the list :(
> Presumably all
> four kinds of scaling group (i.e., also OS::Heat::AutoScalingGroup and
> OS::Heat::ResourceGroup) should have such a parameter. We would need to
> change the code that generates the template for the nested stack that is
> the group, so that it spreads the members across the AZs in a way that
> is as balanced as is possible at the time.
+1
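To make that concrete, here is a rough sketch of how the generated
nested-stack template could spread the members (illustrative Python, not
the actual InstanceGroup code; the member resource names and the
assumption that the scaled type takes an availability_zone property are
mine):

    def build_nested_template(member_type, member_props, member_azs):
        """Build the group's nested stack: one resource definition per
        member, each pinned to the AZ chosen for it by the balancing
        logic (e.g. round-robin over the group's AZ list)."""
        resources = {}
        for i, az in enumerate(member_azs):
            props = dict(member_props)
            # Note that this silently overrides any AZ the user put in
            # the launch config, per the oddity mentioned below.
            props['availability_zone'] = az
            resources['member-%d' % i] = {'type': member_type,
                                          'properties': props}
        return {'heat_template_version': '2013-05-23',
                'resources': resources}

    # e.g. build_nested_template('OS::Nova::Server',
    #                            {'image': 'fedora-20', 'flavor': 'm1.small'},
    #                            ['az-1', 'az-2', 'az-1'])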
> Currently, a stack does not have an AZ. That makes the case of an
> OS::Heat::AutoScalingGroup whose members are nested stacks interesting
> --- how does one of those nested stacks get into the right AZ? And what
> does that mean, anyway? The meaning would have to be left up to the
> template author. But he needs something he can write in his member
> template to reference the desired AZ for the member stack. I suppose we
> could stipulate that if the member template has a parameter named
> "availability_zone" and typed "string" then the scaling group takes care
> of providing the right value to that parameter.
The concept of an availability zone for a stack is not meaningful.
Servers have availability zones; stacks exist in one region. It is up to
the *operator*, not the user, to deploy Heat in such a way that it
remains highly available for as long as the Region itself is up.
So yes, the tricky part is how to handle that when the scaling unit is
not a server (or a provider template with the same interface as a server).
One solution would have been to require that the scaled unit was,
indeed, either an OS::Nova::Server or a provider template with the same
interface as (or a superset of) an OS::Nova::Server, but the consensus
was against that. (Another odd consequence of this decision is that
we'll potentially be overwriting an AZ specified in the "launch config"
section with one from the list supplied to the scaling group itself.)
For provider templates, we could insert a pseudo-parameter containing
the availability zone. I think that could be marginally better than
taking over one of the user's parameters, but you're basically on the
right track IMO.
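Concretely, the parameter-based approach Mike describes might look
something like this (a sketch, not real Heat code; with a
pseudo-parameter the engine would instead expose the value the way it
exposes OS::stack_name, and nothing would need to be passed explicitly):

    def member_parameter_values(member_template, user_params, az):
        """Work out the parameter values the group passes to one
        provider-template member so it knows which AZ it was assigned."""
        values = dict(user_params)
        declared = member_template.get('parameters', {})
        # Only inject the AZ if the member template declares a string
        # parameter with the agreed-upon name.
        if declared.get('availability_zone', {}).get('type') == 'string':
            values['availability_zone'] = az
        return values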
Unfortunately, that is not the end of the story, because we still have
to deal with other types of resources being scaled. I always advocated
for an autoscaling resource where the scaled unit was either a provider
stack (if you provided a template) or an OS::Nova::Server (if you
didn't), but the implementation that landed followed the design of
ResourceGroup by allowing (actually, requiring) you to specify an
arbitrary resource type.
We could do something fancy here involving tagging the properties schema
with metadata so that we could allow plugin authors to map the AZ list
to an arbitrary property. However, I propose that we just raise a
validation error if an AZ list is specified for a scaled resource that
is neither an OS::Nova::Server nor a provider template.
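In other words, something along these lines at validation time (a
sketch; the argument names and the generic exception are stand-ins for
whatever the real resource code would use):

    def validate_group_azs(scaled_type, azs, is_provider_template):
        """Reject an AZ list on a group whose scaled unit can't use it."""
        if not azs:
            return
        if scaled_type == 'OS::Nova::Server' or is_provider_template:
            return
        # The real code would raise Heat's StackValidationFailed here.
        raise ValueError(
            'availability_zones is only supported when the scaled unit '
            'is an OS::Nova::Server or a provider template, not %s'
            % scaled_type)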
> To spread across regions adds two things. First, all four kinds of
> scaling group would need the option to be given a list of regions
> instead of a list of AZs. More likely, a list of contexts as defined in
> https://review.openstack.org/#/c/53313/ --- that would make this handle
> multi-cloud as well as multi-region. The other thing this adds is a
> concern for context health. It is not enough to ask Ceilometer to
> monitor member health --- in multi-region or multi-cloud you also have
> to worry about the possibility that Ceilometer itself goes away. It
> would have to be the scaling group's responsibility to monitor for
> context health, and react properly to failure of a whole context.
I don't think having groups scale across region barriers makes any sense
either conceptually or in practical implementation terms. Services talk
to service endpoints in the same region - in fact, that is the
*definition* of a region.
We had a long debate about how we should handle cross-region stacks in
Heat (CloudFormation, BTW, doesn't allow them), summarised by this wiki
page:
https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat/The_Missing_Diagram
and the result was a consensus that we would allow OS::Heat::Stack
resources - and *only* those resources - to be created in a remote
region. (i.e. Option 2 on the wiki page.)
So the solution here for the user is to create an autoscaling group in
each region (they can re-use the same template), each of which talks to
its local Ceilometer.
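As a rough illustration of that pattern (written as Python dicts rather
than YAML for brevity; the wrapper resource and region names are made
up, and the exact properties for pointing OS::Heat::Stack at another
region may differ in detail from this sketch):

    # The same asg.yaml (containing the scaling group plus its Ceilometer
    # alarms) is reused verbatim in every region; only this thin wrapper,
    # using the remote OS::Heat::Stack resources described above, differs.
    wrapper_template = {
        'heat_template_version': '2013-05-23',
        'resources': {
            'asg_region_one': {
                'type': 'OS::Heat::Stack',
                'properties': {
                    'context': {'region_name': 'RegionOne'},
                    'template': {'get_file': 'asg.yaml'},
                },
            },
            'asg_region_two': {
                'type': 'OS::Heat::Stack',
                'properties': {
                    'context': {'region_name': 'RegionTwo'},
                    'template': {'get_file': 'asg.yaml'},
                },
            },
        },
    }
    # Each copy of the group is scaled by the Heat and Ceilometer local
    # to its own region.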
> Does this sound about right? If so, I could draft a spec.
Yes, but only the multi-AZ part, not the multi-Region part.
cheers,
Zane.