[openstack-dev] [heat] One more lifecycle plug point - in scaling groups

Zane Bitter zbitter at redhat.com
Wed Jul 2 21:38:36 UTC 2014


On 01/07/14 21:09, Mike Spreitzer wrote:
> Zane Bitter <zbitter at redhat.com> wrote on 07/01/2014 07:05:15 PM:
>
>  > On 01/07/14 16:30, Mike Spreitzer wrote:
>  > > Thinking about my favorite use case for lifecycle plug points for cloud
>  > > providers (i.e., giving something a chance to make a holistic placement
>  > > decision), it occurs to me that one more is needed: a scale-down plug
>  > > point.  A plugin for this point has a distinctive job: to decide which
>  > > group member(s) to remove from a scaling group (i.e.,
>  > > OS::Heat::AutoScalingGroup or OS::Heat::InstanceGroup or
>  > > OS::Heat::ResourceGroup or AWS::AutoScaling::AutoScalingGroup).  The
>  > > plugin's signature could be something like this: given a list of group
>  > > members and a number to remove, return the list of members to remove
>  > > (or, equivalently, return the list of members to keep).  What do
> you think?
>  >
>  > I think you're not thinking big enough ;)
>
> I agree, I was taking only a small bite in hopes of a quick success.
>
>  > There exist a whole class of applications that would benefit from
>  > autoscaling but which are not quite stateless. (For example, a PaaS.) So
>  > it's not enough to have plugins that place the choice of which node to
>  > scale down under operator control; in fact it needs to be under
>  > _application_ control.
>
> Exactly.  There are two different roles that want such control; in
> general, neither is happy if only the other gets it.  Now the question
> becomes, how do we get them to play nice together?  In the case of
> TripleO there may be an exceptionally easy out: the owner of an
> application deployed on the undercloud may well be the same as the
> provider of the undercloud (i.e., the operator whose end goal is to
> provide the overcloud(s)).
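For concreteness, the scale-down plug point Mike proposes above (given a list of group members and a number to remove, return the members to remove) might look something like this sketch. All names here (ScaleDownPolicy, select_victims, the member dict shape) are illustrative, not an existing Heat API:

```python
# Hypothetical sketch of a scale-down plug point, per the proposal above.
import abc


class ScaleDownPolicy(abc.ABC):
    """Decides which member(s) to remove when a scaling group shrinks."""

    @abc.abstractmethod
    def select_victims(self, members, count):
        """Given the group's members and how many to remove,
        return the IDs of the members to remove."""


class OldestFirst(ScaleDownPolicy):
    """Example default: remove the oldest members first
    (assumes each member carries a created_at timestamp)."""

    def select_victims(self, members, count):
        ordered = sorted(members, key=lambda m: m["created_at"])
        return [m["id"] for m in ordered[:count]]


members = [
    {"id": "srv-1", "created_at": 100},
    {"id": "srv-2", "created_at": 50},
    {"id": "srv-3", "created_at": 75},
]
print(OldestFirst().select_victims(members, 2))  # -> ['srv-2', 'srv-3']
```

Equivalently, the plugin could return the members to keep; the two forms carry the same information.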

Let's assume that this feature takes the form of some additional data 
in the alarm trigger (i.e. the input to the scaling policy) that 
specifies which server(s) to delete first. The application would handle 
this by receiving the trigger from Ceilometer (or wherever) itself, and 
then inserting the additional data before passing it to Heat/autoscaling.
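As a rough sketch of what that interception step could look like, the application would receive the alarm webhook, add a hint, and forward the result to the scaling policy. The "remove_first" field name and trigger shape here are hypothetical, purely for illustration:

```python
# Illustrative only: an application injecting scale-down hints into an
# alarm trigger before passing it on to Heat/autoscaling.
import json


def augment_trigger(trigger_json, preferred_victims):
    """Add a (hypothetical) 'remove_first' hint to a scale-down trigger."""
    trigger = json.loads(trigger_json)
    # Hints only make sense for scale-down events.
    if trigger.get("action") == "scale_down":
        trigger["remove_first"] = list(preferred_victims)
    return json.dumps(trigger)


raw = json.dumps({"alarm": "cpu_low", "action": "scale_down"})
print(augment_trigger(raw, ["srv-7", "srv-2"]))
```

The key point is that the hint rides along in the existing datapath, so Heat needs no new channel to receive it.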

That gives us three options for, e.g., a holistic scheduler to insert 
hints as to which servers to delete:

(a) Insert them into the outgoing triggers from Ceilometer. The 
application has the choice to override.

(b) Let the user explicitly configure the flow of notifications. So it 
could be any of:
Ceilometer -> Heat
Ceilometer -> Scheduler -> Heat
Ceilometer -> Application -> Heat
Ceilometer -> Scheduler -> Application -> Heat

(c) Insert them into incoming triggers in Heat whenever the application 
has not specified them. This is basically your original proposal.

I'm guessing that, of those, (c) is probably the winner. But we'd need 
to have that debate.
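Under option (c), the merge logic on the Heat side could be as simple as the following sketch: honour whatever hints arrived in the trigger, and fall back to a default choice (oldest-first, say) for the remainder. Again, the "remove_first" field and function names are hypothetical:

```python
# Minimal sketch of option (c): application hints win when present,
# a scheduler/default choice fills in whatever is left.
def choose_victims(trigger, members, count):
    """Pick `count` members to remove, preferring hinted ones."""
    hinted = [m for m in trigger.get("remove_first", []) if m in members]
    if len(hinted) >= count:
        return hinted[:count]
    # Fill the remainder from the default ordering, skipping hinted members.
    fallback = [m for m in members if m not in hinted]
    return hinted + fallback[: count - len(hinted)]


members = ["srv-1", "srv-2", "srv-3"]  # assume ordered oldest-first
print(choose_victims({"remove_first": ["srv-3"]}, members, 2))  # -> ['srv-3', 'srv-1']
print(choose_victims({}, members, 2))                           # -> ['srv-1', 'srv-2']
```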

Another possible implementation is to do it with a notification and 
reply, rather than including it in the existing datapath.

>  > This is on the roadmap, and TripleO really needs it, so hopefully it
>  > will happen in Juno.
>
> I assume you mean giving this control to the application, which I
> presume amounts to giving it to the template author.  Is this written up
> somewhere?

I had a quick look, but it doesn't appear we have a blueprint for it 
yet, unless you count the notifications blueprint that Steve mentioned 
(but I don't think that addresses this case specifically).

cheers,
Zane.



