[openstack-dev] [heat][ceilometer]: scale up/ down based on number of instances in a group

Angus Salkeld asalkeld at mirantis.com
Mon Apr 6 22:30:06 UTC 2015


On Fri, Apr 3, 2015 at 8:51 PM, Daniel Comnea <comnea.dani at gmail.com> wrote:

> Hi all,
>
> Does anyone know if the above use case has made it into the convergence
> project, and in which release it was / is going to be merged?
>
>
Hi

Phase one of convergence should be merged in early L (we have some of the
patches merged now).
Phase two is to separate the "checking" of actual state into a new RPC
service within Heat.
Then you *could* run that checker periodically (or receive RPC
notifications) to learn of changes to the stack's state during its
lifetime. That *might* get done in L - we will just have to see how
things go.

-Angus


> Thanks,
> Dani
>
> On Tue, Oct 28, 2014 at 5:40 PM, Daniel Comnea <comnea.dani at gmail.com>
> wrote:
>
>> Thanks all for the replies.
>>
>> I have spoken with Qiming and @Shardy (IRC nickname) and they confirmed
>> this is not possible as of today. Someone else - sorry, I forgot his
>> nickname on IRC - suggested writing a Ceilometer query to count the number
>> of instances, but what @ZhiQiang said is true and this is what we have
>> seen via the instance sample.
>>
>> *@Clint* - that is the case indeed.
>>
>> *@ZhiQiang* - what do you mean by "*count of resource should be queried
>> from specific service's API*"? Is it related to Ceilometer's event types
>> configuration?
>>
>> *@Mike* - my use case is very simple: I have a group of instances, and if
>> the number of instances falls below the minimum I set, I would like a new
>> instance to be spun up - think of a cluster where I want to maintain a
>> minimum number of members.
>>
>> With regard to the proposal you made -
>> https://review.openstack.org/#/c/127884/ - it works, but only for a
>> specific use case, so it is not generic: it assumes my instances are
>> hooked up behind LBaaS, which is not always the case.
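>>
>> For what it's worth, the shape I have in mind is roughly the following
>> (an untested sketch based on the linked autoscaling.yaml, with
>> hypothetical image/flavor names) - a plain-server group plus a scale-up
>> policy whose pre-signed webhook an external membership check could hit,
>> with no LBaaS involved:
>>
>>   resources:
>>     cluster:
>>       type: OS::Heat::AutoScalingGroup
>>       properties:
>>         min_size: 3          # minimum membership I want maintained
>>         max_size: 10
>>         resource:
>>           type: OS::Nova::Server
>>           properties:
>>             image: my-cluster-image   # hypothetical
>>             flavor: m1.small
>>
>>     scaleup_policy:
>>       type: OS::Heat::ScalingPolicy
>>       properties:
>>         adjustment_type: change_in_capacity
>>         auto_scaling_group_id: {get_resource: cluster}
>>         cooldown: 60
>>         scaling_adjustment: 1
>>
>>   outputs:
>>     scaleup_url:
>>       description: POSTing to this pre-signed URL adds one member (up to max_size)
>>       value: {get_attr: [scaleup_policy, alarm_url]}
>>
>> The missing piece is exactly what was confirmed above: nothing in Heat
>> today re-checks the real member count against min_size, so that webhook
>> would have to be driven by an external check until convergence lands.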
>>
>> Looking forward to seeing 'convergence' in action.
>>
>>
>> Cheers,
>> Dani
>>
>> On Tue, Oct 28, 2014 at 3:06 AM, Mike Spreitzer <mspreitz at us.ibm.com>
>> wrote:
>>
>>> Daniel Comnea <comnea.dani at gmail.com> wrote on 10/27/2014 07:16:32 AM:
>>>
>>> > Yes i did but if you look at this example
>>> >
>>> >
>>> https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
>>> >
>>>
>>> > the flow is simple:
>>>
>>> > CPU alarm in Ceilometer triggers the "type: OS::Heat::ScalingPolicy"
>>> > which then triggers the "type: OS::Heat::AutoScalingGroup"
>>>
>>> Actually the ScalingPolicy does not "trigger" the ASG.  BTW,
>>> "ScalingPolicy" is mis-named; it is not a full policy, it is only an action
>>> (the condition part is missing --- as you noted, that is in the Ceilometer
>>> alarm).  The so-called ScalingPolicy does the action itself when
>>> triggered.  But it respects your configured min and max size.
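>>>
>>> To make the wiring concrete, the alarm side of that template looks
>>> roughly like this (a trimmed, untested fragment in the spirit of the
>>> linked autoscaling.yaml; resource names are the example's): the alarm's
>>> only action is an HTTP POST to the policy's pre-signed alarm_url, and
>>> the policy then applies its adjustment, clamped to min_size/max_size:
>>>
>>>   cpu_alarm_high:
>>>     type: OS::Ceilometer::Alarm
>>>     properties:
>>>       description: Scale up if average CPU > 50% for 1 minute
>>>       meter_name: cpu_util
>>>       statistic: avg
>>>       period: 60
>>>       evaluation_periods: 1
>>>       threshold: 50
>>>       comparison_operator: gt
>>>       # when the alarm fires, Ceilometer POSTs to the policy's webhook
>>>       alarm_actions:
>>>         - {get_attr: [web_server_scaleup_policy, alarm_url]}
>>>       matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}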
>>>
>>> Are you concerned about your scaling group shrinking below your
>>> configured minimum?  Just checking here that there is not a
>>> misunderstanding.
>>>
>>> As Clint noted, there is a large-scale effort underway to make Heat
>>> maintain what it creates despite deletion of the underlying resources.
>>>
>>> There is also a small-scale effort underway to make ASGs recover from
>>> members stopping proper functioning for whatever reason.  See
>>> https://review.openstack.org/#/c/127884/ for a proposed interface and
>>> initial implementation.
>>>
>>> Regards,
>>> Mike
>>>
>>>
>>
>