<div dir="ltr"><div><div>Thanks Angus for feedback.<br><br></div>Best,<br></div>Dani<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Apr 6, 2015 at 11:30 PM, Angus Salkeld <span dir="ltr"><<a href="mailto:asalkeld@mirantis.com" target="_blank">asalkeld@mirantis.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Fri, Apr 3, 2015 at 8:51 PM, Daniel Comnea <span dir="ltr"><<a href="mailto:comnea.dani@gmail.com" target="_blank">comnea.dani@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>Hi all,<br><br></div>Does anyone know if the above use case has made it into the convergence project and in which release was/ is going to be merged?<br><br></div></div></div></blockquote><div><br></div></span><div>Hi</div><div><br></div><div>Phase one of convergence should be merged in early L (we have some of the patches merged now).</div><div>Phase two is to separate the "checking" of actual state into a new RPC service within Heat.</div><div>Then you *could* run that checker periodically (or receive RPC notifications) to learn of changes</div><div>to the stack's state during the lifetime of the stack. That *might* get done in L - we will just have to see</div><div>how things go.</div><div><br></div><div>-Angus</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5"><div dir="ltr"><div><div></div>Thanks,<br></div>Dani<br></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Oct 28, 2014 at 5:40 PM, Daniel Comnea <span dir="ltr"><<a href="mailto:comnea.dani@gmail.com" target="_blank">comnea.dani@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>Thanks all for reply.<br><br></div><div>I have spoke with Qiming and @Shardy (IRC nickname) and they confirmed this is not possible as of today. Someone else - sorry i forgot his nicname on IRC suggested to write a Ceilometer query to count the number of instances but what @ZhiQiang said is true and this is what we have seen via the instance sample </div><div><br></div><b>@Clint - </b>that is the case indeed<br></div><br><b>@ZhiQiang</b> - what do you mean by "<i>count of resource should be queried from specific service's API</i>"? 

With regards to the proposal you made - https://review.openstack.org/#/c/127884/
- that works, but only in a specific use case, so it is not generic: the
assumption is that my instances are hooked up behind LBaaS, which is not
always the case.

Looking forward to seeing 'convergence' in action.

Cheers,
Dani

On Tue, Oct 28, 2014 at 3:06 AM, Mike Spreitzer <mspreitz@us.ibm.com> wrote:

Daniel Comnea <comnea.dani@gmail.com> wrote on 10/27/2014 07:16:32 AM:

> Yes I did, but if you look at this example
>
> https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml
>
> the flow is simple:
>
> CPU alarm in Ceilometer triggers the "type: OS::Heat::ScalingPolicy"
> which then triggers the "type: OS::Heat::AutoScalingGroup"

Actually the ScalingPolicy does not "trigger" the ASG. BTW, "ScalingPolicy"
is mis-named; it is not a full policy, it is only an action (the condition
part is missing --- as you noted, that is in the Ceilometer alarm). The
so-called ScalingPolicy does the action itself when triggered. But it
respects your configured min and max size.
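
To make that concrete, the relevant wiring in the template you linked looks
roughly like this (paraphrased and trimmed; the resource names and values
are illustrative rather than copied exactly):

  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: asg }
      cooldown: 60
      scaling_adjustment: 1         # the "action": add one member, within min/max

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50
      comparison_operator: gt       # the "condition" lives here, in Ceilometer
      alarm_actions:
        - { get_attr: [scaleup_policy, alarm_url] }

When the alarm fires, it hits the policy's alarm_url; the policy then adjusts
the group's capacity by scaling_adjustment, clamped to the group's min_size
and max_size.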

Are you concerned about your scaling group becoming smaller than your
configured minimum? Just checking here that there is not a misunderstanding.

As Clint noted, there is a large-scale effort underway to make Heat maintain
what it creates despite deletion of the underlying resources.

There is also a small-scale effort underway to make ASGs recover from members
that stop functioning properly, for whatever reason. See
https://review.openstack.org/#/c/127884/ for a proposed interface and initial
implementation.

Regards,
Mike

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev