<div dir="ltr"><div><div>Yes, that's exactly what I was looking for.<br></div>You're my hero!<br><br></div> Shinobu<br><div><div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Aug 31, 2015 at 7:47 AM, Angus Salkeld <span dir="ltr"><<a href="mailto:asalkeld@mirantis.com" target="_blank">asalkeld@mirantis.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div dir="ltr">On Sat, Aug 29, 2015 at 10:33 AM Shinobu Kinjo <<a href="mailto:skinjo@redhat.com" target="_blank">skinjo@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br>
>>
>> Here is a situation which I faced; it can be reproduced.
>>
>> I ran "heat resource-signal" multiple times simultaneously to scale out
>> instances. As a result, 3 resources were created in the scaling group,
>> which has max_size=2.
>>
>> "stack-list -n" showed me 3 stacks under the one parent.
>>
>> Heat itself does not seem to actually check the resource data in the
>> database.
>>
>> Is there any lock/unlock mechanism, such as a mutex, in the Heat
>> implementation, e.g. locking the database when the auto-scaling feature
>> is triggered?
>
> Hi,
>
> In master there is such a check:
> https://github.com/openstack/heat/blob/master/heat/scaling/cooldown.py#L33-L50
>
> This makes sure that there are no concurrent scaling actions.
>
> -Angus
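>
> Very roughly, the shape of such a check is something like the sketch
> below. This is only an illustration of the pattern, not the code at the
> link, and the metadata keys are invented for the example:
>
> import time
>
> def scaling_allowed(metadata, cooldown_seconds):
>     """Allow an adjustment only if no scaling action is already running
>     and the cooldown period since the last action has expired."""
>     if metadata.get("scaling_in_progress"):
>         return False
>     last = metadata.get("last_adjust_time", 0)
>     return (time.time() - last) >= cooldown_seconds
>
> def start_scaling(metadata):
>     # Record that an action is running so concurrent signals are refused.
>     metadata["scaling_in_progress"] = True
>
> def finish_scaling(metadata):
>     metadata["scaling_in_progress"] = False
>     metadata["last_adjust_time"] = time.time()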
>
>> Or is there a plan to deploy such a mutex mechanism?
>> What concerns me more is that Ceilometer also has a feature for
>> triggering auto-scaling.
>>
>> So I would like to make sure that there is a mechanism to keep data
>> consistent across these components.
>>
>> Please let me know if I've missed anything.
>>
>> Shinobu
--
Email:
 shinobu@linux.com
 skinjo@redhat.com

Life with Distributed Computational System based on OpenSource
http://i-shinobu.hatenablog.com/