Yes, that's exactly what I was looking for. You're my hero!

Shinobu

On Mon, Aug 31, 2015 at 7:47 AM, Angus Salkeld <asalkeld at mirantis.com> wrote:
> On Sat, Aug 29, 2015 at 10:33 AM Shinobu Kinjo <skinjo at redhat.com> wrote:
>
>> Hello,
>>
>> Here is a situation I ran into, which can be reproduced.
>>
>> I ran "heat resource-signal" multiple times simultaneously to scale
>> instances. As a result, 3 resources were created in a scaling group
>> whose max_size is 2.
>>
>> "stack-list -n" showed me 3 stacks under one parent.
>>
>> Heat itself does not seem to actually check the resource data in the
>> database.
>>
>> Is there any lock/unlock mechanism in the Heat implementation, such as
>> a mutex that locks the database when the auto-scaling feature is
>> triggered?
>>
> Hi
>
> In master there is such a check:
>
> https://github.com/openstack/heat/blob/master/heat/scaling/cooldown.py#L33-L50
>
> This makes sure that there are no concurrent scaling actions.
>
> -Angus
>
>> Or is there a plan to add such a mutex mechanism?
>> What concerns me more is that Ceilometer also has a feature that
>> triggers auto-scaling.
>>
>> So I would like to make sure that there is a mechanism to keep the data
>> consistent across components.
>>
>> Please let me know if I've missed anything.
>>
>> Shinobu

--
Email:
shinobu at linux.com
skinjo at redhat.com

Life with Distributed Computational System based on OpenSource
<http://i-shinobu.hatenablog.com/>
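
[Editor's note: for readers skimming the archive, below is a minimal sketch of the
kind of guard Angus is pointing at. It is NOT the code from heat/scaling/cooldown.py;
the names (CooldownGuard, check_scaling_allowed, finish_scaling) are made up for
illustration. The real check in Heat works against the scaling resource's metadata
stored in the database (as I understand it), so concurrent signals against the same
group are rejected there; this sketch keeps the state on the object only to show the
logic.]

    import datetime


    class ScalingNotAllowed(Exception):
        """Raised when a scaling action must be rejected."""


    class CooldownGuard(object):
        """Illustrative sketch of an in-progress/cooldown check.

        The state dict here stands in for resource metadata that would
        normally live in the database.
        """

        def __init__(self, cooldown_seconds):
            self.cooldown_seconds = cooldown_seconds
            self._metadata = {}  # stand-in for persisted resource metadata

        def check_scaling_allowed(self):
            meta = self._metadata
            if meta.get('scaling_in_progress'):
                # Another signal is already being handled: refuse to scale again.
                raise ScalingNotAllowed('scaling already in progress')
            cooldown_end = meta.get('cooldown_end')
            if cooldown_end is not None and datetime.datetime.utcnow() < cooldown_end:
                # Still inside the cooldown window from the previous action.
                raise ScalingNotAllowed('in cooldown period')
            # Mark the group as busy so concurrent signals are rejected.
            meta['scaling_in_progress'] = True

        def finish_scaling(self):
            # Release the "lock" and record when the cooldown window ends.
            meta = self._metadata
            meta['scaling_in_progress'] = False
            meta['cooldown_end'] = (
                datetime.datetime.utcnow() +
                datetime.timedelta(seconds=self.cooldown_seconds))

Usage would be along the lines of: call check_scaling_allowed() when a signal
arrives, perform the resize only if it does not raise, and call finish_scaling()
once the adjustment completes, so a second "heat resource-signal" arriving in
between is turned away instead of pushing the group past max_size.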