[openstack-dev] [Heat] rough draft of Heat autoscaling API

Randall Burt randall.burt at RACKSPACE.COM
Thu Nov 14 20:52:48 UTC 2013


On Nov 14, 2013, at 1:05 PM, Christopher Armstrong <chris.armstrong at rackspace.com> wrote:

On Thu, Nov 14, 2013 at 11:00 AM, Randall Burt <randall.burt at rackspace.com> wrote:

On Nov 14, 2013, at 12:44 PM, Zane Bitter <zbitter at redhat.com> wrote:

> On 14/11/13 18:51, Randall Burt wrote:
>>
>> On Nov 14, 2013, at 11:30 AM, Christopher Armstrong
>> <chris.armstrong at rackspace.com> wrote:
>>
>>> On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt
>>> <randall.burt at rackspace.com> wrote:
>>>    Regarding webhook execution and cooldown, I think the response
>>>    should be something like a 307 if the hook is on cooldown, with an
>>>    appropriate Retry-After header.
>
> I strongly disagree with this even ignoring the security issue mentioned below. Being in the cooldown period is NOT an error, and the caller should absolutely NOT try again later - the request has been received and correctly acted upon (by doing nothing).

But how do I know nothing was done? I may have very good reasons to re-scale outside of Ceilometer or other mechanisms, and absolutely SHOULD try again later. As it stands, I have no way of knowing that my scaling action didn't happen without examining my physical resources. 307 is a legitimate response in these cases, but I'm certainly open to other suggestions.
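To make the behaviour under discussion concrete, here is a minimal sketch of what a webhook handler with cooldown signalling might look like. This is purely illustrative and not Heat's actual implementation; the names (COOLDOWN_SECONDS, last_scaled, handle_webhook) are hypothetical, and the choice of 307 plus Retry-After simply mirrors the proposal above.

```python
import time

# Hypothetical cooldown policy for a scaling webhook endpoint.
COOLDOWN_SECONDS = 60
last_scaled = {"ts": 0.0}  # timestamp of the last successful scaling action


def handle_webhook(scale):
    """Handle a webhook POST; return (status line, extra headers).

    If the group is still in its cooldown window, signal that to the
    caller with a Retry-After header instead of silently doing nothing.
    """
    now = time.time()
    remaining = COOLDOWN_SECONDS - (now - last_scaled["ts"])
    if remaining > 0:
        # Cooldown active: no scaling happened; tell the caller when
        # a retry could succeed.
        return "307 Temporary Redirect", [("Retry-After", str(int(remaining) + 1))]
    scale()  # perform the actual scaling action
    last_scaled["ts"] = now
    return "202 Accepted", []
```

The design point in dispute is visible here: the caller can distinguish "scaled" (202) from "ignored due to cooldown" (307 + Retry-After) without authenticating against a separate audit-log API, at the cost of leaking cooldown state to whoever holds the webhook URL.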


I agree there should be a way to find out what happened, but in a way that requires a more strongly authenticated request. My preference would be to use an audit log system (I haven't been keeping up with the current thoughts on the design for Heat's event/log API) that can be inspected via API.

Fair enough. I'm just thinking of folks who want to set this up but use external tools/monitoring solutions for the actual eventing. Having those tools grep through event logs seems a tad cumbersome, but I do understand that the desire to keep these as un-authenticated secrets makes that terribly difficult.



--
IRC: radix
Christopher Armstrong
Rackspace
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

