[openstack-dev] [Heat] rough draft of Heat autoscaling API

Christopher Armstrong chris.armstrong at rackspace.com
Thu Nov 21 17:44:33 UTC 2013


On Thu, Nov 21, 2013 at 5:18 AM, Zane Bitter <zbitter at redhat.com> wrote:

> On 20/11/13 23:49, Christopher Armstrong wrote:
>
>> On Wed, Nov 20, 2013 at 2:07 PM, Zane Bitter <zbitter at redhat.com> wrote:
>>
>>     On 20/11/13 16:07, Christopher Armstrong wrote:
>>
>>         On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter <zbitter at redhat.com> wrote:
>>
>>              On 19/11/13 19:14, Christopher Armstrong wrote:
>>
>>         I thought we had a workable solution with the "LoadBalancerMember"
>>         idea, which you would use in a way somewhat similar to
>>         CinderVolumeAttachment in the above example, to hook servers up
>>         to load balancers.
>>
>>
>>     I haven't seen this proposal at all. Do you have a link? How does it
>>     handle the problem of wanting to notify an arbitrary service (i.e.
>>     not necessarily a load balancer)?
>>
>>
>> It's been described in the autoscaling wiki page for a while, and I
>> thought the LBMember idea was discussed at the summit, but I wasn't
>> there to verify that :)
>>
>> https://wiki.openstack.org/wiki/Heat/AutoScaling#LBMember.3F
>>
>> Basically, the LoadBalancerMember resource (which is very similar to
>> CinderVolumeAttachment) would be responsible for adding and removing IPs
>> to/from the load balancer (which is actually a direct mapping to the way
>> the various LB APIs work). Since this resource lives with the server
>> resource inside the scaling unit, we don't really need to get anything
>> _out_ of that stack, only pass _in_ the load balancer ID.
>>
>
> I see a couple of problems with this approach:
>
> 1) It makes the default case hard. There's no way to just specify a server
> and hook it up to a load balancer like you can at the moment. Instead, you
> _have_ to create a template (or template snippet - not really any better)
> to add this extra resource in, even for what should be the most basic,
> default case (scale servers behind a load balancer).
>

We can provide a standard resource/template for this, LoadBalancedServer,
to make the common case trivial and only require the user to pass
parameters, not a whole template.
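
For example, usage might look something like this (hypothetical HOT sketch
-- the AutoScalingGroup and LoadBalancedServer resource names and their
properties here are illustrative, not settled API; only the shape matters):

parameters:
  pool_id:
    type: string

resources:
  scaling_group:
    type: OS::Heat::AutoScalingGroup        # illustrative group resource
    properties:
      min_size: 1
      max_size: 10
      resource:
        type: OS::Heat::LoadBalancedServer  # the proposed convenience resource
        properties:
          flavor: m1.small                  # plain parameters, no user-supplied
          image: fedora-20                  # template required
          pool_id: {get_param: pool_id}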


> 2) It relies on a plugin being present for any type of thing you might
> want to notify.


I don't understand this point. What do you mean by a plugin? I was assuming
OS::Neutron::PoolMember (not LoadBalancerMember -- I went and looked up the
actual name) would become a standard Heat resource, not a third-party thing
(though third parties could provide their own through the usual Heat
extension mechanisms).

(FWIW, the Rackspace load balancer API works identically, so it seems a
pretty standard design.)
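
Concretely, the scaling unit's template would pair the server with a
PoolMember along these lines (rough HOT sketch; the PoolMember property
names are from the Neutron LBaaS resource as I understand it):

heat_template_version: 2013-05-23

parameters:
  pool_id:
    type: string    # the only thing the outer stack needs to pass _in_

resources:
  server:
    type: OS::Nova::Server
    properties:
      flavor: m1.small    # illustrative values
      image: fedora-20
  pool_member:
    type: OS::Neutron::PoolMember
    properties:
      pool_id: {get_param: pool_id}
      address: {get_attr: [server, first_address]}
      protocol_port: 80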


>
> At summit and - to the best of my recollection - before, we talked about
> scaling a generic group of resources and passing notifications to a generic
> controller, with the types of both defined by the user. I was expecting you
> to propose something based on webhooks, which is why I was surprised not to
> see anything about it in the API. (I'm not prejudging that that is the way
> to go... I'm actually wondering if Marconi has a role to play here.)
>
>
I think the main benefits of PoolMember are:

1) It matches the Neutron LBaaS API perfectly, just like the rest of our
resources, which map directly to individual REST objects.

2) It's already understandable. I don't understand the idea behind
notifications or how they would work to solve our problems. You can keep
saying that notifications will solve them, but I can't figure out how until
someone actually explains it :)


-- 
IRC: radix
Christopher Armstrong
Rackspace