[openstack-dev] [Quantum][LBaaS] Feedback needed: Healthmonitor workflow.

Oleg Bondarev obondarev at mirantis.com
Fri Jul 19 10:29:17 UTC 2013


Hi,
First I want to mention that currently the health monitor is not a pure DB
object: upon creation the request is also sent to the device/driver.
Another thing is that there is no delete_health_monitor in the driver API
(delete_pool_health_monitor only deletes the association), which is weird
because health monitors will potentially remain on the device forever.
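To illustrate, here is a minimal sketch of that asymmetry. The method
names and signatures are paraphrased rather than copied from the actual
abstract driver, so treat them as assumptions:

    class LoadBalancerDriver(object):
        """Sketch of the relevant part of the driver interface."""

        def create_health_monitor(self, context, health_monitor):
            # On creation, the 'global' monitor already reaches the
            # device/driver through a call like this one.
            pass

        def delete_pool_health_monitor(self, context, health_monitor,
                                       pool_id):
            # Only removes the pool<->monitor association. There is no
            # delete_health_monitor counterpart, so the object created
            # above can stay on the device forever.
            pass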

From what I saw in the thread referenced by Salvatore, the main argument
against the health monitor templates approach was:
> In NetScaler, F5 and similar products, health monitors are created
> beforehand just like in the API, and then they are “bound” to pools (our
> association API), so the mapping will be more natural

IMO this is a strong argument, but only until we start to maintain several
devices/drivers/service_providers per LB service.
With several drivers we'll need to send create/update/delete_health_monitor
requests to each driver, which is a big overhead. The same goes for
pool-monitor associations, as described in Eugene's initial mail.
I think it's not a big deal for drivers such as F5 to create/delete a
monitor object upon creating/deleting a 'private' health monitor.
So I vote for monitor templates and a 1:1 mapping of 'private' health
monitors to pools while it's not too late, and I'm ready to implement this
in Havana. A rough sketch of what I mean is below.
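To make the proposal concrete, here is a minimal sketch of the intended
workflow. All names here (plugin, the db helpers, get_driver_for_pool) are
hypothetical, not existing Quantum APIs:

    def create_health_monitor_template(plugin, context, template_data):
        # A template is a pure DB object: no driver/device call needed.
        return plugin.db.create_monitor_template(context, template_data)

    def add_pool_health_monitor(plugin, context, pool_id, template_id):
        template = plugin.db.get_monitor_template(context, template_id)
        # 1:1 mapping: the pool gets its own 'private' copy of the
        # template.
        monitor = plugin.db.create_private_monitor(context, pool_id,
                                                   template)
        # Only the single driver serving this pool is notified.
        driver = plugin.get_driver_for_pool(context, pool_id)
        driver.create_pool_health_monitor(context, monitor, pool_id)
        return monitor

    def update_pool_health_monitor(plugin, context, pool_id, monitor_id,
                                   changes):
        old = plugin.db.get_private_monitor(context, monitor_id)
        new = plugin.db.update_private_monitor(context, monitor_id,
                                               changes)
        # The update reaches exactly one device, instead of fanning out
        # to every pool that shares a global monitor.
        driver = plugin.get_driver_for_pool(context, pool_id)
        driver.update_pool_health_monitor(context, old, new, pool_id)
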
Thoughts?

Thanks,
Oleg


On Thu, Jun 20, 2013 at 6:53 PM, Salvatore Orlando <sorlando at nicira.com> wrote:

> The idea of health-monitor templates was first discussed here:
> http://lists.openstack.org/pipermail/openstack-dev/2012-November/003233.html
> See the follow-up on that mailing list thread to understand the pros and
> cons of the idea.
>
> I will avoid moaning about backward compatibility at the moment, but that's
> something else we need to discuss at some point if we go ahead with changes
> to the API.
>
> Salvatore
>
>
> On 20 June 2013 14:54, Samuel Bercovici <SamuelB at radware.com> wrote:
>
>> Hi,
>>
>> I agree with this.
>>
>> We are facing challenges when the global health monitor is changed: we
>> need to atomically modify all the groups that are linked to this health
>> check, as the groups might be configured on different devices.
>> So if one of the group modifications fails, it is very difficult to
>> revert the change.
>>
>> -Sam.
>>
>> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
>> Sent: Thursday, June 20, 2013 3:10 PM
>> To: OpenStack Development Mailing List
>> Cc: Avishay Balderman; Samuel Bercovici
>> Subject: [Quantum][LBaaS] Feedback needed: Healthmonitor workflow.
>>
>> Hi community,
>>
>> Here's a question.
>> Currently, health monitors in the Loadbalancer service are made in such
>> a way that the health monitor itself is a global shared database object.
>> If a user wants to add a health monitor to a pool, they add an
>> association between the pool and the health monitor.
>> In order to update an existing health monitor (change the URL, for
>> example), the service will need to go over the existing pool-health
>> monitor associations, notifying devices of the change.
>>
>> I think it could be changed to the following workflow:
>> Instead of adding a pool-healthmonitor association, use the health
>> monitor object as a template (probably renaming is needed) and add a
>> 'private' health monitor to the pool.
>> So all further operations would result in changing the health monitor
>> on one device only.
>>
>> What do you think?