[openstack-dev] [Quantum] [LBaaS] Health Monitor Failure Detection

Ilya Shakhat ishakhat at mirantis.com
Thu Nov 22 15:37:59 UTC 2012


This approach is good, but we should be aware that it may not work for all
types of load balancers. For example, in HAProxy a health monitor is
translated into a nameless option inside the backend section. The status of a
member (server) only shows whether the last check succeeded or failed, along
with a check status code
(http://cbonte.github.com/haproxy-dconv/configuration-1.4.html#9.1).
The driver may try to map that code back to a health monitor, but doing so may
not be trivial and the mapping may not be one-to-one.
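
To illustrate, a driver that reads HAProxy's "show stat" CSV from the stats
socket could only guess which monitor a given check code belongs to, roughly
like this (a rough Python sketch; the names and the layer-to-type table are
only an illustration, not existing driver code):

    MONITOR_TYPES_BY_LAYER = {
        # Very rough association between HAProxy check code prefixes and
        # Quantum health monitor types; intentionally approximate.
        "L4": {"PING", "TCP"},    # e.g. L4OK, L4TOUT, L4CON
        "L6": {"TCP", "HTTPS"},   # SSL/handshake-level checks
        "L7": {"HTTP", "HTTPS"},  # e.g. L7OK, L7STS, L7TOUT
    }

    def map_check_status(check_status, monitors):
        """Guess which monitors could have produced a given check code.

        check_status -- HAProxy code such as "L4TOUT" or "L7STS"
        monitors     -- list of dicts with "id" and "type" keys
        The result may contain several candidates (or none), which is why
        mapping back to a single health monitor is not one-to-one.
        """
        layer = check_status[:2]
        candidate_types = MONITOR_TYPES_BY_LAYER.get(layer, set())
        return [m["id"] for m in monitors if m["type"] in candidate_types]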

Ilya.

2012/11/22 Mellquist, Peter <peter.mellquist at hp.com>

> Hi Youcef,
>
> This looks like it will work fine. I like this since it shows the state of
> all the monitors for the member.
>
> Peter.
>
> From: Youcef Laribi [mailto:Youcef.Laribi at eu.citrix.com]
> Sent: Wednesday, November 21, 2012 12:34 PM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Quantum] [LBaaS] Health Monitor Failure Detection
>
> Hi Peter,
>
> Yes, you bring up a good point that we missed in the API. Usually in LB
> products, when the user displays a member, the monitors attached to that
> member and their statuses are also displayed. We can adopt the same approach
> as follows:
>
> "member" : {
>     "id" : "c57f581b-c834-408f-93fa-30543cf30618",
>     "tenant_id" : "310df60f-2a10-4ee5-9554-98393092194c",
>     "pool_id" : "cfc6589d-f949-4c66-99d2-c2da56ef3764",
>     "address" : "192.168.224.31",
>     "port" : 8080,
>     "weight" : 1,
>     "admin_state_up" : true,
>     "status" : "INACTIVE",
>     "health_monitors_status" : [
>         {
>             "health_monitor_id" : "f3eeab00-8367-4524-b662-55e64d4cacb5",
>             "status" : "FAIL"
>         },
>         {
>             "health_monitor_id" : "70995ff0-341a-11e2-81c1-0800200c9a66",
>             "status" : "SUCCESS"
>         }
>     ]
> }
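>
> For illustration only, the backend driver could derive this structure from
> the individual monitor results roughly like this (a rough sketch; the
> function and variable names are made up, not actual Quantum code):
>
>     # Illustrative sketch -- a member stays ACTIVE only while every
>     # associated monitor reports success; otherwise it becomes INACTIVE,
>     # and the per-monitor results are exposed in "health_monitors_status".
>     def build_member_status(member, monitor_results):
>         """monitor_results maps health_monitor_id -> bool (last check ok?)."""
>         statuses = [
>             {"health_monitor_id": monitor_id,
>              "status": "SUCCESS" if ok else "FAIL"}
>             for monitor_id, ok in monitor_results.items()
>         ]
>         member["health_monitors_status"] = statuses
>         member["status"] = ("ACTIVE"
>                             if all(monitor_results.values())
>                             else "INACTIVE")
>         return member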
>
> Would this be acceptable?
>
> Youcef
>
> From: Mellquist, Peter [mailto:peter.mellquist at hp.com]
> Sent: Wednesday, November 21, 2012 11:17 AM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] Quantum LBaaS Health Monitor Failure Detection
>
> Hi LBaaS’ers,
>
> With the current capability to associate multiple health monitors with a
> pool, it does not seem possible to see which specific health monitor has
> detected an issue. For example, if I have a PING and an HTTP monitor and the
> HTTP monitor fails, how can the user or admin have visibility into the
> specific failure?
>
> “When a pool has several monitors associated with it, each member of the
> pool is monitored by all these monitors. If any monitor declares the member
> as unhealthy, then the member status is changed to INACTIVE and the member
> won't participate in its pool's load balancing. In other words, ALL
> monitors must declare the member to be healthy for it to stay ACTIVE.”
>
> Member status changing to INACTIVE does not describe the reason why.
> Would it make sense to have a member status reason that references the
> health monitor id in this case? We could then see which health monitor
> is failing and allow the user or admin to take appropriate action.
>
> Peter.
>
> "member" :  {****
>
>                 "id":"c57f581b-c834-408f-93fa-30543cf30618",****
>
>                 "tenant_id": "310df60f-2a10-4ee5-9554-98393092194c",****
>
>                 "pool_id": "cfc6589d-f949-4c66-99d2-c2da56ef3764",****
>
>                 "address": "192.168.224.31",****
>
>                 "port": 8080,****
>
>                "weight" : 1,****
>
>                 "admin_state_up" : *true*,****
>
>                 "status" : "INACTIVE",****
>
>                 "status_reason" : "f3eeab00-8367-4524-b662-55e64d4cacb5"**
> **
>
>               }                                             ****
>
> {
>     "health_monitor" : {
>         "id" : "f3eeab00-8367-4524-b662-55e64d4cacb5",
>         "type" : "HTTP",
>         "delay" : 20,
>         "timeout" : 10,
>         "max_retries" : 3,
>         "http_method" : "GET",
>         "url_path" : "/",
>         "expected_codes" : "200,202",
>         "admin_state_up" : true,
>         "status" : "ACTIVE"
>     }
> }
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>