[openstack-dev] Re: [Quantum] [LBaaS] Health Monitor Failure Detection

Leon Cui lcui at vmware.com
Fri Nov 23 00:40:47 UTC 2012


Hi Ilya,

Yes, there is no health monitor concept in haproxy. AFAIK, haproxy doesn't
support multiple health monitors for a single pool. Therefore the driver
may simply map the backend status (in the haproxy domain) to the single
health monitor that is associated with the pool.
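
A minimal sketch (purely illustrative, not driver code) of what that
one-for-one mapping could look like, assuming a hypothetical pool dict that
lists its monitor ids and a raw haproxy server status string:

# Hypothetical sketch: map an haproxy backend/server status to the one
# health monitor associated with the pool. All names are illustrative.

def map_haproxy_status(pool, haproxy_server_status):
    """Return a health_monitors_status list for a member.

    haproxy reports a single UP/DOWN-style status per server, so every
    monitor attached to the pool (at most one for haproxy) gets the
    same result.
    """
    status = "SUCCESS" if haproxy_server_status == "UP" else "FAIL"
    return [
        {"health_monitor_id": monitor_id, "status": status}
        for monitor_id in pool.get("health_monitors", [])
    ]


if __name__ == "__main__":
    pool = {"health_monitors": ["f3eeab00-8367-4524-b662-55e64d4cacb5"]}
    print(map_haproxy_status(pool, "DOWN"))
    # [{'health_monitor_id': 'f3eeab00-...', 'status': 'FAIL'}]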



Thanks

Leon

From: Ilya Shakhat [mailto:ishakhat at mirantis.com]
Sent: November 22, 2012 23:38
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Quantum] [LBaaS] Health Monitor Failure
Detection



This approach is good, but we should be aware that it may not work for all
types of load balancers. For example, in HAProxy a health monitor is
transformed into a nameless option inside the backend section. The status
of a member (server) shows whether the last check succeeded or failed,
with some code (http://cbonte.github.com/haproxy-dconv/configuration-1.4.html#9.1).
The driver may try to map the code back to a health monitor, but that may
not be trivial to do, and the mapping may not be one-to-one.
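
To make that concern concrete, here is a rough, hypothetical Python sketch
mapping a few of the check_status codes documented in that section to a
monitor type. The table is illustrative only and shows why the mapping is
lossy: an HTTP monitor can also fail at the layer-4 stage, for example.

# Hypothetical mapping of haproxy check_status codes (section 9.1 of the
# haproxy 1.4 configuration manual) back to a Quantum monitor type.
# Not one-to-one: several codes cannot be attributed to a single monitor.

CHECK_STATUS_TO_MONITOR_TYPE = {
    "L4OK": "TCP",      # layer-4 connect succeeded
    "L4TOUT": "TCP",    # layer-4 timeout
    "L4CON": "TCP",     # layer-4 connection problem
    "L7OK": "HTTP",     # layer-7 check passed
    "L7STS": "HTTP",    # layer-7 check returned an unexpected status code
    "L7TOUT": "HTTP",   # layer-7 timeout
}


def guess_monitor_type(check_status):
    """Best-effort guess; returns None for codes we cannot attribute."""
    return CHECK_STATUS_TO_MONITOR_TYPE.get(check_status)


print(guess_monitor_type("L7STS"))    # HTTP
print(guess_monitor_type("SOCKERR"))  # None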



Ilya.



2012/11/22 Mellquist, Peter <peter.mellquist at hp.com>

Hi Youcef,



This looks like it will work fine. I like this since it shows the state of
all the monitors for the member.



Peter.





From: Youcef Laribi [mailto:Youcef.Laribi at eu.citrix.com]
Sent: Wednesday, November 21, 2012 12:34 PM


To: OpenStack Development Mailing List

Subject: Re: [openstack-dev] [Quantum] [LBaaS] Health Monitor Failure
Detection



Hi Peter,



Yes, you bring up a good point that we missed in the API. Usually in LB
products, when the user displays a member, the monitors used to monitor
that member and their statuses are also displayed. We can adopt the same
approach as follows:



"member" :  {

                "id":"c57f581b-c834-408f-93fa-30543cf30618",

                "tenant_id": "310df60f-2a10-4ee5-9554-98393092194c",

                "pool_id": "cfc6589d-f949-4c66-99d2-c2da56ef3764",

                "address": "192.168.224.31",

                "port": 8080,

               "weight" : 1,

                "admin_state_up" : true,

                "status" : "INACTIVE",

                "health_monitors_status" : [

                     {

                        "health_monitor_id" :
"f3eeab00-8367-4524-b662-55e64d4cacb5",

                        "status" : "FAIL"

                     },

                     {

                        "health_monitor_id" :
"70995ff0-341a-11e2-81c1-0800200c9a66",

                        "status" : "SUCCESS"

                     }



                ]

              }
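
For illustration only, a small Python sketch of how a client could consume
this representation to report exactly which monitors are failing (the
member dict simply mirrors the example above):

# Pick out the failing monitors from the proposed member representation.
member = {
    "address": "192.168.224.31",
    "status": "INACTIVE",
    "health_monitors_status": [
        {"health_monitor_id": "f3eeab00-8367-4524-b662-55e64d4cacb5",
         "status": "FAIL"},
        {"health_monitor_id": "70995ff0-341a-11e2-81c1-0800200c9a66",
         "status": "SUCCESS"},
    ],
}

failing = [s["health_monitor_id"]
           for s in member["health_monitors_status"]
           if s["status"] == "FAIL"]
print("Member %s is %s; failing monitors: %s"
      % (member["address"], member["status"], failing))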



Would this be acceptable?



Youcef





From: Mellquist, Peter [mailto:peter.mellquist at hp.com]
Sent: Wednesday, November 21, 2012 11:17 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Quantum LBaaS Health Monitor Failure Detection



Hi LBaaS’ers,



With the current capability to associate multiple health monitors with a
pool, it does not seem possible to see which specific health monitor has
detected an issue. For example, if I have a PING monitor and an HTTP
monitor and the HTTP monitor fails, how can the user or admin gain
visibility into the specific failure?

“When a pool has several monitors associated with it, each member of the
pool is monitored by all these monitors. If any monitor declares the
member as unhealthy, then the member status is changed to INACTIVE and the
member won't participate in its pool's load balancing. In other words, ALL
monitors must declare the member to be healthy for it to stay ACTIVE. “

A member's status changing to INACTIVE does not describe the reason why.
Would it make sense to have a member status reason with a reference to the
health monitor id in this case? We could then see which health monitor is
failing and allow the user or admin to take appropriate action.



Peter.

"member" :  {

                "id":"c57f581b-c834-408f-93fa-30543cf30618",

                "tenant_id": "310df60f-2a10-4ee5-9554-98393092194c",

                "pool_id": "cfc6589d-f949-4c66-99d2-c2da56ef3764",

                "address": "192.168.224.31",

                "port": 8080,

               "weight" : 1,

                "admin_state_up" : true,

                "status" : "INACTIVE",

                "status_reason" : "f3eeab00-8367-4524-b662-55e64d4cacb5"

              }



{
    "health_monitor": {
        "id": "f3eeab00-8367-4524-b662-55e64d4cacb5",
        "type": "HTTP",
        "delay": 20,
        "timeout": 10,
        "max_retries": 3,
        "http_method": "GET",
        "url_path": "/",
        "expected_codes": "200,202",
        "admin_state_up": true,
        "status": "ACTIVE"
    }
}
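
As an illustration of how this proposal could be consumed, here is a
hypothetical Python sketch that resolves status_reason to the monitor that
triggered the failure; get_health_monitor is a placeholder lookup, not an
existing API call:

# Resolve the proposed status_reason field to the failing health monitor.
# MONITORS and get_health_monitor stand in for whatever lookup the client
# or plugin would actually use; they are placeholders for illustration.

MONITORS = {
    "f3eeab00-8367-4524-b662-55e64d4cacb5": {
        "type": "HTTP", "url_path": "/", "expected_codes": "200,202"},
}


def get_health_monitor(monitor_id):
    return MONITORS[monitor_id]


member = {"status": "INACTIVE",
          "status_reason": "f3eeab00-8367-4524-b662-55e64d4cacb5"}

if member["status"] == "INACTIVE":
    monitor = get_health_monitor(member["status_reason"])
    print("Member down; failing check: %(type)s %(url_path)s" % monitor)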

