[openstack-dev] Re: [Quantum] [LBaaS] Health Monitor Failure Detection

Mellquist, Peter peter.mellquist at hp.com
Mon Nov 26 22:57:36 UTC 2012


Hi Leon,

The admin / user needs to know which specific health monitor has failed so that they can take appropriate action to fix the member with the failed health check. If we do not provide this detail, the root cause of why the member has gone “inactive” will be unknown. Most JSON parsers allow parsing the fields of interest and ignoring the rest, so this should not be an issue for clients that are not interested in health monitors. That being said, as we continue to work out important use cases like this one, we may find that a refactoring of the REST resources is needed. I see this as an iterative process.
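
For example, a client that only cares about the overall member status can deserialize the response and simply never touch the health_monitors_status field. A rough Python sketch (the payload below is illustrative and trimmed to just the fields relevant here):

import json

# Illustrative member representation including the proposed
# health_monitors_status list (trimmed for brevity).
response_body = """
{
    "member": {
        "id": "c57f581b-c834-408f-93fa-30543cf30618",
        "status": "INACTIVE",
        "health_monitors_status": [
            {"health_monitor_id": "f3eeab00-8367-4524-b662-55e64d4cacb5",
             "status": "FAIL"}
        ]
    }
}
"""

member = json.loads(response_body)["member"]
# A client uninterested in per-monitor detail reads only the overall status
# and never touches health_monitors_status.
print(member["status"])  # INACTIVE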

Thanks,
Peter.

From: Leon Cui [mailto:lcui at vmware.com]
Sent: Thursday, November 22, 2012 4:49 PM
To: 'OpenStack Development Mailing List'
Subject: [openstack-dev] Re: [Quantum] [LBaaS] Health Monitor Failure Detection

Hi Peter and Youcef,
Showing the health monitor status details in the member might be too much for the user.  Most of the time, the user (and the management UI) is only interested in the overall pool/member status, unless the user wants to drill down to the details.  My concern is that we are adding more and more detail to the top-level resource GET API, which adds payload and makes responses more complex to parse. I’m wondering if we can add a separate API, GET /resource/<resource_id>/details, which exposes more detailed information for each resource.
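
To sketch what I mean (purely illustrative -- the URL layout in the comments and the exact field split are hypothetical, not a worked-out proposal):

# Hypothetical sketch of the summary/details split suggested above.
# GET /v1.0/members/<member_id> would keep the default view small:
member_summary = {
    "id": "c57f581b-c834-408f-93fa-30543cf30618",
    "pool_id": "cfc6589d-f949-4c66-99d2-c2da56ef3764",
    "address": "192.168.224.31",
    "port": 8080,
    "status": "INACTIVE",
}

# ...while GET /v1.0/members/<member_id>/details would carry the heavier
# per-monitor breakdown for users who want to drill down:
member_details = dict(member_summary, health_monitors_status=[
    {"health_monitor_id": "f3eeab00-8367-4524-b662-55e64d4cacb5",
     "status": "FAIL"},
])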

Just my 2 cents.

Thanks
Leon
From: Mellquist, Peter [mailto:peter.mellquist at hp.com]
Sent: November 22, 2012 5:06
To: OpenStack Development Mailing List
主题: Re: [openstack-dev] [Quantum] [LBaaS] Health Monitor Failure Detection

Hi Youcef,

This looks like it will work fine. I like this since it shows the state of all the monitors for the member.

Peter.


From: Youcef Laribi [mailto:Youcef.Laribi at eu.citrix.com]
Sent: Wednesday, November 21, 2012 12:34 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Quantum] [LBaaS] Health Monitor Failure Detection

Hi Peter,

Yes, you bring up a good point that we missed in the API. Usually in LB products, when the user displays a member, the monitors used to monitor that member and their statuses are also displayed. We can adopt the same approach as follows:

"member" :  {
                "id":"c57f581b-c834-408f-93fa-30543cf30618",
                "tenant_id": "310df60f-2a10-4ee5-9554-98393092194c",
                "pool_id": "cfc6589d-f949-4c66-99d2-c2da56ef3764",
                "address": "192.168.224.31",
                "port": 8080,
               "weight" : 1,
                "admin_state_up" : true,
                "status" : "INACTIVE",
                "health_monitors_status" : [
                     {
                        "health_monitor_id" : "f3eeab00-8367-4524-b662-55e64d4cacb5",
                        "status" : "FAIL"
                     },
                     {
                        "health_monitor_id" : "70995ff0-341a-11e2-81c1-0800200c9a66",
                        "status" : "SUCCESS"
                     }

                ]
              }
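
With this representation the client can see at a glance which monitor(s) caused the INACTIVE state, e.g. with a simple filter. A rough Python sketch over the response above (member is shown trimmed to the relevant field):

# `member` stands for the deserialized "member" object from the response
# above, trimmed here to the field that matters.
member = {
    "status": "INACTIVE",
    "health_monitors_status": [
        {"health_monitor_id": "f3eeab00-8367-4524-b662-55e64d4cacb5",
         "status": "FAIL"},
        {"health_monitor_id": "70995ff0-341a-11e2-81c1-0800200c9a66",
         "status": "SUCCESS"},
    ],
}

failing = [hm["health_monitor_id"]
           for hm in member.get("health_monitors_status", [])
           if hm["status"] == "FAIL"]
print(failing)  # ['f3eeab00-8367-4524-b662-55e64d4cacb5']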

Would this be acceptable?

Youcef


From: Mellquist, Peter [mailto:peter.mellquist at hp.com]
Sent: Wednesday, November 21, 2012 11:17 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Quantum LBaaS Health Monitor Failure Detection

Hi LBaaS’ers,

With the current capability to associate multiple health monitors with a pool, it does not seem possible to see which specific health monitor has detected an issue. For example, if I have a PING and an HTTP monitor and the HTTP monitor fails, how can the user or admin get visibility into the specific failure?

“When a pool has several monitors associated with it, each member of the pool is monitored by all these monitors. If any monitor declares the member as unhealthy, then the member status is changed to INACTIVE and the member won't participate in its pool's load balancing. In other words, ALL monitors must declare the member to be healthy for it to stay ACTIVE.”

Member status changing to INACTIVE does not describe the reason why. Would it make sense to have a member status reason with a reference to the health monitor id in this case? We could then see which health monitor is failing and allow the user or admin to take appropriate action.



Peter.
"member" :  {
                "id":"c57f581b-c834-408f-93fa-30543cf30618",
                "tenant_id": "310df60f-2a10-4ee5-9554-98393092194c",
                "pool_id": "cfc6589d-f949-4c66-99d2-c2da56ef3764",
                "address": "192.168.224.31",
                "port": 8080,
               "weight" : 1,
                "admin_state_up" : true,
                "status" : "INACTIVE",
                "status_reason" : "f3eeab00-8367-4524-b662-55e64d4cacb5"
              }


{
    "health_monitor" :
      {
         "id" : "f3eeab00-8367-4524-b662-55e64d4cacb5",
         "type" : "HTTP",
         "delay" : 20,
         "timeout": 10,
         "max_retries": 3,
         "http_method" : "GET",
         "url_path" : "/",
         "expected_codes" : "200,202",
         "admin_state_up": true,
         "status": "ACTIVE"
      }
}
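
The workflow would then be: read status_reason from the INACTIVE member, then fetch that health monitor to see which check is failing and how it is configured. A rough Python sketch of the lookup (get_health_monitor() is just a placeholder for whatever call ends up wrapping a GET on the health monitor resource, not an existing client method):

# Rough sketch of the status_reason lookup chain described above.
# get_health_monitor() is a placeholder, not an existing client method.
def explain_inactive(member, get_health_monitor):
    """Return a human-readable reason for an INACTIVE member, or None."""
    if member.get("status") != "INACTIVE":
        return None
    monitor = get_health_monitor(member["status_reason"])
    return "member %s failed its %s health check (monitor %s)" % (
        member["id"], monitor["type"], monitor["id"])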
