[openstack-dev] [Quantum] [LBaaS] Health Monitor Failure Detection
Youcef Laribi
Youcef.Laribi at eu.citrix.com
Wed Nov 21 20:33:55 UTC 2012
Hi Peter,
Yes, you bring up a good point that we missed in the API. Usually in LB products, when a user displays a member, the monitors attached to that member and their statuses are also displayed. We can adopt the same approach as follows:
"member" : {
"id":"c57f581b-c834-408f-93fa-30543cf30618",
"tenant_id": "310df60f-2a10-4ee5-9554-98393092194c",
"pool_id": "cfc6589d-f949-4c66-99d2-c2da56ef3764",
"address": "192.168.224.31",
"port": 8080,
"weight" : 1,
"admin_state_up" : true,
"status" : "INACTIVE",
"health_monitors_status" : [
{
"health_monitor_id" : "f3eeab00-8367-4524-b662-55e64d4cacb5",
"status" : "FAIL"
},
{
"health_monitor_id" : "70995ff0-341a-11e2-81c1-0800200c9a66",
"status" : "SUCCESS"
}
]
}
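For illustration, a client could consume this structure to report which monitor is failing for a given member. This is only a minimal sketch assuming the proposed health_monitors_status field were returned on a GET of the member resource; the base URL, token handling and use of the requests library are my own illustrative assumptions, not part of the proposal:

import requests

# Illustrative values only: base URL, token and member id are assumptions,
# not part of the proposal under discussion.
BASE = "http://quantum-server:9696/v2.0/lb"
HEADERS = {"X-Auth-Token": "<token>"}
MEMBER_ID = "c57f581b-c834-408f-93fa-30543cf30618"

resp = requests.get("%s/members/%s" % (BASE, MEMBER_ID), headers=HEADERS)
member = resp.json()["member"]

# With the proposed health_monitors_status list, we can tell which
# monitor(s) declared the member unhealthy.
failing = [s["health_monitor_id"]
           for s in member.get("health_monitors_status", [])
           if s["status"] == "FAIL"]

if member["status"] == "INACTIVE":
    print("Member %s is INACTIVE; failing monitors: %s"
          % (member["id"], ", ".join(failing) or "none reported"))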
Would this be acceptable?
Youcef
From: Mellquist, Peter [mailto:peter.mellquist at hp.com]
Sent: Wednesday, November 21, 2012 11:17 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Quantum LBaaS Health Monitor Failure Detection
Hi LBaaS'ers,
With the current capability to associate multiple health monitors with a pool, it does not seem possible to see which specific health monitor has detected an issue. For example, if I have a PING and an HTTP monitor and the HTTP monitor fails, how can the user or admin have visibility into the specific failure?
"When a pool has several monitors associated with it, each member of the pool is monitored by all these monitors. If any monitor declares the member as unhealthy, then the member status is changed to INACTIVE and the member won't participate in its pool's load balancing. In other words, ALL monitors must declare the member to be healthy for it to stay ACTIVE. "
Member status changing to INACTIVE does not describe the reason why. Would it make sense to have a member status reason with a reference to the health monitor id in this case? We could then see which health monitor is failing and allow the user or admin to take appropriate action. For example:
Peter.
"member" : {
"id":"c57f581b-c834-408f-93fa-30543cf30618",
"tenant_id": "310df60f-2a10-4ee5-9554-98393092194c",
"pool_id": "cfc6589d-f949-4c66-99d2-c2da56ef3764",
"address": "192.168.224.31",
"port": 8080,
"weight" : 1,
"admin_state_up" : true,
"status" : "INACTIVE",
"status_reason" : "f3eeab00-8367-4524-b662-55e64d4cacb5"
}
{
    "health_monitor" : {
        "id" : "f3eeab00-8367-4524-b662-55e64d4cacb5",
        "type" : "HTTP",
        "delay" : 20,
        "timeout" : 10,
        "max_retries" : 3,
        "http_method" : "GET",
        "url_path" : "/",
        "expected_codes" : "200,202",
        "admin_state_up" : true,
        "status" : "ACTIVE"
    }
}
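With a status_reason field referencing a monitor id as above, resolving a failure would be a two-step lookup: fetch the member, then fetch the monitor it points at. A minimal sketch, assuming the member and health_monitor resources are retrievable by id; the base URL, token handling and function name are illustrative only:

import requests

# Illustrative base URL and token; not part of the actual proposal.
BASE = "http://quantum-server:9696/v2.0/lb"
HEADERS = {"X-Auth-Token": "<token>"}

def explain_member_failure(member_id):
    """Fetch a member and, if INACTIVE, the monitor named in status_reason."""
    member = requests.get("%s/members/%s" % (BASE, member_id),
                          headers=HEADERS).json()["member"]
    if member["status"] != "INACTIVE":
        return "member %s is %s" % (member_id, member["status"])
    monitor_id = member.get("status_reason")
    monitor = requests.get("%s/health_monitors/%s" % (BASE, monitor_id),
                           headers=HEADERS).json()["health_monitor"]
    return ("member %s is INACTIVE because %s monitor %s failed"
            % (member_id, monitor["type"], monitor_id))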