[Openstack-operators] memcached redundancy
Daneyon Hansen (danehans)
danehans at cisco.com
Thu Aug 14 19:05:27 UTC 2014
It has been a while, but I believe I load-balanced memcached through HAProxy (with sticky sessions) and observed no issues with failover.
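A setup like that might look like the following haproxy.cfg fragment. This is a hypothetical sketch, not the poster's actual config: the listener name, addresses, and check intervals are all placeholders, and "balance source" is one way to approximate stickiness for a plain TCP service like memcached.

```
listen memcached
    bind 10.0.0.10:11211
    mode tcp
    balance source          # sticky by client source address
    server cache1 10.0.0.11:11211 check inter 2000 rise 2 fall 3
    server cache2 10.0.0.12:11211 check inter 2000 rise 2 fall 3
```

With health checks enabled, HAProxy stops routing to a dead node after "fall" consecutive failed checks, so clients never attempt the dead server themselves.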
From: Joe Topjian <joe at topjian.net>
Date: Thursday, August 14, 2014 10:09 AM
To: openstack-operators at lists.openstack.org
Subject: [Openstack-operators] memcached redundancy
I have an OpenStack cloud with two HA cloud controllers. Each controller runs the standard controller components: glance, keystone, nova (minus compute and network), cinder, horizon, mysql, rabbitmq, and memcached.
Everything except memcached is accessed through HAProxy, and everything is working great (well, rabbit can be finicky ... I might post about that if it continues).
The problem I currently have is how to effectively work with memcached in this environment. Since all components are load balanced, they need access to the same memcached servers. That's solved by the ability to specify multiple memcached servers in the various OpenStack config files.
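For reference, the multi-server lists described above look roughly like this. These are hypothetical fragments with placeholder addresses; the exact option names and sections vary by service and release.

```
# nova.conf
[DEFAULT]
memcached_servers = 192.0.2.11:11211,192.0.2.12:11211

# keystone.conf
[memcache]
servers = 192.0.2.11:11211,192.0.2.12:11211
```

Each service's memcache client hashes keys across the listed servers, which is why every controller must see the same list in the same order.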
But if I take a server down for maintenance, I notice a 2-3 second delay in all requests. I've confirmed memcached is the cause: if I remove the offline server from the list in the config files, the delay goes away.
I'm wondering how people deploy memcached in environments like this? Are you using some type of memcached replication between servers? Or if a memcached server goes offline are you reconfiguring OpenStack to remove the offline memcached server?
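One mitigation people sometimes reach for, sketched here as an assumption rather than a confirmed fix for this deployment: python-memcached-style clients expose timeout and dead-host settings, and some services surface them in their config. Option names and availability vary by service and release, so treat these values as illustrative.

```
# keystone.conf, hypothetical values
[memcache]
servers = 192.0.2.11:11211,192.0.2.12:11211
socket_timeout = 1    # fail fast instead of blocking on a dead node
dead_retry = 300      # seconds to skip a server marked dead before retrying
```

The idea is that the first request to a dead node still pays a short timeout, but subsequent requests skip it until dead_retry expires, which shrinks the blanket per-request delay.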