<div dir="ltr">It sounds like there are two incorrect uses of memcached: The actual communication of the openstack components to memcached and using memcached itself as a persistent token store. Though from what it sounds like, if the former was done better, the latter wouldn't be too much of an issue?<div>

I do agree that using something like memcached, which explicitly advertises itself as a bad fit for persistent storage, is ultimately asking for trouble.

With that said, there currently appear to be two choices for a Keystone token backend: memcached and SQL. Both have obvious downsides. Personally, I'd rather deal with my current memcached issues than go back to storing tokens in SQL.

... unless I'm missing something? Is there more to the current state of Keystone token backends than the memcached and SQL backends that have been around for the past few years?
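
For reference, these are the two backends I mean, selected in keystone.conf roughly like this (driver paths quoted from memory from an Icehouse-era install, so double-check them against your release):

    [token]
    # memcached-backed token persistence
    driver = keystone.token.backends.memcache.Token
    # ... or the SQL backend:
    # driver = keystone.token.backends.sql.Token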
<div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Aug 22, 2014 at 12:39 PM, Morgan Fainberg <span dir="ltr"><<a href="mailto:morgan.fainberg@gmail.com" target="_blank">morgan.fainberg@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">While keystone uses memcache as a possible token storage backend we are working towards eliminating the design that makes memcache a desirable token backend. <div>
<br></div><div>Using memcache for the token backend is not the best approach as the token backend (up through icehouse and in some cases will hold true for Juno) assumes stable storage for at least the life of the token. </div>
<div><br></div><div>I agree with Josh, we are likely using memcached incorrectly in a number of cases. </div><span class="HOEnZb"><font color="#888888"><div><br></div></font></span><div><span class="HOEnZb"><font color="#888888">--Morgan</font></span><div>
<div class="h5"><span></span><br><div><br>On Thursday, August 21, 2014, Joshua Harlow <<a href="mailto:harlowja@outlook.com" target="_blank">harlowja@outlook.com</a>> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">+1 for this, remember the 'cache' in memcache *strongly* indicates what it should be used for.<br>
<br>
A useful link to read over @ <a href="http://joped.com/2009/03/a-rant-about-proper-memcache-usage/" target="_blank">http://joped.com/2009/03/a-rant-about-proper-memcache-usage/</a><br>
<br>
-Josh<br>
<br>
On Aug 21, 2014, at 11:19 AM, Clint Byrum <<a>clint@fewbar.com</a>> wrote:<br>
<br>
> Excerpts from Joe Topjian's message of 2014-08-14 09:09:59 -0700:
>> Hello,
>>
>> I have an OpenStack cloud with two HA cloud controllers. Each controller
>> runs the standard controller components: glance, keystone, nova minus
>> compute and network, cinder, horizon, mysql, rabbitmq, and memcached.
>>
>> Everything except memcached is accessed through haproxy and everything is
>> working great (well, rabbit can be finicky ... I might post about that if
>> it continues).
>>
>> The problem I currently have is how to effectively work with memcached in
>> this environment. Since all components are load balanced, they need access
>> to the same memcached servers. That's solved by the ability to specify
>> multiple memcached servers in the various openstack config files.
>>
>> But if I take a server down for maintenance, I notice a 2-3 second delay in
>> all requests. I've confirmed it's memcached by editing the list of
>> memcached servers in the config files and the delay goes away.
>
> I've seen a few responses to this that show a _massive_ misunderstanding
> of how memcached is intended to work.
>
> Memcached should never need to be load balanced at the connection
> level. It has a consistent hash ring based on the keys to handle
> load balancing and failover. If you have 2 servers and 1 is gone,
> the failover should happen automatically. This gets important when you
> have, say, 5 memcached servers: given 1 failed server, you retain the
> RAM of the remaining n-1 servers for caching.
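
To make that failover behavior concrete, here is a toy sketch of key-to-server placement (illustrative only; the real placement logic in clients like python-memcached differs in detail, and the server names are made up):

    import hashlib

    SERVERS = ["mc1:11211", "mc2:11211", "mc3:11211",
               "mc4:11211", "mc5:11211"]
    DEAD = {"mc3:11211"}  # pretend this one is down for maintenance

    def pick_server(key):
        # Normal placement: hash the key over the full server list.
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        server = SERVERS[h % len(SERVERS)]
        # Failover: only keys that landed on the dead server move;
        # every other key keeps its placement (and its cached data).
        if server in DEAD:
            live = [s for s in SERVERS if s not in DEAD]
            server = live[h % len(live)]
        return server

    for key in ("token-abc", "token-def", "token-ghi"):
        print(key, "->", pick_server(key))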
>
> What I suspect is happening is that we're not doing that right by
> either not keeping persistent connections, or retrying dead servers
> too aggressively.
>
> In fact, it looks like the default one used in oslo-incubator's
> 'memorycache', the 'memcache' driver, will by default retry dead servers
> every 30 seconds, and wait 3 seconds for a timeout, which probably
> matches the behavior you see. None of the places I looked in Nova seem
> to allow passing in a different dead_retry or timeout. In my experience,
> you probably want something like dead_retry == 600, so only one slow
> operation every 10 minutes per process (so if you have 10 nova-api's
> running, that's 10 requests every 10 minutes).
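
For anyone who wants to experiment outside of Nova, python-memcached exposes both of those knobs directly on its client constructor; a minimal sketch, with placeholder server addresses:

    import memcache

    # python-memcached defaults to dead_retry=30 and socket_timeout=3,
    # which lines up with the 2-3 second stalls described above.
    mc = memcache.Client(
        ["192.0.2.10:11211", "192.0.2.11:11211"],  # placeholder addresses
        dead_retry=600,    # keep a dead server blacklisted for 10 minutes
        socket_timeout=1,  # give up on a connect/read after 1 second
    )
    mc.set("some-key", "some-value", time=300)
    print(mc.get("some-key"))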
>
> It is also possible that some of these objects are being re-created on
> every request, as is common if caching is implemented too deep inside
> "middleware" and not at the edges of a solution. I haven't dug deep
> enough in, but suffice it to say, replicating and load balancing may be
> a cheaper solution than auditing the code and fixing it at this point.
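
To picture that failure mode, here is a hypothetical sketch (not actual Nova or middleware code) of the difference between rebuilding the client per request and reusing one long-lived client:

    import memcache

    SERVERS = ["192.0.2.10:11211", "192.0.2.11:11211"]  # placeholders

    # Anti-pattern: a fresh client per request discards the connections
    # and the client's memory of which servers are dead, so every
    # request can pay the full connect/timeout cost again.
    def handle_request_bad(key):
        mc = memcache.Client(SERVERS)
        return mc.get(key)

    # Better: one long-lived client at the edge, reused across requests,
    # so dead-server state and open sockets persist.
    MC = memcache.Client(SERVERS, dead_retry=600)

    def handle_request_good(key):
        return MC.get(key)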

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators