<div dir="ltr"><div><div>That's good to know about, but I don't think it was my issue (yet, so thanks for saving me from it, as I would have been on vacation when it was likely to hit hard...)<br><br></div>I *only* had 58k expired tokens; after deleting those I'm still getting 30-60 second times for nova list, though the keystone response sped up to a more reasonable 4-6 sec.<br>
<br></div>I do think conductor was killing me: I spun up six tiny instances running nova-conductor on the cloud, and now nova lists are back down to 6 seconds, which is what they were before. The rest I can blame on the instances table needing a good cleaning.<br>
<div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Aug 15, 2013 at 11:09 AM, Lorin Hochstein <span dir="ltr"><<a href="mailto:lorin@nimbisservices.com" target="_blank">lorin@nimbisservices.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div>Yes, it's the database token issue:</div><div><br></div><div><a href="https://ask.openstack.org/en/question/1740/keystone-never-delete-expires-token-in-database/" target="_blank">https://ask.openstack.org/en/question/1740/keystone-never-delete-expires-token-in-database/</a></div>
<div><a href="https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1032633" target="_blank">https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1032633</a></div><div><br></div><div><br></div><div>If you don't need PKI tokens, you can configure keystone for uuid tokens with the memcache backend instead: <<a href="http://pic.dhe.ibm.com/infocenter/tivihelp/v48r1/index.jsp?topic=/com.ibm.sco.doc_2.2/t_memcached_keystone.html" target="_blank">http://pic.dhe.ibm.com/infocenter/tivihelp/v48r1/index.jsp?topic=%2Fcom.ibm.sco.doc_2.2%2Ft_memcached_keystone.html</a>></div>
<div><br></div><div>If you want to use the PKI tokens, then you'll need to set up a cron job to clear out the old tokens from the database. There's a "keystone-manage token flush" command coming in Havana so that this won't require raw SQL: <<a href="https://review.openstack.org/#/c/28133/" target="_blank">https://review.openstack.org/#/c/28133/</a>></div>
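Until that flush command lands, a cron entry along these lines can keep the token table in check. This is just a sketch of the raw-SQL approach described above; the database name, credentials variable, and schedule are assumptions, so adjust for your install:

```shell
# Purge expired keystone tokens nightly at 02:00.
# If your DB server's timezone isn't UTC, you may want
# UTC_TIMESTAMP() instead of NOW(), since keystone stores expiry in UTC.
0 2 * * * mysql -u keystone -p"${KEYSTONE_DB_PASS}" keystone \
    -e "DELETE FROM token WHERE expires < NOW();"
```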
<div><br></div><div>You can also speed up the query by setting a database index on the "valid" column of the token table. This has been done for Havana: <<a href="https://review.openstack.org/#/c/30753/" target="_blank">https://review.openstack.org/#/c/30753/</a>></div>
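On a pre-Havana install you can add an equivalent index by hand. The column name below follows the description above, but the index name is made up, so verify both against your actual token schema before running this:

```shell
# One-off: index the column keystone filters on when validating tokens.
# Index name (ix_token_valid) is a placeholder.
mysql -u keystone -p"${KEYSTONE_DB_PASS}" keystone \
    -e "CREATE INDEX ix_token_valid ON token (valid);"
```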
<div><br></div><div>
<span style="border-collapse:separate;border-spacing:0px"><div style="word-wrap:break-word"><span style="text-indent:0px;letter-spacing:normal;font-variant:normal;text-align:-webkit-auto;font-style:normal;font-weight:normal;line-height:normal;border-collapse:separate;text-transform:none;font-size:medium;white-space:normal;font-family:Helvetica;word-spacing:0px"><div style="word-wrap:break-word">
<div>Take care,</div><div><br></div><div>Lorin</div><div>--</div><div>Lorin Hochstein</div><div>Lead Architect - Cloud Services</div><div>Nimbis Services, Inc.</div><div><a href="https://www.nimbisservices.com/" target="_blank">www.nimbisservices.com</a></div>
<div><br></div></div></span><br></div><br></span><br>
</div>
<br><div><div>On Aug 15, 2013, at 10:53 AM, Aubrey Wells <<a href="mailto:aubrey@vocalcloud.com" target="_blank">aubrey@vocalcloud.com</a>> wrote:</div><br><blockquote type="cite"><div dir="ltr">We have the same thing and found that the keystone tokens table had hundreds of thousands of expired tokens in it so the SELECT that gets done during the auth phase of API operations was taking ages to return. Wrote a script to clean up expired tokens and it hasn't recurred. A quick and dirty version to clean it up by hand would be 'delete from token where expires < NOW();' but you might want something a little safer in an automated script. </div>
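For the "something a little safer" automated version, one common variant of that one-liner is deleting in batches, so a single huge DELETE doesn't hold locks on the token table for the whole run. This is a sketch with assumed credentials and an arbitrary batch size:

```shell
#!/bin/sh
# Delete expired tokens 10k rows at a time until none remain;
# each pass only touches a small slice of the table.
while :; do
    rows=$(mysql -N -u keystone -p"${KEYSTONE_DB_PASS}" keystone \
        -e "DELETE FROM token WHERE expires < NOW() LIMIT 10000;
            SELECT ROW_COUNT();")
    [ "${rows}" -gt 0 ] || break
done
```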
<div class="gmail_extra"><br clear="all"><div><div dir="ltr">------------------<br>Aubrey Wells<br>Director | Network Services<br>VocalCloud<br><a href="tel:888.305.3850" value="+18883053850" target="_blank">888.305.3850</a><br>
<a href="mailto:support@vocalcloud.com" target="_blank">support@vocalcloud.com</a><br>
<a href="http://www.vocalcloud.com/" target="_blank">www.vocalcloud.com</a></div></div>
<br><br><div class="gmail_quote"><div><div class="h5">On Thu, Aug 15, 2013 at 10:45 AM, Jonathan Proulx <span dir="ltr"><<a href="mailto:jon@jonproulx.com" target="_blank">jon@jonproulx.com</a>></span> wrote:<br></div>
</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5">
<div dir="ltr"><div><div><div><div><div>Hi All,<br><br></div>I have a single controller node, 60 compute node cloud on Ubuntu 12.04 / cloud archive, and after upgrading to Grizzly everything seems painfully slow.<br><br></div>
I've had 'nova list' take on the order of one minute to return (there are 65 non-deleted instances and a total of just under 500k instances in the instances table, but that was true before the upgrade as well).<br>
<br></div>The controller node is 4x busier with this tiny load of a single user and a few VMs than it averaged in production with 1,500 VMs, dozens of users, and VMs starting every 6 sec on average.<br><br>This has me a little worried, but the system is so over-spec'ed that I can't see it as my current problem: the previous average was 5% CPU utilization, so now I'm only at 20%. All the databases fit comfortably in memory with plenty of room for caching, so my disk I/O is virtually nothing.<br>
<br></div>Not quite sure where to start. I'd like to blame conductor for serializing database access, but I really hope any service could handle at least one rack of servers before needing to scale out... Besides the poor user experience of sluggish responses, I'm also getting timeouts if I try to start some tens of servers, and the usual workflow around here often involves hundreds.<br>
<br></div><div>Anyone had similar problems and/or have suggestions of where else to look for bottlenecks?<br></div><div><br></div>-Jon<br></div>
<br></div></div><div class="im">_______________________________________________<br>
OpenStack-operators mailing list<br>
<a href="mailto:OpenStack-operators@lists.openstack.org" target="_blank">OpenStack-operators@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
<br></div></blockquote></div><br></div><div class="im">
</div></blockquote></div><br></div></blockquote></div><br></div>