<div dir="ltr"><div><div>Connection and session code in oslo-incubator: <a href="https://review.openstack.org/#/c/29464/">https://review.openstack.org/#/c/29464/</a></div><div>Change to Context: <a href="https://review.openstack.org/#/c/30363/">https://review.openstack.org/#/c/30363/</a></div>
Decorator for the sqlalchemy api: https://review.openstack.org/#/c/30370/

So back at the Portland summit, Jun Park and I presented on some of our difficulties scaling OpenStack with the Folsom release: http://www.openstack.org/summit/portland-2013/session-videos/presentation/using-openstack-in-a-traditional-hosting-environment

One of the main obstacles we ran into was the amount of chattiness to MySQL. Since we were deploying literally hundreds of nodes per day, we weren't able to dig in and weed out unnecessary traffic or delve into any real optimization work. Instead we used a well-known database scaling paradigm: shoving reads off to replication slaves and sending only the reads that are sensitive to replication latency to the write master. Replication, be it in MySQL or Postgres, is a fairly well understood concept with lots of tools and documentation around it. The only hard part, IMO, about scaling this way is that you need to audit your queries to see which can be split out, and you also need to know the intricacies of your application well enough to tell when it is inappropriate to send a heavy query to a read slave. In other words, some queries hurt a lot, but we can't _always_ just send them to read slaves.
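
To make that concrete, here is a rough sketch of the read/write split, not the code from the reviews above: two SQLAlchemy engines, with a read going to the slave only when the caller says that's okay. The connection URLs, the use_slave flag, and the example query are placeholders of my own.

from sqlalchemy import create_engine, text

# One engine for the write master and one for a replication slave. The URLs
# here are made up; in the reviews this is driven by a slave_connection
# config option.
master_engine = create_engine("mysql+pymysql://nova:secret@db-master/nova")
slave_engine = create_engine("mysql+pymysql://nova:secret@db-slave/nova")


def get_engine(use_slave=False):
    # Reads that can tolerate replication lag go to the slave; everything
    # else stays on the write master.
    return slave_engine if use_slave else master_engine


def instance_count_on_host(host, use_slave=False):
    # An example of a heavy read that a periodic task could offload.
    with get_engine(use_slave=use_slave).connect() as conn:
        result = conn.execute(
            text("SELECT COUNT(*) FROM instances WHERE host = :host"),
            {"host": host})
        return result.scalar()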

So rather than just talk about it, here's some example code. Please look at the reviews above when you see me doing unfamiliar things with context, slave_connection, etc.

https://review.openstack.org/#/c/38872

In my example, my DBA is upset because he's getting this query from every node we have, once every periodic_interval. However, it wouldn't be good for me to simply send every call to nova.db.sqlalchemy.api.instance_get_all_by_host to a read slave. Some parts of the codebase are absolutely not tolerant of data that may be a few hundred milliseconds out of sync with the master. So we need a way to say "hit the slave this time, but not other times." That's where the lag-tolerant context comes in. Since the context is passed all the way through the stack to the DB layer, we can mark it as tolerant of laggy data, and that marking is preserved even if the call goes over RPC.
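
To show what I mean by the lag-tolerant context, here is a standalone sketch rather than the actual code in the reviews; the lag_tolerant attribute, the decorator, and the engine choice are illustrative only.

import functools


class RequestContext(object):
    # Simplified request context carrying a lag-tolerance hint.

    def __init__(self, user_id, project_id, lag_tolerant=False):
        self.user_id = user_id
        self.project_id = project_id
        # True means this call may read data that lags the master slightly.
        self.lag_tolerant = lag_tolerant

    def to_dict(self):
        # Serialized with the rest of the context, so the hint survives RPC.
        return {'user_id': self.user_id,
                'project_id': self.project_id,
                'lag_tolerant': self.lag_tolerant}


def use_slave_if_lag_tolerant(func):
    # Decorator for a DB API call: route to the read slave only when the
    # caller's context allows stale data.
    @functools.wraps(func)
    def wrapper(context, *args, **kwargs):
        kwargs.setdefault('use_slave',
                          getattr(context, 'lag_tolerant', False))
        return func(context, *args, **kwargs)
    return wrapper


@use_slave_if_lag_tolerant
def instance_get_all_by_host(context, host, use_slave=False):
    # A real implementation would pick the slave or master engine based on
    # use_slave; here we just report which one the decorator selected.
    return 'slave' if use_slave else 'master'


# A periodic task builds a lag-tolerant context; API-facing code does not.
periodic_ctxt = RequestContext('svc-user', 'svc-project', lag_tolerant=True)
api_ctxt = RequestContext('user', 'project')
assert instance_get_all_by_host(periodic_ctxt, 'compute-1') == 'slave'
assert instance_get_all_by_host(api_ctxt, 'compute-1') == 'master'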

I'd appreciate any feedback on this approach. I have really only discussed it briefly with Devananda van der Veen, but he was extremely helpful. Hopefully this gets some more eyes on it, so yeah, fire away!


-Mike Wilson