[User-committee] [openstack-dev] DB issues with Grizzly
Matt Van Winkle
mvanwink at rackspace.com
Mon Apr 15 19:01:38 UTC 2013
Greetings all,
A few of you have reached out and asked for the follow-up on this. I apologize for the delay. We have managed to go back and work most of the additional traffic out of the system. The short version is – it was driven by metadata being joined into many of the common queries the compute nodes were running against the DBs. Here is a list of patches that were involved in the fix:
https://review.openstack.org/#/c/26136/11
https://review.openstack.org/#/c/26419/6
https://review.openstack.org/#/c/26418/
https://review.openstack.org/#/c/26420/
https://review.openstack.org/#/c/26694/
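The join-amplification effect behind those patches can be sketched with a toy example (hypothetical table and column names for illustration, not Nova's actual schema): every instance column gets repeated once per metadata row, so the payload for a routine query grows with the metadata count.

```python
import sqlite3

# In-memory toy schema: one instance row, many metadata rows per instance.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instances (id INTEGER PRIMARY KEY, host TEXT, state TEXT);
    CREATE TABLE instance_metadata (
        instance_id INTEGER REFERENCES instances(id),
        key TEXT, value TEXT);
""")
conn.execute("INSERT INTO instances VALUES (1, 'compute-01', 'active')")
conn.executemany(
    "INSERT INTO instance_metadata VALUES (1, ?, ?)",
    [(f"key{i}", f"value{i}") for i in range(10)])

# A plain instance query returns one row per instance...
plain = conn.execute("SELECT * FROM instances").fetchall()

# ...but joining the metadata in repeats every instance column once per
# metadata row, so the bytes on the wire scale with the metadata count.
joined = conn.execute("""
    SELECT i.*, m.key, m.value
    FROM instances i JOIN instance_metadata m ON m.instance_id = i.id
""").fetchall()

print(len(plain), len(joined))  # 1 row vs 10 rows for the same instance
```

Multiply that per-query blow-up by every compute node's periodic check-ins and the traffic increase we saw follows directly.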
We have at least one more of these queries to update in an upcoming release. If all goes as planned, we will roll the changes out to some of our larger regions. We'll keep you posted if we find anything else of note.
Thanks!
Matt
--
Matt Van Winkle
Manager, Cloud Engineering
Rackspace
210-312-4442(w)
mvanwink at rackspace.com
From: Matt Van Winkle <mvanwink at rackspace.com>
Reply-To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
Date: Friday, April 5, 2013 4:31 PM
To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
Subject: [openstack-dev] DB issues with Grizzly
Greetings folks,
I sent the message below to the user committee this morning. They recommended I go ahead and bounce it this direction to start a good conversation around some of the things we are seeing in Grizzly.
TLDR – we have hit a couple of database-related walls with our deployment of an early version of Grizzly code at scale. The two main areas were the migrations themselves and the overall network load from DB queries once it was deployed.
The email wasn't originally intended to be super technical, but several folks suggested forwarding it as-is. We can definitely rope some of the people who did the heavy lifting on this into the conversation and dig deeper. Many of them are probably already on this list. Please let me know what questions you have.
As it stands right now, we are actually testing some patches like the one below in our staging environments. It's too early to know the results, but we are hoping they bring the overall traffic load down for the most common queries so we can continue to deploy code in our other (and larger) regions.
Thanks!
Matt
From: Matt Van Winkle <mvanwink at rackspace.com>
Date: Friday, April 5, 2013 9:01 AM
To: "user-committee at lists.openstack.org" <user-committee at lists.openstack.org>
Subject: Feedback on Grizzly
Hello again, folks!
When I reached out a couple of weeks ago, I mentioned that I was hoping that, along with being a large developer of OpenStack, Rackspace could also contribute to the committee's work as one of its largest users via our public cloud. We just found our first opportunity. This week we deployed an early release of Grizzly code to one of our data centers.
Going in, we knew there were quite a few database migrations. As we studied them, however, they presented some challenges in the manner in which they were executed. Using them as they were would have meant extended downtime for the databases given the size of our production data (row counts, etc.). That downtime is problematic since it translates to the public APIs being unavailable – something we aim to impact as little as possible during code deploys. Ultimately, we had to rewrite them ourselves to achieve the same outcomes with less DB unavailability. There is plenty of work the community can do, and the committee can help guide, around better ways to change database structure while maintaining as much uptime as possible. If you need more details, I'm happy to bring the folks who worked on the rewrite into the conversation. Both will actually be at the summit.
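One general technique for this kind of problem – a sketch of the idea, not necessarily how our rewrites were done – is to split a migration into a cheap schema change plus a batched backfill, committing between batches so the table is never locked for the full duration. A minimal sketch with a hypothetical schema (SQLite standing in for the real database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, uuid TEXT)")
conn.executemany("INSERT INTO instances (uuid) VALUES (?)",
                 [(f"uuid-{i}",) for i in range(1000)])

# Adding a nullable column is cheap; populating it for millions of
# rows in one statement is what causes the extended lock/downtime.
conn.execute("ALTER TABLE instances ADD COLUMN uuid_prefix TEXT")

def backfill(conn, batch=100):
    """Populate the new column in small batches, committing between
    batches so other writers are never blocked for long."""
    while True:
        rows = conn.execute(
            "SELECT id, uuid FROM instances "
            "WHERE uuid_prefix IS NULL LIMIT ?", (batch,)).fetchall()
        if not rows:
            break
        conn.executemany(
            "UPDATE instances SET uuid_prefix = ? WHERE id = ?",
            [(uuid[:4], id_) for id_, uuid in rows])
        conn.commit()  # release locks between batches

backfill(conn)
remaining = conn.execute(
    "SELECT COUNT(*) FROM instances WHERE uuid_prefix IS NULL").fetchone()[0]
```

The trade-off is that the migration takes longer in wall-clock time, but the API stays up while it runs.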
The bigger surprise - and, full disclosure, we learned a lot about the things we aren't testing in our deployment pipeline - was the dramatic increase in network traffic following the deploy. The new table structures, increased metadata and new queries in this version translated to about 10X the amount of data being returned for some queries. Add to that the fact that compute nodes are regularly querying for certain information or often performing a "check in", and we saw a 3X (or more) increase in network traffic on the management network we have for this particular DC (and it's a smaller one as our various deployments go). For now we have improved things slightly by turning off the following periodic tasks:
reboot_timeout
rescue_timeout
resize_confirm_window
Not running these has the potential to create some other issues (zombie instances and such), but that can be managed.
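For anyone wanting to do the same, these intervals live in nova.conf; as I understand the Grizzly-era options, setting a timeout to 0 disables the corresponding periodic check (double-check against your release's option reference):

```ini
[DEFAULT]
# Setting these to 0 disables the corresponding periodic task.
reboot_timeout = 0
rescue_timeout = 0
resize_confirm_window = 0
```

The cost, as noted above, is that stuck reboots, rescues and unconfirmed resizes have to be cleaned up by other means.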
It does look like the developers are already working on getting some of the queries updated:
https://review.openstack.org/#/c/26136/
https://review.openstack.org/#/c/26109/
All in all, I wanted to reach back out to you to follow up from before, because I think this particular experience highlights that there is often a disconnect between some of the changes that come through to trunk and use of the code at scale. Almost everyone who has dealt with the above will be in Oregon week after next, so I'm happy to drag any and all into the mix to discuss further.
Thanks so much!
Matt
---------------
Matt Van Winkle
Manager, Cloud Engineering
Rackspace
210-312-4442(w)
mvanwink at rackspace.com