[Openstack] Nova DB Connection Pooling

Joshua Harlow harlowja at yahoo-inc.com
Tue Sep 27 03:14:05 UTC 2011


It seems like it would be good to talk about this at the conference, since it's sort of odd to have some pieces of data shared across zones and others not. Wouldn't it be better to provide a unified view of the zones (from a management and operational standpoint)? I wouldn't want to manage X DBs with X dashboards... Keystone seems to help with auth and Glance with image management, but you still have this Nova DB usage that doesn't quite fit in the puzzle (in my opinion).

I would personally rather have a distributed data-store act as the DB; that could then be the "single DB", making everything fit together better (or at least a DB service, so that this becomes possible for users with a large number of distributed compute nodes in different data centers). Imposing a single DB deployment per zone seems too restrictive, instead of imposing, say, a nova-db service (as an example) that could talk to MySQL (for those who want a simple solution) or to Riak [$or other nosql here$] (for those who want a distributed yet "single-DB-like" solution).
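To sketch what such a nova-db service might look like (all class and method names here are hypothetical, not actual Nova code; just a minimal Python illustration of hiding MySQL vs. a distributed store behind one interface):

    # Hypothetical sketch only: none of these names exist in Nova.
    # Callers depend on one interface; deployers pick the backend.
    import abc

    class DataStore(abc.ABC):
        """One logical datastore, whatever sits behind it."""

        @abc.abstractmethod
        def get(self, key):
            """Fetch a record by key."""

        @abc.abstractmethod
        def put(self, key, value):
            """Store a record by key."""

    class MySQLStore(DataStore):
        """Simple deployments: one (possibly clustered) MySQL."""

        def __init__(self, connection_url):
            self._url = connection_url
            self._rows = {}  # stand-in for real SQL access

        def get(self, key):
            return self._rows[key]

        def put(self, key, value):
            self._rows[key] = value

    class RiakStore(DataStore):
        """Distributed deployments: many nodes, one logical store."""

        def __init__(self, nodes):
            self._nodes = nodes
            self._data = {}  # stand-in for real cluster client calls

        def get(self, key):
            return self._data[key]

        def put(self, key, value):
            self._data[key] = value

The scheduler and API code wouldn't care which backend is plugged in; that's the whole point.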

On 9/26/11 7:26 PM, "Sandy Walsh" <sandy.walsh at RACKSPACE.COM> wrote:

Sure ... was there something in particular you wanted to know about?

The overview:

The assumption with Zones is that there is a single DB deployment per Zone. When I say "single DB", that could be clustered/HA as need be. But the intention is that there is no sharing of DBs between zones.
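(To make "single DB per zone" concrete, a minimal sketch, assuming SQLAlchemy and a MySQL driver are installed; hostnames and credentials are invented:)

    # Each zone's services get their own connection URL; no URL is
    # ever shared across zones. Within a zone, the "single DB" behind
    # the URL may itself be clustered/HA.
    from sqlalchemy import create_engine

    ZONE_DB_URLS = {
        "zone-a": "mysql://nova:secret@db.zone-a.example.com/nova",
        "zone-b": "mysql://nova:secret@db.zone-b.example.com/nova",
    }

    def engine_for(zone):
        return create_engine(ZONE_DB_URLS[zone])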

This, of course, has caused us some problems with respect to Instance/Flavor/User IDs being shared across zones, but these have largely been mitigated by the use of UUIDs, Glance and Keystone. Not sure how Networks and Volumes will behave.
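(A quick illustration of why UUIDs help here; plain Python, nothing Nova-specific:)

    # Two zones generating auto-increment integers independently will
    # both hand out id=1; independently generated UUIDs effectively
    # never collide, so no cross-zone coordination is needed.
    import uuid

    zone_a_instance = str(uuid.uuid4())
    zone_b_instance = str(uuid.uuid4())

    assert zone_a_instance != zone_b_instance  # unique, no coordination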

Data collected from child zones arrives as encrypted blobs that may contain IDs or other zone-local information, but that information is not generally available to the parent zones. They're ephemeral magic cookies.
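(This is not Nova's actual mechanism, just a stand-in sketch of the "magic cookie" idea, using Fernet symmetric encryption from the `cryptography` package:)

    # The child zone encrypts zone-local details; the parent passes
    # the blob around but holds no key, so it's just opaque bytes.
    import json
    from cryptography.fernet import Fernet

    child_zone_key = Fernet.generate_key()  # known only to the child zone
    child = Fernet(child_zone_key)

    # Child packs zone-local info into an opaque blob for the parent.
    blob = child.encrypt(json.dumps(
        {"instance_id": 42, "host": "compute-07"}).encode())

    # When a request routes back down, the child decrypts its own cookie.
    local_info = json.loads(child.decrypt(blob))
    assert local_info["host"] == "compute-07"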

We don't do a lot of disk access in the distributed scheduler. Most stuff is in-memory and transient.

-S

________________________________________
From: openstack-bounces+sandy.walsh=rackspace.com at lists.launchpad.net [openstack-bounces+sandy.walsh=rackspace.com at lists.launchpad.net] on behalf of Devin Carlen [devin.carlen at gmail.com]
Sent: Monday, September 26, 2011 10:26 PM
To: Soren Hansen
Cc: openstack at lists.launchpad.net
Subject: Re: [Openstack] Nova DB Connection Pooling

We really need to hear from Sandy Walsh on this thread so he can elaborate on how the distributed scheduling works (with multiple MySQL databases).

Devin


On Sep 26, 2011, at 6:41 AM, Soren Hansen wrote:

> 2011/9/26 Pitucha, Stanislaw Izaak <stanislaw.pitucha at hp.com>:
>> The pain starts when your max memory usage crosses what you have available.
>> Check http://dev.mysql.com/doc/refman/5.1/en/memory-use.html - especially the comments, which calculate the memory needed for N connections for both InnoDB and MyISAM. (mysqltuner.pl will also calculate that for you.)
>>
>> Hundreds of connections should be ok. Thousands... you should rethink it ;)
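[To put rough numbers on the rule of thumb quoted above — buffer sizes below are illustrative defaults, not recommendations:]

    # max memory ~= global buffers + max_connections * per-thread buffers
    MiB = 1024 * 1024

    global_buffers = (
        128 * MiB   # innodb_buffer_pool_size (often far larger)
        + 8 * MiB   # key_buffer_size
    )

    per_thread = (
        256 * 1024    # sort_buffer_size
        + 128 * 1024  # read_buffer_size
        + 256 * 1024  # read_rnd_buffer_size
        + 256 * 1024  # join_buffer_size
        + 192 * 1024  # thread_stack
    )

    for max_connections in (100, 1000, 5000):
        total = global_buffers + max_connections * per_thread
        print("%5d connections -> ~%d MiB" % (max_connections, total // MiB))
    # 100 -> ~242 MiB, 1000 -> ~1198 MiB, 5000 -> ~5448 MiB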
>
> Hm... It doesn't take many racks full of blade servers to get into
> 4-digit numbers of compute nodes. Certainly fewer than I was expecting
> to see in a garden-variety Nova zone.
>
> --
> Soren Hansen        | http://linux2go.dk/
> Ubuntu Developer    | http://www.ubuntu.com/
> OpenStack Developer | http://www.openstack.org/
>






