[Openstack] [Scaling][Orchestration] Zone changes. WAS: [Question #185840]: Multi-Zone finally working on ESSEX but cant "nova list" (KeyError: 'uuid') + doubts

Sandy Walsh sandy.walsh at RACKSPACE.COM
Thu Jan 26 16:40:32 UTC 2012


Thanks Blake ... all very valid points.

Based on our discussions yesterday (the ink is still wet on the whiteboard), we've been kicking around numbers in the following ranges:

500-1000 hosts per zone (zone = single nova deployment: 1 db, 1 rabbit)
25-100 instances per host (at the minimum flavor)
3s API response time fully loaded (anything over that would be considered a failure). 'nova list' is the command that can bring down the house, but 'nova boot' is another concern; we're always trying to get more async operations in there. (Some rough math on these ranges below.)
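
To make the arithmetic concrete, here's a quick back-of-envelope sketch in Python using the ranges above (illustrative arithmetic only, not committed targets):

# Back-of-envelope capacity math for a single zone, using the
# ranges above. Illustrative only -- not committed targets.

HOSTS_PER_ZONE = (500, 1000)
INSTANCES_PER_HOST = (25, 100)  # at the minimum flavor

low = HOSTS_PER_ZONE[0] * INSTANCES_PER_HOST[0]
high = HOSTS_PER_ZONE[1] * INSTANCES_PER_HOST[1]

print("instances per zone: %d - %d" % (low, high))
# instances per zone: 12500 - 100000

So a fully loaded zone could be tracking anywhere from ~12.5k to 100k instances, which is why 'nova list' under load is the metric we watch.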

Hosts per zone is a tricky one because we run into so many issues around network architecture; your mileage may vary. Network is the limiting factor in this regard.

All of our design decisions are being made with these metrics in mind.

That said, we'd love to get more feedback on realistic metric expectations to ensure we're in the right church.

Hope this is what you're looking for?

-S


________________________________
From: Blake Yeager [blake.yeager at gmail.com]
Sent: Thursday, January 26, 2012 12:13 PM
To: Sandy Walsh
Cc: openstack at lists.launchpad.net
Subject: Re: [Openstack] [Scaling][Orchestration] Zone changes. WAS: [Question #185840]: Multi-Zone finally working on ESSEX but cant "nova list" (KeyError: 'uuid') + doubts

Sandy,

I am excited to hear about the work that is going on around communication between trusted zones and look forward to seeing what you have created.

In general, the scalability of Nova is an area where I think we need to put additional emphasis.  Rackspace has done a lot of work on zones, but that work doesn't seem to be receiving much support from the rest of the community.

The OpenStack mission statement indicates the mission of the project is: "To produce the ubiquitous Open Source cloud computing platform that will meet the needs of public and private cloud providers regardless of size, by being simple to implement and massively scalable."

I would challenge the community to ensure that scale is being given the appropriate focus in upcoming releases, especially Nova.  Perhaps we need to start by setting very specific scale targets for a single Nova zone in terms of nodes, instances, volumes, etc.  I did a quick search of the wiki but didn't find anything about scale targets.  Does anyone know if something exists and I am just missing it?

Obviously scale will depend a lot on your specific hardware and configuration, but we could start by saying: with this minimum hardware spec and this configuration, we want to be able to hit this scale.  Likewise, it would be nice to publish some statistics about the scale that we believe a given release can operate at safely.  This would tie into some of the QA/testing work that Jay & team are working on.
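
To make the idea concrete, a published target might look something like the sketch below (every number and field name here is made up, purely to show the shape such a spec could take):

# A purely hypothetical scale-target record, to illustrate publishing
# per-release targets against a reference hardware spec. None of these
# numbers are real project commitments.

ESSEX_SCALE_TARGETS = {
    'reference_hardware': '2x quad-core, 32GB RAM, 1GbE per node',
    'compute_nodes_per_zone': 500,
    'instances_per_zone': 12500,
    'volumes_per_zone': 5000,
    'max_api_response_seconds': 3.0,  # e.g. 'nova list' under full load
}

Something this simple, published per release, would give deployers a baseline and give the QA effort something concrete to test against.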

Does anyone have other thoughts about how we ensure we are all working toward building a massively scalable system?

-Blake

On Thu, Jan 26, 2012 at 9:20 AM, Sandy Walsh <sandy.walsh at rackspace.com> wrote:
The zones feature is going through some radical changes currently.

Specifically, we're planning to use direct Rabbit-to-Rabbit communication between trusted Zones to avoid the complication of changes to OS API, Keystone and novaclient.

To the user deploying Nova, not much will change: there may be a new service to deploy (a Zones service), but that would be all. To a developer, the code in the OS API will be greatly simplified, and the Distributed Scheduler will be able to focus on single-zone scheduling (vs. doing both zone and host scheduling as it does today).
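
To give a feel for the direction, here's a rough sketch of the rabbit-to-rabbit idea using kombu (the exchange name, routing key and message format below are invented for illustration; this is not the actual zones code):

# Sketch of zone-to-zone messaging straight over AMQP using kombu.
# NOT the actual Nova zones code: the exchange name, routing key and
# message format are invented for illustration.

from kombu import Connection, Exchange

zone_exchange = Exchange('zone_forward', type='topic', durable=True)

def forward_to_child_zone(child_rabbit_url, method, args):
    """Publish an RPC-style message directly onto a trusted child
    zone's rabbit, instead of proxying through its OS API."""
    with Connection(child_rabbit_url) as conn:
        producer = conn.Producer(serializer='json')
        producer.publish({'method': method, 'args': args},
                         exchange=zone_exchange,
                         routing_key='zone.scheduler',
                         declare=[zone_exchange])

# e.g. forward_to_child_zone('amqp://guest:guest@child-zone//',
#                            'run_instance', {'instance_uuid': '...'})

The point being that the parent zone never has to go through the child's OS API, Keystone or novaclient; the trust relationship lives at the AMQP level.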

We'll have more details soon, but we aren't planning on introducing the new stuff until we have a working replacement in place. The default Essex scheduler will remain largely the same, and the filters/weight functions will carry forward, so any investments there won't be lost.

Stay tuned, we're hoping to get all this in a new blueprint soon.

Hope it helps,
Sandy

________________________________________
From: bounces at canonical.com [bounces at canonical.com] on behalf of Alejandro Comisario [question185840 at answers.launchpad.net]
Sent: Thursday, January 26, 2012 8:50 AM
To: Sandy Walsh
Subject: Re: [Question #185840]: Multi-Zone finally working on ESSEX but cant "nova list" (KeyError: 'uuid') + doubts

Question #185840 on OpenStack Compute (nova) changed:
https://answers.launchpad.net/nova/+question/185840

   Status: Answered => Open

Alejandro Comisario is still having a problem:
Sandy, Vish!

Thanks for the replies! Let me get to the relevant points.

#1 I totally agree with you guys, the policy for spawning instances
may be very specific to each company's strategy, but since you can flip
from "Fill First" to "Spread First" just by adding a "reverse=True" in
"nova.scheduler.least_cost.weighted_sum" and
"nova.scheduler.distributed_scheduler._schedule", maybe it's a harmless
option to expose (since we are going to have a lot of zones across
datacenters, and many different departments are going to create many
instances to load-balance their applications, we really prefer
Spread First to ensure high availability of the pools).
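
Just to illustrate what I mean, here is a simplified sketch of the
weighting idea (the names are invented, not the exact
nova.scheduler.least_cost internals):

# Simplified sketch of least-cost weighting: each cost function scores
# hosts and the lowest total wins. Flipping the sign (the effect of a
# reversed sort) turns fill-first into spread-first. Not the exact
# nova.scheduler.least_cost code -- names are invented for illustration.

def free_ram_cost(host):
    # Fill-first: hosts with LESS free RAM score lower, so we pack
    # existing hosts before touching empty ones.
    return host['free_ram_mb']

def weighted_sum(hosts, weighted_fns, spread_first=False):
    sign = -1 if spread_first else 1
    scored = [(sign * sum(w * fn(host) for fn, w in weighted_fns), host)
              for host in hosts]
    scored.sort(key=lambda pair: pair[0])
    return [host for _, host in scored]

hosts = [{'name': 'h1', 'free_ram_mb': 512},
         {'name': 'h2', 'free_ram_mb': 4096}]

print(weighted_sum(hosts, [(free_ram_cost, 1.0)])[0]['name'])  # h1 (fill)
print(weighted_sum(hosts, [(free_ram_cost, 1.0)],
                   spread_first=True)[0]['name'])              # h2 (spread)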

#2 As we are going to test essex-3, I would like to know whether the
zones code from Chris Behrens is going to land in final Essex /
milestone 4, so we can keep testing other features, or whether you
would prefer us to file this as a bug to be fixed, since maybe the code
that broke is not going to have major changes.

Kindest regards!


