[Openstack-operators] Are cells what I want here?

Jonathan Proulx jon at jonproulx.com
Fri Jul 12 16:01:28 UTC 2013

On Fri, Jul 12, 2013 at 11:19 AM, Jay Pipes <jaypipes at gmail.com> wrote:

> On 07/12/2013 10:36 AM, Jonathan Proulx wrote:
>> I need one set of nodes to schedule with a 1:1 physical:virtual ratio,
>> an other using an over committed ratio (I'm thinking 8:1 in my
>> environment) and in a perfect world a third that does bare metal
>> provisioning both for TripleO purposes and for providing other baremetal
>> systems directly to my user base.  Each would need its own set of quotas.
> It is that last requirement -- having separate quotas -- that makes both
> cells and host aggregates inappropriate here. Neither cells nor host
> aggregates allow you to manage quotas separate from the tenant's compute
> quotas.

Ouch, I really thought I was going to get that one.

This deployment is in a research lab and we don't have any internal billing
mechanisms for compute resources.  In a more commercial use case I could
just bill more for the more valuable resources, and likely not worry so
much about quotas, hmmm...

> I think host aggregates is the way to go here, not cells, unless you are
> trying to solve scaling issues (doesn't sound like that's the case). But in
> either case, you will need to give up on the separate quota requirement --
> either that, or create an entirely separate deployment zone for each of
> your "types" of nodes. That will give you per-type quotas (because each
> deployment zone would have a separate nova database and associated quota
> data set). But at that point, I'll welcome you to the wonderful world of
> shared identity and image databases :)

No, I'm not trying to solve scaling.  I have one rack with 768 cores
(double that virtually with HT).  I'd like to multiply that by 3 or 5 in
the near (12 months?) future, but even at that I don't think I'm pushing
any current scaling boundaries.

I've looked at host aggregates and I like their simplicity, but there
doesn't seem to be a direct way to set a different cpu_allocation_ratio
per node, or a different compute_fill_first_cost_fn_weight (for 1:1 nodes
I want to fill depth-first so big chunks stay available for big flavors,
but for 8:1 nodes I want to fill breadth-first to reduce likely
contention).
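A standalone sketch of the kind of filter I have in mind, reading the
ratio from aggregate metadata with a global fallback -- this is plain
Python standing in for nova's filter classes, and the metadata key name
(cpu_allocation_ratio) and dict-shaped host state are my assumptions, not
existing nova behavior:

```python
# Hypothetical sketch, NOT actual nova code: a CoreFilter-style filter
# that takes its overcommit ratio from host-aggregate metadata if
# present, otherwise from a global default.

DEFAULT_CPU_ALLOCATION_RATIO = 8.0  # assumed global default


class AggregateRatioCoreFilter(object):
    """Pass a host only if the requested vCPUs fit under the allocation
    ratio defined in that host's aggregate metadata."""

    def host_passes(self, host_state, requested_vcpus):
        # host_state is a plain dict standing in for nova's HostState:
        # {'vcpus_total', 'vcpus_used', 'aggregate_metadata'}
        meta = host_state.get('aggregate_metadata', {})
        ratio = float(meta.get('cpu_allocation_ratio',
                               DEFAULT_CPU_ALLOCATION_RATIO))
        limit = host_state['vcpus_total'] * ratio
        return host_state['vcpus_used'] + requested_vcpus <= limit
```

So a 1:1 aggregate would carry cpu_allocation_ratio=1.0 in its metadata
and refuse any overcommit, while hosts outside it fall back to 8:1.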

If I can't get my pony without making a bunch of independent deployments
and then using magic to glue them back together, I can probably solve at
least the cpu_allocation_ratio part by replacing the scheduler's
CoreFilter; presumably I could also hack on
nova.scheduler.least_cost.compute_fill_first_cost_fn.
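The fill-first vs. spread-first half of the hack reduces to how you rank
hosts by free capacity.  A toy sketch (again plain Python, not nova's
actual least_cost API -- the function names and dict shape are mine):

```python
# Hypothetical sketch of the per-aggregate fill policy: rank candidate
# hosts by free vCPUs, preferring the fullest host (depth-first fill)
# for 1:1 aggregates and the emptiest host (breadth-first spread) for
# overcommitted ones.

def free_vcpus(host_state):
    """Unclaimed vCPU capacity on a host (plain dict stand-in)."""
    return host_state['vcpus_total'] - host_state['vcpus_used']


def pick_host(host_states, fill_first):
    """Return the preferred host: least free capacity when filling
    depth-first, most free capacity when spreading breadth-first."""
    if fill_first:
        return min(host_states, key=free_vcpus)
    return max(host_states, key=free_vcpus)
```

With that, the 1:1 aggregate packs instances onto the fullest viable
host so whole-machine-sized flavors still have somewhere to land, and
the 8:1 aggregate spreads load to keep contention down.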

