[Openstack-operators] Are cells what I want here?
Jay Pipes
jaypipes at gmail.com
Fri Jul 12 18:43:17 UTC 2013
Really interesting solution, Chris. Thanks for sharing! I had not
thought of that, but it certainly makes sense on paper.
Best,
-jay
On 07/12/2013 02:22 PM, Chris Behrens wrote:
>
> Agree with Jay that I'm not sure that cells is the right thing here.
> But I don't necessarily agree that cells has to address only scale
> issues, either. :) It's certainly easier to use cells to set up
> different scheduling policies in different cells. And for your quota
> problem, cells is going to come the closest to what you want.
>
> It was said that your quota issue is not solved by cells… but I'm not
> actually sure that is true. This is certainly not the normal way I
> would configure cells, but I suppose it's possible to do this:
>
> 1) Use the NoopQuotaDriver in your API cell.
> 2) Use the normal DbQuotaDriver in child cells.
>
> This is actually the opposite of how you normally configure cells. But
> the above configuration will give you different quota tracking per cell
> since each cell has its own DB. And it gives you *0* quota tracking at
> the API cell. This means that any quota-related API calls will not
> work, etc, but you might find that everything else works. I suppose
> this is a use case that could be considered wrt cells. An alternative
> to NoopQuotaDriver in the API cell is to just configure unlimited quotas
> there. :)
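>
> A rough, untested sketch of what that could look like in nova.conf --
> the quota_driver option name and driver class paths are assumed from
> the Grizzly-era quota code, so double-check them against your release:
>
>   # nova.conf in the API cell: no quota tracking at the top level
>   [DEFAULT]
>   quota_driver = nova.quota.NoopQuotaDriver
>
>   # nova.conf in each child cell: normal per-cell quota tracking
>   [DEFAULT]
>   quota_driver = nova.quota.DbQuotaDriver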
>
> Anyway, I only recommend trying this if you want to live slightly
> dangerously. :)
>
> - Chris
>
>
> On Jul 12, 2013, at 9:01 AM, Jonathan Proulx <jon at jonproulx.com> wrote:
>
>> On Fri, Jul 12, 2013 at 11:19 AM, Jay Pipes <jaypipes at gmail.com> wrote:
>>
>> On 07/12/2013 10:36 AM, Jonathan Proulx wrote:
>>
>>
>> I need one set of nodes to schedule with a 1:1 physical:virtual ratio,
>> another using an over-committed ratio (I'm thinking 8:1 in my
>> environment), and in a perfect world a third that does bare metal
>> provisioning, both for TripleO purposes and for providing other bare
>> metal systems directly to my user base. Each would need its own set
>> of quotas.
>>
>>
>> It is that last requirement -- having separate quotas -- that
>> makes both cells and host aggregates inappropriate here. Neither
>> cells nor host aggregates allow you to manage quotas separately from
>> the tenant's compute quotas.
>>
>>
>> Ouch, I really thought I was going to get that one.
>>
>> This deployment is in a research lab and we don't have any internal
>> billing mechanisms for compute resources. In a more commercial use
>> case I could just bill more for the more valuable resources, and
>> likely not worry so much about quotas, hmmm...
>>
>> I think host aggregates is the way to go here, not cells, unless
>> you are trying to solve scaling issues (doesn't sound like that's
>> the case). But in either case, you will need to give up on the
>> separate quota requirement -- either that, or create an entirely
>> separate deployment zone for each of your "types" of nodes. That
>> will give you per-type quotas (because each deployment zone would
>> have a separate nova database and associated quota data set). But
>> at that point, I'll welcome you to the wonderful world of shared
>> identity and image databases :)
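>>
>> Purely to illustrate that split (hostnames are made up and the option
>> names are roughly Grizzly-era, so treat this as a sketch), each
>> deployment keeps its own nova database while pointing at the one
>> shared keystone and glance:
>>
>>   # nova.conf for the "1:1" deployment zone (repeat per zone)
>>   [DEFAULT]
>>   sql_connection = mysql://nova:secret@db-1to1.example.org/nova   # per-zone DB
>>   glance_api_servers = glance.example.org:9292                    # shared image service
>>
>>   [keystone_authtoken]
>>   auth_host = keystone.example.org                                # shared identity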
>>
>>
>> No, I'm not trying to solve scaling. I have one rack with 768 cores
>> (double that virtually with HT). I'd like to multiply that by 3 or 5
>> in the near (12 months?) future, but even at that I don't think I'm
>> pushing any current scaling boundaries.
>>
>> I've looked at host aggregates and I like their simplicity, but there
>> doesn't seem to be a direct way to have a different
>> cpu_allocation_ratio or compute_fill_first_cost_fn_weight per node
>> (for 1:1 nodes I want to fill depth first so big chunks are available
>> for big flavors, but for 8:1 nodes I want to fill breadth first to
>> reduce likely contention).
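>>
>> For reference, both of those are global scheduler settings in
>> nova.conf, which is exactly the problem; the values below are only
>> illustrative and the sign convention on the fill-first weight is
>> worth verifying against the release in use, but there's no
>> per-aggregate or per-node override for either one:
>>
>>   # nova.conf on the scheduler -- applies to every compute node alike
>>   cpu_allocation_ratio = 8.0
>>   least_cost_functions = nova.scheduler.least_cost.compute_fill_first_cost_fn
>>   # negative weight spreads instances out; positive fills hosts first
>>   compute_fill_first_cost_fn_weight = -1.0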
>>
>> If I can't get my pony without making a bunch of independent
>> deployments and then using magic to glue them back together, I can
>> probably solve at least the cpu_allocation_ratio by replacing the
>> scheduler CoreFilter; presumably I could also hack
>> nova.scheduler.least_cost.compute_fill_first_cost_fn.
>> -Jon
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>