[openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

Andrew Laski andrew at lascii.com
Wed Dec 16 21:38:44 UTC 2015


On 12/16/15 at 12:40pm, James Penick wrote:
>>Affinity is mostly meaningless with baremetal. It's entirely a
>>virtualization related thing. If you try and group things by TOR, or
>>chassis, or anything else, it's going to start meaning something entirely
>>different than it means in Nova,
>
>I disagree; in fact, we need TOR and power affinity/anti-affinity for VMs
>as well as baremetal. As an example, there are cases where certain compute
>resources move significant amounts of data between one or two other
>instances, but you want to ensure those instances are not on the same
>hypervisor. In that scenario it makes sense to have instances on different
>hypervisors, but on the same TOR to reduce unnecessary traffic across the
>fabric.

I think the point was that affinity/anti-affinity as it's defined today 
within Nova does not have any real meaning for baremetal.  The scope is 
a single host, and baremetal won't have two instances on the same host, 
so by default you have anti-affinity and asking for affinity doesn't 
make sense.

There's a WIP spec proposing scoped policies for server groups that I 
think addresses the case you outlined: 
https://review.openstack.org/#/c/247654/.  It's affinity/anti-affinity 
at a different level.  It may help the discussion to differentiate 
between the general concept of affinity/anti-affinity, which could 
apply to many different scopes, and the current Nova definition of 
those concepts, which has a very specific scope.
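
To make that distinction concrete, here's a strawman in Python -- the 
policy names below are mine, not necessarily what the spec ends up 
proposing:

    # Today's server group: the (anti-)affinity policy is implicitly
    # scoped to a single compute host, which is a no-op for baremetal.
    current_group = {"name": "db-cluster", "policies": ["anti-affinity"]}

    # A scoped variant of the same concept: spread the group's members
    # across top-of-rack switches rather than across hosts.
    scoped_group = {"name": "db-cluster", "policies": ["tor:anti-affinity"]}

The host-scoped form says nothing useful for ironic nodes, but the 
TOR-scoped form still does.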


>
>>and it would probably be better to just
>>make lots of AZ's and have users choose their AZ mix appropriately,
>>since that is the real meaning of AZ's.
>
>Yes, at some level certain things should be expressed in the form of an AZ,
>power seems like a good candidate for that. But expressing something like
>a TOR as an AZ in an environment with hundreds of thousands of physical
>hosts would not scale. Further, it would require users to have a deeper
>understanding of datacenter topology, which is exactly the opposite of why
>IaaS exists.
>
>The whole point of a service-oriented infrastructure is to be able to give
>the end user the ability to boot compute resources that match a variety of
>constraints, and have those resources selected and provisioned for them, i.e.,
>"Give me 12 instances of m1.blah, all running Linux, and make sure they're
>spread across 6 different TORs and 2 different power domains in network
>zone Blah."


I think the above spec covers this.  The difference to me is that AZs 
require the user to think about absolute placements, while the spec 
offers a means to think about relative placements.
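
Roughly, and with made-up hint names just to illustrate the shape of 
the request rather than any real API:

    # Absolute placement: the user has to know the topology (which AZ
    # maps to which power domain/TOR) before they can even ask.
    absolute = {"flavor": "m1.blah", "count": 12,
                "availability_zone": "iad-power2-tor17"}

    # Relative placement: the user states the spread they want and the
    # scheduler decides where that actually lands.
    relative = {"flavor": "m1.blah", "count": 12,
                "policies": {"tor": {"anti-affinity": 6},
                             "power": {"anti-affinity": 2}}}

The second form is your "12 instances across 6 TORs and 2 power 
domains" request without the user ever having to name a TOR.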


>
>On Wed, Dec 16, 2015 at 10:38 AM, Clint Byrum <clint at fewbar.com> wrote:
>
>> Excerpts from Jim Rollenhagen's message of 2015-12-16 08:03:22 -0800:
>> > Nobody is talking about running a compute per flavor or capability. All
>> > compute hosts will be able to handle all ironic nodes. We *do* still
>> > need to figure out how to handle availability zones or host aggregates,
>> > but I expect we would pass along that data to be matched against. I
>> > think it would just be metadata on a node. Something like
>> > node.properties['availability_zone'] = 'rackspace-iad-az3' or what have
>> > you. Ditto for host aggregates - add the metadata to ironic to match
>> > what's in the host aggregate. I'm honestly not sure what to do about
>> > (anti-)affinity filters; we'll need help figuring that out.
>> >
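
For what it's worth, stashing that metadata on a node is easy enough 
today; the open question is entirely on the matching side.  A rough 
sketch with python-ironicclient (auth details elided, and nothing in 
the scheduler consumes this property yet):

    from ironicclient import client

    # Sketch only: record the AZ on the node so the compute host(s)
    # could pass it along for matching against aggregates/AZs.
    ironic = client.get_client(1, os_username='admin',
                               os_password='secret',
                               os_tenant_name='admin',
                               os_auth_url='http://keystone:5000/v2.0')
    ironic.node.update('NODE_UUID', [
        {'op': 'add',
         'path': '/properties/availability_zone',
         'value': 'rackspace-iad-az3'},
    ])
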
>>
>> Affinity is mostly meaningless with baremetal. It's entirely a
>> virtualization related thing. If you try and group things by TOR, or
>> chassis, or anything else, it's going to start meaning something entirely
>> different than it means in Nova, and it would probably be better to just
>> make lots of AZ's and have users choose their AZ mix appropriately,
>> since that is the real meaning of AZ's.
>>