<tt><font size=2>Mike Spreitzer &lt;mspreitz@us.ibm.com&gt; wrote on 01/10/2013
06:58:10 AM:<br>
&gt; Alex Glikson &lt;GLIKSON@il.ibm.com&gt; wrote on 09/29/2013 03:30:35
PM:<br>
&gt; &gt; Mike Spreitzer &lt;mspreitz@us.ibm.com&gt; wrote on 29/09/2013
08:02:00 PM:<br>
> > <br>
> > > Another reason to prefer host is that we have other resources
to <br>
> > > locate besides compute. <br>
> > <br>
> > Good point. Another approach (not necessarily contradicting)
could <br>
> > be to specify the location as a property of host aggregate rather
<br>
> > than individual hosts (and introduce similar notion in Cinder,
and <br>
> > maybe Neutron). This could be an evolution/generalization of
the <br>
> > existing 'availability zone' attribute, which would specify a
more <br>
> > fine-grained location path (e.g., <br>
> > 'az_A:rack_R1:chassis_C2:node_N3'). We briefly discussed this
<br>
> > approach at the previous summit (see 'simple implementation'
under <br>
> > </font></tt><a href=https://etherpad.openstack.org/HavanaTopologyAwarePlacement><tt><font size=2>https://etherpad.openstack.org/HavanaTopologyAwarePlacement</font></tt></a><tt><font size=2>)
-- but <br>
> > unfortunately I don't think we made much progress with the actual
<br>
> > implementation in Havana (would be good to fix this in Icehouse).
<br>
> <br>
> Thanks for the background. I can still see the etherpad, but
the <br>
> old summit proposal to which it points is gone. </font></tt>
<br>
<br><tt><font size=2>The proposal didn't contain much detail -- the main
tool used at summit sessions is the etherpad.</font></tt>
<br><tt><font size=2><br>
> The etherpad proposes an API, and leaves open the question of <br>
> whether it backs onto a common service. I think that is a key
<br>
> question. In my own group's work, this sort of information is
<br>
> maintained in a shared database. I'm not sure what is the right
<br>
> approach for OpenStack.</font></tt>
<br>
<br><tt><font size=2>IMO, it does make sense to have a service which maintains
the physical topology. Tuskar sounds like a good candidate. It could then
'feed' Nova/Cinder/Neutron with the relevant aggregation entities (such as
host aggregates in Nova) and their attributes, to be used by the scheduler
within each of them. Alternatively, this could be done by the administrator
(manually, or using other tools).</font></tt>
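<br><tt><font size=2>To make the fine-grained location path idea concrete,
here is a minimal sketch of how a scheduler filter might parse and compare
such paths. The 'az_A:rack_R1:...' format is taken from the earlier message;
the function names and the idea of a common-prefix proximity measure are
illustrative assumptions, not an existing Nova API.</font></tt>

```python
# Hypothetical sketch (not existing Nova code): interpret a hierarchical
# location path such as 'az_A:rack_R1:chassis_C2:node_N3', assumed to be
# stored as metadata on a host aggregate, and measure how "close" two
# hosts are by the number of topology levels they share.

def parse_location(path):
    """Split a colon-separated location path into its ordered levels."""
    return path.split(":")

def common_prefix_depth(loc_a, loc_b):
    """Count the leading levels two locations share (a proximity measure).

    Depth 0 means different availability zones; a higher depth means the
    hosts sit closer together in the physical topology.
    """
    depth = 0
    for a, b in zip(parse_location(loc_a), parse_location(loc_b)):
        if a != b:
            break
        depth += 1
    return depth

# Same AZ and rack, different chassis -> the first two levels match.
print(common_prefix_depth("az_A:rack_R1:chassis_C2:node_N3",
                          "az_A:rack_R1:chassis_C5:node_N7"))  # → 2
```

<br><tt><font size=2>A scheduler could use such a depth value either to
spread instances (prefer a low common depth) or to pack them (prefer a
high one), which is one way the same aggregate attribute could serve both
Nova and Cinder placement decisions.</font></tt>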
<br>
<br><tt><font size=2>Regards,</font></tt>
<br><tt><font size=2>Alex</font></tt>
<br><tt><font size=2><br>
> <br>
> Thanks, <br>
> Mike</font></tt>