[openstack-dev] [TripleO][Tuskar] Domain Model Locations

Clint Byrum clint at fewbar.com
Fri Jan 17 04:59:09 UTC 2014


Excerpts from Jay Pipes's message of 2014-01-12 11:40:41 -0800:
> On Fri, 2014-01-10 at 10:28 -0500, Jay Dobies wrote:
> > >> So, it's not as simple as it may initially seem :)
> > >
> > > Ah, I should have been clearer in my statement - my understanding is that
> > > we're scrapping concepts like Rack entirely.
> > 
> > That was my understanding as well. The existing Tuskar domain model was 
> > largely placeholder/proof of concept and didn't necessarily reflect 
> > exactly what was desired/expected.
> 
> Hmm, so this is a bit disappointing, though I may be less disappointed
> if I knew that Ironic (or something else?) planned to account for
> datacenter inventory in a more robust way than is currently modeled.
> 
> If TripleO/Ironic/Tuskar are indeed meant to be the deployment tooling
> that an enterprise would use to deploy bare-metal hardware in a
> continuous fashion, then the modeling of racks, and the attributes of
> those racks -- location, power supply, etc. -- are a critical part of the
> overall picture.
> 

To be clear, the first goal is to have them be the deployment tooling that
_somebody_ would use in production. "Enterprise" is pretty amorphous. If
I'm running a start-up, but it's a start-up that puts all of its money
into a 5000-node public cloud, am I enterprise?

Nothing in the direction that has been laid out precludes Tuskar and
Ironic from consuming one of the _many_ data center inventory management
solutions and CMDBs that exist now.

If there is a need for OpenStack to grow one, I think we will. Lord
knows we've reinvented half the rest of the things we needed. ;-)

For now I think Tuskar should focus on feeding multiple groups into Nova,
and Nova and Ironic should focus on making sure they can handle multiple
group memberships for compute resources and schedule appropriately. Do
that and it will be relatively straightforward to adapt to racks, pods,
power supplies, or cooling towers.
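
To make "multiple group memberships" concrete, here's a toy sketch in
plain Python (no Nova or Ironic internals; every name and tag below is
made up) of compute resources carrying several group tags at once, with
scheduling reduced to filtering on those tags:

    # Toy model: each compute resource belongs to several groups at once.
    # All names/tags are hypothetical, not Nova/Ironic data structures.
    nodes = {
        'node-1': {'rack-A3', 'psu-8kW', 'pod-east'},
        'node-2': {'rack-A3', 'psu-16kW', 'pod-east'},
        'node-3': {'rack-B1', 'psu-16kW', 'pod-west'},
    }

    def matches(groups, required, excluded=frozenset()):
        """True if a node carries every required tag and no excluded one."""
        return required <= groups and not (excluded & groups)

    # "Land this workload in the east pod, but keep it off 8kW racks."
    candidates = [name for name, groups in nodes.items()
                  if matches(groups, {'pod-east'}, {'psu-8kW'})]
    # -> ['node-2']

The point being: once group membership is a first-class property of a
compute resource, "rack", "pod", or "cooling tower" is just another tag.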

> As an example of why something like power supply is important... inside
> AT&T, we had both 8kW and 16kW power supplies in our datacenters. For a
> 42U or 44U rack, deployments would be limited to a certain number of
> compute nodes, based on that power supply.
> 
> The average power draw for a particular vendor model of compute worker
> would be used in determining the level of compute node packing that
> could occur for that rack type within a particular datacenter. This was
> a fundamental part of datacenter deployment and planning. If the tooling
> intended to do bare-metal deployment of OpenStack in a continual manner
> does not plan to account for these kinds of things, then the chances
> that tooling will be used in enterprise deployments are diminished.
> 

Right, the math can be done in advance and racks/PSUs/boxes grouped
appropriately. Packing is one of those things that we need a "holistic"
scheduler for to be fully automated. I'm not convinced that is even a
mid-term win, when there are so many big use cases that can be handled
with so much less complexity.
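
As a back-of-the-envelope example of doing that math in advance (all
figures below are invented, not from any real deployment):

    # Hypothetical packing math for one rack; every figure is made up.
    supply_watts = 8000      # 8kW rack power supply
    headroom = 0.8           # keep 20% of capacity in reserve
    node_watts = 350         # measured average draw per compute node

    nodes_per_rack = int(supply_watts * headroom / node_watts)
    print(nodes_per_rack)    # -> 18

Group the racks by the resulting capacity up front and you get most of
the benefit without putting a holistic scheduler in the loop.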

> And, as we all know, when something isn't used, it withers. That's the
> last thing I want to happen here. I want all of this to be the
> bare-metal deployment tooling that is used *by default* in enterprise
> OpenStack deployments, because the tooling "fits" the expectations of
> datacenter deployers.
> 
> It doesn't have to be done tomorrow :) It just needs to be on the map
> somewhere. I'm not sure if Ironic is the place to put this kind of
> modeling -- I thought Tuskar was going to be that thing. But really,
> IMO, it should be on the roadmap somewhere.

I agree; however, I think the primitive capabilities, informed by helpful
use cases such as the one you describe above, need to be understood
before we go off and try to model a UI around them.


