[openstack-dev] [TripleO][Tuskar] Domain Model Locations

Dougal Matthews dougal at redhat.com
Thu Jan 9 20:52:05 UTC 2014

I'm glad we are hashing this out, as I think there is still some debate 
around whether Tuskar will need a database at all.

One thing to bear in mind: we need to make sure the terminology 
matches what was described in the previous thread. I think it mostly does 
here, but I'm not sure the Tuskar models do.

A few comments below.

On 09/01/14 17:22, Jay Dobies wrote:
> = Nodes =
> A node is a baremetal machine on which the overcloud resources will be
> deployed. The ownership of this information lies with Ironic. The Tuskar
> UI will accept the needed information to create them and pass it to
> Ironic. Ironic is consulted directly when information on a specific node
> or the list of available nodes is needed.
> = Resource Categories =
> A specific type of "thing" that will be deployed into the overcloud.

nit - Won't they be deployed into the undercloud to form the overcloud?

> These are static definitions that describe the entities the user will
> want to add to the overcloud and are owned by Tuskar. For Icehouse, the
> categories themselves are added during installation for the four types
> listed in the wireframes.
> Since this is a new model (as compared to other things that live in
> Ironic or Heat), I'll go into some more detail. Each Resource Category
> has the following information:
> == Metadata ==
> My intention here is that we do things in such a way that if we change
> one of the original 4 categories, or more importantly add more or allow
> users to add more, the information about the category is centralized and
> not reliant on the UI to provide the user information on what it is.
> ID - Unique ID for the Resource Category.
> Display Name - User-friendly name to display.
> Description - Equally self-explanatory.
> == Count ==
> In the Tuskar UI, the user selects how many of each category is desired.
> This is stored in Tuskar's domain model for the category and is used when
> generating the template to pass to Heat to make it happen.
> These counts are what is displayed to the user in the Tuskar UI for each
> category. The staging concept has been removed for Icehouse. In other
> words, the wireframes that cover the "waiting to be deployed" aren't
> relevant for now.
> == Image ==
> For Icehouse, each category will have one image associated with it. Last
> I remember, there was discussion on whether or not we need to support
> multiple images for a category, but for Icehouse we'll limit it to 1 and
> deal with it later.

+1, that matches my recollection.
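As an aside, the Resource Category data described above (metadata, count, 
single image UUID) is simple enough to sketch as a small model. Something 
like the following -- field names are my guesses for illustration, not a 
settled Tuskar API:

```python
from dataclasses import dataclass


@dataclass
class ResourceCategory:
    """Illustrative sketch of the Resource Category model described
    above; attribute names are hypothetical, not the actual API."""
    id: str               # unique ID for the category
    display_name: str     # user-friendly name shown in the Tuskar UI
    description: str
    count: int = 0        # how many of this category the user wants
    image_uuid: str = ""  # Glance UUID of the one image (Icehouse limit)


# e.g. one of the four categories added at installation time
controller = ResourceCategory(
    id="controller",
    display_name="Controller",
    description="Overcloud control plane",
    count=1,
    image_uuid="4f4c1f5a-placeholder-uuid",
)
```

The point being that count and image UUID live alongside the metadata in 
Tuskar, while the image bits themselves stay in Glance.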

> Metadata for each Resource Category is owned by the Tuskar API. The
> images themselves are managed by Glance, with each Resource Category
> keeping track of just the UUID for its image.
> = Stack =
> There is a single stack in Tuskar, the "overcloud". The Heat template
> for the stack is generated by the Tuskar API based on the Resource
> Category data (image, count, etc.). The template is handed to Heat to
> execute.
> Heat owns information about running instances and is queried directly
> when the Tuskar UI needs to access that information.
> ----------
> Next steps for me are to start to work on the Tuskar APIs around
> Resource Category CRUD and their conversion into a Heat template.
> There's some discussion to be had there as well, but I don't want to put
> too much into one e-mail.
> Thoughts?

There are a number of other models in the tuskar code[1]; do we need to 
consider those now too?
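On the template generation point: presumably it boils down to expanding 
each category's count and image into Heat resources. A very rough sketch 
of what I imagine that looks like (the resource type and properties here 
are placeholders, not what Tuskar will actually emit):

```python
def generate_overcloud_template(categories):
    """Hypothetical sketch: expand Resource Category data (id, count,
    image_uuid) into a Heat-template-shaped dict. The resource type and
    property names are illustrative placeholders only."""
    resources = {}
    for cat in categories:
        # one resource entry per requested instance of the category
        for i in range(cat["count"]):
            resources["%s-%d" % (cat["id"], i)] = {
                "type": "OS::Nova::Server",  # placeholder type
                "properties": {"image": cat["image_uuid"]},
            }
    return {
        "heat_template_version": "2013-05-23",
        "resources": resources,
    }


template = generate_overcloud_template(
    [{"id": "compute", "count": 2, "image_uuid": "abc-123"}]
)
```

That conversion step is probably where most of the interesting API 
discussion will be.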

