[openstack-dev] [TripleO][Tuskar] Domain Model Locations

Imre Farkas ifarkas at redhat.com
Fri Jan 10 09:45:13 UTC 2014


Thanks Jay, this is a very useful summary! Some comments inline:

On 01/09/2014 06:22 PM, Jay Dobies wrote:
> I'm trying to hash out where data will live for Tuskar (both long term
> and for its Icehouse deliverables). Based on the expectations for
> Icehouse (a combination of the wireframes and what's in Tuskar client's
> api.py), we have the following concepts:
>
>
> = Nodes =
> A node is a baremetal machine on which the overcloud resources will be
> deployed. The ownership of this information lies with Ironic. The Tuskar
> UI will accept the needed information to create them and pass it to
> Ironic. Ironic is consulted directly when information on a specific node
> or the list of available nodes is needed.
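(To make the ownership split concrete: a minimal sketch of the UI side
asking Ironic directly for node data, assuming python-ironicclient and
undercloud credentials; none of this is the actual Tuskar-ui code.)

    # Sketch only: nodes are read straight from Ironic, not cached in Tuskar.
    from ironicclient import client as ironic_client

    ironic = ironic_client.get_client(
        1,                                  # Ironic API version
        os_username='admin',                # undercloud credentials (assumed)
        os_password='password',
        os_tenant_name='admin',
        os_auth_url='http://undercloud:5000/v2.0')

    for node in ironic.node.list():
        print('%s %s' % (node.uuid, node.power_state))
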
>
>
> = Resource Categories =
> A specific type of "thing" that will be deployed into the overcloud.
> These are static definitions that describe the entities the user will
> want to add to the overcloud and are owned by Tuskar. For Icehouse, the
> categories themselves are added during installation for the four types
> listed in the wireframes.
>
> Since this is a new model (as compared to other things that live in
> Ironic or Heat), I'll go into some more detail. Each Resource Category
> has the following information:
>
> == Metadata ==
> My intention here is that if we change one of the original four
> categories, or more importantly add more or allow users to add more, the
> information about the category is centralized and not reliant on the UI
> to provide the user with information about what it is.
>
> ID - Unique ID for the Resource Category.
> Display Name - User-friendly name to display.
> Description - Equally self-explanatory.
>
> == Count ==
> In the Tuskar UI, the user selects how many of each category are desired.
> This is stored in Tuskar's domain model for the category and is used when
> generating the template to pass to Heat to make it happen.
>
> These counts are what is displayed to the user in the Tuskar UI for each
> category. The staging concept has been removed for Icehouse. In other
> words, the wireframes that cover the "waiting to be deployed" state
> aren't relevant for now.
>
> == Image ==
> For Icehouse, each category will have one image associated with it. Last
> I remember, there was discussion on whether or not we need to support
> multiple images for a category, but for Icehouse we'll limit it to 1 and
> deal with it later.
>
> Metadata for each Resource Category is owned by the Tuskar API. The
> images themselves are managed by Glance, with each Resource Category
> keeping track of just the UUID for its image.
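Just to make the shape of that data concrete, here is roughly what a
Resource Category record would carry on the Tuskar side (field names are
illustrative, not a schema proposal):

    # Illustrative only -- not the actual Tuskar model.
    class ResourceCategory(object):
        def __init__(self, id, name, description, count, image_id):
            self.id = id                    # unique ID for the category
            self.name = name                # user-friendly display name
            self.description = description
            self.count = count              # how many the user asked for
            self.image_id = image_id        # Glance UUID; the image itself lives in Glance

    compute = ResourceCategory(id='compute', name='Compute',
                               description='Nova compute nodes',
                               count=3, image_id='<glance-image-uuid>')
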
>
>
> = Stack =
> There is a single stack in Tuskar, the "overcloud".
A small nit here: in the long term Tuskar will support multiple overclouds.

> The Heat template
> for the stack is generated by the Tuskar API based on the Resource
> Category data (image, count, etc.). The template is handed to Heat to
> execute.
>
> Heat owns information about running instances and is queried directly
> when the Tuskar UI needs to access that information.
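Something like the following is how I picture that flow, assuming
python-heatclient; generate_template() is a hypothetical Tuskar helper,
not existing code:

    # Sketch: Tuskar builds the template from category data and hands it to Heat.
    from heatclient.client import Client as HeatClient

    heat = HeatClient('1', endpoint='http://undercloud:8004/v1/<tenant-id>',
                      token='<auth-token>')

    template_body = generate_template(categories)   # uses image + count per category
    heat.stacks.create(stack_name='overcloud',
                       template=template_body,
                       parameters={})

    # Running-instance state is read back from Heat rather than stored in Tuskar:
    stack = heat.stacks.get('overcloud')
    for resource in heat.resources.list(stack.id):
        print('%s %s' % (resource.resource_name, resource.resource_status))
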
>
> ----------
>
> Next steps for me are to start to work on the Tuskar APIs around
> Resource Category CRUD and their conversion into a Heat template.
> There's some discussion to be had there as well, but I don't want to put
> too much into one e-mail.
>
>
> Thoughts?

There are a few concepts which I think are missing from the list:
- overclouds: after Heat has successfully created the stack, Tuskar needs 
to keep track of whether it has applied the post-configuration steps 
(Keystone initialization, registering services, etc.) or not. It also 
needs to know the name of the stack (only 1 stack, named 'overcloud', for 
Icehouse).
- service endpoints of an overcloud: e.g. Tuskar-ui in the undercloud 
will need the URL of the overcloud Horizon. The overcloud Keystone owns 
this information (once post-configuration is done) and Heat owns the 
information about the overcloud Keystone; a sketch of such a lookup 
follows below this list.
- user credentials for an overcloud: these will be used by Heat during 
stack creation, by Tuskar during post-configuration, by Tuskar-ui when 
querying various information (e.g. running VMs on a node) and finally by 
the user logging in to the overcloud Horizon. Currently they can be found 
in the Tuskar-ui settings file [1].
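
To illustrate the endpoint case, a rough sketch of looking up the
overcloud Horizon URL from the overcloud Keystone, assuming
post-configuration registers Horizon in the service catalog under a
'dashboard' service type (that registration step is an assumption, not
something that exists today):

    # Sketch only, using python-keystoneclient against the *overcloud* Keystone.
    from keystoneclient.v2_0 import client as keystone_client

    keystone = keystone_client.Client(
        username='admin',                        # overcloud credentials, see [1]
        password='<overcloud-admin-password>',
        tenant_name='admin',
        auth_url='http://<overcloud-ip>:5000/v2.0')   # address known to Heat

    dashboards = [s for s in keystone.services.list() if s.type == 'dashboard']
    for endpoint in keystone.endpoints.list():
        if dashboards and endpoint.service_id == dashboards[0].id:
            print(endpoint.publicurl)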

Imre

[1] https://github.com/openstack/tuskar-ui/blob/master/local_settings.py.example#L351



