[openstack-dev] [TripleO][Tuskar] Icehouse Requirements

Tzu-Mainn Chen tzumainn at redhat.com
Fri Dec 6 20:26:02 UTC 2013


Thanks for the comments!  Responses inline:

> Disclaimer: I'm very new to the project, so apologies if some of my
> questions have been already answered or flat out don't make sense.
> 
> As I proofread, some of my comments may drift a bit past basic
> requirements, so feel free to tell me to take certain questions out of
> this thread into specific discussion threads if I'm getting too detailed.
> 
> > --------------------------------
> >
> > *** Requirements are assumed to be targeted for Icehouse, unless marked
> > otherwise:
> >     (M) - Maybe Icehouse, dependency on other in-development features
> >     (F) - Future requirement, after Icehouse
> >
> > * NODES
> >     * Creation
> >        * Manual registration
> >           * hardware specs from Ironic based on mac address (M)
> >           * IP auto populated from Neutron (F)
> >        * Auto-discovery during undercloud install process (M)
> >     * Monitoring
> >         * assignment, availability, status
> >         * capacity, historical statistics (M)
> >     * Management node (where TripleO is installed)
> >         * created as part of undercloud install process
> >         * can create additional management nodes (F)
> >      * Resource nodes
> >          * searchable by status, name, cpu, memory, and all attributes from
> >          Ironic
> >          * can be allocated as one of four node types
> 
> It's pretty clear by the current verbiage but I'm going to ask anyway:
> "one and only one"?

Yep, that's right!

> >              * compute
> >              * controller
> >              * object storage
> >              * block storage
> >          * Resource class - allows for further categorization of a node
> >          type
> >              * each node type specifies a single default resource class
> >                  * allow multiple resource classes per node type (M)
> 
> My gut reaction is that we want to bite this off sooner rather than
> later. This will have data model and API implications that, even if we
> don't commit to it for Icehouse, should still be in our minds during that
> cycle, so it might make sense to make it a first-class thing to nail down now.

That is entirely correct, which is one reason it's on the list of requirements.  The
forthcoming API design will have to account for it.  Not recreating the entire data
model between releases is a key goal :)
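
For the sake of discussion, here's a minimal sketch of how that relationship
might be modeled so the (M) item doesn't force a schema rewrite later.  All
names here are illustrative, not the actual Tuskar data model:

    # Illustrative only -- class and field names are hypothetical.
    NODE_TYPES = ('compute', 'controller', 'object storage', 'block storage')

    class ResourceClass(object):
        """Further categorization within a node type; the optional
        profile acts as an allocation filter for its nodes."""
        def __init__(self, name, node_type, profile=None):
            assert node_type in NODE_TYPES
            self.name = name
            self.node_type = node_type
            self.profile = profile  # e.g. {'min_ram_mb': 8192}

    class NodeType(object):
        """Each node type has a single default resource class for now,
        but storing a list means that supporting multiple classes per
        type later changes behavior, not the schema."""
        def __init__(self, name, default_class):
            self.name = name
            self.resource_classes = [default_class]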


> >              * optional node profile for a resource class (M)
> >                  * acts as filter for nodes that can be allocated to that
> >                  class (M)
> 
> To my understanding, once this is in Icehouse, we'll have to support
> upgrades. If this filtering is pushed off, could we get into a situation
> where an allocation created in Icehouse would no longer be valid in
> Icehouse+1 once these filters are in place? If so, we might want to make
> it more of a priority to get them in place earlier and not eat the
> headache of addressing these sorts of integrity issues later.

That's true.  The problem is that, to my understanding, the filters we'd
need in nova-scheduler are not yet fully in place.

I also think that this is an issue that we'll need to address no matter what.
Even once filters exist, if a user applies a filter *after* nodes are allocated,
we'll need to do something clever if the already-allocated nodes don't meet the
filter criteria.
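
To make the apply-a-filter-after-allocation case concrete, here's a rough
sketch of the check I think we'd need; the attribute and profile names are
placeholders for whatever Ironic actually reports:

    # Hypothetical sketch -- 'ram_mb', 'cpus', and the profile keys are
    # stand-ins, not real Ironic attribute names.
    def violating_nodes(allocated_nodes, profile):
        """Return already-allocated nodes that no longer satisfy a newly
        applied node profile, so we can warn rather than silently evict."""
        return [node for node in allocated_nodes
                if node.ram_mb < profile.get('min_ram_mb', 0)
                or node.cpus < profile.get('min_cpus', 0)]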

> >          * nodes can be viewed by node types
> >                  * additional group by status, hardware specification
> >          * controller node type
> >             * each controller node will run all openstack services
> >                * allow each node to run specified service (F)
> >             * breakdown by workload (percentage of cpu used per node) (M)
> >      * Unallocated nodes
> 
> Is there more still being fleshed out here? Things like:
>   * Listing unallocated nodes
>   * Unallocating a previously allocated node (does this make it a
> vanilla resource or does it retain the resource type? is this the only
> way to change a node's resource type?)
>   * Unregistering nodes from Tuskar's inventory (I put this under
> unallocated under the assumption that the workflow will be an explicit
> unallocate before unregister; I'm not sure if this is the same as
> "archive" below).

Ah, you're entirely right.  I'll add these to the list.
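
As a strawman for the explicit unallocate-before-unregister workflow (and
for the open question of whether an unallocated node keeps its resource
type), with every name below hypothetical:

    # Strawman only -- 'state', 'resource_type', and these functions are
    # hypothetical, not an existing Tuskar API.
    def unallocate(node, retain_type=False):
        """Return a node to the unallocated pool; whether it keeps its
        resource type is exactly the open question above."""
        node.state = 'unallocated'
        if not retain_type:
            node.resource_type = None

    def unregister(inventory, node):
        """Remove a node from the inventory, requiring an explicit
        unallocate first."""
        if node.state != 'unallocated':
            raise ValueError('unallocate %s before unregistering' % node.uuid)
        inventory.remove(node)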

> >      * Archived nodes (F)
> 
> Can you elaborate a bit more on what this is?

To be honest, I'm a bit fuzzy about this myself; Jarda mentioned that an
OpenStack service to handle this requirement is in the planning stages.
Jarda, can you give more detail?

Thanks again for the comments!


Mainn


