[openstack-dev] [TripleO][Tuskar] Icehouse Requirements

Robert Collins robertc at robertcollins.net
Sat Dec 7 00:59:59 UTC 2013

Thanks for doing this!

On 6 December 2013 15:31, Tzu-Mainn Chen <tzumainn at redhat.com> wrote:
> Hey all,
> I've attempted to spin out the requirements behind Jarda's excellent wireframes (http://lists.openstack.org/pipermail/openstack-dev/2013-December/020944.html).
> Hopefully this can add some perspective on both the wireframes and the needed changes to the tuskar-api.
> All comments are welcome!
> Thanks,
> Tzu-Mainn Chen
> --------------------------------
> *** Requirements are assumed to be targeted for Icehouse, unless marked otherwise:
>    (M) - Maybe Icehouse, dependency on other in-development features
>    (F) - Future requirement, after Icehouse

Note that everything in this section should be Ironic API calls.

>    * Creation
>       * Manual registration
>          * hardware specs from Ironic based on mac address (M)

Ironic today will want IPMI address + MAC for each NIC + disk/cpu/memory stats

>          * IP auto populated from Neutron (F)

Do you mean the IPMI IP? I'd say 'IPMI address managed by Neutron' here.

>       * Auto-discovery during undercloud install process (M)
>    * Monitoring
>        * assignment, availability, status
>        * capacity, historical statistics (M)

Why is this under 'nodes'? I challenge the idea that it should be
there. We will need to surface some information about nodes, but the
underlying idea is to take a cloud approach here - so we're monitoring
services, which happen to run on nodes. There is room to monitor nodes
as an undercloud feature set, but let's be very, very specific about
what sits at which layer.

>    * Management node (where triple-o is installed)

This should be plural :) - TripleO isn't a single service to be
installed - we've got Tuskar, Ironic, Nova, Glance, Keystone, Neutron,
and so on.

>        * created as part of undercloud install process
>        * can create additional management nodes (F)
>     * Resource nodes

                        ^ 'nodes' is again confusing layers - nodes are
what things are deployed to, but they aren't the entry point.

>         * searchable by status, name, cpu, memory, and all attributes from ironic
>         * can be allocated as one of four node types

Not by users, though. We need to stop thinking of this as 'what we do
to nodes' - Nova/Ironic operate on nodes; we operate on Heat.

>             * compute
>             * controller
>             * object storage
>             * block storage
>         * Resource class - allows for further categorization of a node type
>             * each node type specifies a single default resource class
>                 * allow multiple resource classes per node type (M)

What's a node type?

>             * optional node profile for a resource class (M)
>                 * acts as filter for nodes that can be allocated to that class (M)

I'm not clear on this - you can list the nodes that have had a
particular thing deployed on them; we can probably get a good answer
to showing which nodes a particular flavor can deploy to, but we
don't want to second-guess the scheduler.
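For illustration only, the naive version of "which nodes could this flavor land on" is just matching flavor requirements against node hardware specs - which is exactly the second-guessing I mean, since the real answer belongs to the scheduler. Field names here are assumptions:

```python
# Illustrative only: a naive version of "which nodes could this flavor
# deploy to", matching flavor requirements against node hardware specs.
# The real answer belongs to the Nova scheduler, not the UI.
def nodes_matching_flavor(nodes, flavor):
    """Return nodes whose cpu/memory/disk meet or exceed the flavor."""
    return [
        n for n in nodes
        if n["cpus"] >= flavor["vcpus"]
        and n["memory_mb"] >= flavor["ram"]
        and n["local_gb"] >= flavor["disk"]
    ]

nodes = [
    {"name": "node-1", "cpus": 8, "memory_mb": 16384, "local_gb": 500},
    {"name": "node-2", "cpus": 2, "memory_mb": 4096, "local_gb": 100},
]
flavor = {"vcpus": 4, "ram": 8192, "disk": 200}
# Only node-1 satisfies this flavor's requirements.
```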

>         * nodes can be viewed by node types
>                 * additional group by status, hardware specification

*Instances* - e.g. hypervisors, storage, block storage, etc.

>         * controller node type

Again, we need to get away from 'node type' here.

>            * each controller node will run all openstack services
>               * allow each node to run specified service (F)
>            * breakdown by workload (percentage of cpu used per node) (M)
>     * Unallocated nodes

This implies an 'allocation' step that we don't have - how about
'Idle nodes' or something similar?

>     * Archived nodes (F)
>         * Will be separate openstack service (F)
>    * multiple deployments allowed (F)
>      * initially just one
>    * deployment specifies a node distribution across node types

I can't parse this. Deployments specify how many instances to deploy
in which roles (e.g. 2 control, 2 storage, 4 block storage, 20
hypervisors), plus some minor metadata about the instances (such as
'kvm' for the hypervisor, and which undercloud flavors to deploy on).
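Something shaped roughly like this, say - role names and fields are made up for the sake of the example:

```python
# Illustrative deployment description: how many instances per role,
# plus minor per-role metadata (hypervisor type, undercloud flavor).
# Role and field names are assumptions, not an agreed schema.
deployment = {
    "control":       {"count": 2,  "flavor": "baremetal-control"},
    "storage":       {"count": 2,  "flavor": "baremetal-storage"},
    "block-storage": {"count": 4,  "flavor": "baremetal-storage"},
    "hypervisor":    {"count": 20, "flavor": "baremetal-compute",
                      "virt_type": "kvm"},
}

total_instances = sum(role["count"] for role in deployment.values())
```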

>       * node distribution can be updated after creation
>    * deployment configuration, used for initial creation only

Can you enlarge on what you mean here?

>       * defaulted, with no option to change
>          * allow modification (F)
>    * review distribution map (F)
>    * notification when a deployment is ready to go or whenever something changes

Is this an (M)?

>    * Heat template generated on the fly
>       * hardcoded images
>          * allow image selection (F)

We'll be spinning images up as part of the deployment, I presume - so
this is really (M), isn't it? Or do you mean 'allow supplying images
rather than building them just in time'? Or - I dunno, but let's get
some clarity here.

>       * pre-created template fragments for each node type
>       * node type distribution affects generated template
>    * nova scheduler allocates nodes
>       * filters based on resource class and node profile information (M)

What does this mean?
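On the 'Heat template generated on the fly' point above, I'd imagine something like expanding pre-created per-role fragments according to the distribution. Fragment contents and naming below are invented, just to show the shape:

```python
# Sketch of "Heat template generated on the fly" from pre-created
# fragments: one resource fragment per role, repeated according to the
# node distribution. Fragment contents and naming are invented here.
FRAGMENTS = {
    "control": {"type": "OS::Nova::Server", "properties": {"flavor": "control"}},
    "compute": {"type": "OS::Nova::Server", "properties": {"flavor": "compute"}},
}

def generate_template(distribution):
    """Expand per-role fragments into a Heat-style resources section."""
    resources = {}
    for role, count in distribution.items():
        for i in range(count):
            resources["%s%d" % (role, i)] = dict(FRAGMENTS[role])
    return {"heat_template_version": "2013-05-23", "resources": resources}

template = generate_template({"control": 2, "compute": 4})
# template["resources"] holds control0, control1, compute0..compute3
```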

Sorry for having so many questions :)


Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud
