[openstack-dev] [TripleO][Tuskar] Icehouse Requirements

marios@redhat.com mandreou at redhat.com
Mon Dec 9 15:41:56 UTC 2013


On 06/12/13 04:31, Tzu-Mainn Chen wrote:
> Hey all,
> 
> I've attempted to spin out the requirements behind Jarda's excellent wireframes (http://lists.openstack.org/pipermail/openstack-dev/2013-December/020944.html).
> Hopefully this can add some perspective on both the wireframes and the needed changes to the tuskar-api.
> 
> All comments are welcome!
> 
> Thanks,
> Tzu-Mainn Chen
> 
> --------------------------------
> 
> *** Requirements are assumed to be targeted for Icehouse, unless marked otherwise:
>    (M) - Maybe Icehouse, dependency on other in-development features
>    (F) - Future requirement, after Icehouse
> 
> * NODES
>    * Creation
>       * Manual registration
>          * hardware specs from Ironic based on mac address (M)
>          * IP auto populated from Neutron (F)
>       * Auto-discovery during undercloud install process (M)
>    * Monitoring
>        * assignment, availability, status
>        * capacity, historical statistics (M)
>    * Management node (where TripleO is installed)
>        * created as part of undercloud install process
>        * can create additional management nodes (F)
>     * Resource nodes
>         * searchable by status, name, cpu, memory, and all attributes from ironic
>         * can be allocated as one of four node types
>             * compute
>             * controller
>             * object storage
>             * block storage
>         * Resource class - allows for further categorization of a node type
>             * each node type specifies a single default resource class
>                 * allow multiple resource classes per node type (M)
>             * optional node profile for a resource class (M)
>                 * acts as filter for nodes that can be allocated to that class (M)
>         * nodes can be viewed by node types
>                 * additional group by status, hardware specification
>         * controller node type
>            * each controller node will run all openstack services
>               * allow each node to run specified service (F)
>            * breakdown by workload (percentage of cpu used per node) (M)
>     * Unallocated nodes
>     * Archived nodes (F)
>         * Will be separate openstack service (F)
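
The "searchable by status, name, cpu, memory" requirement above could be sketched roughly as follows. This is a hypothetical illustration only: the node dicts and the `search_nodes` helper stand in for whatever Ironic/tuskar-api actually exposes, and the attribute names are assumptions.

```python
# Hypothetical sketch of searching resource nodes by attribute; plain
# dicts stand in for Ironic node records, and the attribute names
# (status, name, cpu, memory) are assumptions.

def search_nodes(nodes, **criteria):
    """Return nodes matching every criterion.

    Numeric criteria (cpu, memory) are treated as minimums; all other
    criteria must match exactly.
    """
    numeric = {"cpu", "memory"}
    matches = []
    for node in nodes:
        ok = True
        for key, wanted in criteria.items():
            have = node.get(key)
            if key in numeric:
                if have is None or have < wanted:
                    ok = False
                    break
            elif have != wanted:
                ok = False
                break
        if ok:
            matches.append(node)
    return matches

nodes = [
    {"name": "node-1", "status": "available", "cpu": 8, "memory": 16384},
    {"name": "node-2", "status": "allocated", "cpu": 4, "memory": 8192},
    {"name": "node-3", "status": "available", "cpu": 16, "memory": 32768},
]

print([n["name"] for n in search_nodes(nodes, status="available", cpu=8)])
# -> ['node-1', 'node-3']
```

The same shape of filter would also cover the optional node profile for a resource class, i.e. a saved set of minimum-spec criteria applied to unallocated nodes.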
> 
> * DEPLOYMENT
>    * multiple deployments allowed (F)
>      * initially just one
>    * deployment specifies a node distribution across node types
>       * node distribution can be updated after creation
>    * deployment configuration, used for initial creation only
>       * defaulted, with no option to change
>          * allow modification (F)
>    * review distribution map (F)
>    * notification when a deployment is ready to go or whenever something changes
> 
> * DEPLOYMENT ACTION
>    * Heat template generated on the fly
>       * hardcoded images
>          * allow image selection (F)
>       * pre-created template fragments for each node type
>       * node type distribution affects generated template

Sorry, I am a bit late to the discussion - fyi:

^^^^ There are two sides to these previous points: 1) a temporary solution using merge.py from tuskar and the tripleo-heat-templates repo (Icehouse, imo), and 2) doing it 'properly' with the merge functionality pushed into Heat (F, imo).

For 1), various bits are in play, if interested:

 /#/c/56947/ (Make merge.py invokable), /#/c/58823/ (Make merge.py
installable) and /#/c/52045/ (WIP: sketch of what using merge.py looks
like for tuskar) - this last one needs updating and further thought. Also
/#/c/58229/ and /#/c/57210/, which need some more thought.
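
To make the idea behind 1) concrete, here is a very rough illustration - not the actual merge.py code - of generating a template on the fly: pre-created per-node-type fragments are combined, with resources duplicated according to the node distribution. The fragment contents and naming scheme are assumptions for illustration only.

```python
# Rough illustration (NOT the actual merge.py code) of generating a Heat
# template on the fly: per-node-type fragments are combined, and each
# fragment's resources are duplicated per requested node with an index
# suffix. Fragment contents here are placeholder assumptions.
import copy

FRAGMENTS = {
    "compute": {"NovaCompute": {"Type": "OS::Nova::Server"}},
    "controller": {"Controller": {"Type": "OS::Nova::Server"}},
}

def generate_template(distribution):
    """Build one template whose Resources section holds a copy of each
    fragment resource per requested node, suffixed with an index."""
    resources = {}
    for node_type, count in distribution.items():
        fragment = FRAGMENTS[node_type]
        for i in range(count):
            for name, resource in fragment.items():
                resources["%s%d" % (name, i)] = copy.deepcopy(resource)
    return {"HeatTemplateFormatVersion": "2012-12-12",
            "Resources": resources}

template = generate_template({"compute": 2, "controller": 1})
print(sorted(template["Resources"]))
# -> ['Controller0', 'NovaCompute0', 'NovaCompute1']
```

This also shows why the node type distribution affects the generated template: changing the counts changes the set of emitted resources, which is why a deployment update implies regenerating and re-applying the template.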



>    * nova scheduler allocates nodes
>       * filters based on resource class and node profile information (M)
>    * Deployment action can create or update
>    * status indicator to determine overall state of deployment
>       * status indicator for nodes as well
>       * status includes 'time left' (F)
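
One small note on the status indicator points: the overall deployment state presumably rolls up from the per-node states. A minimal sketch of that rollup, assuming a hypothetical set of node states (the state names and precedence rules are my assumptions, not anything defined by Heat or tuskar):

```python
# Hypothetical rollup of per-node statuses into one deployment status.
# State names and precedence (error > in-progress > ready) are assumptions.

def deployment_status(node_statuses):
    """Derive a single deployment status from per-node statuses."""
    if any(s == "error" for s in node_statuses):
        return "error"
    if any(s in ("building", "deploying") for s in node_statuses):
        return "in-progress"
    if node_statuses and all(s == "active" for s in node_statuses):
        return "ready"
    return "unknown"

print(deployment_status(["active", "deploying", "active"]))  # -> in-progress
print(deployment_status(["active", "active"]))               # -> ready
```

The 'time left' (F) item would presumably hang off the same rollup, estimated from however many nodes are still in a building/deploying state.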
> 
> * NETWORKS (F)
> * IMAGES (F)
> * LOGS (F)
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



