[openstack-dev] [TripleO][Tuskar] Icehouse Requirements

marios@redhat.com mandreou at redhat.com
Mon Dec 9 16:14:49 UTC 2013


On 09/12/13 18:01, Jay Dobies wrote:
>> I believe we are still 'fighting' here with two approaches, and I believe
>> we need both. We can't only provide a 'give us resources and we will do
>> the magic' approach. Yes, this is the preferred way, especially for large
>> deployments, but we also need a fallback so that the user can say: no,
>> this node doesn't belong to the class, I don't want it there, unassign
>> it. Or: I need to have this node there, assign it.
> 
> +1 to this. I think there is still a significant number of admins out
> there who are really opposed to magic and want that fine-grained
> control. Even if they don't use it frequently, in my experience they
> want to know it's there in the event they need it (and will often
> dream up a case where they'll need it).

+1 to the responses in the 'automagic' vs 'manual' discussion. The
latter is really only feasible in small deployments, but that's not to
say it isn't a valid use case. Perhaps we need to split it altogether
into two use cases.

At least we should reach a level of agreement here and register
blueprints for both: for Icehouse, automatic selection of which services
go onto which nodes (i.e. allocation of services to nodes is entirely
transparent); post Icehouse, manual allocation of services to nodes.
This last bit may also coincide with any work being done in the
Ironic/Nova scheduler, which would make this allocation prettier than
the current force_nodes situation.
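For what it's worth, the split between the two blueprints can be
sketched in a few lines of Python. This is purely illustrative, not
Tuskar's or Nova's actual API; the function names (`auto_allocate`,
`assign`, `unassign`) and the node-to-service mapping are assumptions:

```python
# Illustrative sketch only: contrast automatic allocation (the scheduler
# picks any free nodes, transparently) with manual assignment (the
# operator pins or removes a specific node). Hypothetical names, not
# Tuskar or Nova API.

def auto_allocate(nodes, service, count):
    """Automatic mode: pick any `count` free nodes for `service`."""
    free = [n for n, svc in nodes.items() if svc is None]
    if len(free) < count:
        raise RuntimeError("not enough free nodes for %s" % service)
    for n in free[:count]:
        nodes[n] = service

def assign(nodes, node, service):
    """Manual mode: the operator pins a service to a specific node."""
    if nodes.get(node) is not None:
        raise RuntimeError("%s already assigned to %s" % (node, nodes[node]))
    nodes[node] = service

def unassign(nodes, node):
    """Manual mode: the operator removes a node from its class."""
    nodes[node] = None
```

The point of the manual entry points is exactly the fallback described
above: the automatic path stays the default, and assign/unassign only
exist for the cases where the operator knows better than the scheduler.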


> 
> I'm absolutely for pushing the magic approach as the preferred use. And
> in large deployments that's where people are going to see the biggest
> gain. The fine-grained approach can even be pushed off as a future
> feature. But I wouldn't be surprised to see people asking for it and I'd
> like to at least be able to say it's been talked about.
> 
>>>> - As an infrastructure administrator, Anna wants to be able to view
>>>> the history of nodes that have been in a deployment.
>>> Why? This is super generic and could mean anything.
>> I believe this has something to do with 'archived nodes'. But correct me
>> if I am wrong.
>>
>> -- Jarda
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>



