[openstack-dev] [Neutron] Flavor Framework

Eugene Nikanorov enikanorov at mirantis.com
Fri Feb 28 12:17:24 UTC 2014


Hi Gary,

My initial plan was to let the cloud admin decide which parameters a flavor
should have.
As an alternative, we could define a specific set of parameters for each
service, and the cloud admin would only specify their values for a particular
flavor.
Another option could be to have a set of tags in the flavor, which are then
matched against tags published by the driver.
We have yet to discuss this.
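To make the tag option a bit more concrete, here is a rough sketch (purely
illustrative, none of the names below are settled):

    # A driver is eligible for a flavor when its published tags cover
    # all of the flavor's tags.
    def drivers_matching_flavor(flavor_tags, drivers):
        return [name for name, tags in drivers.items()
                if set(flavor_tags) <= set(tags)]

    # Example: the admin defines a "gold" LB flavor, drivers publish tags.
    drivers = {
        'haproxy': {'l7', 'http'},
        'vendor_x': {'l7', 'http', 'ha', 'ssl-offload'},
    }
    print(drivers_matching_flavor({'l7', 'ha'}, drivers))  # ['vendor_x']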

Thanks,
Eugene.


On Fri, Feb 28, 2014 at 1:31 PM, Gary Duan <garyduan at gmail.com> wrote:

> Hi, Eugene,
>
> What are the parameters that will be part of the flavor definition? As I am
> thinking of it now, the parameters could be performance- and capacity-related,
> for example throughput, maximum session count, and so on; or capability-related,
> for example HA or L7 switching.
>
> Compared to the number of CPUs and memory size in a Nova flavor, these
> parameters don't seem to have exact definitions across different
> implementations. Or do you think that is not something we need to worry about,
> and it's entirely the operator's decision how to rate different drivers?
>
> Thanks,
> Gary
>
>
> On Thu, Feb 27, 2014 at 10:19 PM, Eugene Nikanorov <
> enikanorov at mirantis.com> wrote:
>
>> Hi Jay,
>>
>> Thanks for looking into this.
>>
>>
>>> 1) I'm not entirely sure that a provider attribute is even necessary to
>>> expose in any API. What is important is for a scheduler to know which
>>> drivers are capable of servicing a set of attributes that are grouped
>>> into a "flavor".
>>>
>> Well, the provider becomes a read-only, admin-only attribute (just to see
>> which driver actually handles the resources), so it doesn't add much API
>> visibility.
>>
>>
>>> 2) I would love to see the use of the term "flavor" banished from
>>> OpenStack APIs. Nova has moved from flavors to "instance types", which
>>> clearly describes what the thing is, without the odd connotations that
>>> the word "flavor" has in different languages (not to mention the fact
>>> that flavor is spelled flavour in non-American English).
>>>
>>> How about using the term "load balancer type", "VPN type", and "firewall
>>> type" instead?
>>>
>> Oh... I don't have a strong opinion on the name.
>> "Flavor" was used several times in our discussions and is short.
>> "*Instance* Type" seems fine as well. Another option could be a
>> "Service Offering".
>>
>>
>>>
>>> 3) I don't believe the FlavorType (public or internal) attribute of the
>>> flavor is useful. We want to get away from having any vendor-specific
>>> attributes or objects in the APIs (yes, even if they are "hidden" from
>>> the normal user). See point #1 for more about this. A scheduler should
>>> be able to match a driver to a request simply by matching the set of
>>> required capabilities in the requested flavor (load balancer type) to
>>> the set of capabilities advertised by the driver.
>>>
>> Do you mean ServiceType? If so, it is mostly there so that the user can
>> filter flavors (I'm using the short term for now) by service type. Say, when
>> a user wants to create a new load balancer, Horizon will show only the
>> flavors related to load balancing.
>> That could also be solved by having differently named types, as you suggested
>> above: "LB type", "VPN type", etc.
>> On the other hand, those would be similar objects with different names - does
>> that make much sense?
>>
>> I'm not sure what you consider 'vendor-specific' attributes; I don't recall
>> any plan to expose vendor-related parameters.
>> The parameters a flavor represents are capabilities of the service in terms
>> that users care about: latency, throughput, topology, technology, etc.
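>> Purely for illustration (none of these names are settled), a flavor would
>> roughly be a named set of such capability values:
>>
>>     gold_lb_flavor = {
>>         'name': 'gold',
>>         'service_type': 'LOADBALANCER',
>>         'capabilities': {
>>             'max_throughput_mbps': 1000,
>>             'topology': 'active-standby',
>>             'l7_switching': True,
>>         },
>>     }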
>>
>>
>>
>>> 4) A minor point... I think it would be fine to group the various
>>> "types" into a single database table behind the scenes (like you have in
>>> the Object model section). However, I think it is useful to have the
>>> public API expose a /$service-types resource endpoint for each service
>>> itself, instead of a generic /types (or /flavors) endpoint. So, folks
>>> looking to set up a load balancer would call GET /balancer-types, or
>>> call neutron balancer-type-list, instead of calling
>>> GET /types?service=load-balancer or neutron flavor-list
>>> --service=load-balancer
>>>
>> I'm fine with this suggestion.
>>
>>
>>>
>>> 5) In the section on Scheduling, you write "Scheduling is a process of
>>> choosing provider and a backend for the resource". As mentioned above, I
>>> think this could be changed to something like this: "Scheduling is a
>>> process of matching the set of requested capabilities -- the flavor
>>> (type) -- to the set of capabilities advertised by a driver for the
>>> resource". That would put Neutron more in line with how Nova handles
>>> this kind of thing.
>>>
>> I agree; that is actually what I meant, and the Nova example is how I think
>> it should work.
>> More important, though, is what the result of scheduling is.
>> We discussed that yesterday with Mark, and I think we reached a point where
>> we could not find agreement for now.
>> In my opinion, the result of scheduling is (at least) a binding of the
>> resource to the driver, so that further calls to the resource go to the same
>> driver because of that binding.
>> That's pretty much how agent scheduling works.
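>> A rough sketch of what I mean by binding (illustrative only, the names are
>> made up and nothing here is final):
>>
>>     class NoEligibleDriver(Exception):
>>         pass
>>
>>     def schedule(resource, flavor, drivers):
>>         # Pick the first driver whose advertised capabilities satisfy the
>>         # flavor, and persist the choice on the resource itself.
>>         wanted = flavor['capabilities']
>>         for name, caps in drivers.items():
>>             if all(caps.get(k) == v for k, v in wanted.items()):
>>                 resource['driver'] = name   # later calls reuse this binding
>>                 return name
>>         raise NoEligibleDriver(flavor['name'])
>>
>>     flavor = {'name': 'gold',
>>               'capabilities': {'topology': 'active-standby',
>>                                'l7_switching': True}}
>>     drivers = {'haproxy': {'topology': 'single', 'l7_switching': True},
>>                'vendor_x': {'topology': 'active-standby', 'l7_switching': True}}
>>     lb = {'id': 'lb-1'}
>>     schedule(lb, flavor, drivers)   # binds: lb['driver'] == 'vendor_x'
>>
>> Subsequent update/delete calls on the resource would then look up that
>> binding and go straight to the same driver.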
>>
>> By the way, I'm thinking about getting rid of the 'provider' term and using
>> 'driver' instead. Currently, 'provider' is just a user-facing representation
>> of the driver. Once we introduce flavors/service types/etc., we can use the
>> term 'driver' on the implementation side.
>>
>> Thanks,
>> Eugene.
>>
>>