[openstack-dev] [Neutron] Flavor Framework

Eugene Nikanorov enikanorov at mirantis.com
Tue Mar 4 10:07:55 UTC 2014


Thanks for your interest, folks.

Salvatore, I think we mostly model this with load balancing examples
because, firstly, we're working on LBaaS and, secondly, LBaaS already has
providers/drivers; knowing the limitations of that, we are trying to
understand how to do Flavors better.
For sure we plan to make the framework generic.

Regarding catalog vs scheduler - I think we're planning scheduling rather
than a catalog.
> In the latter case the selection of a "flavour" will be more like
> expressing a desired configuration, and this sort of "scheduler" will
> then pick the driver which offers the closest specification, or reject
> the request if no driver is available (which might happen if the driver
> is there but has no capacity).

Yes, that is how I see it working.
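
To make that concrete, here is a minimal sketch (all names and structures
are purely illustrative, not actual Neutron code) of how such a scheduler
could match a flavor - a named group of required capabilities - against
registered drivers, rejecting the request when nothing fits:

# Hypothetical sketch of flavor-based driver scheduling; illustrative only.

# A flavor is a named group of required capabilities.
FLAVORS = {
    'advanced-adc': {'ha', 'l7', 'ssl'},
    'adc-for-testing': {'l7', 'ssl'},
}

# Each driver advertises the capabilities it supports and whether it
# currently has capacity for new resources.
DRIVERS = {
    'Iota': {'capabilities': {'ha', 'l7', 'ssl'}, 'has_capacity': lambda: True},
    'Epsilon': {'capabilities': {'l7'}, 'has_capacity': lambda: True},
}


class NoValidDriver(Exception):
    """No driver satisfies the requested flavor (or none has capacity)."""


def schedule_driver(flavor_name):
    """Pick the driver that most closely matches the flavor, or fail."""
    required = FLAVORS[flavor_name]
    candidates = [
        name for name, drv in DRIVERS.items()
        if required <= drv['capabilities'] and drv['has_capacity']()
    ]
    if not candidates:
        raise NoValidDriver(flavor_name)
    # "Closest" here simply means the fewest extra capabilities; a real
    # scheduler could also weigh cost, load, and so on.
    return min(candidates,
               key=lambda n: len(DRIVERS[n]['capabilities'] - required))

The tenant only ever references the flavor name; the driver chosen here is
recorded as the resource binding mentioned below.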

On one of Jay's earlier comments:
> > Well, provider becomes a read-only attribute and for admin only (just to
> > see which driver actually handles the resources), not too much API
> > visibility.

> I'd very much prefer to keep the provider/driver name out of the public
> API entirely. I don't see how it is needed.
Yep: just as the network segmentation id (an implementation detail) is not
visible to the user, the provider/driver will only be visible to the admin.

The driver attribute of a resource just represents the binding between the
resource and the driver that handles REST calls for it.
I think it would be useful for the admin to know that.
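
As a rough illustration (assuming the usual Neutron extension attribute-map
and policy conventions; this is not a committed definition), the admin-only,
read-only provider attribute could be declared roughly like this:

# Illustrative sketch only: a read-only, admin-visible 'provider'
# attribute declared following the usual Neutron extension
# attribute-map conventions.
EXTENDED_ATTRIBUTES_2_0 = {
    'vips': {
        'provider': {
            'allow_post': False,     # cannot be set on creation
            'allow_put': False,      # cannot be changed later
            'is_visible': True,
            'enforce_policy': True,  # visibility gated by policy
        },
    },
}

# Combined with a policy rule along the lines of
#   "get_vip:provider": "rule:admin_only"
# the attribute would be returned only to administrators, much like a
# network's provider:segmentation_id today.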

Thanks,
Eugene.



On Tue, Mar 4, 2014 at 1:11 PM, Salvatore Orlando <sorlando at nicira.com> wrote:

> Hi,
>
> I read this thread and I think this moves us in the right direction of
> moving away from provider mapping, and, most importantly, abstracting away
> backend-specific details.
>
> I was however wondering if "flavours" (or "service offerings") will act
> more like a catalog or a scheduler.
> The difference, in my opinion, is the following:
> In the first case, selecting an item from the catalog will uniquely
> identify the backend which will implement the service. For instance, if you
> select "Gold" or "GoldwithSSL" then your load balancer will be implemented
> using the backend driver "Iota", whereas if you select "Copper" it will be
> implemented using driver "Epsilon".
> In the latter case the selection of a "flavour" will be more like
> expressing a desired configuration, and this sort of "scheduler" will then
> pick the driver which offers the closest specification, or reject the
> request if no driver is available (which might happen if the driver is
> there but has no capacity).
>
> From my perspective, it would also be important to not focus exclusively
> on one service (I've read mostly about load balancing here), but provide a
> solution, and then a PoC implementation, which will apply to Firewall and
> VPN services as well.
>
> Salvatore
>
> PS: I'm terrible at names; so far I think we've been using mostly
> "flavour" and "service offering". Regardless of what makes sense, one also
> has to consider uniformity with similar concepts across OpenStack projects.
>
>
> On 4 March 2014 00:33, Samuel Bercovici <SamuelB at radware.com> wrote:
>
>>  Hi,
>>
>>
>>
>> The discussion about advanced services and scheduling was primarily
>> around choosing backends based on capabilities.
>>
>> AFAIK, a Nova flavor specifies capacity.
>>
>> So I think that using the term "flavor" might not match what is intended.
>>
>> A better word might be "capability" or "group of capabilities".
>>
>>
>>
>> Is the following what we want to achieve?
>>
>> - A tenant creates a vip that requires high availability with advanced L7
>>   and SSL capabilities, for production.
>>
>> - Another tenant creates a vip that requires advanced L7 and SSL
>>   capabilities, for development.
>>
>>
>>
>> The admin, or maybe even the tenant, might group such capabilities (ha, L7,
>> SSL) and name the group advanced-adc, and group another set of capabilities
>> (no-ha, L7, SSL) and name it adc-for-testing.
>>
>>
>>
>> This abbreviates the above to:
>>
>> - Tenant creates a vip that requires advanced-adc.
>>
>> - Tenant creates a vip that requires adc-for-testing.
>>
>>
>>
>> Regards,
>>
>>                 -Sam.
>>
>>
>> *From:* Eugene Nikanorov [mailto:enikanorov at mirantis.com]
>> *Sent:* Thursday, February 27, 2014 12:12 AM
>> *To:* OpenStack Development Mailing List
>> *Subject:* [openstack-dev] [Neutron] Flavor Framework
>>
>>
>>
>> Hi neutron folks,
>>
>>
>>
>> I know that there are patches on gerrit for VPN, FWaaS and L3 services
>> that are leveraging the Provider Framework.
>>
>> Recently we've been discussing a more comprehensive approach that will
>> allow users to choose service capabilities rather than a vendor or
>> provider.
>>
>>
>>
>> I've started creating a design draft of the Flavor Framework; please take
>> a look:
>>
>> https://wiki.openstack.org/wiki/Neutron/FlavorFramework
>>
>>
>>
>> It also now looks clear to me that the code that introduces providers for
>> vpn, fwaas, l3 is really necessary to move forward to Flavors, with one
>> exception: providers should not be exposed in the public API.
>>
>> While the provider attribute could be visible to the administrator (like a
>> network's segmentation_id), it can't be specified on creation and it's not
>> available to a regular user.
>>
>>
>>
>> Looking forward to your feedback.
>>
>>
>>
>> Thanks,
>>
>> Eugene.
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>