[openstack-dev] [neutron] - L3 flavors and issues with use cases for multiple L3 backends

Germy Lure germy.lure at gmail.com
Wed Feb 3 09:52:36 UTC 2016


People need high performance but also XaaS integration, or slow and free
but also packet logging. And many back-ends have multiple characteristics.
Going by the examples described in this thread, those characteristics
really should be modeled as different flavors.
Indeed, I think people just want to know what features those backends
provide and then choose one of them to run their business on. The flavor
sub-system can make that choice easier.
So a flavor should be understandable by the user, and any user-facing
change should introduce a NEW flavor. One flavor per vendor, or even one
flavor per version of a vendor's backend.

IMHO, no interruption and no rescheduling. Everything should be ready when
the user creates a router, according to the flavor obtained from Neutron.
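
To make that concrete, here is a rough sketch of what I mean (all names
below are hypothetical, not the actual flavor framework API):

# Hypothetical sketch: the driver is chosen once, at creation time,
# purely from the requested flavor, and never changes afterwards.
class FlavorBoundL3Plugin(object):
    def __init__(self, drivers_by_flavor):
        # e.g. {'gold': VendorXDriver(), 'silver': VendorYDriver()}
        self._drivers_by_flavor = drivers_by_flavor

    def create_router(self, context, router):
        # bind immediately; everything must be ready up front
        driver = self._drivers_by_flavor[router['flavor_id']]
        driver.create_router(context, router)
        return router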

Thanks.
Germy


On Wed, Feb 3, 2016 at 12:01 PM, rzang <rui.zang at foxmail.com> wrote:

> Is it possible that the third router interface the user wants to add
> will bind to a provider network that the chosen driver (for bare metal
> routers) cannot physically access, even though the driver has the
> capability for that type of network? Is this a third dimension that needs
> to be taken into consideration besides flavors and capabilities? If this
> case is possible, it is a problem even if we require that all drivers in
> the same flavor have the same capability set.
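>
> Purely as a sketch of what I mean, the check might need two separate
> parts (hypothetical names):
>
> # Sketch: supporting a network type is not the same as physically
> # reaching a specific provider network. Names are hypothetical.
> def driver_usable(driver, port):
>     return (driver.supports_network_type(port['network_type'])
>             and driver.physically_reaches(port['physical_network']))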
>
>
> ------------------ Original ------------------
> From: "Kevin Benton" <blak111 at gmail.com>
> Sent: Wednesday, Feb 3, 2016 9:43 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with
> use cases for multiple L3 backends
>
> So flavors are for routers with different behaviors that you want the
> user to be able to choose from (e.g. high performance, slow but free,
> packet logged, etc.). Multiple drivers are for when you have multiple
> backends providing the same flavor (e.g. the high-performance flavor has
> several drivers for various bare metal routers).
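>
> Roughly, as an illustration only (the names are made up):
>
> # Illustration: one user-visible flavor can map to several drivers
> # that all provide the same behavior.
> FLAVORS = {
>     'high-performance': ['vendor_a_baremetal', 'vendor_b_baremetal'],
>     'slow-but-free': ['reference_l3'],
>     'packet-logged': ['logging_l3'],
> }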
> On Feb 2, 2016 18:22, "rzang" <rui.zang at foxmail.com> wrote:
>
>> What advantage do we get from putting multiple drivers into one flavor,
>> as opposed to strictly limiting each flavor to one driver (or whatever
>> it is called)?
>>
>> Thanks,
>> Rui
>>
>> ------------------ Original ------------------
>> From: "Kevin Benton" <blak111 at gmail.com>
>> Sent: Wednesday, Feb 3, 2016 8:55 AM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev at lists.openstack.org>
>> Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with
>> use cases for multiple L3 backends
>>
>> Choosing from multiple drivers for the same flavor is scheduling. I
>> didn't mean automatically selecting other flavors.
>> On Feb 2, 2016 17:53, "Eichberger, German" <german.eichberger at hpe.com>
>> wrote:
>>
>>> Not that you could call it scheduling. The intent was that the user
>>> could pick the best flavor for his task (e.g. a gold router as opposed to a
>>> silver one). The system then would “schedule” the driver configured for
>>> gold or silver. Rescheduling wasn’t really a consideration…
>>>
>>> German
>>>
>>> From: Doug Wiegley <dougwig at parksidesoftware.com>
>>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>>> <openstack-dev at lists.openstack.org>
>>> Date: Monday, February 1, 2016 at 8:17 PM
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> <openstack-dev at lists.openstack.org>
>>> Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use
>>> cases for multiple L3 backends
>>>
>>> Yes, scheduling was a big gnarly wart that was punted for the first
>>> pass. The intention was that any drivers you put in a single flavor had
>>> equivalent capabilities and were plumbed to the same networks, etc.
>>>
>>> doug
>>>
>>>
>>> On Feb 1, 2016, at 7:08 AM, Kevin Benton <blak111 at gmail.com> wrote:
>>>
>>>
>>> Hi all,
>>>
>>> I've been working on an implementation of the multiple L3 backends
>>> RFE[1] using the flavor framework and I've run into some snags with the
>>> use-cases.[2]
>>>
>>> The first use cases are relatively straightforward: the user requests a
>>> specific flavor, and that request gets dispatched to a driver associated
>>> with that flavor via a service profile. However, several of the
>>> use cases are based around the idea that there is a single flavor with
>>> multiple drivers, and a specific driver will need to be used depending
>>> on the placement of the router interfaces, i.e. a router cannot be bound
>>> to a driver until an interface is attached.
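>>>
>>> In pseudo-code, the sequencing problem looks roughly like this (the
>>> method names are hypothetical):
>>>
>>> # Sketch: driver choice has to be deferred until we know where the
>>> # interfaces land. Hypothetical names throughout.
>>> def add_router_interface(context, router, port):
>>>     driver = get_bound_driver(router)
>>>     if driver is None:
>>>         # first interface: now we can finally pick a driver
>>>         driver = pick_driver(router['flavor_id'], port['network_id'])
>>>         bind_router(router, driver)
>>>     elif not driver.can_reach(port['network_id']):
>>>         # a later interface lands somewhere this driver can't reach
>>>         raise NeedsReschedule(router, port)
>>>     driver.add_interface(context, router, port)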
>>>
>>> This creates some painful coordination problems amongst drivers. For
>>> example, say the first two networks that a user attaches a router to can
>>> be reached by all drivers because they use overlays, so the first driver
>>> chosen by the framework works fine. Then the user connects to an
>>> external network which is only reachable by a different driver. Do we
>>> immediately reschedule the entire router to the other driver at that
>>> point and interrupt the traffic between the first two networks?
>>>
>>> Even if we are fine with a traffic interruption for rescheduling, what
>>> should we do when a failure occurs halfway through switching over
>>> because the new driver fails to attach to one of the networks (or the
>>> old driver fails to detach from one)? It would seem the correct API
>>> experience would be to switch everything back and then return a failure
>>> to the caller trying to add an interface. This is where things get
>>> messy.
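>>>
>>> To make the ordering concrete, the switch-over might look something
>>> like this sketch (hypothetical names again):
>>>
>>> # Sketch: move a router between drivers with a best-effort rollback.
>>> def reschedule(context, router, old_driver, new_driver):
>>>     moved = []
>>>     try:
>>>         for port in router_interfaces(router):
>>>             new_driver.add_interface(context, router, port)
>>>             old_driver.remove_interface(context, router, port)
>>>             moved.append(port)
>>>     except Exception:
>>>         try:
>>>             # switch everything back before failing the API call
>>>             for port in moved:
>>>                 old_driver.add_interface(context, router, port)
>>>                 new_driver.remove_interface(context, router, port)
>>>         except Exception:
>>>             # resources now smeared across two drivers
>>>             set_router_status(router, 'ERROR')
>>>         raise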
>>>
>>> If there is a failure during the switch back, we now have a single
>>> router's resources smeared across two drivers. We can drop the router into
>>> the ERROR state and re-attempt the switch in a periodic task, or maybe just
>>> leave it broken.
>>>
>>> How should we handle this much orchestration? Should we pull in
>>> something like taskflow, or maybe defer that use case for now?
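>>>
>>> For what it's worth, taskflow's revert semantics map fairly naturally
>>> onto this. A sketch only, with hypothetical task bodies and helpers:
>>>
>>> from taskflow import engines, task
>>> from taskflow.patterns import linear_flow
>>>
>>> class MoveInterface(task.Task):
>>>     # one task per interface; taskflow reverts completed tasks in
>>>     # reverse order if a later task fails
>>>     def execute(self, router, port, old_driver, new_driver):
>>>         new_driver.add_interface(router, port)
>>>         old_driver.remove_interface(router, port)
>>>
>>>     def revert(self, router, port, old_driver, new_driver, **kwargs):
>>>         old_driver.add_interface(router, port)
>>>         new_driver.remove_interface(router, port)
>>>
>>> flow = linear_flow.Flow('reschedule-router')
>>> for i, port in enumerate(router_interfaces(router)):
>>>     flow.add(MoveInterface(name='move-%d' % i, inject={'port': port}))
>>> engines.run(flow, store={'router': router, 'old_driver': old_driver,
>>>                          'new_driver': new_driver})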
>>>
>>> What I want to avoid is what happened with ML2, where error handling is
>>> still a TODO in several cases (e.g. any post-commit update or delete
>>> failures in mechanism drivers will not trigger a revert of state).
>>>
>>> 1. https://bugs.launchpad.net/neutron/+bug/1461133
>>> 2. https://etherpad.openstack.org/p/neutron-modular-l3-router-plugin-use-cases
>>>
>>> --
>>> Kevin Benton
>>>