[openstack-dev] [Quantum][LBaaS] Advanced Services Insertion
Dan Wendlandt
dan at nicira.com
Mon Nov 5 19:19:36 UTC 2012
Hi folks,
I haven't been able to follow this whole thread, but there are two weeks left
in the Grizzly-1 milestone, which realistically means that for this work to
hit the G-1 target, code should be proposed in one week. I'd like to
encourage people to focus on the core functionality needed to start
introducing higher-level services like load-balancing, even if that means
delaying until a later release some other items that might fall under the
scope of "network services insertion".
Salvatore, are you able to identify the key open issues that are blocking
us from beginning implementation? Let's discuss this at the team meeting
today.
Dan
On Mon, Nov 5, 2012 at 8:12 AM, Salvatore Orlando <sorlando at nicira.com> wrote:
>
>
> On 2 November 2012 20:03, Sasha Ratkovic <sasharatkovic at juniper.net> wrote:
>
>>
>>
>> From: Salvatore Orlando <sorlando at nicira.com>
>> Reply-To: OpenStack Development Mailing List <
>> openstack-dev at lists.openstack.org>
>> Date: Friday, November 2, 2012 2:23 AM
>>
>> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org
>> >
>> Subject: Re: [openstack-dev] [Quantum][LBaaS] Advanced Services Insertion
>>
>>
>>
>> On 2 November 2012 05:52, Sasha Ratkovic <sasharatkovic at juniper.net> wrote:
>>
>>> Let me try to summarize the discussion below (possibly restating the
>>> obvious for some/most people). Salvatore and Eugene, please validate,
>>> and please propose terminology.
>>>
>>> 1. "Service insertion" is about expressing services available to
>>> tenant (service catalog)
>>>
>>> +2
>>
>>>
>>> 2. Associating it (the service catalog) with a "router" gives a hint as
>>> to how/where the service will be implemented. This hint is only consumed by
>>> the plugin, and the association is not mandatory (in its absence you get a
>>> "floating" mode), as some plugins may not require this hint.
>>>
>> +1, but I'd say that no plugin necessarily needs this hint,
>> as you may have a default one.
>>
>>>
>>> 3. There is a separate "device lifecycle" API dealing with plugging
>>> in physical/virtual devices and expressing their capabilities to support
>>> one or more services from the catalog. This is then consumed by the plugin
>>> for mapping a tenant API "service request" to device configurations. The
>>> "device lifecycle" API may also be invoked from within the plugin (for
>>> instantiating virtual devices).
>>>
>> +2. Indeed, I think that at this stage we're under no pressure to define
>> and implement this API, as we should focus on the
>> tenant API (as we're doing).
>>
>>>
>>> 4. The tenant API is about specifying a "service request" for one of the
>>> services from the catalog. It results in the plugin configuring one or more
>>> physical/virtual devices, at the discretion of the plugin and using whatever
>>> hints are specified during the "service catalog insertion" and "device
>>> lifecycle" API calls. (Actually, my proposal at the summit was about tenant
>>> APIs, not service catalog insertion.)
>>>
>> +2 - and yes, I too was definitely under the impression that we were
>> both talking about tenant APIs.
>>
>>
>>> Now regarding:
>>>
>>> If "multi plugin" approach is chosen, with each plugin having its
>>>> own db (per definition above), it becomes extremely important to have some
>>>> way for plugins to have coherently manage resources that may have their
>>>> representations reside in both dbs. ( Say "port" in core API is sort of
>>>> extended in LBaaS as "pool member" - I.e. Adding/deleting the port impacts
>>>> obviously pool member). Is this a valid concern?
>>>>
>>>
>>>
>>> - Salvatore: I agree it's an important point, but probably not a
>>> huge concern. The plugins will be independent but will not ignore each
>>> other. So I think it is still ok for an LB pool member to reference a port
>>> id from Quantum core. In Quantum core you can use the device_owner and the
>>> device_id fields to mark that port as used by a specific advanced service.
>>> We currently use those fields to prevent tenants from performing operations
>>> on the core API which would disrupt other services.
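>>>
>>> A minimal sketch of that marking (helper name and owner value are
>>> hypothetical; update_port mirrors the core plugin call):
>>>
>>> def claim_port_for_lb(core_plugin, context, port_id, member_id):
>>>     # tag the core port as owned by the LB service so that core API
>>>     # operations on it can be policed
>>>     core_plugin.update_port(
>>>         context, port_id,
>>>         {'port': {'device_owner': 'quantum:LOADBALANCER',  # hypothetical
>>>                   'device_id': member_id}})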
>>>
>>>
>>> - Eugene: That's a good question. Deleting a port could effectively
>>> disable a pool member without LBaaS knowing about it. We can
>>> think of some kind of NotifierAPI for such cases: plugins would subscribe to
>>> core resource changes and make the corresponding changes in their DBs, as
>>> well as the changes in affected device configurations.
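>>>
>>> A minimal sketch of such a NotifierAPI, assuming an in-process callback
>>> registry (all names hypothetical; this is not an existing Quantum API):
>>>
>>> class CoreResourceNotifier(object):
>>>     def __init__(self):
>>>         self._subscribers = {}  # event name -> list of callbacks
>>>
>>>     def subscribe(self, event, callback):
>>>         self._subscribers.setdefault(event, []).append(callback)
>>>
>>>     def notify(self, event, resource):
>>>         # called by the core plugin after each resource change
>>>         for callback in self._subscribers.get(event, []):
>>>             callback(resource)
>>>
>>> notifier = CoreResourceNotifier()
>>>
>>> # the LB plugin keeps its DB and devices in sync with core deletions
>>> def on_port_delete(port):
>>>     print('would remove pool members bound to port %s' % port['id'])
>>>
>>> notifier.subscribe('port.delete.end', on_port_delete)
>>> notifier.notify('port.delete.end', {'id': 'p1'})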
>>>
>>>
>>> Sasha: Having plugins "not ignore" each other or subscribe to
>>> notifications from each other may get progressively more complex as new
>>> services/plugins are introduced. The root cause of this issue is that
>>> "port" and "pool member" are essentially two representations of the same
>>> resource (in RESTful terms), that resource being the VM ("endpoint").
>>>
>>
>> I might argue that the "pool member" concept is actually a decorator of
>> the "port" concept, and hence it might make sense to have a "reference" to
>> it. The interesting point is that a pool member port is still a regular
>> tenant port, and it was one before it was used for load balancing. So the
>> device_owner concept probably won't work here.
>> I agree notifications are complex, and might get more and more complex as
>> the plugin grows. I am wondering whether this is the same case as a file
>> opened by several processes, and hence whether we could solve it by adding
>> a reference count to resources.
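>>
>> A sketch of that file-descriptor analogy, assuming a simple per-port
>> reference count (hypothetical names; as noted below, Quantum has no such
>> mechanism today):
>>
>> class PortRefCounter(object):
>>     def __init__(self):
>>         self._refs = {}  # port_id -> set of owning services
>>
>>     def acquire(self, port_id, service):
>>         self._refs.setdefault(port_id, set()).add(service)
>>
>>     def release(self, port_id, service):
>>         self._refs.get(port_id, set()).discard(service)
>>
>>     def is_deletable(self, port_id):
>>         # a core port may only be deleted once every advanced service
>>         # has released it
>>         return not self._refs.get(port_id)
>>
>> refs = PortRefCounter()
>> refs.acquire('p1', 'lbaas')          # the port becomes a pool member
>> assert not refs.is_deletable('p1')   # a core delete would be refused
>> refs.release('p1', 'lbaas')
>> assert refs.is_deletable('p1')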
>>
>>
>>
>> Decorator is good, and its semantics (it decorates something) imply that
>> there is some underlying mechanism that ensures referential integrity. I
>> don't think there is such a mechanism (yet) in Quantum.
>>
>
> Nope, we do not have this mechanism.
>
>>
>>
>>> In Quantum core this same VM is given connectivity by plugging into a
>>> network, while in LBaaS it is indicated as a member of a pool (group) which
>>> in turn is associated with a VIP (LBPolicy/service). But with separate
>>> plugin databases they are two distinct resources, and additional
>>> integration overhead (the notification mechanism above, for example) is
>>> then introduced to actually keep them in sync.
>>>
>>
>> I agree. You will end up with a reference to a port-id which cannot be
>> validated in terms of referential integrity. So when you delete the port,
>> either you have some logic such as the notifications proposed by Eugene, or
>> you'll end up with the LB service reporting a non-functional pool member,
>> which perhaps isn't 100% bad after all.
>>
>>
>>
>> If customers can live with that, so can I :)
>>
>
> Nevertheless, this is an interesting topic. I think we can isolate it from
> the main discussion on service insertion and probably come back to it later
> in the release cycle.
>
>>
>>> Instead of integration happening through a single common data service
>>> shared among plugins (N interfaces for N plugins), we may end up with
>>> N*(N-1) interfaces between the plugins, potentially introducing
>>> more complexity and tighter coupling, as plugins will have to understand
>>> each other's data models. Something to consider, but hopefully not at the
>>> cost of interfering with the quick progress based on the great work of the
>>> LBaaS team.
>>>
>>
>> That would be just bad, and probably unnecessary.
>>
>>
>> What I meant is that Quantum may end up having to implement these
>> dependencies - so "bad" may become necessary, unless the previous point
>> (the customer doesn't care) makes this issue moot.
>>
>
> My take is that each plugin exposes its own interface, and in theory each
> plugin can communicate with the others. However, even if in theory this is a
> full mesh of interconnected plugins, each plugin should communicate
> only with lower-level plugins. For instance, an LB plugin should not have any
> need to communicate with other LB plugins, whereas it is very likely it
> will have to communicate with the core plugin. Quantum might also provide
> infrastructure to allow a plugin to know which other plugins it can
> communicate with. This is another very interesting topic; if we go down the
> route of multiple coexisting plugins, this would be an important feature to
> add a few milestones down the line.
>
>>
>>
>>
>>>
>>> (Food for thought: this same VM is going to be the subject of security
>>> policies, and the question is how the security service will model it in its
>>> APIs. Adding a new pool member will result in (at least) 3 calls: to Core,
>>> LBaaS, and Security, each involving different representations of the same
>>> VM, and utilizing 6 subscriptions to keep them in sync. Hopefully I am
>>> missing knowledge of some existing mechanism that makes this whole
>>> discussion a "false alarm".)
>>>
>>
>> The only security mechanism we have so far is "security groups", which
>> has just been merged into the API. It's really just an attribute of the
>> port at the API level. At the moment, we're supposing that this service is
>> provided by the core plugin, which probably makes sense. However, there is
>> an argument for separating it into its own plugin.
>>
>>
>> Got it. In the end what matters is whether plugins share the data
>> model/db or not, and in the current architecture they don't - with all the
>> associated pluses and minuses.
>>
>> Is there a published API for "security groups"? (The presentation on
>> the blueprint has it; is that the one?)
>>
>
> I think Aaron Rosen is working on publishing the spec for security groups.
> The code is already merged. Security groups are actually an API which was
> approved at the Folsom summit, but we did not make it in time to merge it
> before shipping the Folsom release.
>
>
>> Thanks!
>>
>>
>>
>>>
>>> Thanks,
>>> Sasha
>>>
>>>
>>> From: Salvatore Orlando <sorlando at nicira.com>
>>>
>>> Reply-To: OpenStack Development Mailing List <
>>> openstack-dev at lists.openstack.org>
>>> Date: Thursday, November 1, 2012 4:35 PM
>>>
>>> To: OpenStack Development Mailing List <
>>> openstack-dev at lists.openstack.org>
>>> Subject: Re: [openstack-dev] [Quantum][LBaaS] Advanced Services
>>> Insertion
>>>
>>> Sasha,
>>> thanks for contributing to the discussion! I will start by answering
>>> Eugene's questions. Some comments on the points you've raised can be found
>>> inline.
>>>
>>> Eugene: When I was talking about device management I primarily meant
>>> "informing LBaaS about the LB device: where it is located and how it is
>>> connected".
>>> I still feel that's related to what you're talking about in (2). Our
>>> view is based on the existing LBaaS workflow, and I'm trying to map it to
>>> your proposal or understand the basic differences.
>>>
>>> Salvatore: While to some extent it is true that the "service type"
>>> ultimately determines the LB device, this does not happen directly. The
>>> service type identifies a "class" of devices, whereas the actual mapping of
>>> a logical load balancer to its concrete realization (which at some point
>>> must happen) is left to the plugin. From the tenant perspective, there is
>>> no interest in knowing any detail of the device serving the request
>>> (physical or virtual, vendor X vs vendor Y, etc.); all the tenant needs to
>>> know is that the chosen device satisfies its requirements.
>>>
>>> Eugene: So the questions will be:
>>> 1) Why do we want to extend the router resource?
>>>
>>> Salvatore: We are not looking to extend the router resource. When
>>> we say that we insert an advanced service on a router, what we are really
>>> saying, to borrow Sasha's terminology, is that we are providing LB
>>> capabilities in a given L3 domain. The load balancer attached to a router
>>> will have access to the L3 domain of that router.
>>>
>>> Eugene: Why do we need to know whether a router provides a certain
>>> resource or not?
>>>
>>> Salvatore: Not sure I understand this question. The router does not
>>> provide any kind of resource; more specifically, we are not proposing that
>>> advanced services should be provided as a sub-resource of a router. The
>>> point we're making is that if one chooses to attach a service in routed
>>> mode, that service will run where the default gateway for your logical
>>> topology is, as will all the other services attached in routed mode.
>>> The service type should then be the same for all those services, and can
>>> easily be specified by the router itself.
>>>
>>> Concrete examples are a VM that provides a bunch of services, from L3
>>> forwarding to load balancing (think of CloudStack's "Virtual Router");
>>> an integrated appliance such as an application delivery controller; or,
>>> simplest of all, Quantum's L3 agent augmented with more agents which
>>> perform firewall configuration with iptables and load balancing with
>>> haproxy.
>>>
>>> Eugene: Currently LBaaS can provide the LB service if it has an
>>> appropriate device; it doesn't depend on any router.
>>> Is it for adv svc quota management only?
>>>
>>> Salvatore: From my understanding, the current LBaaS API proposal (
>>> http://wiki.openstack.org/Quantum/LBaaS/API_1.0) focuses exclusively on
>>> the tenant-side APIs, where device details are hidden, as one might expect.
>>> Unfortunately I've not been as involved in the LBaaS discussion as I
>>> should have so far, so I'm not aware of any API proposal or POC code for
>>> device management. Nevertheless, I am confident that none of what we're
>>> proposing will add unwanted constraints on tenant-facing APIs.
>>> Eugene: 2) Why do we need to associate an adv. service with a certain
>>> router?
>>>
>>> Salvatore: This is not necessary; it's an alternative to the
>>> "standalone", "out-of-path", or "floating" mode.
>>>
>>> Eugene: Did you mean we associate a particular adv service appliance
>>> with a router?
>>> E.g. say we have device LB1 associated with router1 and device LB2
>>> associated with router2 - is that what you meant?
>>>
>>> Salvatore: Yes, but none of this would surface to the tenant. Also, the
>>> choice of LB1 and LB2 is up to the plugin, rather than to the Quantum API
>>> layer.
>>> For instance, a plugin might decide not to allocate any device to a
>>> router until a load balancer has actually been associated with it; another
>>> plugin might instead spin up a "router VM" for each logical Quantum router
>>> and then configure advanced services such as load balancing on that VM.
>>>
>>> Eugene: If so, does it make sense to have a many-to-many relationship
>>> between adv service appliances and routers?
>>> If that's what you meant, then it's part of "device management",
>>> e.g. "informing LBaaS about the LB device: where it is located and how it
>>> is connected".
>>> In this case, when a tenant requests LB for a certain network, Quantum
>>> will collect all available devices for the network (e.g. all devices
>>> attached to the corresponding router) and choose the device on which to
>>> deploy LB for the tenant.
>>>
>>> Salvatore: I think your workflow is quite correct. I just was not
>>> regarding the process of selecting a particular device as "device
>>> management" as well.
>>> The only exception is that the strategy for finding available devices
>>> should be left to the plugin rather than to the Quantum service. Also, I
>>> don't want to risk confusion between the logical and physical realms: "all
>>> devices attached to the corresponding router" sounds like you're referring
>>> to your physical infrastructure, and that is not the case.
>>> So the same workflow you described might be translated as (see the
>>> sketch after this list):
>>> - the tenant creates an LB using the Quantum LBaaS API
>>> - Quantum analyzes the service type in the tenant request
>>> - Quantum forwards the request to the appropriate plugin or driver (the
>>> "or" is mandatory, as this is another design point still under review)
>>> - the selected plugin or driver chooses the appropriate device and
>>> associates the tenant's logical LB with its concrete realization
>>> - the selected plugin or driver configures load balancing using the
>>> device-specific interface
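>>>
>>> A minimal sketch of that dispatch (all names hypothetical; the point is
>>> only that device choice stays inside the selected plugin or driver, not
>>> in the Quantum API layer):
>>>
>>> class HAProxyLBDriver(object):
>>>     def select_device(self, context, lb_request):
>>>         # e.g. pick an haproxy instance with spare capacity
>>>         return 'haproxy-device-1'
>>>
>>>     def configure(self, device, lb_request):
>>>         # push the VIP/pool configuration to the chosen device
>>>         return {'id': 'lb-1', 'device': device}
>>>
>>> SERVICE_PROVIDERS = {'bronze': HAProxyLBDriver()}  # from configuration
>>>
>>> def create_load_balancer(context, lb_request):
>>>     service_type = lb_request.get('service_type', 'bronze')
>>>     provider = SERVICE_PROVIDERS[service_type]
>>>     device = provider.select_device(context, lb_request)
>>>     return provider.configure(device, lb_request)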
>>>
>>>
>>> On 1 November 2012 21:03, Sasha Ratkovic <sasharatkovic at juniper.net> wrote:
>>>
>>>> Eugene, Salvatore, thanks for the great discussion!
>>>>
>>>> I am also looking forward to the answer regarding the semantics of
>>>> associating a "service" with a (logical) "router".
>>>>
>>>> I also have some questions/clarifications below inline (way below,
>>>> hence in green), related to your earlier exchanges…
>>>>
>>>> Thanks,
>>>> Sasha
>>>>
>>>> From: Eugene Nikanorov <enikanorov at mirantis.com>
>>>> Reply-To: OpenStack Development Mailing List <
>>>> openstack-dev at lists.openstack.org>
>>>> Date: Thursday, November 1, 2012 5:20 AM
>>>> To: "openstack-dev at lists.openstack.org" <
>>>> openstack-dev at lists.openstack.org>
>>>> Subject: Re: [openstack-dev] [Quantum][LBaaS] Advanced Services
>>>> Insertion
>>>>
>>>> Hi Salvatore,
>>>>
>>>> Let me answer those two points.
>>>> > 1) Defining how advanced service implementations (plugins or
>>>> drivers) will serve API requests, keeping in mind that for the same
>>>> kind of service there might be multiple implementations available at the
>>>> same time
>>>> That part of "service insertion" seems to be quite clear.
>>>>
>>>> > 2) Defining the APIs and related logic for defining how advanced
>>>> services fit in the logical topology (service insertion modes) and which
>>>> services (type and nature) should be available to tenants.
>>>> When I was talking about device management I primarily meant "informing
>>>> LBaaS about the LB device: where it is located and how it is connected".
>>>> I still feel that's related to what you're talking about in (2). Our
>>>> view is based on the existing LBaaS workflow, and I'm trying to map it to
>>>> your proposal or understand the basic differences.
>>>>
>>>> So the questions will be:
>>>> 1) Why do we want to extend the router resource?
>>>> Why do we need to know whether a router provides a certain resource or
>>>> not?
>>>> Currently LBaaS can provide the LB service if it has an appropriate
>>>> device; it doesn't depend on any router.
>>>> Is it for adv svc quota management only?
>>>>
>>>> 2) Why do we need to associate an adv. service with a certain router?
>>>> Did you mean we associate a particular adv service appliance with a
>>>> router?
>>>> E.g. say we have device LB1 associated with router1 and device LB2
>>>> associated with router2 - is that what you meant?
>>>> If so, does it make sense to have a many-to-many relationship between
>>>> adv service appliances and routers?
>>>> If that's what you meant, then it's part of "device management",
>>>> e.g. "informing LBaaS about the LB device: where it is located and how it
>>>> is connected".
>>>> In this case, when a tenant requests LB for a certain network, Quantum
>>>> will collect all available devices for the network (e.g. all devices
>>>> attached to the corresponding router) and choose the device on which to
>>>> deploy LB for the tenant.
>>>>
>>>> Thanks,
>>>> Eugene.
>>>>
>>>>
>>>> On Wed, Oct 31, 2012 at 11:49 PM, Salvatore Orlando <
>>>> sorlando at nicira.com> wrote:
>>>>
>>>>> A few more comments inline!
>>>>>
>>>>> Salvatore
>>>>>
>>>>> On 31 October 2012 19:21, Eugene Nikanorov <enikanorov at mirantis.com
>>>>> > wrote:
>>>>>
>>>>>> Salvatore, thanks for the detailed reply.
>>>>>>
>>>>>> But still there are unclear points for me :(
>>>>>>
>>>>>> 1. What does "advanced service is created" mean? Don't we always
>>>>>> have all the advanced services which are provided by plugins configured
>>>>>> in quantum.conf? I mean, if we configured the LBaaS plugin in
>>>>>> quantum.conf, then a tenant may call the corresponding tenant API to
>>>>>> request the LB service. Is my understanding correct?
>>>>>>
>>>>>
>>>>> My mistake - English is not my language, unfortunately! I was
>>>>> referring to a tenant creating an instance of an advanced service, e.g.:
>>>>> POST /{whatever-the-prefix-is}/lb
>>>>>
>>>>>>
>>>>>> 2. What happens when a tenant calls /routers/<r_id>/enable_service?
>>>>>>
>>>>>
>>>>> This is the most optional bit of the service insertion specification.
>>>>> Assume we have a router associated with a service type which gives you
>>>>> LB & firewall. This means you can in theory create load balancers and
>>>>> firewalls associated with that router. A tenant, however, might decide to
>>>>> turn off that capability; frankly, I do not see this as necessary - it
>>>>> was in the specification just for completeness.
>>>>>
>>>>>
>>>>>>
>>>>>> Once again, I've tried to map this "insertion" to the workflow of
>>>>>> how a tenant gets a service, but you're saying it's not about that, so
>>>>>> I'm trying to understand how service insertion affects our design and
>>>>>> code at all.
>>>>>>
>>>>>
>>>>> Oh well - as you were talking about "device management", I understood
>>>>> you were talking about managing the pool of resources, physical or
>>>>> virtual, which provide the service to the tenant. If this is actually
>>>>> what you meant, I can confirm service insertion has nothing to do with it.
>>>>> Service insertion tries to address the following problems:
>>>>>
>>>>> 1) Defining how advanced service implementations (plugins or
>>>>> drivers) will serve API requests, keeping in mind that for the same
>>>>> kind of service there might be multiple implementations available at the
>>>>> same time
>>>>> 2) Defining the APIs and related logic for defining how advanced
>>>>> services fit in the logical topology (service insertion modes) and which
>>>>> services (type and nature) should be available to tenants.
>>>>>
>>>>> Let me explain the above two points with an example (sorry if that's
>>>>> boring):
>>>>>
>>>>> 1) When a request for an LB comes in, we need to understand how to
>>>>> dispatch it in Quantum's plugin layer so that it ultimately results in the
>>>>> configuration of the appropriate type of device (e.g. a NetScaler or an
>>>>> HAProxy instance).
>>>>> 2) When a tenant creates a load balancer, we might want to associate
>>>>> it with a logical router (so that the load balancer and the default
>>>>> gateway are the same thing), or we might want to have it standalone,
>>>>> acting independently of any logical router. Also, when an LB is created
>>>>> we might specify a "service type" which could map to things such as
>>>>> Bronze, Silver, and Gold - ultimately leading to the selection of one
>>>>> provider versus another (see the sketch below).
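>>>>>
>>>>> A sketch of how both decisions could surface in the create request
>>>>> (attribute names hypothetical):
>>>>>
>>>>> routed_lb = {'lb': {'router_id': 'router-1',      # routed mode: the LB
>>>>>                     'service_type': 'gold'}}      # sits at the gateway
>>>>>
>>>>> floating_lb = {'lb': {'router_id': None,          # standalone/floating
>>>>>                       'service_type': 'bronze'}}  # mode, no router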
>>>>>
>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Eugene.
>>>>>>
>>>>>>
>>>>>> On Wed, Oct 31, 2012 at 9:14 PM, Salvatore Orlando <
>>>>>> sorlando at nicira.com> wrote:
>>>>>>
>>>>>>> Hi Eugene,
>>>>>>>
>>>>>>> thanks for your feedback.
>>>>>>> Please see my replies inline.
>>>>>>>
>>>>>>> On 31 October 2012 13:32, Eugene Nikanorov <enikanorov at mirantis.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> Hi Salvatore,
>>>>>>>>
>>>>>>>> I'd like to give some feedback/questions based on yesterday's
>>>>>>>> meeting discussion and your revised
>>>>>>>> http://wiki.openstack.org/Quantum/ServiceInsertion page.
>>>>>>>>
>>>>>>>> First of all, I think it's worth fixing the terminology just to
>>>>>>>> avoid any confusion (a sketch of this layering follows the list):
>>>>>>>>
>>>>>>>> - extension (API extension) - a set of REST calls
>>>>>>>> - plugin - code that implements a certain API, works with the Quantum
>>>>>>>> database, and pushes calls to agents
>>>>>>>> - core plugin - code that implements the core API (networks, subnets,
>>>>>>>> ports, L3)
>>>>>>>> - agent - listens for commands from a plugin and applies configuration
>>>>>>>> to a particular device type, e.g. the OVS agent or the L3 agent
>>>>>>>> - driver - code that applies configuration to a particular device type.
>>>>>>>> That is just another layer needed to support different device types;
>>>>>>>> for example, a load-balancing agent may have several drivers to talk to
>>>>>>>> different LB devices.
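>>>>>>>>
>>>>>>>> A runnable sketch of that layering (all class and method names are
>>>>>>>> hypothetical):
>>>>>>>>
>>>>>>>> class HAProxyDriver(object):
>>>>>>>>     """Driver: applies configuration to one device type."""
>>>>>>>>     def apply_vip(self, vip):
>>>>>>>>         print('configuring haproxy for %s' % vip['address'])
>>>>>>>>
>>>>>>>> class LBAgent(object):
>>>>>>>>     """Agent: listens for plugin commands, picks a driver."""
>>>>>>>>     def __init__(self, drivers):
>>>>>>>>         self.drivers = drivers
>>>>>>>>
>>>>>>>>     def create_vip(self, vip):
>>>>>>>>         self.drivers[vip['device_type']].apply_vip(vip)
>>>>>>>>
>>>>>>>> class LBPlugin(object):
>>>>>>>>     """Plugin: implements the LBaaS API, owns its DB tables."""
>>>>>>>>     def __init__(self, agent):
>>>>>>>>         self.db = []
>>>>>>>>         self.agent = agent
>>>>>>>>
>>>>>>>>     def create_vip(self, context, vip):
>>>>>>>>         self.db.append(vip)         # persist in the plugin's DB
>>>>>>>>         self.agent.create_vip(vip)  # then push the call onward
>>>>>>>>
>>>>>>>> agent = LBAgent({'haproxy': HAProxyDriver()})
>>>>>>>> plugin = LBPlugin(agent)
>>>>>>>> plugin.create_vip(None, {'address': '10.0.0.5',
>>>>>>>>                          'device_type': 'haproxy'})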
>>>>>>>>
>>>>>>>
>>>>>>> Everything agreed, with the only exception that some flexibility
>>>>>>> might be allowed in the definition of agents/drivers. It is possible to
>>>>>>> think of a plugin which directly dispatches commands to drivers, rather
>>>>>>> than relying on agents. But this is probably just a detail.
>>>>>>>
>>>>>>
>>>>
>>>> And a very important detail, I think. Given the mention of "layer" in
>>>> Eugene's terminology (btw, thanks Eugene for initiating the "terminology"
>>>> discussion), was the intention that the agent would expose "device_type"
>>>> APIs (a service resource abstraction) and drivers would map those to
>>>> "device_type_vendor" APIs? This essentially takes this "multi service"
>>>> insertion discussion into an equally important "multi vendor" discussion.
>>>>
>>>
>>> In a nutshell, the goal is to make Quantum both multi-service and
>>> multi-vendor. The distinction between agents and drivers needs to be
>>> explained better, and I would like to use Quantum DHCP as an example.
>>> DHCP directly depends on the Quantum core plugin. It has an agent, which
>>> receives notifications from the core plugin and retrieves detailed
>>> information from it (such as IP allocation ranges); the agent then uses
>>> drivers for performing the actual configuration that will bring the DHCP
>>> service up. At the moment we only have a dnsmasq driver, but it is my
>>> understanding that a driver for ISC DHCP is underway too.
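>>>
>>> A sketch of that driver contract, modelled on the DHCP example (class
>>> and method names hypothetical; per the above, only the dnsmasq driver
>>> exists today):
>>>
>>> class DhcpDriverBase(object):
>>>     """Common contract the DHCP agent programs against."""
>>>     def enable(self, network):
>>>         raise NotImplementedError
>>>
>>> class DnsmasqDriver(DhcpDriverBase):
>>>     def enable(self, network):
>>>         # would render a dnsmasq config from the allocation ranges
>>>         # the agent fetched from the core plugin, then (re)start it
>>>         pass
>>>
>>> class IscDhcpDriver(DhcpDriverBase):
>>>     def enable(self, network):
>>>         # same contract, backed by ISC DHCP instead
>>>         pass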
>>>
>>>
>>>>
>>>>
>>>>
>>>>>>>> Some thoughts on the Service Insertion proposal:
>>>>>>>>
>>>>>>>> 1. It seems that the multi-plugin approach is the right way to move
>>>>>>>> forward, compared to the "mixin" approach where we inject into and
>>>>>>>> modify the code of the core plugin.
>>>>>>>> This will preserve plugin independence while requiring some changes
>>>>>>>> to the infrastructure (plugin loading, extension management).
>>>>>>>>
>>>>>>>
>>>>>>> I agree on the changes to the infrastructure. I'd like to
>>>>>>> understand a little bit better what degree of plugin independence
>>>>>>> you'd like to achieve.
>>>>>>>
>>>>>>
>>>> If "multi plugin" approach is chosen, with each plugin having its
>>>> own db (per definition above), it becomes extremely important to have some
>>>> way for plugins to have coherently manage resources that may have their
>>>> representations reside in both dbs. ( Say "port" in core API is sort of
>>>> extended in LBaaS as "pool member" - I.e. Adding/deleting the port impacts
>>>> obviously pool member). Is this a valid concern?
>>>>
>>>
>>> I agree it's an important point, but probably not a huge concern. The
>>> plugins will be independent but will not ignore each other. So I think it
>>> is still ok for an LB pool member to reference a port id from Quantum core.
>>> In Quantum core you can use the device_owner and the device_id fields to
>>> mark that port as used by a specific advanced service. We currently use
>>> those fields to prevent tenants from performing operations on the core API
>>> which would disrupt other services.
>>>
>>>
>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> 2. Having several implementations of the same service type.
>>>>>>>> If all services of a certain type implement the same calls, then
>>>>>>>> something should route each call to the particular plugin.
>>>>>>>> The options include (a sketch of option 2 follows the list):
>>>>>>>> 1) passing the particular service implementation as a URL parameter
>>>>>>>> 2) having a prefix in the URI for a certain svc type:
>>>>>>>> /lb_svc/lbaas_impl1/call.json, /lb_svc/lbaas_impl2/call.json
>>>>>>>> 3) having a (tenant, service implementation) association in the DB
>>>>>>>> that will allow a call to be routed automatically. But this creates a
>>>>>>>> 1-to-1 relation, i.e. a tenant will have only one implementation of a
>>>>>>>> service available
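>>>>>>>>
>>>>>>>> A tiny sketch of option (2), URI-prefix routing (hypothetical
>>>>>>>> names; see Salvatore's concerns about this option below):
>>>>>>>>
>>>>>>>> ROUTES = {'/lb_svc/lbaas_impl1': 'lb_plugin_impl1',
>>>>>>>>           '/lb_svc/lbaas_impl2': 'lb_plugin_impl2'}
>>>>>>>>
>>>>>>>> def dispatch(path):
>>>>>>>>     for prefix, plugin in ROUTES.items():
>>>>>>>>         if path.startswith(prefix):
>>>>>>>>             return plugin
>>>>>>>>     raise LookupError('no LB implementation serves %s' % path)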
>>>>>>>>
>>>>>>>
>>>>>>> In the list above there's no mention of the service_type concept
>>>>>>> we discussed at the summit, which is also presented on the
>>>>>>> service insertion wiki page. Do you deem it totally unreasonable?
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> My preference is (2): first of all, it "splits" the whole API
>>>>>>>> between the core API and the Adv Svc APIs, and also does so for
>>>>>>>> different service type implementations.
>>>>>>>> Although URIs may not be as short as we want them, this could
>>>>>>>> prevent naming collisions between different service types.
>>>>>>>>
>>>>>>>
>>>>>>> I have some concerns about this approach, and the length of the
>>>>>>> URI is not one of them. The particular implementation of a load
>>>>>>> balancing plugin is directly surfaced, meaning that a user cannot
>>>>>>> transparently move from one plugin to another without having to rewrite
>>>>>>> their client applications - as this proposal implies that
>>>>>>> /lb_svc/lbaas_impl1 might have a different spec from
>>>>>>> /lb_svc/lbaas_impl2. Also, one might wonder what the point is, in this
>>>>>>> case, of going through Quantum at all: one could just send the calls to
>>>>>>> an endpoint which performs URL rewriting and then forwards the call to
>>>>>>> the appropriate final endpoint. Finally, it is my opinion that this
>>>>>>> solution is tantamount to saying that you want multiple LBaaS solutions
>>>>>>> at the same time in the same cloud; I won't argue that this is not a
>>>>>>> reasonable use case, but I'd prefer Quantum to present a single LBaaS
>>>>>>> interface rather than harbouring multiple solutions.
>>>>>>>
>>>>>>
>>>> Is it viable to have these different service implementations
>>>> "advertise" properties of the implementation, via an attribute (TBD) of
>>>> the service object (the VIP in the case of LBaaS), so that the tenant
>>>> uses that information to choose an implementation? In other words,
>>>> choosing that property of the "service" object gives the hint as to which
>>>> implementation to use. The plugin then does the routing, and there are no
>>>> APIs/URIs per implementation.
>>>>
>>>
>>> I like the idea that different providers might advertise special
>>> properties of their implementation. In Quantum we use API extensions for
>>> plugin-specific features, and it is my opinion that this mechanism can be
>>> extended (with some work) to advanced services. The tenant could query the
>>> "service type" to see which extensions are supported.
>>> Using them as a mechanism for selecting the appropriate provider is then
>>> something we can discuss. Surely, if only "provider X" supports extension
>>> "fooX", then it makes sense that if you use fooX in your request you want
>>> provider X. But there will still be cases of ambiguity, where a request
>>> might be served by several providers... and I do not know how we should
>>> handle those.
>>>
>>>
>>>
>>>
>>>>
>>>>
>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> 3. Service Insertion:
>>>>>>>> I was thinking about routed/floating-mode insertion and there is a
>>>>>>>> certain thing I don't understand: the workflow.
>>>>>>>> It seems that the whole thing is somehow close to what we used to
>>>>>>>> call "device management" in the Mirantis implementation of LBaaS, but
>>>>>>>> it doesn't look like it solves all device management tasks.
>>>>>>>>
>>>>>>>
>>>>>>> Actually, I don't think it is related at all to device management
>>>>>>> - assuming that I understand what you mean by device management. We are
>>>>>>> using this terminology to talk about insertion in the logical model,
>>>>>>> not the physical model. I think I clarified in the specification that
>>>>>>> we don't want to address device management in the service insertion
>>>>>>> blueprint. It is my opinion that this task is specific to particular
>>>>>>> advanced services. For the LBaaS case, it necessarily requires a
>>>>>>> knowledge of physical and virtual appliances that I honestly do not
>>>>>>> have... so I'll gladly leave it to the load balancing experts :)
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> So in our implementation of LBaaS the workflow was as follows:
>>>>>>>> 1) the admin creates the device. Essentially it's just an instruction
>>>>>>>> to LBaaS about where the device is (its address), which type it is,
>>>>>>>> and the credentials to manage it.
>>>>>>>>
>>>>>>>> 2) the tenant creates a VIP. During this operation LBaaS chooses the
>>>>>>>> most appropriate device from the list of available devices and applies
>>>>>>>> the appropriate device configuration
>>>>>>>>
>>>>>>>
>>>>>>> I think it is a reasonable workflow, and it goes back to the
>>>>>>> discussion around "capabilities" we were having during last week's
>>>>>>> call. I hope we agree that this is specific to LBaaS - I don't think
>>>>>>> service insertion has anything to do with it, as it only looks after
>>>>>>> the "logical" topology.
>>>>>>>
>>>>>>
>>>>>>>
>>>>>>>> If we're talking about the workflow within Quantum, it could look
>>>>>>>> like the following (scenario 1 - shared HW device):
>>>>>>>> 1) the admin creates the device. The same as in LBaaS - address,
>>>>>>>> type, credentials
>>>>>>>> 2) the tenant creates a VIP: the Quantum LBaaS plugin chooses the
>>>>>>>> device, configures connectivity between the device and the tenant
>>>>>>>> network (possibly with L3 router configuration),
>>>>>>>> configures the load balancer according to the provided VIP parameters,
>>>>>>>> and possibly assigns a floating IP from the external network
>>>>>>>>
>>>>>>>
>>>>>>> The solutions for "stitching" a device to a Quantum network vary
>>>>>>> with the nature of the device, and with the way in which the service is
>>>>>>> inserted in the logical topology. I think what you said above is valid;
>>>>>>> in addition to that you might consider also other possibilities:
>>>>>>> 1) using Nova to plug services into Quantum - which would work nicely
>>>>>>> for physical appliances
>>>>>>> 2) leveraging an underlying layer-2 agent (which would be the same
>>>>>>> thing the L3 agent does)
>>>>>>> 3) using the "provider networks" capability. This capability, in a
>>>>>>> nutshell, will allow you to map a Quantum network onto a specific
>>>>>>> physical network, regardless of the technology Quantum uses.
>>>>>>> 4) having a "translator" across segment types (for instance GRE to
>>>>>>> VLAN or VXLAN to VLAN) - we don't yet have this in Quantum, but Kyle
>>>>>>> from Cisco has a very interesting blueprint on this topic:
>>>>>>> https://blueprints.launchpad.net/quantum/+spec/quantum-network-segment-translation
>>>>>>>
>>>>>>>>
>>>>>>>> If we're talking about a private balancer with Quantum, then:
>>>>>>>> 1) the tenant creates the device. This could be the launch of a VM
>>>>>>>> with HAProxy within the tenant, for instance.
>>>>>>>> 2) the tenant creates a VIP: LBaaS configures the load balancer
>>>>>>>> according to the provided VIP parameters, and possibly assigns a
>>>>>>>> floating IP from the external network. No other actions are required
>>>>>>>>
>>>>>>>
>>>>>>> Yeah... I agree there might be a case where a tenant creates a
>>>>>>> device. This is why we never make a clear distinction between tenant and
>>>>>>> admin APIs, but leave it configurable through the policy engine.
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> It would be great if you could explain how service assignment for
>>>>>>>> routers maps to device management scenarios and what the exact
>>>>>>>> workflow will be.
>>>>>>>>
>>>>>>>
>>>>>>> I hope my explanation is satisfactory - which basically is that
>>>>>>> service insertion and device management have nothing in common.
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Eugene.
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~