[openstack-dev] [Quantum][LBaaS] Selecting an LBaaS device given a service type
Salvatore Orlando
sorlando at nicira.com
Sat Dec 8 10:59:54 UTC 2012
Hi,
Some answers inline.
Salvatore
On 5 December 2012 03:01, Youcef Laribi <Youcef.Laribi at eu.citrix.com> wrote:
> Peter,
>
>
>
> No, there is no device selection at the API level. What we are discussing in
> this thread is the “scheduling” problem (i.e. how does the LBaaS service
> choose a device to create the VIP on, and whether there should be a common
> scheduler for all vendors, and how can a vendor provide custom scheduling or
> alternatively provide input in a scheduling decision, given the variety of
> devices capabilities/limitations).
>
>
>
> To get back to your question, the user (API) only specifies a “service type”
> (regular, premium, etc.) when creating a VIP. If she doesn’t specify one, a
> default one is used.
This is what the draft patch I pushed yesterday aims to do.
>
>
>
> In a separate API (admin API), an admin can beforehand create service types
> and register device types (drivers) against these “service_types”, so
> service type “regular” can contain {“LB”: “ha_proxy”} for example.
>
Yeah it's pretty much like that.
If you look at the patch, the request body is as follows:
{ "service_type":
    { "name": "xxx",
      "description": "yyy",
      "enabled": True,
      "default": False,
      "service_definitions": [
          {"service": "LB",
           "plugin": "abc",
           "driver": "123"},
          {"service": "wootwoot",
           "plugin": "mootmoot",
           "driver": "lotlot"}
      ]
    }
}
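
For illustration only, a rough sketch (the helper name and variable names are mine, not taken from the patch) of how a plugin could resolve the plugin/driver pair for a given service category from such a service type:

    # Hypothetical helper, not part of the patch: given a service_type dict
    # shaped like the body above, return the (plugin, driver) registered
    # for a particular service category.
    def resolve_service_definition(service_type, service_name):
        for definition in service_type["service_definitions"]:
            if definition["service"] == service_name:
                return definition["plugin"], definition.get("driver")
        raise ValueError("no definition for service %s" % service_name)

    # e.g. resolve_service_definition(body["service_type"], "LB") -> ("abc", "123")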
>
>
> What is not clear currently to me, and I need Salvatore to clarify this
> point is whether a “service type” can contain several entries for the same
> category like “LB”, and whether the same service type can contain a mixture
> of categories, so for example a service type “fast” would be defined as
> follows: {“LB”: “vendor1”, “FW”: “vendor2”, “LB”: “vendor3”}. This at least
> is what is suggested in the blueprint for service insertion here:
> http://wiki.openstack.org/Quantum/ServiceInsertion
We can definitely have a mixture of categories; for instance, a
comprehensive service type that gives you options for load balancing,
firewall, and other services.
At this stage I am not proposing multiple solutions for the same kind
of service in a single service type. That option would necessarily
imply some sort of logic in the Quantum service for discriminating
between the two options, and I would like to avoid it; or at least I
would like to keep it out of what is expected to be delivered for the
Grizzly release.
I think that the goal of a service type is to specify a plugin and
possibly a driver for serving a request; the driver is, however,
optional. In that case, a 'scheduling' mechanism within the plugin
layer might select the appropriate driver according to parameters such
as requested features and device capabilities.
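
As a purely illustrative sketch of that last point (the capability model and driver registry below are assumptions, not part of the patch), plugin-layer driver selection could look roughly like:

    # Hypothetical: each driver advertises a set of capability strings and the
    # plugin picks the first registered driver that covers the requested features.
    def select_driver(drivers, requested_features):
        for name, driver in drivers.items():
            if set(requested_features) <= set(driver.capabilities):
                return name
        return None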
>
>
>
> Thanks
>
> Youcef
>
>
>
> From: Mellquist, Peter [mailto:peter.mellquist at hp.com]
> Sent: Tuesday, December 4, 2012 5:38 PM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Quantum][LBaaS] Selecting an LBaaS device
> given a service type
>
>
>
> Youcef,
>
>
>
> Is the current proposal for the user /admin to select a device type through
> the API? I am hoping that ‘devices’ are abstracted through the APIs and
> instead the API allows selection of the LBaaS service offerings ( regular,
> premium, etc ) with a default when not specified at all.
>
>
>
> Thanks,
>
> Peter.
>
>
>
> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
> Sent: Tuesday, December 04, 2012 2:16 PM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Quantum][LBaaS] Selecting an LBaaS device
> given a service type
>
>
>
> Hi Youcef,
>
>
>
> see my comments inline:
>
> On Wed, Dec 5, 2012 at 1:29 AM, Youcef Laribi <Youcef.Laribi at eu.citrix.com>
> wrote:
>
> There are some misunderstandings going on in this thread (nothing unusual
> here :)), but since we have some code to play with, let's use this to be more
> specific about what we mean.
>
>
>
> Looking at the code that Eugene sent, I’m a bit confused, because we seem to
> be talking now not only about an “LB” scheduler, but a scheduler framework
> for all service types (LB, firewall, etc.).
>
>
>
> def get_device_for_resource(resource):
>     scheduler = scheduling_drivers[resource.service_type]
>     return scheduler.get_device_for_resource(resource)
>
>
> device_info = get_device_for_resource(resource)
>
>
>
> It seems that for each service_type, we suggest having a “scheduler”, which
> is confusingly called "scheduling_driver" (I imagine this has nothing to do
> with the vendor-specific LB drivers, right?).
>
> "Scheduling driver" is a term taken from nova. Probably, since we're already
> using "driver" in other context, we may call it "pluggable scheduling
> algorithm".
>
>
>
> So, if a service type contains service definitions for LB and Firewall, we
> will have the same “scheduler_driver”?
>
> In fact, that depends on how generic the scheduling algorithm can be. If it
> can be even service type-agnostic - then yes (in fact, "chance scheduler"
> which picks devices randomly could be such a candidate).
>
> But I think that would be an overgeneralization.
>
> So, let's just assume that each service type will have its own set of
> scheduling algorithms (one is configured at a time).
>
> This way we can write a scheduling algorithm for LB without thinking about
> other service types, and still be able to add schedulers for other service
> types later.
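
As a tiny illustration of "one configured algorithm per service type" (the class and registry names below are made up for the example):

    class FirstFitLBScheduler(object):
        # Placeholder algorithm: take the first device that satisfies the VIP.
        def get_device_for_vip(self, vip, devices):
            for device in devices:
                if device.is_good(vip):
                    return device
            return None

    # one scheduling algorithm configured per service type
    scheduling_drivers = {"LB": FirstFitLBScheduler()}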
>
>
>
>
>
> Even if I assume that my service type has been created so it only contains
> one “LB” service definition (and no other service definitions), and
> therefore it will nicely map to a scheduler that only does “LB” scheduling
> like the one included
>
>
>
> class LBScheduler:
>     # {device_type: handler class} -- each handler can match VIP
>     # requirements to device capabilities and status
>     device_handlers = {}
>
>     def get_device_for_vip(self, vip):
>         for device in devices:
>             if self.device_handlers[device.type].is_good(device, vip):
>                 return device
>
>
>
> In the above code, the method get_device_for_vip() uses a concept of a
> "device_handler" for each "device type" (driver). Isn't this
> device_handler() vendor-specific code?
>
> Exactly! That's where the vendor plugs in their code. The decision is still
> made at a more generic level while involving device-specific code.
>
>
>
> If the above assumption is correct, then the vendor *is* actually taking
> part in the scheduling decision, which is already an improvement on the
> previous proposal.
>
> Well, in fact, I meant that from the very beginning :)
>
> It's just that one vendor (driver) can't make the decision by itself.
>
>
>
> But does it mean we are suggesting to have vendor-specific code scattered in
> several places, some of it in the scheduler, some of it in the agent/driver?
> I don’t like this. I thought that the LB plugin should be completely
> vendor-agnostic, and all vendor-specific code should be grouped in the
> agent/driver component.
>
> I agree with you regarding code locations. But I didn't mean we need to
> write device-specific code in the scheduler.
>
> Just let the device_handler be part of the driver library.
>
> It's device-specific code that resides in a driver but is used
> within the scheduler component.
>
>
>
> That may look like the following in scheduler.conf:
>
> [LoadBalancer]
>
> device_handlers = vendorXDriver.deviceTypeA.HandlerClass1,
> vendorYDriver.deviceTypeB.HandlerClass2
>
>
>
> Where HandlerClass could be:
>
>
>
> class HaproxyHandlerClass:
>
>     def get_device_type(self):
>         return "HAPROXY"
>
>     def is_good(self, device, vip):
>         # match the VIP's requirements against haproxy capabilities/limits
>         return ...
>
>
>
> The scheduler then makes use of it:
>
>
>
> def load_handlers():
>     for handler_class in handlers:
>         cls = import_class(handler_class)
>         inst = cls()
>         device_handlers[inst.get_device_type()] = inst
>
>
>
> That's, in fact, the same way plugins are currently loaded in quantum.
>
> Hope the idea became clearer!
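
A rough, runnable equivalent of the loading step above (the import helper is a stand-in for whatever utility Quantum would actually use; the module paths are just the illustrative ones from the config example):

    import importlib

    def import_class(dotted_path):
        module_name, _, class_name = dotted_path.rpartition(".")
        return getattr(importlib.import_module(module_name), class_name)

    def load_handlers(option_value):
        # option_value is the comma-separated device_handlers setting, e.g.
        # "vendorXDriver.deviceTypeA.HandlerClass1,vendorYDriver.deviceTypeB.HandlerClass2"
        handlers = {}
        for path in option_value.split(","):
            inst = import_class(path.strip())()
            handlers[inst.get_device_type()] = inst
        return handlers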
>
>
>
> Thanks,
>
> Eugene.
>
>
>
>
>
>
>
> Thanks,
>
> Youcef
>
>
>
>
>
> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
> Sent: Tuesday, December 4, 2012 5:08 AM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Quantum][LBaaS] Selecting an LBaaS device
> given a service type
>
>
>
> Hi Salvatore,
>
>
>
> Thanks for detailed reply.
>
> I'm going to explain my idea in more detail with pseudocode.
>
> See my comments inline.
>
> On Tue, Dec 4, 2012 at 2:52 PM, Salvatore Orlando <sorlando at nicira.com>
> wrote:
>
> My only remark is that in my opinion having a 'global' LB scheduler that
> will work across all drivers is definitely valuable, but probably not
> necessary. If I were to set the priority of this feature, I would put it on
> "wishlist" for Grizzly. And this for the several reasons:
>
> - as already said in this thread, it is not easy to model features and
> device capabilities in an agnostic way.
>
>
>
> Capabilities are not device-agnostic, for sure.
>
> Some parts of the device database model are common and generic, and some may
> be stored in "extra" fields which are used by device-specific code.
>
> I'm trying to design the scheduler as an extensible, device-agnostic (and, in
> fact, service-type-agnostic) framework, where we will implement scheduling
> logic for LB and, going deeper, some device-specific aspects of LB scheduling.
>
> Once again, I think that the LB scheduling algorithm should be generic and
> also configurable (e.g. you may write your own algorithm and make the
> scheduler use it).
>
> First of all, this framework will allow us to start with a stub, like
> scheduling on the first available device, e.g., something like this:
>
>
>
> def get_device_for_vip(vip):
>     for device in devices:
>         if is_good(device, vip):
>             return device
>
>
>
> When I say service-type- and device-agnostic framework, I mean that all
> logic is hidden behind just a few generic calls like the following:
>
>
>
> device_info = get_device_for_resource(resource)
>
>
>
> which could be implemented as:
>
>
>
> def get_device_for_resource(resource):
>     scheduler = scheduling_drivers[resource.service_type]
>     return scheduler.get_device_for_resource(resource)
>
>
>
> In turn, scheduling_driver.get_device_for_resource(resource) may be our LB
> scheduling logic:
>
>
>
> class LBScheduler:
>     # {device_type: handler class} -- each handler can match VIP
>     # requirements to device capabilities and status
>     device_handlers = {}
>
>     def get_device_for_vip(self, vip):
>         for device in devices:
>             if self.device_handlers[device.type].is_good(device, vip):
>                 return device
>
>
>
> You may notice that the scheduler component itself is quite a thin layer
> which serves several purposes:
>
> 1) extensible and configurable:
>
> - you add service types like you add plugins to quantum
>
> - you add drivers to let the generic algorithm have a better understanding of
> a particular device of a particular type.
>
> 2) synchronous. Choosing a device is one fast synchronous operation.
>
> 3) The code itself just routes a resource to the corresponding logic.
>
>
>
> I think these are both good features and easy to implement.
>
> That may save lots of refactoring and redesigning when it comes to other
> advanced services.
>
>
>
>
>
> - drivers apparently will be more than a simple "actuator", but will have
> their own logic. I can see, for instance, at least three different driver
> families: i) hardware load balancers, ii) contextualized hardware load
> balancers (hw appliances where you create virtual LB appliances), and iii)
> virtualized load balancers, which could be spawned, for instance, using nova.
> What would be the criteria for choosing a virtual appliance versus
> allocating a VIP on a hardware one?
>
> That's what I'm trying to avoid: some drivers will be "simple actuators",
> some will have their own logic.
>
> Regarding the question about criteria: good question :) But in my opinion it
> is a bit unrelated to the scheduling architecture, i.e. it applies
> whichever choice we make.
>
>
>
> - In this Grizzly release we probably won't have a huge number of drivers.
> Or probably we'll have the drivers, but Quantum LB service, being
> experimental, will probably be deployed with no more than one or two
> drivers.
>
> Another interesting point in my opinion is that this scheduling logic is
> part of the LB plugin we're implementing for Grizzly, not part of the DB
> model supporting the tenant API. There will be, of course, model classes for
> device management, but they (and all the logic for managing them) should be
> separate from the modules which implement the API.
>
> That's for sure. In fact we're currently thinking of scheduling and device
> management as a separate mandatory plugin which will provide its
> functionality to other advanced service plugins.
>
>
>
> My argument here is not that we should not have a global scheduler; I'm just
> saying I have the impression that there are some important details which are
> not yet completely fleshed out.
>
> I understand that, and I just want these details to affect particular
> code (drivers, algorithms), but not the whole architecture.
>
>
>
> Nevertheless, driver-level scheduling is valuable too, and probably easier
> to implement. I wouldn't disregard, in the long run, having a two-step
> process:
>
> Step 1 - Quantum LB plugin schedules drivers according either to the
> service_type required by the user or to requested features
>
> Step 2 - Driver selects device according to capabilities
>
> Questions here:
>
> 1) If the driver selects the device, is it mandatory for all drivers to
> provide such functionality?
>
> 2) Where is the device database stored?
>
> If it is mandatory for a driver to be able to select a device, should the
> device database be driver-specific, e.g. each driver has its own?
>
> Will drivers access a single database remotely? Remember we decided that
> drivers run within the agent, and there could be several agents running. The
> driver of which agent instance should be responsible for scheduling?
>
> If it's not mandatory for the driver, then some scheduling logic will be in
> the generic scheduler and some in the drivers.
>
>
>
> In fact, any of the above options brings tons of coding and testing
> complexity when we start to answer these questions.
>
>
>
>
>
>> but to Sam’s point, a common scheduler might not have enough visibility or
>> understanding of device specifics/limitations in order to “correctly” pick
>> the right device.
>
> Saying this, you assume that the driver has such understanding, and even that
> might not be the case.
>
>
>
> This boils down to defining what a driver is. If it has to be a simple
> "actuator" (I don't remember the name it had in Atlas), then it makes
> perfect sense to do the scheduling in the service, as the driver just
> executes the LB operation.
>
> In fact, we did decide that once: the driver is simple and synchronous, and
> maps the generic LB model to a device-specific one.
>
> But by saying that the "driver may not have understanding" I mean that in
> some cases we need an extended status of the device to know its "rating" in
> scheduling. Example: the number of deployed VIPs (which may be hard for the
> driver to find out), current connections, preconfigured device limits; some
> of these could be obtained from the device, some are known in the device DB.
>
>
>
> In fact, in order to avoid that, the scheduler should contain:
>
> 1) all necessary logic to make a decision (the logic may be device-specific,
> e.g. different for different kinds of devices, or even different instances
> of the same device type). In that case scheduling becomes a simple, fast
> operation: read data from the DB - make a choice - write to the DB.
>
> 2) active device monitoring: that is needed for "visibility and
> understanding", and it can be device-specific. It is performed by the
> scheduler using its device database and device-specific code from the drivers
> (but the code runs under the scheduler process or plugin).
>
>
>
> The idea behind such a scheme is the same as scheduling in nova. Unlike nova,
> we don't have devices reporting their status to the scheduler, so we need to
> poll them proactively.
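
Just to make the polling idea concrete, here is only a rough sketch (the device DB and handler interfaces are assumptions; nothing here is settled):

    import time

    def poll_devices(device_db, device_handlers, interval=60):
        # Periodically refresh each device's status in the scheduler's device DB,
        # delegating the device-specific query to the vendor-provided handler.
        while True:
            for device in device_db.list_devices():
                handler = device_handlers.get(device["type"])
                if handler is not None:
                    device_db.update_status(device["id"], handler.get_status(device))
            time.sleep(interval)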
>
>
>
> I am not sure I agree on this statement. Scheduling in nova is a decision
> which takes into account a limited set of capabilities, and then picks the
> first node with enough resources. It does not select the "best" one - though
> I concede you can just replace the scheduling algorithm with another that
> select the best node. However, it assumes all nodes are identical. Instead
> here we're not distinguishing only on capabilities but also on features. And
> the concept of capability too might be quite different across drivers.
>
> Scheduling in nova is done by scheduling drivers (which in that context
> means pluggable scheduling algorithms), you know. There is a "chance" driver
> that picks a host randomly, and a "least_cost" driver that looks at a
> node's load and status and applies a more complex algorithm to make a
> decision.
>
> That is exactly what I'd like to see in our scheduler: a framework that
> will allow a primitive implementation while leaving the door open to more
> complex ones.
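
Purely for illustration, two pluggable algorithms in the spirit of nova's "chance" and "least_cost" drivers (the device attributes used here are assumptions):

    import random

    class ChanceScheduler(object):
        def get_device_for_vip(self, vip, devices):
            candidates = [d for d in devices if d.is_good(vip)]
            return random.choice(candidates) if candidates else None

    class LeastLoadedScheduler(object):
        def get_device_for_vip(self, vip, devices):
            candidates = [d for d in devices if d.is_good(vip)]
            # "load" could be the number of deployed VIPs, current connections, etc.
            return min(candidates, key=lambda d: d.load) if candidates else None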
>
>
>
> However, it's probably down to me not understanding how you are planning to
> design this scheduler. For instance, how would it select between creating a
> VIP on a physical load balancer, or spawning a virtual appliance and creating
> the VIP on it?
>
> It's a good question. Regarding this particular choice: my idea is that the
> scheduler doesn't make such a decision at all.
>
> It doesn't operate on devices that don't exist yet. In order to bring a VM LB
> into consideration, the user needs to launch it and register it as a device.
>
> You may argue that these are additional actions the user will need to take.
> But what is on the other hand?
>
> If we let the scheduler make such a decision, then it should also be capable
> of doing the following:
>
> - consider whether there is already a VM LB in the tenant's network that may
> be used
>
> - spawn an instance of a VM LB (the tenant should also provide an image id to
> do so, and it should also be reflected in the tenant API, which we don't have
> at the moment)
>
>
>
> The second point alone has two disadvantages:
>
> - scheduling can't be a synchronous operation. That will affect the whole
> architecture, complicating it.
>
> - the user will need to pass device-management-specific info (an image id) to
> the device-management-unaware tenant API.
>
>
>
> Sorry for the long email, you're probably tired of reading it :)
>
> But I think this is an important discussion and worth covering in upcoming
> LBaaS meetings; maybe it's worth setting up a meeting on IRC for that topic
> specifically.
>
>
>
> Thanks,
>
> Eugene.
>
>
>
>
>
>
>
> What do you think?
>
>
>
> Thanks,
>
> Eugene.
>
>
>
>
>
> On Fri, Nov 30, 2012 at 7:45 PM, Ilya Shakhat <ishakhat at mirantis.com> wrote:
>
> Sam, Youcef,
>
>
>
> Your point makes sense. I tried to make "scheduler" common, but it really
> looks like the driver should participate in decision making.
>
>
>
> Thanks,
>
> Ilya
>
>
>
>
>
> 2012/11/30 Samuel Bercovici <SamuelB at radware.com>
>
>
>
> Ilya,
>
>
>
> I concur with Youcef.
>
>
>
> -Sam.
>
>
>
>
>
> From: Youcef Laribi [mailto:Youcef.Laribi at eu.citrix.com]
> Sent: Friday, November 30, 2012 3:57 AM
>
>
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Quantum][LBaaS] Selecting an LBaaS device
> given a service type
>
>
>
> Ilya,
>
>
>
> Let’s first separate device on-boarding and management from the “scheduler”
> discussion. These are separate functions in the system, and we’ll keep
> scheduler as the component that picks the driver/device (and we can argue
> separately and decide whether this is a common component to all vendors or a
> vendor-specific component, whether it resides in the plugin or in the
> driver, etc.).
>
>
>
> Now to come back to the scheduler discussion, it might seem that a scheduler
> common to all drivers would work fine, but to Sam's point, a common
> scheduler might not have enough visibility or understanding of device
> specifics/limitations in order to “correctly” pick the right device. For
> example, some vendors have a limit of vlans per interface, or cannot support
> overlapping IPs, other vendor devices are meshed together in a cluster or a
> pool and there are optimal ways to distribute VIPs or networks in those
> setups, that a common scheduler wouldn’t understand. That’s why I previously
> said that the scheduler (“placement component”) should pick the driver, and
> let the driver pick a specific device, that way each vendor is responsible
> for their own allocation strategy on their devices. Or at least the driver
> should have an input into the scheduler decision, so the scheduler doesn’t
> pick the wrong device.
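
A bare-bones sketch of that two-step idea, with the scheduler picking a driver and the driver picking one of its own devices (all names below are illustrative):

    def place_vip(vip, drivers):
        # Step 1: the common "placement component" picks a driver able to serve the VIP.
        for name, driver in drivers.items():
            if driver.supports(vip):
                # Step 2: the vendor code applies its own allocation strategy
                # (vlan limits, clustering, overlapping IPs, ...) to pick a device.
                device = driver.select_device(vip)
                if device is not None:
                    return name, device
        return None, None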
>
>
>
> On the admin/operator APIs used for device on-boarding and management, we
> need to initiate a separate thread, and discuss whether this be implemented
> as a separate plugin than the LBaaS plugin, or we extend the LBaaS plugin to
> also support a provider/admin API? And what is the role of LBaaS
> agent/driver in the device on-boarding process.
>
>
>
> Thanks
>
> Youcef
>
>
>
>
>
>
>
>
>
> From: Ilya Shakhat [mailto:ishakhat at mirantis.com]
> Sent: Thursday, November 29, 2012 7:34 AM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Quantum][LBaaS] Selecting an LBaaS device
> given a service type
>
>
>
> Hi,
>
>
>
> Just a small summary of our discussion. We have the following components:
>
> *aaS plugins - do the logic related to services. Plugins know the service
> data model only and don't hold information about devices. When a Plugin needs
> to deploy any changes, it calls the Scheduler.
> Scheduler ("placement component") - binds services to devices. It has an API
> to manage devices (similar to the provider API in the old LBaaS). The
> Scheduler knows how to find a device by service_type and has a DB to store
> them. When it gets a request from the Plugin, it finds the corresponding
> device and forwards the request to the Agent.
> Agent - dispatches commands to drivers. The Agent holds a collection of
> drivers and knows how to dispatch messages to them.
> Drivers - translate the service model to device-specific configuration.
>
> Both Scheduler and Agent are common for all types of services. The logic
> related to load balancing is implemented as drivers.
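
To illustrate the Agent's dispatching role only (the message fields and driver interface here are assumptions, not the agreed RPC contract):

    class LBaaSAgent(object):
        def __init__(self, drivers):
            # drivers: {device_type: driver instance}
            self.drivers = drivers

        def handle_message(self, message):
            # The Scheduler is expected to include the chosen device in the message.
            driver = self.drivers[message["device"]["type"]]
            handler = getattr(driver, message["method"])   # e.g. "create_vip"
            return handler(message["resource"], message["device"])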
>
>
>
> Please see http://wiki.openstack.org/Quantum/LBaaS/Architecture/Scheduler
> for details on how components interact and what the typical workflow will
> be. Comments are welcome :)
>
>
>
> Thanks,
>
> Ilya
>
>
>
> 2012/11/28 Eugene Nikanorov <enikanorov at mirantis.com>
>
> Hi Youcef,
>
>
>
> Please see my comments inline.
>
> On Wed, Nov 28, 2012 at 2:14 AM, Youcef Laribi <Youcef.Laribi at eu.citrix.com>
> wrote:
>
> Changing the subject line (was: Progress on lbaas-plugin-api-crud)…
>
>
>
> Hi Eugene,
>
>
>
> Let’s make sure we agree on the assumptions:
>
>
>
> - LBaaS Plugin has a set of drivers (vendor-specific). Drivers run
> in the LBaaS agent process.
>
> Agreed.
>
> - Each driver (provider in Salvatore’s terminology) is registered
> against a service type (yes, service type can include LB drivers, firewall
> drivers, etc.).
>
> Agreed.
>
> - There can be several LBaaS drivers registered against the same
> service type (e.g. “high-performance LB” service type).
>
> That probably needs to be clarified in more detail, but it does make sense.
> As far as I understand there is exactly 1 driver per service type, but there
> could be several service types referencing the same driver (like you
> mentioned, "high-perf-lb", "low-cost-lb", etc)
>
>
>
> If these assumptions are incorrect or need to be clarified further, let’s
> start by doing this first :)
>
>
>
> Now, let’s imagine we have a component in the system whose job is to pick a
> driver/provider (device type) and a device
>
> (device id) given a certain service type. We will call this component the
> “placement component” (it’s not necessarily a separate process like the
> scheduler, and can be part of the plugin, the agent or the driver, it
> doesn’t matter for this discussion at this stage).
>
> I'd still prefer to call it a scheduler even though it will be a part of our
> plugin or a separate component.
>
>
>
> The Placement Component needs to choose a device that can load-balance
> traffic coming from network A (where the VIP is) to VMs residing on Network
> B (pool’s network). In order to do this, the Placement Component needs to be
> aware of the capabilities of each driver/provider and can follow a certain
> strategy of device allocation that might take into account some of the
> following constraints.
>
>
>
> - Some device types are physical appliances, others are virtual appliances
> running on Nova. The driver might prefer one or the other if both satisfy
> the service type.
>
> Agreed.
>
>
>
> - Some device types have a fixed number of devices (e.g. physical
> appliances), while other devices can be created at will whenever needed
> (e.g. HA-Proxy VMs).
>
> Agreed.
>
>
>
> - Some device types can host a high number of VIPs, others can host a
> smaller number.
>
> Agreed. Typically such factors are accounted for during the scheduling process.
>
>
>
>
>
> - Given a choice between multiple device types that satisfy the same
> service type, preference could be given to a device that is already wired to
> network A and network B.
>
> Not sure that this is necessary, but that could be an option.
>
>
>
>
>
> - Given a choice between several equivalent devices (possibly of different
> device types), the least loaded one is chosen.
>
>
>
> - A placement policy could be to group all VIPs belonging to the same
> tenant on the same device whenever possible.
>
>
>
> - A placement policy could be to group all VIPs belonging to the same
> network on the same device.
>
>
>
> All these are legitimate placement strategies/algorithms, and our placement
> component might be very basic or very sophisticated, but we can hide this
> from the rest of the system.
>
> Nova has different scheduling drivers for this. We can use the same approach
> as well.
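
For example, one of the placement strategies listed above, sketched as a pluggable policy (the device attributes for wired networks and VIP count are assumptions):

    def pick_device(devices, vip_network, pool_network):
        # Prefer devices already wired to both networks, then the least loaded one.
        def score(device):
            wired = {vip_network, pool_network} <= set(device["networks"])
            return (0 if wired else 1, device["vip_count"])
        return min(devices, key=score) if devices else None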
>
>
>
> Now let's assume that the Placement Component, working through some
> combination of these rules, has finally chosen a driver/provider (e.g.
> HA-Proxy) and a
> specific device (HA-Proxy device 1) or it decided to create a new device in
> a driver (spawned new HA-Proxy VM, which is now HA-Proxy device 2). Now it
> needs to wire the chosen device to Quantum Network A and Network B (if it's
> not already wired to these networks). This requires the Placement Component
> to call Quantum to do the wiring (we need to figure out the interface
> between the 2). If the device is a Nova VM, then this is easy as it's done
> like for any other VM. If the device is physical then this depends on the L2
> switch technology used in the Quantum service (VLAN, Linux-Bridge, etc.):
> the physical device (or a proxy of it) needs to run a Quantum L2 agent in
> order to wire the device correctly.
>
> Agreed.
>
>
>
> After all this is done, the device is ready to be configured with a VIP. The
> Placement Component can return the driver, device_id (and possibly other
> config data, like the address chosen for the VIP) to the LBaaS plugin, which
> proceeds to call the LBaaS agent in order to create the VIP on this device.
>
> Agreed.
>
>
>
> If we can understand what are the tasks of the “placement component” and the
> interactions this component needs to have with other components, then it’s
> easier to figure out where it should run.
>
> Recently we discussed the idea of a separate plugin performing device
> management and scheduling, which would be a utility plugin for other service
> plugins (not only LBaaS).
>
> I think we'll need at least some simple form of this component within our
> lbaas efforts.
>
>
>
> Youcef
>
>
>
> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
> Sent: Monday, November 26, 2012 10:11 PM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Quantum][LBaaS] Progress on
> lbaas-plugin-api-crud
>
>
>
> Hi Youcef,
>
>
>
> Driver doesn't "choose" device-specific info, driver is device-specific
> itself.
>
> When we send request to the agent, we need to specify which device to use.
>
>
>
> So once the user has chosen a device type via service_type on VIP creation,
> Quantum should not only associate the VIP with the device type, but also
> choose a particular instance of that device type to deploy the VIP on.
>
> The process of choosing the instance is called scheduling. Unlike nova, it's
> unreasonable for LBaaS to have a separate scheduler service, thus it makes
> sense to have it built into the plugin.
>
> I think we should not do this in the agent since it doesn't have (and should
> not have) a device database.
>
> Nor should it access Quantum's database directly.
>
>
>
> So overall workflow will look like the following:
>
> 1. Add a device (type, physical info) to device registry (this is a part of
> Provider API. Call to Quantum made by cloud provider in case of shared
> devices, or by tenant in case of private VM balancers)
>
> 2. Create a VIP, specifying service type (=device type) (call by tenant),
>
> 3. Choose device of specified type, associate the VIP with the device (made
> by Quantum/Plugin)
>
> 4. Send message with (logical VIP info, device_type, physical device info)
> to LBaaS Agent (made by Quantum/Plugin)
>
> 5. Communicate with particular device using driver according to device_type
> (LBaaS Agent)
>
>
>
> Any CRUD request processed by the Agent should be supplied with the device
> type and device parameters.
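
As an illustration of step 4 above, the notification sent to the agent could carry a payload along these lines (the field names are hypothetical, not a spec):

    message = {
        "method": "create_vip",
        "resource": {"id": "vip-uuid", "address": "10.0.0.5", "protocol_port": 80},
        "device": {
            "type": "HAPROXY",               # device type selected via service_type
            "id": "device-uuid",             # particular instance chosen by scheduling
            "management_address": "192.168.0.10",
            "credentials": {"user": "admin", "password": "***"},
        },
    }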
>
>
>
> You may think of an alternative approach where the device registry is held by
> the Agent or even the driver, but this approach has the following disadvantages:
>
> - Scheduling goes to Agent or Driver and thus Agent/Driver should store
> VIP-device association while VIP is a "foreign" object for the Agent/Driver.
>
> - If we go with multiple agents for large deployments, we'll need to sync
> their device databases
>
> - Device locking will be complicated.
>
> - If Agents have non-intersecting sets of devices in their registries,
> then scheduling will be complicated or not possible.
>
>
>
> Please share your thoughts on this.
>
>
>
> Thanks,
>
> Eugene.
>
>
>
>
>
> On Tue, Nov 27, 2012 at 3:38 AM, Youcef Laribi <Youcef.Laribi at eu.citrix.com>
> wrote:
>
> Hi Eugene, Leon,
>
>
>
> Could we have the LBaaS plugin choose the “driver” based on service_type
> info, and then it’s the driver which choose the “device”? The driver can
> obviously have its own DB model where it stores device-specific info.
>
>
>
> Youcef
>
>
>
> From: Dan Wendlandt [mailto:dan at nicira.com]
> Sent: Monday, November 26, 2012 9:13 AM
> To: Leon Cui
> Cc: OpenStack Development Mailing List; Salvatore Orlando
> Subject: Re: [openstack-dev] Re: Re: Re: Re: Re: Progress on
> lbaas-plugin-api-crud
>
>
>
>
>
> On Mon, Nov 26, 2012 at 9:03 AM, Leon Cui <lcui at vmware.com> wrote:
>
> Hi Eugene,
>
> When did your change get merged into master? I did a rebase last Friday,
> which was supposed to pick up your latest code, but anyway I'm planning to do
> it again today.
>
>
>
> Thanks for your reminder that I need to include device mgmt in the DB model.
> I need to look at Salvatore's change on ServiceType.
>
>
>
> It seems to me that each LB plugin should be able to define its own DB
> models for "device mgmt" (e.g., device address/credentials/etc.), as
> different plugins may have different strategies for how they manage devices.
> The usual model is that plugins can define additional models/tables to
> manage entities that are specific to that plugin. This is similar to how we
> didn't bake the notion of a "vlan" into the DB model for "core plugins",
> since not all plugins will use vlans. If you don't go down this route, you
> end up with a messy DB model as everyone keeps adding columns for items that
> only a particular plugin needs to track.
>
>
>
> Dan
>
>
>
>
>
>
>
> Thanks
>
> Leon
>
> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
> Sent: November 26, 2012 4:29
>
>
> To: Leon Cui
> Cc: Ilya Shakhat; Sachin Thakkar; Oleg Bondarev; Salvatore Orlando; Dan
> Wendlandt
>
> Subject: Re: Re: Re: Re: Re: Progress on lbaas-plugin-api-crud
>
>
>
> Hi Leon,
>
>
>
> Thanks for sending me the patch.
>
> I've looked at it briefly, there is one major thing I was able to identify:
>
> In order to couple things together (plugin, agent, drivers), we need to add
> device management at least to the DB model.
>
> In particular, each VIP should have a reference to the device (which has a
> type and address/credentials).
>
> This information is passed in each agent notification message.
>
> This part is missing in the current design blueprints, but I think we need to
> add it before we put the code up for review.
>
> Probably it will also depend on Salvatore's ServiceTypes part.
>
>
>
> Also I see that your patch is based on some of my outdated patches.
>
> My code was recently merged into the master so you can rebase on master
> using only Oleg's patch.
>
>
>
> Thanks,
>
> Eugene.
>
> On Fri, Nov 23, 2012 at 2:40 PM, Leon Cui <lcui at vmware.com> wrote:
>
> Hi Eugene,
>
> I’m still waiting for approval as openstack contributor. For now I simply
> attached the patch file that you might want to take a look first. Once I
> got the approval, I’ll try to post the view asap.
>
>
>
> Thanks
>
> Leon
>
> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
> Sent: November 20, 2012 22:57
> To: Leon Cui
> Cc: Ilya Shakhat; Sachin Thakkar; Oleg Bondarev; Salvatore Orlando; Dan
> Wendlandt
> Subject: Re: Re: Re: Re: Progress on lbaas-plugin-api-crud
>
>
>
> Leon,
>
>
>
> I'll take agent and rpc parts.
>
> I have registered
> https://blueprints.launchpad.net/quantum/+spec/lbaas-agent-and-rpc to track
> this.
>
>
>
> Thanks,
>
> Eugene.
>
> On Tue, Nov 20, 2012 at 2:16 PM, Leon Cui <lcui at vmware.com> wrote:
>
> Hi Eugene,
>
> Thanks for your suggestion. It looks good to me. I’ll work out the UT
> first, and then align the class model to the diagram as you suggested.
>
>
>
> Thanks
>
> Leon
>
> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
> Sent: November 20, 2012 17:32
>
>
> To: Leon Cui
> Cc: Ilya Shakhat; Sachin Thakkar; Oleg Bondarev; Salvatore Orlando
>
> Subject: Re: Re: Re: Progress on lbaas-plugin-api-crud
>
>
>
> replying to all...
>
>
>
> Leon,
>
>
>
> I think tests/unit/test_db_plugin.py is the right code to refer to when
> writing unit tests for DB code. The only thing is that the unit tests written
> in test_db_plugin.py are a bit generic, e.g. the backend plugin is specified
> in a particular plugin's UTs which inherit from QuantumDbPluginV2TestCase. I
> think UTs for the balancer plugin may be more specific, testing the
> LoadbalancerPluginDb class.
>
>
>
> Since you need database utility methods from QuantumDbPluginV2, it's
> LoadbalancerPluginDb which should inherit from such a QuantumDBBase (or
> whatever you call it), so the overall diagram will look like:
>
>
>
> ServicePluginBase
>     |
> LoadBalancerPluginBase
>     |              QuantumDBBase
>     |                    |
> LoadBalancerPlugin <---------- LoadBalancerPluginDb
>
>
>
> Thanks,
>
> Eugene.
>
>
>
> On Tue, Nov 20, 2012 at 1:04 PM, Leon Cui <lcui at vmware.com> wrote:
>
> Hi Eugene,
>
> Thanks for your suggestion. Please see my comments inline.
>
>
>
> One more question: I’m writing the unit test, mainly to verify the database
> functionalities for LB CRUD. Do you think tests/unit/test_db_plugin.py is
> the right test code that I should refer to? Any good suggestions on this
> front?
>
>
>
> Thanks
>
> Leon
>
> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
> Sent: November 20, 2012 16:44
> To: Leon Cui
> Cc: Ilya Shakhat; Sachin Thakkar; Oleg Bondarev; Salvatore Orlando
> Subject: Re: Re: Progress on lbaas-plugin-api-crud
>
>
>
> Hi Leon,
>
>
>
> A few thoughts on your diagram.
>
>
>
> Please consider the following:
>
> 1) If you want something from QuantumDbPluginV2 and you feel it may be
> common plugin functionality - you need to extract it to a separate class,
> something like QuantumPluginBase, and inherit QuantumDBPluginV2 from this
> class, ServicePluginBase should inherit from that class as well.
>
> [Leon] I need some database utility methods from QuantumDbPluginV2.
> Abstracting them into a separate class could be a good idea. But I'm not sure
> it's a good idea to let ServicePluginBase inherit from this class.
> ServicePluginBase is an abstract class for the service plugin service (quantum
> manager) to use.
>
>
>
> 2) LoadBalancerPluginBase imho should inherit from ServicePluginBase
>
> [Leon] Why does it need to inherit from ServicePluginBase?
> LoadBalancerPluginBase defines the loadbalancer extension APIs. I think we
> just make sure LoadbalancerPlugin inherits from both classes, as below:
>
> ServicePluginBase    QuantumPluginDbBase    LoadbalancerPluginBase
>         |                     |                        |
>         ------------------------------------------------
>                               |
>                      LoadbalancerPlugin ------ LoadbalancerPluginDb
>
>
>
> LoadbalancerPlugin will contain the LoadbalancerPluginDb instance for
> database access.
>
>
>
>
>
> 3) Depending on what you need from QuantumDbPluginV2/QuantumPluginBase, this
> may lead to the following inheritance sequence:
>
> QuantumPluginBase
>     |
> ServicePluginBase
>     |
> LoadBalancerPluginBase
>     |
> LoadBalancerPluginDb
>     |
> LoadBalancerPlugin
>
>
>
> Also, I think that LoadBalancerPlugin should not inherit from
> LoadBalancerPluginDb.
>
> Unlike core plugins, where it could make sense, I'd prefer to see
> LoadBalancerPluginDb be a part of LoadBalancerPlugin.
>
> I mean LoadBalancerPlugin implements a "has a" relation to
> LoadBalancerPluginDb instead of an "is a" relation.
>
> The reason for this is that LoadBalancerPlugin provides a CRUD implementation
> which doesn't directly map to the DB operations implemented in
> LoadBalancerPluginDb.
>
> E.g. my idea is:
>
> LoadBalancerPlugin - CRUD, validation, calling LoadBalancerPluginDb,
> sending/receiving messages to agent
>
> LoadBalancerPluginDb - DB access.
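
In skeleton form, that "has a" relation might look like this (the class bodies are placeholders, not actual Quantum code):

    class LoadBalancerPluginDb(object):
        """DB access only: maps LBaaS logical objects to tables."""

    class LoadBalancerPlugin(object):
        """CRUD entry points: validation, DB calls via self.db, agent messaging."""
        def __init__(self):
            self.db = LoadBalancerPluginDb()   # composition ("has a"), not inheritance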
>
>
>
> Thanks,
>
> Eugene.
>
>
>
> On Tue, Nov 20, 2012 at 6:54 AM, Leon Cui <lcui at vmware.com> wrote:
>
> Hi Ilya,
>
> Right now I took Eugene’s change under review
> (https://review.openstack.org/#/c/15733/) and am developing the database
> access logic and plugin skeleton based on that service plugin mechanism. The
> class model is illustrated in the below diagram:
>
> [inline class-model diagram image not preserved in the plain-text archive]
>
> The LoadBalancerPlugin module is the main body of the loadbalancer plugin and
> inherits from multiple classes:
>
> - ServicePluginBase: defines the abstract methods that a service
> plugin should implement.
>
> - QuantumDbPluginV2: contains a set of generic quantum database
> access methods. I'm not sure if we really want to inherit from this class,
> but I'd like to leverage the methods defined in it.
>
> - LoadBalancerPluginDb: this is the main part I'm coding, which wraps
> the LBaaS database model and CRUD operations against the database.
>
>
>
> My thought is that LoadBalancerPlugin will control the LBaaS CRUD API flow.
> For instance, the "create_vip" method should first validate the input, update
> the database, send a message to the LbAgent over the AMQP channel, then update
> the database by setting the status from PENDING_CREATE to ACTIVE.
>
>
>
> I’m trying to write unit tests against the database access now which will
> take a while to complete. Meanwhile it would be great to have your help on
> coding the RPC interaction between plugin and agent.
>
>
>
> I don’t like blocking your part. What’s the best practice to collaborate
> with you on this? Maybe I can shelve my change to you somehow?
>
>
>
> Thanks
>
> Leon
>
> From: Ilya Shakhat [mailto:ishakhat at mirantis.com]
> Sent: November 19, 2012 22:08
> To: Sachin Thakkar; Leon Cui
> Cc: Eugene Nikanorov; Oleg Bondarev
> Subject: Progress on lbaas-plugin-api-crud
>
>
>
> Hi Sachin, Leon,
>
>
>
> Recently there was a thread related to LBaaS architecture
> (http://lists.openstack.org/pipermail/openstack-dev/2012-November/002646.html).
> How well is it aligned with your implementation? Do you need help with coding?
> (We could take the Agent part.)
>
>
>
> Thanks,
>
> Ilya
>
>
>
>
>
>
>
>
>
>
>
>
>
> --
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Dan Wendlandt
>
> Nicira, Inc: www.nicira.com
>
> twitter: danwendlandt
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
>
>
>