[openstack-dev] [Quantum][LBaaS] Selecting an LBaaS device given a service type

Eugene Nikanorov enikanorov at mirantis.com
Tue Dec 4 13:08:00 UTC 2012


Hi Salvatore,

Thanks for the detailed reply.
I'm going to explain my idea in more detail with pseudocode.
See my comments inline.

On Tue, Dec 4, 2012 at 2:52 PM, Salvatore Orlando <sorlando at nicira.com> wrote:

> My only remark is that in my opinion having a 'global' LB scheduler that
> will work across all drivers is definitely valuable, but probably not
> necessary. If I were to set the priority of this feature, I would put it on
> "wishlist" for Grizzly. And this for several reasons:
> - as already said in this thread, it is not easy to model features and
> device capabilities in an agnostic way.
>

Capabilities are not device-agnostic, for sure.
Some part of the device database model is common and generic, and some may be
stored in "extra" fields which are used by device-specific code.
I'm trying to design the scheduler as an extensible, device-agnostic (and, in
fact, service-type-agnostic) framework, in which we will implement scheduling
logic for LB and, going deeper, some device-specific aspects of LB scheduling.
Once again, I think the LB scheduling algorithm should be generic and also
configurable (e.g. you may write your own algorithm and make the scheduler
use it).
First of all, this framework will allow us to start with a simple stub, like
scheduling on the first available device, e.g. something like this:

def get_device_for_vip(vip):
    # stub: pick the first device that can host the VIP
    for device in devices:
        if is_good(device, vip):
            return device

When I say a service-type- and device-agnostic framework, I mean that all
logic is hidden behind just a few generic calls like the following:

device_info = get_device_for_resource(resource)

which could be implemented as:

def get_device_for_resource(resource):
    # route to the scheduling driver registered for this service type
    return scheduling_drivers[resource.service_type].get_device_for_resource(resource)

In turn, scheduling_driver.get_device_for_resource(resource) may be our LB
scheduling logic:

class LBScheduler:
    # maps each device type to a handler class which can match
    # VIP requirements to device caps and status
    device_handlers = {}

    def get_device_for_vip(self, vip):
        for device in devices:
            if self.device_handlers[device.type].is_good(device, vip):
                return device

You may notice that the scheduler component itself is quite a thin layer which
serves several purposes:
1) it is extensible and configurable:
    - you add service types like you add plugins to Quantum;
    - you add drivers to let the generic algorithm have a better understanding
of a particular device of a particular type.
2) it is synchronous: choosing a device is one fast synchronous operation.
3) the code itself just routes the resource to the corresponding logic (see
the sketch below).
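
To make the routing idea concrete, here is a minimal sketch of such a
framework. The names used here (scheduling_drivers,
register_scheduling_driver, LBSchedulingDriver, NoValidDevice) are
illustrative assumptions, not existing Quantum code:

# Hypothetical sketch of the service-type-agnostic scheduling framework.
scheduling_drivers = {}  # service_type -> scheduling driver instance

class NoValidDevice(Exception):
    pass

def register_scheduling_driver(service_type, driver):
    # extensible and configurable: deployers plug in their own drivers
    scheduling_drivers[service_type] = driver

def get_device_for_resource(resource):
    # the single generic entry point; all device- and service-specific
    # logic is hidden behind the registered driver
    return scheduling_drivers[resource.service_type].get_device_for_resource(resource)

class LBSchedulingDriver(object):
    """LB-specific scheduling, registered for the LB service type."""

    def __init__(self, device_db, device_handlers):
        self.device_db = device_db              # device registry (DB-backed)
        self.device_handlers = device_handlers  # device type -> handler

    def get_device_for_resource(self, vip):
        # stub algorithm: first device whose handler accepts the VIP
        for device in self.device_db.get_devices():
            if self.device_handlers[device.type].is_good(device, vip):
                return device
        raise NoValidDevice(vip)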

I think these are both good features and easy to implement.
That may save lots of refactoring and redesigning when it comes to other
advanced services.


> - drivers apparently will be more than a simple "actuator", but will have
> their own logic. I can see for instance at least three different driver
> families: i) hardware load balancers, ii) contextualized hardware load
> balancers (hw appliances where you create virtual LB appliances), and iii)
> virtualized load balancers, which could be spawned, for instance, using nova.
> What would be the criteria for choosing a virtual appliance versus
> allocating a VIP on a hardware one?
>
>
That's what I'm trying to avoid: some drivers being "simple actuators" while
others have their own logic.
Regarding the question about criteria: good question :) But it is, in my
opinion, a bit unrelated to the scheduling architecture, i.e. it applies
whichever choice we make.

- In this Grizzly release we won't probably have a huge amount of drivers.
> Or probably we'll have the drivers, but Quantum LB service, being
> experimental, will probably be deployed with no more than one or two
> drivers.
>
> Another interesting point in my opinion is that this scheduling logic is
> part of the LB plugin we're implementing for Grizzly, not part of the DB
> model supporting the tenant API. There will be, of course, model classes
> for device management, but they (and all the logic for managing them)
> should be separate from the modules which implement the API.
>
That's for sure. In fact we're currently thinking of scheduling and device
management as a separate mandatory plugin which will provide its
functionality to other advanced service plugins.


> My argument here is not that we should not have a global scheduler; I'm
> just saying I have the impression that there are some important details
> which are not yet completely fleshed out.
>
I understand that, and I just want these details to affect particular
code (drivers, algorithms), but not the whole architecture.

> Nevertheless, driver-level scheduling is valuable too, and probably easier
> to implement. I wouldn't disregard, in the long run, having a two-step
> process:
> Step 1 - Quantum LB plugin schedules drivers according either to the
> service_type required by the user or to requested features
> Step 2 - Driver selects device according to capabilities
>
Questions here:
1) If the driver selects the device, is it mandatory for all drivers to
provide such functionality?
2) Where is the device database stored?
If it is mandatory for a driver to be able to select a device, should the
device database be driver-specific, i.e. each driver has its own?
Or will drivers access a single database remotely? Remember we decided that
drivers run within the agent, and there could be several agents running. The
driver of which agent instance should be responsible for scheduling?
If it's not mandatory for the driver, then some scheduling logic will live in
the generic scheduler and some in the drivers.

In fact, any of the options above brings tons of coding and testing
complexity once we start answering these questions.


>> > but to Sam’s point, a common scheduler might not have enough
>> visibility or understanding of device specifics/limitations in order to
>> “correctly” pick the right device.
>> Saying this you assume that the driver has such understanding, and even
>> that might not be the case.
>>
>
> This boils down to defining what a driver is. If it has to be a simple
> "actuator" (I don't remember the name it had in Atlas), then it makes
> perfect sense to do the scheduling in the service, as the driver just
> executes the LB operation.
>
>
In fact, we decided once that the driver is a simple, synchronous component
that maps the generic LB model to a device-specific one.
But by saying that the "driver may not have understanding" I mean that in some
cases we need an extended status of the device to know its "rating" in
scheduling. Examples: the number of deployed VIPs (which may be hard for the
driver to find out), current connections, preconfigured device limits; some of
these can be obtained from the device, some are known in the device DB. A
scoring sketch follows below.
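
As an illustration only, such a "rating" could be computed from fields in
the device DB; the names vip_count, max_vips, current_connections and
max_connections below are invented for the sketch, not an agreed schema:

def device_rating(device):
    # Hypothetical fields: vip_count/max_vips come from the device DB,
    # current_connections/max_connections from active device monitoring.
    if device.vip_count >= device.max_vips:
        return 0.0  # preconfigured limit reached, not schedulable
    vip_load = device.vip_count / float(device.max_vips)
    conn_load = device.current_connections / float(device.max_connections)
    # higher rating == better scheduling candidate
    return 1.0 - max(vip_load, conn_load)

def pick_best_device(devices):
    candidates = [d for d in devices if device_rating(d) > 0.0]
    return max(candidates, key=device_rating) if candidates else None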


> In fact, in order to avoid that, the scheduler should contain:
>> 1) all the necessary logic to make a decision (the logic may be
>> device-specific, e.g. different for different kinds of devices, or even
>> different instances of the same device type). In that case scheduling
>> becomes a simple, fast operation: read data from DB - make choice - write
>> to DB.
>> 2) active device monitoring: that is needed for "visibility and
>> understanding", and it can be device-specific. It is performed by the
>> scheduler using its device database and device-specific code from the
>> drivers (but the code runs under the scheduler process or plugin).
>>
>> The idea behind such a scheme is the same as scheduling in nova. Unlike
>> nova we don't have devices reporting their status to the scheduler, so we
>> need to poll them proactively.
>>
>
> I am not sure I agree with this statement. Scheduling in nova is a decision
> which takes into account a limited set of capabilities, and then picks the
> first node with enough resources. It does not select the "best" one -
> though I concede you can just replace the scheduling algorithm with another
> that selects the best node. However, it assumes all nodes are identical.
> Instead here we're distinguishing not only on capabilities but also on
> features. And the concept of capability too might be quite different across
> drivers.
>
>
Scheduling in nova is done by scheduling drivers (which in that context means
pluggable scheduling algorithms): there is a "chance" driver that picks a
host randomly, and there is a "least_cost" driver that looks into a node's
load and status and applies a more complex algorithm to make a decision.
That is exactly what I'd like to see in our scheduler: a framework that
allows a primitive implementation while leaving the door open to more complex
ones (a sketch follows below).
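
For illustration, swapping algorithms in our scheduler could look like the
following; chance and least_cost merely mirror nova's driver names, and
device_rating reuses the hypothetical helper sketched earlier:

import random

def chance(candidates):
    # primitive implementation, like nova's "chance" driver
    return random.choice(candidates) if candidates else None

def least_cost(candidates):
    # pick the best-rated device, like nova's "least_cost" driver
    return max(candidates, key=device_rating) if candidates else None

class LBScheduler(object):
    def __init__(self, device_handlers, algorithm=chance):
        self.device_handlers = device_handlers
        self.algorithm = algorithm  # configurable, e.g. via a config option

    def get_device_for_vip(self, vip, devices):
        candidates = [d for d in devices
                      if self.device_handlers[d.type].is_good(d, vip)]
        return self.algorithm(candidates)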

> However, it's probably down to me not understanding how you are planning to
> design this scheduler. For instance, how would it select between creating a
> VIP on a physical load balancer, or spawning a virtual appliance and
> creating the VIP on it?
>
>
It's a good question. Regarding this particular choice: my idea is that the
scheduler doesn't make such a decision at all.
It doesn't operate on devices that don't exist yet. In order to bring a VM LB
into consideration, the user needs to launch it and register it as a device.
You may argue that these are additional actions the user will need to take.
But what is on the other hand?
If we let the scheduler make such a decision, then it should also be capable
of the following:
- considering whether there is already a VM LB in the tenant's network that
may be used;
- spawning an instance of a VM LB (the tenant should also provide an image id
to do so, and this should also be reflected in the tenant API, which we don't
have at the moment).

The second point alone has two disadvantages:
- scheduling can't be a synchronous operation, which will affect the whole
architecture, complicating it;
- the user will need to pass some device-management-specific info (the image
id) to the device-management-unaware tenant API.

Sorry for the long email, you're probably tired of reading it :)
But I think this is an important discussion and worth covering in upcoming
LBaaS meetings; maybe it's worth setting up an IRC meeting on that topic
specifically.

Thanks,
Eugene.


>
>>
>> What do you think?
>>
>> Thanks,
>> Eugene.
>>
>>
>> On Fri, Nov 30, 2012 at 7:45 PM, Ilya Shakhat <ishakhat at mirantis.com> wrote:
>>
>>> Sam, Youcef,
>>>
>>> Your point makes sense. I tried to make the "scheduler" common, but it
>>> really looks like the driver should participate in decision making.
>>>
>>> Thanks,
>>> Ilya
>>>
>>>
>>> 2012/11/30 Samuel Bercovici <SamuelB at radware.com>
>>>
>>>> Ilya,
>>>>
>>>> I concur with Youcef.
>>>>
>>>> -Sam.
>>>>
>>>> From: Youcef Laribi [mailto:Youcef.Laribi at eu.citrix.com]
>>>> Sent: Friday, November 30, 2012 3:57 AM
>>>> To: OpenStack Development Mailing List
>>>> Subject: Re: [openstack-dev] [Quantum][LBaaS] Selecting an LBaaS
>>>> device given a service type
>>>>
>>>> Ilya,
>>>>
>>>> Let’s first separate device on-boarding and management from the
>>>> “scheduler” discussion. These are separate functions in the system, and
>>>> we’ll keep the scheduler as the component that picks the driver/device
>>>> (and we can argue separately and decide whether this is a common component
>>>> for all vendors or a vendor-specific component, whether it resides in the
>>>> plugin or in the driver, etc.).
>>>>
>>>> Now to come back to the scheduler discussion, it might seem that a
>>>> scheduler common to all drivers would work fine, but to Sam’s point,
>>>> a common scheduler might not have enough visibility or understanding of
>>>> device specifics/limitations in order to “correctly” pick the right device.
>>>> For example, some vendors have a limit of vlans per interface, or cannot
>>>> support overlapping IPs; other vendors’ devices are meshed together in a
>>>> cluster or a pool, and there are optimal ways to distribute VIPs or
>>>> networks in those setups that a common scheduler wouldn’t understand.
>>>> That’s why I previously said that the scheduler (“placement component”)
>>>> should pick the driver and let the driver pick a specific device; that way
>>>> each vendor is responsible for their own allocation strategy on their
>>>> devices. Or at least the driver should have an input into the scheduler
>>>> decision, so the scheduler doesn’t pick the wrong device.
>>>>
>>>> On the admin/operator APIs used for device on-boarding and management, we
>>>> need to initiate a separate thread and discuss whether this should be
>>>> implemented as a separate plugin from the LBaaS plugin, or whether we
>>>> extend the LBaaS plugin to also support a provider/admin API, and what the
>>>> role of the LBaaS agent/driver is in the device on-boarding process.
>>>>
>>>> Thanks
>>>> Youcef
>>>>
>>>> From: Ilya Shakhat [mailto:ishakhat at mirantis.com]
>>>> Sent: Thursday, November 29, 2012 7:34 AM
>>>> To: OpenStack Development Mailing List
>>>> Subject: Re: [openstack-dev] [Quantum][LBaaS] Selecting an LBaaS
>>>> device given a service type
>>>>
>>>> Hi,
>>>>
>>>> Just a small summary of our discussion. We have the following
>>>> components:
>>>>
>>>>    - *aaS plugins - do the logic related to services. Plugins know the
>>>>    service data model only and don't hold information about devices. When
>>>>    a Plugin needs to deploy any changes, it calls the Scheduler.
>>>>    - Scheduler ("placement component") - binds services to devices. It
>>>>    has an API to manage devices (similar to the provider API in the old
>>>>    LBaaS). The Scheduler knows how to find a device by service_type and
>>>>    has a DB to store them. When it gets a request from a Plugin, it finds
>>>>    the corresponding device and forwards the request to the Agent.
>>>>    - Agent - dispatches commands to drivers. The Agent holds a collection
>>>>    of drivers and knows how to dispatch messages to them.
>>>>    - Drivers - translate the service model to a device-specific one.
>>>>
>>>> Both Scheduler and Agent are common to all types of services. The
>>>> logic related to load balancing is implemented in drivers.
>>>>
>>>> Please see
>>>> http://wiki.openstack.org/Quantum/LBaaS/Architecture/Scheduler for
>>>> details on how components interact and what the typical workflow will be.
>>>> Comments are welcome :)
>>>>
>>>> Thanks,
>>>> Ilya
>>>>
>>>> 2012/11/28 Eugene Nikanorov <enikanorov at mirantis.com>
>>>>
>>>> Hi Youcef,
>>>>
>>>> Please see my comments inline.
>>>>
>>>> On Wed, Nov 28, 2012 at 2:14 AM, Youcef Laribi <
>>>> Youcef.Laribi at eu.citrix.com> wrote:
>>>>
>>>> Changing the subject line (was: Progress on lbaas-plugin-api-crud)…
>>>>
>>>> Hi Eugene,
>>>>
>>>> Let’s make sure we agree on the assumptions:
>>>>
>>>> - LBaaS Plugin has a set of drivers (vendor-specific). Drivers run in
>>>> the LBaaS agent process.
>>>>
>>>> Agreed.
>>>>
>>>> - Each driver (provider in Salvatore’s terminology) is registered
>>>> against a service type (yes, a service type can include LB drivers,
>>>> firewall drivers, etc.).
>>>>
>>>> Agreed.
>>>>
>>>> - There can be several LBaaS drivers registered against the same
>>>> service type (e.g. a “high-performance LB” service type).
>>>>
>>>> That probably needs to be clarified in more detail, but it does make
>>>> sense. As far as I understand there is exactly 1 driver per service type,
>>>> but there could be several service types referencing the same driver (like
>>>> you mentioned, "high-perf-lb", "low-cost-lb", etc.)
>>>>
>>>> If these assumptions are incorrect or need to be clarified further,
>>>> let’s start by doing this first :)
>>>>
>>>> Now, let’s imagine we have a component in the system whose job is to
>>>> pick a driver/provider (device type) and a device (device id) given a
>>>> certain service type. We will call this component the “placement
>>>> component” (it’s not necessarily a separate process like the scheduler,
>>>> and can be part of the plugin, the agent or the driver; it doesn’t matter
>>>> for this discussion at this stage).
>>>>
>>>> I'd still prefer to call it a scheduler even though it will be a part
>>>> of our plugin or a separate component.
>>>>
>>>> The Placement Component needs to choose a device that can load-balance
>>>> traffic coming from network A (where the VIP is) to VMs residing on Network
>>>> B (pool’s network). In order to do this, the Placement Component needs to
>>>> be aware of the capabilities of each driver/provider and can follow a
>>>> certain strategy of device allocation that might take into account some of
>>>> the following constraints:
>>>>
>>>> - Some device types are physical appliances, others are virtual
>>>> appliances running on Nova. The driver might prefer one or the other if
>>>> both satisfy the service type.
>>>>
>>>> Agreed.
>>>>
>>>> - Some device types have a fixed number of devices (e.g. physical
>>>> appliances), while other devices can be created at will whenever needed
>>>> (e.g. HA-Proxy VMs).
>>>>
>>>> Agreed.
>>>>
>>>> - Some device types can host a high number of VIPs, others can host
>>>> a smaller number.
>>>>
>>>> Agreed. Typically such factors are accounted for during the scheduling
>>>> process.
>>>>
>>>> - Given a choice between multiple device types that satisfy the same
>>>> service type, preference could be given to a device that is already wired
>>>> to network A and network B.
>>>>
>>>> Not sure that this is necessary, but that could be an option.
>>>>
>>>> - Given a choice between several equivalent devices (possibly of
>>>> different device types), the least loaded one is chosen.
>>>>
>>>> - A placement policy could be to group all VIPs belonging to the same
>>>> tenant on the same device whenever possible.
>>>>
>>>> - A placement policy could be to group all VIPs belonging to the same
>>>> network on the same device.
>>>>
>>>> All these are legitimate placement strategies/algorithms, and our
>>>> placement component might be very basic or very sophisticated, but we can
>>>> hide this from the rest of the system.
>>>>
>>>> Nova has different scheduling drivers for this. We can use the same
>>>> approach as well.
>>>>
>>>> Now let's assume that the Placement Component, working through some
>>>> combination of these rules, has finally chosen a driver/provider (e.g.
>>>> HA-Proxy) and a specific device (HA-Proxy device 1), or has decided to
>>>> create a new device in a driver (spawned a new HA-Proxy VM, which is now
>>>> HA-Proxy device 2). Now it needs to wire the chosen device to Quantum
>>>> Network A and Network B (if it's not already wired to these networks).
>>>> This requires the Placement Component to call Quantum to do the wiring (we
>>>> need to figure out the interface between the 2). If the device is a Nova
>>>> VM, then this is easy as it's done like for any other VM. If the device is
>>>> physical then this depends on the L2 switch technology used in the Quantum
>>>> service (VLAN, Linux-Bridge, etc.): the physical device (or a proxy of it)
>>>> needs to run a Quantum L2 agent in order to wire the device correctly.
>>>>
>>>> Agreed.
>>>>
>>>> After all this is done, the device is ready to be configured with a
>>>> VIP. The Placement Component can return the driver, device_id (and possibly
>>>> other config data, like the address chosen for the VIP) to the LBaaS
>>>> plugin, which proceeds to call the LBaaS agent in order to create the VIP
>>>> on this device.
>>>>
>>>> Agreed.
>>>>
>>>> If we can understand what the tasks of the “placement component” are
>>>> and the interactions this component needs to have with other components,
>>>> then it’s easier to figure out where it should run.
>>>>
>>>> Recently we discussed the idea of a separate plugin performing device
>>>> management and scheduling, which would be a utility plugin for other
>>>> service plugins (not only lbaas).
>>>>
>>>> I think we'll need at least some simple form of this component within
>>>> our lbaas efforts.
>>>>
>>>> Youcef
>>>>
>>>> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
>>>> Sent: Monday, November 26, 2012 10:11 PM
>>>> To: OpenStack Development Mailing List
>>>> Subject: Re: [openstack-dev] [Quantum][LBaaS] Progress on
>>>> lbaas-plugin-api-crud
>>>>
>>>> Hi Youcef,
>>>>
>>>> The driver doesn't "choose" device-specific info; the driver is
>>>> device-specific itself.
>>>>
>>>> When we send a request to the agent, we need to specify which device to
>>>> use.
>>>>
>>>> So once the user has chosen a device type via service_type on VIP
>>>> creation, Quantum should not only associate the VIP with the device type,
>>>> but also choose a particular instance of that device type to deploy the
>>>> VIP on.
>>>>
>>>> The process of choosing the instance is called scheduling. Unlike nova,
>>>> it's unreasonable for LBaaS to have a separate scheduler service, thus it
>>>> makes sense to have it built into the plugin.
>>>>
>>>> I think we should not do this on the agent since it doesn't have (and
>>>> should not have) a device database.
>>>>
>>>> Nor should it access quantum's database directly.
>>>>
>>>> So the overall workflow will look like the following:
>>>>
>>>> 1. Add a device (type, physical info) to the device registry (this is a
>>>> part of the Provider API; the call to Quantum is made by the cloud
>>>> provider in case of shared devices, or by the tenant in case of private
>>>> VM balancers).
>>>>
>>>> 2. Create a VIP, specifying a service type (=device type) (call made by
>>>> the tenant).
>>>>
>>>> 3. Choose a device of the specified type, associate the VIP with the
>>>> device (made by Quantum/Plugin).
>>>>
>>>> 4. Send a message with (logical VIP info, device_type, physical device
>>>> info) to the LBaaS Agent (made by Quantum/Plugin).
>>>>
>>>> 5. Communicate with the particular device using the driver according to
>>>> device_type (LBaaS Agent).
>>>>
>>>> Any CRUD request processed by the Agent should be supplied with the
>>>> device type and device parameters.
>>>>
>>>> You may think of an alternative approach where the device registry is
>>>> held by the Agent or even the driver, but this approach has the following
>>>> disadvantages:
>>>>
>>>> - Scheduling moves to the Agent or Driver, and thus the Agent/Driver must
>>>> store the VIP-device association while the VIP is a "foreign" object for
>>>> the Agent/Driver.
>>>>
>>>> - If we go with multiple agents for large deployments, we'll need to
>>>> sync their device databases.
>>>>
>>>> - Device locking will be complicated.
>>>>
>>>> - If Agents have non-intersecting sets of devices in their registries,
>>>> then scheduling will be complicated or not possible.
>>>>
>>>> Please share your thoughts on this.
>>>>
>>>> Thanks,
>>>> Eugene.
>>>>
>>>> On Tue, Nov 27, 2012 at 3:38 AM, Youcef Laribi <
>>>> Youcef.Laribi at eu.citrix.com> wrote:
>>>>
>>>> Hi Eugene, Leon,
>>>>
>>>> Could we have the LBaaS plugin choose the “driver” based on
>>>> service_type info, and then it’s the driver which chooses the “device”?
>>>> The driver can obviously have its own DB model where it stores
>>>> device-specific info.
>>>>
>>>> Youcef
>>>>
>>>> From: Dan Wendlandt [mailto:dan at nicira.com]
>>>> Sent: Monday, November 26, 2012 9:13 AM
>>>> To: Leon Cui
>>>> Cc: OpenStack Development Mailing List; Salvatore Orlando
>>>> Subject: Re: [openstack-dev] Re: Re: Re: Re: Re: Progress on
>>>> lbaas-plugin-api-crud
>>>>
>>>> On Mon, Nov 26, 2012 at 9:03 AM, Leon Cui <lcui at vmware.com> wrote:
>>>>
>>>> Hi Eugene,
>>>>
>>>> When did your change get merged into master? I did a rebase last Friday,
>>>> which was supposed to pick up your latest code, but anyway I’m planning to
>>>> do it again today.
>>>>
>>>> Thanks for your reminder that I need to include device mgmt in the DB
>>>> model. I need to look at Salvatore’s change on ServiceType.
>>>>
>>>> It seems to me that each LB plugin should be able to define its own DB
>>>> models for "device mgmt" (e.g., device address/credentials/etc.), as
>>>> different plugins may have different strategies for how they manage
>>>> devices.  The usual model is that plugins can define additional
>>>> models/tables to manage entities that are specific to that plugin.  This
>>>> is similar to how we didn't bake the notion of a "vlan" into the DB model
>>>> for "core plugins", since not all plugins will use vlans.  If you
>>>> don't go down this route, you end up with a messy DB model as everyone
>>>> keeps adding columns for items that only a particular plugin needs to
>>>> track.
>>>>
>>>> Dan
>>>>
>>>> Thanks
>>>>
>>>> Leon
>>>>
>>>> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
>>>> Sent: November 26, 2012 4:29
>>>> To: Leon Cui
>>>> Cc: Ilya Shakhat; Sachin Thakkar; Oleg Bondarev; Salvatore Orlando;
>>>> Dan Wendlandt
>>>> Subject: Re: Re: Re: Re: Re: Progress on lbaas-plugin-api-crud
>>>>
>>>> Hi Leon,
>>>>
>>>> Thanks for sending me the patch.
>>>>
>>>> I've looked at it briefly; there is one major thing I was able to
>>>> identify:
>>>>
>>>> In order to couple things together (plugin, agent, drivers), we need to
>>>> add device management at least to the DB model.
>>>>
>>>> In particular, each vip should have a reference to the device (which
>>>> has a type and address/credentials).
>>>>
>>>> This information is passed in each agent notification message.
>>>>
>>>> This part is missing in the current design blueprints, but I think we
>>>> need to add it before we put the code up for review.
>>>>
>>>> Probably it will also depend on Salvatore's ServiceTypes part.
>>>>
>>>> Also I see that your patch is based on some of my outdated patches.
>>>>
>>>> My code was recently merged into master, so you can rebase on master
>>>> using only Oleg's patch.
>>>>
>>>> Thanks,
>>>> Eugene.
>>>>
>>>> On Fri, Nov 23, 2012 at 2:40 PM, Leon Cui <lcui at vmware.com> wrote:
>>>>
>>>> Hi Eugene,
>>>>
>>>> I’m still waiting for approval as an openstack contributor. For now I've
>>>> simply attached the patch file that you might want to take a look at
>>>> first. Once I get the approval, I’ll try to post the review asap.
>>>>
>>>> Thanks
>>>> Leon
>>>>
>>>> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
>>>> Sent: November 20, 2012 22:57
>>>> To: Leon Cui
>>>> Cc: Ilya Shakhat; Sachin Thakkar; Oleg Bondarev; Salvatore Orlando;
>>>> Dan Wendlandt
>>>> Subject: Re: Re: Re: Re: Progress on lbaas-plugin-api-crud
>>>>
>>>> Leon,
>>>>
>>>> I'll take the agent and rpc parts.
>>>>
>>>> I have registered
>>>> https://blueprints.launchpad.net/quantum/+spec/lbaas-agent-and-rpc to
>>>> track this.
>>>>
>>>> Thanks,
>>>> Eugene.
>>>>
>>>> On Tue, Nov 20, 2012 at 2:16 PM, Leon Cui <lcui at vmware.com> wrote:
>>>>
>>>> Hi Eugene,
>>>>
>>>> Thanks for your suggestion. It looks good to me. I’ll work out the UT
>>>> first, and then align the class model to the diagram as you suggested.
>>>>
>>>> Thanks
>>>> Leon
>>>>
>>>> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
>>>> Sent: November 20, 2012 17:32
>>>> To: Leon Cui
>>>> Cc: Ilya Shakhat; Sachin Thakkar; Oleg Bondarev; Salvatore Orlando
>>>> Subject: Re: Re: Re: Progress on lbaas-plugin-api-crud
>>>>
>>>> replying to all...
>>>>
>>>> Leon,
>>>>
>>>> I think tests/unit/test_db_plugin.py is the right code to refer to when
>>>> writing unit tests for db code. The only thing is that the unit tests
>>>> written in test_db_plugin.py are a bit generic, e.g. the backend plugin is
>>>> specified in a particular plugin's UTs which inherit
>>>> from QuantumDbPluginV2TestCase. I think UTs for the balancer plugin may be
>>>> more specific, testing the LoadbalancerPluginDb class.
>>>>
>>>> Since you need database utility methods from QuantumDbPluginV2, it's
>>>> LoadbalancerPluginDb which should inherit from such a QuantumDBBase (or
>>>> whatever you call it), so the overall diagram will look like:
>>>>
>>>> ServicePluginBase
>>>>     |
>>>> LoadBalancerPluginBase
>>>>     |
>>>>     |                                       QuantumDBBase
>>>>     |                                            |
>>>> LoadBalancerPlugin  <---------- LoadBalancerPluginDb
>>>>
>>>> Thanks,
>>>> Eugene.
>>>>
>>>> On Tue, Nov 20, 2012 at 1:04 PM, Leon Cui <lcui at vmware.com> wrote:
>>>>
>>>> Hi Eugene,
>>>>
>>>> Thanks for your suggestion.  Please see my comments inline.
>>>>
>>>> One more question: I’m writing the unit test, mainly to verify the
>>>> database functionalities for LB CRUD.  Do you think
>>>> tests/unit/test_db_plugin.py is the right test code that I should refer
>>>> to?  Any good suggestions on this front?
>>>>
>>>> Thanks
>>>> Leon
>>>>
>>>> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
>>>> Sent: November 20, 2012 16:44
>>>> To: Leon Cui
>>>> Cc: Ilya Shakhat; Sachin Thakkar; Oleg Bondarev; Salvatore Orlando
>>>> Subject: Re: Re: Progress on lbaas-plugin-api-crud
>>>>
>>>> Hi Leon,
>>>>
>>>> A few thoughts on your diagram.
>>>>
>>>> Please consider the following:
>>>>
>>>> 1) If you want something from QuantumDbPluginV2 and you feel it may be
>>>> common plugin functionality, you need to extract it into a separate class,
>>>> something like QuantumPluginBase, and inherit QuantumDBPluginV2 from this
>>>> class; ServicePluginBase should inherit from that class as well.
>>>>
>>>> [Leon] I need some database utility methods from QuantumDbPluginV2.
>>>> Abstracting them into a separate class could be a good idea. But I’m not
>>>> sure it’s a good idea to let ServicePluginBase inherit from this class.
>>>> ServicePluginBase is an abstract class for the service plugin service
>>>> (quantum manager) to use.
>>>>
>>>> 2) LoadBalancerPluginBase imho should inherit from ServicePluginBase.
>>>>
>>>> [Leon] Why does it need to inherit from ServicePluginBase?
>>>> LoadBalancerPluginBase defines the loadbalancer extension APIs.  I think
>>>> we just make sure LoadbalancerPlugin inherits from both classes as below:
>>>>
>>>> ServicePluginBase    QuantumPluginDbBase   LoadbalancerPluginBase
>>>>        |                       |                      |
>>>>         ----------------------------------------------
>>>>                                |
>>>>                         LoadbalancerPlugin ------ LoadbalancerPluginDb
>>>>
>>>> LoadbalancerPlugin will contain the LoadbalancerPluginDb instance for
>>>> database access.
>>>>
>>>> 3) Depending on what you need from QuantumDbPluginV2/QuantumPluginBase,
>>>> this may lead to the following inheritance sequence:
>>>>
>>>> QuantumPluginBase
>>>>     |
>>>> ServicePluginBase
>>>>     |
>>>> LoadBalancerPluginBase
>>>>     |
>>>> LoadBalancerPluginDb
>>>>     |
>>>> LoadBalancerPlugin
>>>>
>>>> Also, I think that LoadBalancerPlugin should not inherit from
>>>> LoadBalancerPluginDb.
>>>>
>>>> Unlike core plugins, where it could make sense, I'd prefer to see
>>>> LoadBalancerPluginDb be a part of LoadBalancerPlugin.
>>>>
>>>> I mean LoadBalancerPlugin implements a "has a" relation to
>>>> LoadBalancerPluginDb instead of an "is a" relation.
>>>>
>>>> The reason for this is that LoadBalancerPlugin provides a CRUD
>>>> implementation which doesn't directly map to the DB operations implemented
>>>> in LoadBalancerPluginDb.
>>>>
>>>> E.g. my idea is:
>>>>
>>>> LoadBalancerPlugin - CRUD, validation, calling LoadBalancerPluginDb,
>>>> sending/receiving messages to the agent
>>>>
>>>> LoadBalancerPluginDb - DB access.
>>>>
>>>> Thanks,
>>>> Eugene.
>>>>
>>>> On Tue, Nov 20, 2012 at 6:54 AM, Leon Cui <lcui at vmware.com> wrote:
>>>>
>>>> Hi Ilya,
>>>>
>>>> Right now I took Eugene’s change under review (
>>>> https://review.openstack.org/#/c/15733/) and am developing the
>>>> database access logic and plugin skeleton based on that service plugin
>>>> mechanism. The class model is illustrated in the below diagram:
>>>>
>>>> [image: class diagram showing ServicePluginBase
>>>> (quantum.plugins.services.service_base), QuantumDbPluginV2
>>>> (quantum.db.db_base_plugin_v2), LoadBalancerPluginDb
>>>> (quantum.plugins.services.loadbalancer.loadbalancer_db),
>>>> LoadBalancerPluginBase (quantum.extensions.loadbalancer) and
>>>> LoadBalancerPlugin
>>>> (quantum.plugins.services.loadbalancer.loadbalancerPlugin)]
>>>>
>>>> The LoadBalancerPlugin module is the main body of the loadbalancer
>>>> plugin, which inherits from multiple classes:
>>>>
>>>> - ServicePluginBase: defines the abstract methods that a service plugin
>>>> should implement.
>>>>
>>>> - QuantumDbPluginV2: contains a set of generic quantum database access
>>>> methods. I’m not sure if we really want to inherit from this class, but
>>>> I’d like to leverage the methods defined in it.
>>>>
>>>> - LoadBalancerPluginDb: this is the main part I’m coding on, which wraps
>>>> the Lbaas database model and CRUD operations against the database.
>>>>
>>>> My thought is that LoadBalancerPlugin will control the LBaaS CRUD API
>>>> flow. For instance, the “create_vip” method should first validate the
>>>> input, update the database, send a message to the LbAgent over the AMQP
>>>> channel, then update the database by setting the status from
>>>> PENDING_CREATE to ACTIVE.
>>>>
>>>> I’m trying to write unit tests against the database access now, which
>>>> will take a while to complete. Meanwhile it would be great to have your
>>>> help on coding the RPC interaction between the plugin and the agent.
>>>>
>>>> I don’t want to block your part. What’s the best practice for
>>>> collaborating with you on this? Maybe I can shelve my change to you
>>>> somehow?
>>>>
>>>> Thanks
>>>> Leon
>>>>
>>>> From: Ilya Shakhat [mailto:ishakhat at mirantis.com]
>>>> Sent: November 19, 2012 22:08
>>>> To: Sachin Thakkar; Leon Cui
>>>> Cc: Eugene Nikanorov; Oleg Bondarev
>>>> Subject: Progress on lbaas-plugin-api-crud
>>>>
>>>> Hi Sachin, Leon,
>>>>
>>>> Recently there was a thread related to LBaaS architecture (
>>>> http://lists.openstack.org/pipermail/openstack-dev/2012-November/002646.html).
>>>> How well is it aligned with your implementation? Do you need help with
>>>> coding? (we may take the Agent part)
>>>>
>>>> Thanks,
>>>> Ilya
>>>>
>>>> --
>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>> Dan Wendlandt
>>>> Nicira, Inc: www.nicira.com
>>>> twitter: danwendlandt
>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~