[openstack-dev] [Manila] Driver modes, share-servers, and clustered backends

Deepak Shetty dpkshetty at gmail.com
Fri Jan 9 12:17:14 UTC 2015


Some of my comments inline.... prefixed with deepakcs

On Fri, Jan 9, 2015 at 6:43 AM, Li, Chen <chen.li at intel.com> wrote:

> Thanks for the explanations!
> Really helpful.
>
> My questions are added in line.
>
> Thanks.
> -chen
>
> -----Original Message-----
> From: Ben Swartzlander [mailto:ben at swartzlander.org]
> Sent: Friday, January 09, 2015 6:02 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Manila] Driver modes, share-servers, and
> clustered backends
>
> There has been some confusion on the topic of driver modes and
> share-servers, especially as they relate to storage controllers with
> multiple physical nodes, so I will try to clear up the confusion as much as
> I can.
>
> Manila has had the concept of "share-servers" since late Icehouse. This
> feature was added to solve 3 problems:
> 1) Multiple drivers were creating storage VMs / service VMs as a side
> effect of share creation and Manila didn't offer any way to manage or even
> know about these VMs that were created.
> 2) Drivers needed a way to keep track of (persist) what VMs they had
> created
>
> ==> So, a corresponding relationship does exist between share servers and
> virtual machines.
>

deepakcs: I also have the same question: is there a relation between a share
server and a service VM or not? Is there any other way to implement a share
server without creating a service VM?
IIUC, some may say that the vserver created in the case of NetApp storage is
equivalent to a share server? If this is true, then we should have a notion
of whether the share server is inside Manila or outside Manila too, no? If
it is not true, then does the NetApp cluster_mode driver get classified as a
single_svm mode driver?


>
> 3) We wanted to standardize across drivers what these VMs looked like to
> Manila so that the scheduler and share-manager could know about them
>
> ==> Q, why do the scheduler and share-manager need to know about them?
>

deepakcs: I guess because these service VMs will be managed by Manila, hence
the scheduler and share-manager need to know about them.


>
> It's important to recognize that from Manila's perspective, all a
> share-server is is a container for shares that's tied to a share network
> and also has some network allocations. It's also important to know that
> each share-server can have zero, one, or multiple IP addresses and can
> exist on an arbitrarily large number of physical nodes, and the actual form
> that a share-server takes is completely undefined.
>

deepakcs: I am confused about `can exist on an arbitrarily large number of
physical nodes` - how is this true in the case of the generic driver, where
the service VM is just a VM on one node? What does a large number of
physical nodes mean? Can you provide a real-world example to help understand
this, please?


>
> During Juno, drivers that didn't explicitly support the concept of
> share-servers basically got a dummy share server created which acted as a
> giant container for all the shares that backend created. This worked okay,
> but it was informal and not documented, and it made some of the things we
> want to do in kilo impossible.
>
> ==> Q, what things are impossible? The dummy share server solution makes
> sense to me.
>

deepakcs: I looked at the stable/juno branch and I am not sure exactly which
part of the code you refer to as the "dummy share server". Can you pinpoint
it please, so that it's clear for all? Are you referring to the ability of a
driver to handle setup_server as a dummy server creation? For example, in
the glusterfs case setup_server is a no-op and I don't see how a dummy share
server (meaning a service VM) gets created from the code.
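
To make my reading concrete, here is a rough, purely illustrative sketch
(not actual Manila or glusterfs driver code; the class name, helper names
and method signatures are made up) of what I understand a driver to look
like when the backend handles tenancy natively and the share-server hooks
are left as no-ops:

    class NoShareServerDriver(object):
        """Hypothetical driver for a backend that exports shares directly."""

        def create_share(self, context, share, share_server=None):
            # The backend itself creates and exports the share; the
            # share_server argument is simply ignored.
            return 'backend-host:/%s' % share['name']

        def setup_server(self, network_info, metadata=None):
            # No-op: no service VM / vserver is ever created.
            pass

        def teardown_server(self, server_details, security_services=None):
            # Correspondingly, there is nothing to tear down.
            pass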



>
> To solve the above problem I proposed driver modes. Initially I proposed
> 3 modes:
> 1) single_svm
> 2) flat_multi_svm
> 3) managed_multi_svm
>
> Mode (1) was supposed to correspond to drivers that didn't deal with share
> servers, and modes (2) and (3) were for drivers that did deal with share
> servers, where the difference between those 2 modes came down to networking
> details. We realized that (2) can be implemented as a special case of (3)
> so we collapsed the modes down to 2 and that's what's merged upstream now.
>
> ==> "driver that didn't deal with share servers "
>   =>
> https://blueprints.launchpad.net/manila/+spec/single-svm-mode-for-generic-driver
>   => This is where I get totally lost.
>   => Because for generic driver, it is "not create and delete share
> servers and its related network, but would still use a "share server"(the
> service VM) ".
>   => The share (the cinder volume) need to attach to an instance no matter
> what the driver mode is.
>   => I think "use" is some kind of "deal" too.
>

deepakcs: I partly agree with Chen above. If (1) doesn't deal with share
servers, why even have 'svm' in its name? Also, in *_multi_svm mode, what
does 'multi' mean? IIRC we provide the ability to manage share servers, 1
per tenant, so how does multi fit into the "1 share server per tenant"
notion? Or am I completely wrong about it?


>
> The specific names we settled on (single_svm and multi_svm) were perhaps
> poorly chosen, because "svm" is not a term we've used officially
> (unofficially we do talk about storage VMs and service VMs, and the svm
> term captured both concepts nicely). As some have pointed out, even multi
> and single aren't completely accurate terms, because what we mean when we
> say single_svm is that the driver doesn't create/destroy share servers; it
> uses something created externally.
>

deepakcs: This is where I feel the distinction between whether a driver
supports share servers vs. where the share server is created/managed makes
sense. For example, in the case of glusterfs we don't create/manage a share
server, so we fall into single_svm mode; but per the definition above, that
means the driver uses share servers created externally, which is incorrect,
because glusterfs doesn't really have a concept of a share server in the
strictest sense (we don't create a vserver before exporting a share, we just
create a share and export it). So I am not sure the single_svm definition is
correct in that sense.


>
> ==> If we use "svm" instead of "share server" in code, I'm ok with svm.
> I'd like the mode names and the code implementation to be consistent.
>
> So one thing I want everyone to understand is that you can have a
> "single_svm" driver which is implemented by a large cluster of storage
> controllers, and you can have a "multi_svm" driver which is implemented by
> a single box with some form of network and service virtualization. The two
> concepts are orthogonal.
>

deepakcs: What about the concept of multi-tenancy as a driver mode? How do
we differentiate between drivers that are single vs. multi tenant? This
boils down to documenting what we mean (and agree on) by multi-tenancy in
Manila. If providing network segmentation is what makes a driver
multi-tenant, then the above driver modes are fine, but we can provide
multi-tenancy without network segmentation too. For example, the glusterfs
native driver uses tenant-specific certificates and allows access to shares
using TLS certs, but doesn't use any share server. I would argue that it is
still multi-tenant, as I am able to get tenant separation using
certificates!
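
As a rough sketch of what I mean (simplified, and not the actual glusterfs
native driver code; _set_volume_option is a made-up helper), tenant
separation here comes from the certificate in the access rule, not from any
share server:

    class CertAccessMixin(object):
        """Hypothetical mixin that grants access by TLS certificate CN."""

        def allow_access(self, context, share, access, share_server=None):
            if access['access_type'] != 'cert':
                raise ValueError('only the cert access type is supported')
            # access['access_to'] carries the tenant's certificate common
            # name; restricting the backend volume to that CN gives tenant
            # separation with no share server or network segmentation.
            self._set_volume_option(share, 'auth.ssl-allow',
                                    access['access_to'])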


>
> The other thing we need to decide (hopefully at our upcoming Jan 15
> meeting) is whether to change the mode names and if so what to change them
> to. I've created the following etherpad with all of the suggestions I've
> heard so far and my feedback on each:
> https://etherpad.openstack.org/p/manila-driver-modes-discussion
>

deepakcs: My thoughts on how a new driver developer should approach this
would be (see the rough sketch after this list):

1) Is the driver going to support single or multi {tenancy} access to
shares?
2) Is tenancy implemented using network segmentation or something else
(certificates, for example) - tenancy_type maybe?
3) Does it implement tenancy_type via Manila share servers or handle it
natively (in the storage backend)?
4) tenancy_subtype (value depends on the tenancy type; for network
segmentation it's flat, vlan, gre, etc.)
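
Purely as an illustration of the classification above (none of these keys
exist in Manila today; the names and values are made up), two drivers could
describe themselves along these lines:

    glusterfs_native_caps = {
        'multi_tenant': True,            # (1) per-tenant access to shares
        'tenancy_type': 'certificate',   # (2) isolation via TLS certs
        'uses_share_servers': False,     # (3) handled natively by the backend
        'tenancy_subtype': None,         # (4) n/a for certificate tenancy
    }

    generic_driver_caps = {
        'multi_tenant': True,
        'tenancy_type': 'network_segmentation',
        'uses_share_servers': True,      # service VMs created by Manila
        'tenancy_subtype': 'vlan',       # could also be flat, gre, etc.
    }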

thanx,
deepak

