[openstack-dev] [Manila] Rename driver mode

Valeriy Ponomaryov vponomaryov at mirantis.com
Tue Jan 20 16:50:38 UTC 2015


We have come to a decision on this:
http://eavesdrop.openstack.org/meetings/manila/2015/manila.2015-01-15-15.02.log.html
https://etherpad.openstack.org/p/manila-driver-modes-discussion
https://blueprints.launchpad.net/manila/+spec/rename-driver-modes

Here is change that implements this decision:
https://review.openstack.org/#/c/147821/

I ask those who are able (driver maintainers?) to test their share drivers for
any breakage caused by this change.
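
If it helps, here is a naive local check (just a sketch, not part of the change
itself) that lists places under a driver tree still referencing the old mode
names:

    # Illustrative only: walk the driver tree and print lines that still
    # mention the old mode names, so they can be reviewed against the rename.
    import os
    import re

    pattern = re.compile(r"single_svm|multi_svm")
    for root, _dirs, files in os.walk("manila/share/drivers"):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(root, name)
            with open(path) as source:
                for lineno, line in enumerate(source, 1):
                    if pattern.search(line):
                        print("%s:%d: %s" % (path, lineno, line.rstrip()))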

On Thu, Jan 8, 2015 at 6:36 AM, Ben Swartzlander <ben at swartzlander.org>
wrote:

>
> On 01/07/2015 09:20 PM, Li, Chen wrote:
>
>  Update my proposal again:
>
>
>
> As a newcomer to manila, I started using/learning manila with the generic
> driver. When I reached driver modes, I became really confused, because I
> couldn't stop myself from jumping to the ideas:   share server == nova
> instance   &   svm == share virtual machine == nova instance.
>
>
>
> Then I tried GlusterFS, which works under "single_svm_mode". I asked why it
> is the "single" mode, and the answer I got was "This is an approach without
> usage of share-servers"  ==>  so it works without "share-servers", then why
> "single"??? Even more confusing! :(
>
>
>
>
>
> Now I know the mistake I made was ridiculous.
>
> Great thanks to vponomaryov & ganso, who made a big effort to help me
> figure out where I went wrong.
>
>
>
>
>
> But I don't think I will be the last person to make this mistake.
>
> So I hope we can change the driver mode names to be less confusing and
> easier to understand.
>
>
>
>
>
> First, "svm" should be removed, at least change it to ss (share-server),
> make it consistent with "share-server".
>
> I don't like single/multi, because it makes me think of the number of
> share-servers and makes me want to ask: "If I create a share, does that
> share need multiple share-servers? Why?"
>
>
>
> I agree the names we went with aren't the most obvious, and I'm open to
> changing them. Share-server is the name we have for the virtual machines
> created by manila drivers, so a name that refers to share servers rather
> than svms could make more sense.
>
>
>
>  Also, when I was trying GlusterFS (installed following
> http://www.gluster.org/community/documentation/index.php/QuickStart),
> while testing the GlusterFS volume the guide said: "use one of the servers
> to mount the volume". Doesn't that mean any server in the cluster works, and
> there is no difference between them? So, is there a way to change the
> glusterFS driver to accept more than one "glusterfs_target", where all
> glusterfs_targets are replicas of each other? Then, when manila creates a
> share, it chooses one target to use. This would distribute data traffic
> across the cluster: higher bandwidth, higher performance, right? ==> This is
> "single_svm_mode", but obviously not "single".
>
>
>
>
>
> vponomaryov & ganso suggested "basic_mode" and "advanced_mode", but I think
> basic/advanced is more of a driver-perspective concept. Different drivers
> might already have their own concepts of basic and advanced, beyond manila's
> scope. This would be confusing for both admins and driver programmers.
>
>
> I really do not like basic/advanced. I think you summarized one reason why
> it's a bad choice. The relevant difference between the modes is whether the
> driver is able to create tenant-specific instances of a share filesystem
> server or whether tenants share access to a single server.
>
>  As "single_svm_mode" indicate driver just have "information" about
> "where" to go and "how", it is gotten by config opts and some special
> actions of drivers while "multi_svm_mode" need to create "where" and "how"
> with "infomation".
>
>
>
> My suggestion is
>
>    "single_svm_mode" ==> "static_mode"
>
>    "multi_svm_mode"  ==> "dynamic_mode".
>
>
>
> As "where" to go and "how" are "static" under "single_svm_mode", but
> "dynamically" create/delete by manila under "multi_svm_mode".\
>
>
> Static/dynamic is better than basic/advanced, but I still think we can do
> better. I will think about it and try to come up with another idea before
> the meeting tomorrow.
>
>  Also, about the share-server concept.
>
>
>
> "share-server" is a tenant point of view concept, it does not know if it
> is a VM or a dedicated hardware outside openstack because it is not visible
> to the tenant.
>
> Each share has its own "share-server", no matter how it is obtained (from
> configuration under single_svm_mode, or from manila under multi_svm_mode).
>
>
> I think I understand what you mean here, but in a more technical sense,
> share servers are something we hide from the tenant. When a tenant asks for
> a share to be created, it might get created on a server that already
> exists, or a new one might get created. The tenant has no control over
> this, and ideally shouldn't even know which decision manila made. The only
> thing we promise to the tenant is that they'll get a share. The intent of
> this design is to offer maximum flexibility to the driver authors, and to
> accommodate the widest variety of possible storage controller designs,
> without causing details about the backends to leak through the API layer
> and break the primary goal of Manila, which is to provide a standardized API
> regardless of what the actual implementation is.
>
> We need to keep the above goals in mind when making decisions about share
> servers.
>
>  I got the wrong idea that GlusterFS has no share server based on
> https://github.com/openstack/manila/blob/master/manila/share/manager.py#L238,
> because without reading the driver code, isn't that saying: I create a share
> without a share-server? But the truth is that the share-server is just not
> handled by manila; that doesn't mean it does not exist. E.g. in GlusterFS,
> the share-server is "self.gluster_address".
>
>
>
> So, I suggest editing the ShareManager code to obtain the share_server
> before create_share, based on the driver mode.
>
> Such as:
>
> http://paste.openstack.org/show/155930/
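>
> In rough toy form, the idea is this (all names are made up, it is not the
> actual diff): the manager always resolves a share_server, and for a "static"
> driver that server is simply whatever the config already points at:
>
>     class StaticDriver(object):
>         # e.g. GlusterFS: the "server" already exists outside manila
>         address = "gluster-host:/manila-vol"  # stands in for gluster_address
>         handles_share_servers = False
>
>         def create_share(self, name, share_server):
>             backend = share_server["backend_details"]["address"]
>             return "%s/%s" % (backend, name)
>
>     class ToyShareManager(object):
>         def __init__(self, driver):
>             self.driver = driver
>
>         def create_share(self, name):
>             if self.driver.handles_share_servers:
>                 # dynamic case: create/fetch a share server for the tenant
>                 share_server = self.driver.setup_server()
>             else:
>                 # static case: wrap the configured address as a share server
>                 share_server = {
>                     "backend_details": {"address": self.driver.address}}
>             return self.driver.create_share(name, share_server=share_server)
>
>     print(ToyShareManager(StaticDriver()).create_share("share-a"))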
>
>
>
> This would affect all drivers, but I think it is worth it from a long-term
> perspective.
>
>
>
> Hope to hear from you guys.
>
>
>
> Thanks.
>
> -chen
>
>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomaryov at mirantis.com