[openstack-dev] [manila] two level share scheduling
Jason Bishop
jason.bishop at gmail.com
Tue Feb 10 08:07:09 UTC 2015
Hi manila, I would like to broach the subject of share load balancing.
Currently, the share server for a newly created (in this case NFS) share is
determined at share creation time. In this proposal, the share server is
instead determined "late binding" style, at mount time.
For the sake of discussion, let's call the proposed idea "two-level share
scheduling".
TL;DR: remove the share server from export_location in the database and
query the driver for it at mount time.
First, a quick description of current behavior:
When a share is created (from scratch), the manila scheduler identifies a
share server from its list of backends and makes an API call to the
create_share method of the appropriate driver. The driver executes the
required steps and returns the export_location, which is then written to
the database.
For example, this create command:
$ manila create --name myshare --share-network
fb7ea7de-19fb-4650-b6ac-16f918e66d1d NFS 1
would result in this:
$ manila list
+--------------------------------------+---------+------+-------------+-----------+-------------+---------------------------------------------------------------+---------------------------------+
| ID                                   | Name    | Size | Share Proto | Status    | Volume Type | Export location                                               | Host                            |
+--------------------------------------+---------+------+-------------+-----------+-------------+---------------------------------------------------------------+---------------------------------+
| 6d6f57f2-3ac5-46c1-ade4-2e9d48776e21 | myshare | 1    | NFS         | available | None        | 10.254.0.3:/shares/share-6d6f57f2-3ac5-46c1-ade4-2e9d48776e21 | jasondevstack@generic1#GENERIC1 |
+--------------------------------------+---------+------+-------------+-----------+-------------+---------------------------------------------------------------+---------------------------------+
with this associated database record:
mysql> select * from shares\G
*************************** 1. row ***************************
created_at: 2015-02-10 07:06:21
updated_at: 2015-02-10 07:07:25
deleted_at: NULL
deleted: False
id: 6d6f57f2-3ac5-46c1-ade4-2e9d48776e21
user_id: 848b808e91e5462f985b6131f8a905e8
project_id: ed01cbf358f74ff08263f9672b2cdd01
host: jasondevstack@generic1#GENERIC1
size: 1
availability_zone: nova
status: available
scheduled_at: 2015-02-10 07:06:21
launched_at: 2015-02-10 07:07:25
terminated_at: NULL
display_name: myshare
display_description: NULL
snapshot_id: NULL
share_network_id: fb7ea7de-19fb-4650-b6ac-16f918e66d1d
share_server_id: c2602adb-0602-4128-9d1c-4024024a069a
share_proto: NFS
export_location: 10.254.0.3:/shares/share-6d6f57f2-3ac5-46c1-ade4-2e9d48776e21
volume_type_id: NULL
1 row in set (0.00 sec)
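To make the current flow concrete, here is a simplified sketch (illustrative
only, not the actual manila source; names are approximate) of how a driver
such as the generic one ends up producing the export_location above:

    def create_share(self, context, share, share_server=None):
        # the address is picked once, at creation time, and never revisited
        ip = share_server['backend_details']['public_address']
        path = '/shares/share-%s' % share['id']
        # ... provision the backing volume and export it via the NFS helper ...
        # the full "ip:path" string is stored verbatim in shares.export_location
        return '%s:%s' % (ip, path)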
Proposed scheme:
The proposal is simple in concept. Instead of the driver
(GenericShareDriver, for example) returning both the share server IP
address and the path in the share's export_location, only the path is
returned and saved in the database. The binding of the share server IP
address is deferred until share mount time. In practical terms, this means
the share server is determined by an API call to the driver when
_get_shares is called. The driver would then have the option of determining
which IP address from its basket of addresses to return. In this way, each
share mount event presents an opportunity for the NFS traffic to be
balanced over all available network endpoints.
A possible signature for this new call might look like this (with the
GenericShareDriver having the simple implementation of returning
server['public_address']):
    def get_share_server_address(self, ctx, share, share_server):
        """Return the IP address of a share server for the given share,
        given current conditions."""
        # implementation-dependent logic to determine the IP address
        address = self._myownloadfilter()
        return address
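The _myownloadfilter() call above is only a placeholder. As a minimal
illustration (purely hypothetical; a real driver would get its address pool
from configuration or from share server details), a round-robin version
could be as simple as:

    import itertools

    # hand out this backend's endpoints in round-robin order, one per call
    ADDRESSES = itertools.cycle(['10.254.0.3', '10.254.0.4', '10.254.0.5'])

    def _myownloadfilter():
        return next(ADDRESSES)

Smarter filters could instead weight addresses by current connection count
or interface utilization.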
Off the top of my head, I see potential uses including:
o balance load over several glusterfs servers
o balance load over several NFS/CIFS share servers which have multiple
NICs
o balance load over several generic share servers which are exporting
read-only volumes (such as software repositories)
o I think Isilon should also benefit, but I will defer to somebody more
knowledgeable on the subject
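For completeness, the manager side would then have to assemble the export
location per call, roughly like this (a sketch with made-up names, not a
patch):

    def _resolve_export_location(self, context, driver, share, share_server):
        # the database now stores only the path; the address is bound here
        address = driver.get_share_server_address(context, share, share_server)
        return '%s:%s' % (address, share['export_location'])

This implies one driver round trip per share on every list, which is
exactly where the cons below come from.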
I see the following cons:
o slow manila list performance
o very slow manila list performance if all share drivers are busy doing
long operations such as create/delete share
Interested in your thoughts.
Jason