[openstack-dev] [Manila]Question about gateway-mediated-with-ganesha

Deepak Shetty dpkshetty at gmail.com
Wed Feb 11 13:31:29 UTC 2015


On Tue, Feb 10, 2015 at 1:51 AM, Li, Chen <chen.li at intel.com> wrote:

>  Hi list,
>
>
>
> I’m trying to understand how manila uses NFS-Ganesha, and hope to figure
> out what I need to do to use it once all the patches have been merged (only
> one patch is still under review, right?).
>
>
>
> I have read:
>
> https://wiki.openstack.org/wiki/Manila/Networking/Gateway_mediated
>
> https://blueprints.launchpad.net/manila/+spec/gateway-mediated-with-ganesha
>
>
>
> From the documents, it is said that with Ganesha, multi-tenancy would be
> supported:
>
> *And later the Ganesha core would be extended to use the infrastructure
> used by generic driver to provide network separated multi-tenancy. The core
> would manage Ganesha service running in the service VMs, and the VMs
> themselves that reside in share networks.*
>
>
>
> =>  It is said: *extended to use the infrastructure used by generic
> driver to provide network separated multi-tenancy*
>
> So, when a user creates a share, a VM (share-server) would be created to
> run the Ganesha server.
>
> =>  I assume this VM should connect to the 2 networks: the user’s
> share-network and the network where the GlusterFS cluster is running.
>
>
>
> But in the generic driver, a manila service network is created at the
> beginning.
>
> When a user creates a share, a “subnet” would be created in the manila
> service network corresponding to each user’s “share-network”:
>
> This means every VM (share-server) the generic driver has created lives in
> a different subnet, so they are not able to connect to each other.
>

When you say VM, it's confusing whether you are referring to the service VM
or a tenant VM. Since you also say share-server, I presume you mean the
service VM!

IIUC, each share-server VM (also called a service VM) serves all the VMs
created by a tenant. In other words, the generic driver creates 1 service VM
per tenant, and hence that service VM serves all the VMs (tenant VMs) created
by that tenant. Manila experts on the list can correct me if I am wrong here.
The generic driver creates the service VM (if one is not already present for
that tenant) as part of creating a new share, and connects the tenant network
to the service VM network via a neutron router (it creates ports on the
router, which connect the 2 different subnets); thus the tenant VMs can
ping/access the service VM. There is no question and/or need to have 2
service VMs talk to each other, because they serve different tenants and thus
need to be isolated!
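
To make the router bit concrete, here is a minimal sketch (not the generic
driver's actual code; credentials and the two subnet IDs are placeholders I
made up) of how a neutron router with an interface on each subnet joins a
tenant's share-network subnet to the manila service subnet, using
python-neutronclient:

# Rough illustration only -- NOT the generic driver's actual code.
# Assumes python-neutronclient is installed; credentials and the subnet
# IDs below are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

tenant_subnet_id = 'TENANT-SHARE-NETWORK-SUBNET-ID'    # placeholder
service_subnet_id = 'MANILA-SERVICE-SUBNET-ID'         # placeholder

# One router with a port on each subnet is enough to route traffic
# between the tenant network and the service VM network.
router = neutron.create_router({'router': {'name': 'share-demo-router'}})
router_id = router['router']['id']
neutron.add_interface_router(router_id, {'subnet_id': tenant_subnet_id})
neutron.add_interface_router(router_id, {'subnet_id': service_subnet_id})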



>
>
> If my understanding here is correct, the VMs running Ganesha are living in
> different subnets too.
>
> =>  Here is my question:
>
> How will the VMs (share-servers) running Ganesha be able to connect to the
> single GlusterFS cluster?
>


Typically GlusterFS will be deployed on storage nodes (by the storage admin)
that are NOT part of OpenStack. So having the share-server talk/connect
with GlusterFS is equivalent to saying "allow an OpenStack VM to talk with
non-OpenStack nodes", in other words "connect the neutron network to a
non-neutron network (also called the provider/host network)".

This is achieved by ensuring your OpenStack network node is configured to
forward tenant traffic to the provider network, which involves neutron skills
and some neutron black magic :)
To know what this involves, please see the section "Setup devstack networking
to allow Nova VMs access external/provider network" in my blog @
http://dcshetty.blogspot.in/2015/01/using-glusterfs-native-driver-in.html
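
In case it helps, here is a simplified sketch (again with placeholder names,
CIDRs and IDs; the actual node-level configuration is in the blog post above)
of roughly what exposing the provider network looks like at the neutron API
level: register the GlusterFS LAN as a flat external network and set it as
the gateway of the router from the earlier sketch.

# Simplified illustration with placeholder names/CIDRs/IDs, assuming
# python-neutronclient.  The bridge-mapping / L3-agent setup on the
# network node itself (see the blog post) is not shown here.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# Register the (non-neutron) LAN where the GlusterFS nodes live as a
# flat external network; 'physnet1' must match the physical network
# label the network admin configured on the network node.
ext_net = neutron.create_network({'network': {
    'name': 'provider-net',
    'router:external': True,
    'provider:network_type': 'flat',
    'provider:physical_network': 'physnet1'}})
neutron.create_subnet({'subnet': {
    'network_id': ext_net['network']['id'],
    'cidr': '192.168.100.0/24',   # example: the GlusterFS storage LAN
    'ip_version': 4,
    'enable_dhcp': False}})

# Point the router from the earlier sketch at that network so the
# share-server's traffic can reach the storage nodes.
neutron.add_gateway_router('ROUTER-ID',
                           {'network_id': ext_net['network']['id']})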

This should be taken care of by your OpenStack network admin, who should
configure the network node to allow this to happen. It isn't a Manila /
GlusterFS driver responsibility; rather, it's an OpenStack deployment option
that's taken care of by the network admins during OpenStack deployment.



*Disclaimer: I am not a neutron expert, so feel free to correct/update me.*
HTH,

thanx,
deepak