[openstack-dev] [tripleo][manila] Ganesha deployment

Jan Provaznik jprovazn at redhat.com
Tue Apr 11 14:50:15 UTC 2017


On Mon, Apr 10, 2017 at 6:55 PM, Ben Nemec <openstack at nemebean.com> wrote:
> I'm not really an expert on composable roles so I'll leave that to someone
> else, but see my thoughts inline on the networking aspect.
>
> On 04/10/2017 03:22 AM, Jan Provaznik wrote:
>>
>> 2) define a new VIP (for IP failover) and 2 networks for NfsStorage role:
>>     a) a frontend network between users and ganesha servers (e.g.
>> NfsNetwork name), used by tenants to mount nfs shares - this network
>> should be accessible from user instances.
>
>
> Adding a new network is non-trivial today, so I think we want to avoid that
> if possible.  Is there a reason the Storage network couldn't be used for
> this?  That is already present on compute nodes by default so it would be
> available to user instances, and it seems like the intended use of the
> Storage network matches this use case.  In a Ceph deployment today that's
> the network which exposes data to user instances.
>

From discussing this with the ceph team: access to the ceph public
network (StorageNetwork) is a big privilege, bigger than access to the
ganesha nfs servers only, so StorageNetwork should be exposed only
when really necessary.

>>     b) a backend network between ganesha servers and the ceph cluster -
>> this could just map to the existing StorageNetwork I think.
>
>
> This actually sounds like a better fit for StorageMgmt to me.  It's
> non-user-facing storage communication, which is what StorageMgmt is used for
> in the vanilla Ceph case.
>

If StorageMgmt is used for replication and internal communication
between ceph nodes, I wonder if that access would be too permissive?
Ganesha servers should need access to the ceph public network only.

>> What I'm not sure about at all is how the network definition should look.
>> There are following Overcloud deployment options:
>> 1) no network isolation is used - then both direct ceph mount and
>> mount through ganesha should work because StorageNetwork and
>> NfsNetwork are accessible from user instances (there is no restriction
>> in accessing other networks it seems).
>
>
> There are no other networks without network-isolation.  Everything runs over
> the provisioning network.  The network-isolation templates should mostly
> handle this for you though.
>
>> 2) network isolation is used:
>>     a) ceph is used directly - user instances need access to the ceph
>> public network (which is StorageNetwork in Overcloud) - how should I
>> enable access to this network? I filled a bug for this deployment
>> variant here [3]
>
>
> So does this mean that the current manila implementation is completely
> broken in network-isolation?  If so, that's rather concerning.
>

This affects deployments of manila with an internal (=deployed by
TripleO) ceph backend.

> If I'm understanding correctly, it sounds like what needs to happen is to
> make the Storage network routable so it's available from user instances.
> That's not actually something TripleO can do, it's an underlying
> infrastructure thing.  I'm not sure what the security implications of it are
> either.
>
> Well, on second thought it might be possible to make the Storage network
> only routable within overcloud Neutron by adding a bridge mapping for the
> Storage network and having the admin configure a shared Neutron network for
> it.  That would be somewhat more secure since it wouldn't require the
> Storage network to be routable by the world.  I also think this would work
> today in TripleO with no changes.
>

This sounds interesting. I was searching for more info on how the
bridge mapping should be done in this case and what the specific setup
steps would look like, but the process is still not clear to me; I
would be grateful for more details/guidance on this.
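To make sure I understand the suggestion, here is a rough sketch of
what I think the setup would look like (the bridge name br-storage,
the physical network label "storage", the network name and the CIDR
below are all placeholders, not something TripleO defines today):

```shell
# At deploy time, include the storage bridge in the bridge mappings,
# e.g. in an environment file passed to `openstack overcloud deploy -e`:
#   NeutronBridgeMappings: "datacentre:br-ex,storage:br-storage"

# After deployment, as overcloud admin, create a shared provider
# network mapped to the storage physical network...
neutron net-create storage-provider --shared \
    --provider:network_type flat \
    --provider:physical_network storage

# ...and a subnet matching the overcloud StorageNetwork range, with an
# allocation pool outside the addresses used by the overcloud nodes
neutron subnet-create storage-provider 172.16.1.0/24 \
    --name storage-provider-subnet --disable-dhcp \
    --allocation-pool start=172.16.1.200,end=172.16.1.250
```

Is this roughly what you had in mind?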

> Alternatively I guess you could use ServiceNetMap to move the public Ceph
> traffic to the public network, which has to be routable.  That seems like it
> might have a detrimental effect on the public network's capacity, but it
> might be okay in some instances.
>

I would rather avoid this option (both because of the network traffic
and because of exposing the ceph public network to everybody).

>>     b) ceph is used through ganesha - user instances need access to
>> ganesha servers (NfsNetwork in previous paragraph) - how should I
>> enable access to this network?
>
>
> I think the answer here will be the same as for vanilla Ceph.  You need to
> make the network routable to instances, and you'd have the same options as I
> discussed above.
>

Yes, it seems that mapping to a provider network would solve both the
existing problem when using ceph directly and, in future, when using
ganesha servers (it would just be a matter of which network is
exposed).

>>
>> The ultimate (and future) plan is to deploy ganesha-nfs in VMs (which
>> will run in Overcloud, probably managed by manila ceph driver), in
>> this deployment mode a user should have access to ganesha servers and
>> only ganesha server VMs should have access to ceph public network.
>> Ganesha VMs would run in a separate tenant so I wonder if it's
>> possible to manage access to the ceph public network (StorageNetwork
>> in Overcloud) on per-tenant level?
>
>
> This would suggest that the bridged Storage network approach is the best.
> In that case access to the ceph public network is controlled by the
> overcloud Neutron, so you would just need to only give access to it to the
> tenant running the Ganesha VMs.  User VMs would only get access to a
> separate shared network providing access to the public Ganesha API, and the
> Ganesha VMs would straddle both networks.
>
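Regarding the per-tenant access: if the storage network is exposed as
a Neutron provider network, then instead of marking it --shared for
everyone, I guess access could be granted only to the tenant running
the Ganesha VMs via Neutron's RBAC API (the network name and tenant ID
below are placeholders):

```shell
# share the storage provider network with the ganesha tenant only,
# rather than creating it with --shared
neutron rbac-create --type network --action access_as_shared \
    --target-tenant <ganesha-tenant-id> storage-provider
```

That would keep the ceph public network invisible to ordinary tenants
while still letting the Ganesha VMs straddle both networks.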
>>
>> Any thoughts and hints?
>>
>> Thanks, Jan
>>
>> [1] https://github.com/nfs-ganesha/nfs-ganesha/wiki
>> [2] https://github.com/ceph/ceph-ansible/tree/master/roles/ceph-nfs
>> [3] https://bugs.launchpad.net/tripleo/+bug/1680749
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
