CEPH/RDMA + Openstack

Volodymyr Litovka doka.ua at gmx.com
Wed Aug 28 09:27:08 UTC 2019


Hi Stig,

The main question is whether you tested it with OpenStack and whether
all the components work. As for Ceph itself, we're using Mellanox
ConnectX-4 Lx cards and found that Ceph works fine with RoCE in a LAG
configuration.
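
For reference, a minimal sketch of the ceph.conf options involved (the
device name below is an example from a bonded ConnectX setup, not a
default - check ibdev2netdev / show_gids on your hosts to find yours):

    [global]
    # switch the async messenger from TCP to the RDMA transport
    ms_type = async+rdma
    # RDMA device to bind to; with RoCE LAG this is the bond device
    ms_async_rdma_device_name = mlx5_bond_0

Also keep in mind that the daemons need unlimited locked memory for
the RDMA buffers (e.g. LimitMEMLOCK=infinity in their systemd units),
otherwise memory registration can fail.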

> There is potential here

Yes, I agree with you. That's why we're trying to be ready for future
improvements :) I guess that since Ceph supports RDMA officially, Red
Hat (as the owner of Ceph) will sell it and, thus, will improve its
support.

Thank you.

On 28.08.2019 11:42, Stig Telfer wrote:
> Hi there -
>
>> On 28 Aug 2019, at 08:58, Volodymyr Litovka <doka.ua at gmx.com> wrote:
>> does anyone have experience using RDMA-enabled Ceph with OpenStack? How
>> stable is it? Do all OpenStack components (Nova, Cinder) work with such
>> a specific Ceph configuration? Are there any issues which could affect
>> overall system reliability?
> Last time I looked at this (with pre-release Nautilus, about 9 months ago), I had mixed results.
>
> There are four generally available RDMA fabrics (InfiniBand, RoCE, iWARP and OPA) and I had a go at all of them apart from iWARP.
>
> RoCE worked for me but IB and OPA were troublesome to get working.  There’s some work contributed for iWARP support that introduces the RDMA connection manager (RDMACM), which I found also helped for IB and OPA.
>
> There is potential here but, in performance terms, I didn’t manage a thorough benchmark and didn’t see conclusive proof of an advantage.  Perhaps things have come on since I looked, but it wasn’t an obvious win at the time.  I’d love to have another pop at it, but for lack of time…
>
> Cheers,
> Stig
>
>
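
On the RDMACM point mentioned above: as far as I understand, it is
enabled with a separate messenger option, roughly like this (a sketch;
the last line applies only to iWARP fabrics):

    [global]
    ms_type = async+rdma
    # use the RDMA connection manager for connection setup instead of
    # exchanging queue-pair info over TCP; needed for iWARP and
    # reportedly helpful for IB and OPA as well
    ms_async_rdma_cm = true
    # for iWARP only:
    # ms_async_rdma_type = iwarp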

--
Volodymyr Litovka
   "Vision without Execution is Hallucination." -- Thomas Edison



