[Openstack-operators] Shared storage HA question

Jacob Godin jacobgodin at gmail.com
Wed Jul 24 18:37:09 UTC 2013


Oh really, you've done away with Gluster altogether? The fast backbone is
definitely needed, but I would think that's the case with any distributed
filesystem.

MooseFS looks promising, but apparently it has a few reliability problems.


On Wed, Jul 24, 2013 at 3:31 PM, Razique Mahroua
<razique.mahroua at gmail.com> wrote:

> :-)
> Actually, I had to remove all my instances running on it (especially the
> Windows ones). Unfortunately, my network backbone wasn't fast enough to
> support the load induced by GlusterFS, especially the numerous operations
> performed by the self-healing agents :(
>
> I'm currently considering MooseFS; it has the advantage of a pretty long
> list of companies using it in production.
>
> take care
>
>
> On Jul 24, 2013, at 16:40, Jacob Godin <jacobgodin at gmail.com> wrote:
>
> A few things I found were key for I/O performance:
>
>    1. Make sure your network can sustain the traffic. We are using a 10G
>    backbone with 2 bonded interfaces per node.
>    2. Use high-speed drives. SATA will not cut it.
>    3. Look into tuning settings. Razique, thanks for sending these along
>    to me a little while back. A couple I found useful (see the sketch
>    below):
>       - KVM cache=writeback (a little risky, but WAY faster)
>       - Gluster write-behind-window-size (set to 4MB in our setup)
>       - Gluster cache-size (ideal values in our setup were 96MB-128MB)
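>
> For reference, here is roughly how we apply those settings; just a
> sketch, assuming a Gluster volume named "instances" and the Grizzly-era
> nova.conf disk_cachemodes option (double-check the option name against
> your release):
>
>    # Gluster tuning, run once from any node in the trusted pool
>    gluster volume set instances performance.write-behind-window-size 4MB
>    gluster volume set instances performance.cache-size 128MB
>
>    # nova.conf on each compute node: enables KVM writeback caching.
>    # Risky because dirty pages sit in host RAM, so a host crash can
>    # lose data the guest believes is flushed.
>    disk_cachemodes = "file=writeback"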
>
> Hope that helps!
>
>
>
> On Wed, Jul 24, 2013 at 11:32 AM, Razique Mahroua <
> razique.mahroua at gmail.com> wrote:
>
>> I had a lot of performance issues myself with Windows instances and
>> I/O-demanding instances. Make sure it fits your environment first before
>> deploying it in production.
>>
>> Regards,
>> Razique
>>
>> Razique Mahroua - Nuage & Co
>> razique.mahroua at gmail.com
>> Tel : +33 9 72 37 94 15
>>
>>
>> On Jul 24, 2013, at 16:25, Jacob Godin <jacobgodin at gmail.com> wrote:
>>
>> Hi Denis,
>>
>> I would take a look at GlusterFS with a distributed, replicated volume.
>> We have been using it for several months now, and it has been stable. Nova
>> will need the volume mounted at its instances directory (by default
>> /var/lib/nova/instances), and Cinder has direct support for Gluster as of
>> Grizzly, I believe.
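>>
>> A rough sketch of the setup (hostnames and brick paths below are
>> placeholders, not our actual layout):
>>
>>   # create and start a two-way replicated volume
>>   gluster volume create instances replica 2 \
>>       node1:/export/brick1 node2:/export/brick1
>>   gluster volume start instances
>>
>>   # on each compute node
>>   mount -t glusterfs node1:/instances /var/lib/nova/instances
>>
>> And for Cinder, assuming Grizzly's GlusterFS driver (check the option
>> names against your release):
>>
>>   # cinder.conf
>>   volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
>>   glusterfs_shares_config = /etc/cinder/shares.conf
>>   # /etc/cinder/shares.conf lists one share per line, e.g. node1:/volumes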
>>
>>
>>
>> On Wed, Jul 24, 2013 at 11:11 AM, Denis Loshakov <dloshakov at gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> I have an issue with creating shared storage for OpenStack. The main idea
>>> is to create 100% redundant shared storage from two servers (a kind of
>>> network RAID across two servers).
>>> I have two identical servers with many disks inside. What solution can
>>> anyone suggest for such a scheme? I need shared storage for running VMs (so
>>> live migration can work) and also for cinder-volumes.
>>>
>>> One solution is to install Linux on both servers and use DRBD + OCFS2;
>>> any comments on this?
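>>>
>>> In case it helps the discussion, here is a minimal sketch of what I have
>>> in mind; device names, hostnames, and addresses are placeholders. As far
>>> as I understand, OCFS2 needs DRBD in dual-primary mode to mount on both
>>> nodes, and its o2cb cluster stack must be configured separately (omitted
>>> here):
>>>
>>>   # /etc/drbd.d/r0.res (identical on both servers)
>>>   resource r0 {
>>>     net {
>>>       protocol C;                # synchronous replication
>>>       allow-two-primaries yes;   # needed so OCFS2 can mount on both nodes
>>>     }
>>>     on server1 {
>>>       device /dev/drbd0; disk /dev/sdb1;
>>>       address 10.0.0.1:7789; meta-disk internal;
>>>     }
>>>     on server2 {
>>>       device /dev/drbd0; disk /dev/sdb1;
>>>       address 10.0.0.2:7789; meta-disk internal;
>>>     }
>>>   }
>>>
>>>   # both nodes:      drbdadm create-md r0 && drbdadm up r0
>>>   # first node only: drbdadm primary --force r0   (initial sync)
>>>   # then mkfs.ocfs2 /dev/drbd0 and mount it on both nodes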
>>> I have also heard about the QuadStor software, which can create a network
>>> RAID and present it via iSCSI.
>>>
>>> Thanks.
>>>
>>> P.S. Glance uses Swift and is set up on other servers.
>>>
>>
>>
>>
>>
>
>