[Openstack-operators] Shared storage HA question

Razique Mahroua razique.mahroua at gmail.com
Wed Jul 24 18:47:23 UTC 2013


Not done yet; I'll still do some testing on it, but I don't expect much given the current topology.



MooseFS lacks a decentralized metadata server, but you can build an HA setup with an active/passive master/metalogger pair. I've been running such a setup for almost two years now without any issues so far.
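
For what it's worth, a rough sketch of that active/passive arrangement (hostnames and paths are placeholders, and the exact commands depend on your MooseFS release):

  # /etc/mfs/mfsmetalogger.cfg on the standby node
  MASTER_HOST = mfsmaster        # name (or VIP) that chunkservers and clients resolve

  # manual promotion if the active master dies:
  mfsmetarestore -a              # rebuild metadata.mfs from the metalogger's changelogs
  # repoint the "mfsmaster" name (DNS or floating IP) at this node, then start mfsmaster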

On Jul 24, 2013, at 20:37, Jacob Godin <jacobgodin at gmail.com> wrote:

> Oh really, you've done away with Gluster altogether? The fast backbone is definitely needed, but I would think that was the case with any distributed filesystem.
> 
> MooseFS looks promising, but apparently it has a few reliability problems.
> 
> 
> On Wed, Jul 24, 2013 at 3:31 PM, Razique Mahroua <razique.mahroua at gmail.com> wrote:
> :-)
> Actually, I had to remove all my instances running on it (especially the Windows ones); unfortunately my network backbone wasn't fast enough to support the load induced by GlusterFS, especially the numerous operations performed by the self-healing agents :(
> 
> I'm currently considering MooseFS; it has the advantage of a pretty long list of companies using it in production.
> 
> take care
> 
> 
> On Jul 24, 2013, at 16:40, Jacob Godin <jacobgodin at gmail.com> wrote:
> 
>> A few things I found were key for I/O performance:
>> - Make sure your network can sustain the traffic. We are using a 10G backbone with 2 bonded interfaces per node.
>> - Use high-speed drives. SATA will not cut it.
>> - Look into tuning settings. Razique, thanks for sending these along to me a little while back. A couple that I found were useful (example commands below):
>>   - KVM cache=writeback (a little risky, but WAY faster)
>>   - Gluster write-behind-window-size (set to 4MB in our setup)
>>   - Gluster cache-size (ideal values in our setup were 96MB-128MB)
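>> 
>> Roughly what those look like in practice (the volume name is a placeholder, and the exact Nova option name depends on your release, so treat this as a sketch rather than a recipe):
>> 
>>   # Gluster translator tuning, applied per volume
>>   gluster volume set nova-inst performance.write-behind-window-size 4MB
>>   gluster volume set nova-inst performance.cache-size 128MB
>> 
>>   # writeback caching for guest disks, i.e. what qemu does with
>>   #   -drive file=disk.img,cache=writeback
>>   # in nova.conf (newer releases expose this as disk_cachemodes):
>>   disk_cachemodes = "file=writeback"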
>> Hope that helps!
>> 
>> 
>> 
>> On Wed, Jul 24, 2013 at 11:32 AM, Razique Mahroua <razique.mahroua at gmail.com> wrote:
>> I had a lot of performance issues myself with Windows instances and other I/O-intensive instances. Make sure it fits your environment before deploying it in production.
>> 
>> Regards,
>> Razique
>> 
>> Razique Mahroua - Nuage & Co
>> razique.mahroua at gmail.com
>> Tel : +33 9 72 37 94 15
>> 
>> 
>> On Jul 24, 2013, at 16:25, Jacob Godin <jacobgodin at gmail.com> wrote:
>> 
>>> Hi Denis,
>>> 
>>> I would take a look into GlusterFS with a distributed, replicated volume. We have been using it for several months now, and it has been stable. Nova will need to have the volume mounted to its instances directory (default /var/lib/nova/instances), and Cinder has direct support for Gluster as of Grizzly I believe.
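>>> 
>>> For reference, roughly what that looks like end to end (the volume name, brick paths, and shares file are placeholders; check option names against your release):
>>> 
>>>   # on the storage nodes: a two-way replicated volume across both servers
>>>   gluster volume create nova-inst replica 2 server1:/export/brick1 server2:/export/brick1
>>>   gluster volume start nova-inst
>>> 
>>>   # on each compute node: mount it where Nova keeps instance disks
>>>   mount -t glusterfs server1:/nova-inst /var/lib/nova/instances
>>> 
>>>   # cinder.conf, using the Grizzly-era GlusterFS driver
>>>   volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
>>>   glusterfs_shares_config = /etc/cinder/glusterfs_shares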
>>> 
>>> 
>>> 
>>> On Wed, Jul 24, 2013 at 11:11 AM, Denis Loshakov <dloshakov at gmail.com> wrote:
>>> Hi all,
>>> 
>>> I have an issue creating shared storage for OpenStack. The main idea is to build 100% redundant shared storage out of two servers (a kind of network RAID across two servers).
>>> I have two identical servers with many disks inside. What solution can anyone suggest for such a scheme? I need shared storage for running VMs (so live migration can work) and also for cinder-volumes.
>>> 
>>> One solution is to install Linux on both servers and use DRBD + OCFS2; any comments on this?
>>> Also, I've heard about the Quadstor software, which can create a network RAID and present it via iSCSI.
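>>> 
>>> For the DRBD + OCFS2 option, a minimal sketch of the dual-primary side (node names, IPs, and devices are placeholders; OCFS2 also needs its o2cb cluster stack configured):
>>> 
>>>   # /etc/drbd.d/r0.res -- one resource mirrored between the two servers
>>>   resource r0 {
>>>     protocol C;
>>>     net     { allow-two-primaries; }
>>>     startup { become-primary-on both; }
>>>     on node1 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7788; meta-disk internal; }
>>>     on node2 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7788; meta-disk internal; }
>>>   }
>>> 
>>>   # then a cluster filesystem on top, mounted on both nodes:
>>>   mkfs.ocfs2 /dev/drbd0
>>>   mount -t ocfs2 /dev/drbd0 /var/lib/nova/instances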
>>> 
>>> Thanks.
>>> 
>>> P.S. Glance uses Swift and is set up on other servers.
>>> 
>>> _______________________________________________
>>> OpenStack-operators mailing list
>>> OpenStack-operators at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>> 
>> 
>> 
> 
> 
