[Openstack-operators] Shared storage HA question

Jacob Godin jacobgodin at gmail.com
Mon Jul 29 13:36:55 UTC 2013


Not a problem! It wasn't without its battles :P, and I would still love to
see performance come up even more. Might start looking into SSDs...


On Mon, Jul 29, 2013 at 10:35 AM, Joe Topjian <joe.topjian at cybera.ca> wrote:

> Thanks, Jacob! Good to know that an all-around high-speed Gluster
> environment should handle spikes of activity.
>
>
> On Fri, Jul 26, 2013 at 5:09 AM, Jacob Godin <jacobgodin at gmail.com> wrote:
>
>> Hi Joe,
>>
>> I'm using SAS drives currently. Gluster is configured as one volume, with
>> the thought being that the more spindles the better. If we were using SSDs,
>> this would probably be configured differently.
>>
>> I performed a speed test on a Windows instance, and write speeds seemed
>> consistent. Essentially, I spun up a Linux instance, made a 5 GB file, and
>> popped it into Apache. Then I went to my Windows instance (on the same
>> virtual network) and grabbed the file. It downloaded consistently at
>> 240-300 Mbps.
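>>
>> For anyone who wants to repeat the test, it was roughly the following
>> (the dd command and docroot path are just illustrative; adjust for your
>> distro):
>>
>>     # on the Linux instance: create a 5 GB file in Apache's docroot
>>     dd if=/dev/zero of=/var/www/test.img bs=1M count=5120
>>
>>     # then, from the Windows instance (same virtual network), download
>>     #     http://<linux-instance-ip>/test.img
>>     # and watch the sustained transfer rate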
>>
>>
>> On Wed, Jul 24, 2013 at 4:08 PM, Joe Topjian <joe.topjian at cybera.ca> wrote:
>>
>>> Hi Jacob,
>>>
>>> Are you using SAS or SSD drives for Gluster? Also, do you have one large
>>> Gluster volume across your entire cloud, or is it broken up into a few
>>> different ones? I've wondered if there's a benefit to the latter, so that
>>> distribution activity is isolated to only a few nodes. The downside, of
>>> course, is that you're limited in which compute nodes instances can
>>> migrate to.
>>>
>>> I use Gluster for instance storage in all of my "controlled"
>>> environments, like internal and sandbox clouds, but I'm hesitant to
>>> introduce it into production environments, as I've seen the same issues
>>> that Razique describes -- especially with Windows instances. My guess is
>>> that it's due to how NTFS writes to disk.
>>>
>>> I'm curious if you could report the results of the following test: in a
>>> Windows instance running on Gluster, copy a 3-4 GB file to it from the
>>> local network so it comes in at a very high speed. When I do this, the
>>> first few gigs come in very fast, but then the transfer slows to a crawl
>>> and the Gluster processes on all nodes spike.
>>>
>>> Thanks,
>>> Joe
>>>
>>>
>>>
>>> On Wed, Jul 24, 2013 at 12:37 PM, Jacob Godin <jacobgodin at gmail.com> wrote:
>>>
>>>> Oh really, you've done away with Gluster altogether? The fast backbone
>>>> is definitely needed, but I would think that's the case with any
>>>> distributed filesystem.
>>>>
>>>> MooseFS looks promising, but apparently it has a few reliability
>>>> problems.
>>>>
>>>>
>>>> On Wed, Jul 24, 2013 at 3:31 PM, Razique Mahroua <
>>>> razique.mahroua at gmail.com> wrote:
>>>>
>>>>> :-)
>>>>> Actually, I had to remove all my instances running on it (especially
>>>>> the Windows ones). Yeah, unfortunately my network backbone wasn't fast
>>>>> enough to support the load induced by GlusterFS -- especially the
>>>>> numerous operations performed by the self-healing agents :(
>>>>>
>>>>> I'm currently considering MooseFS; it has the advantage of a pretty
>>>>> long list of companies using it in production.
>>>>>
>>>>> take care
>>>>>
>>>>>
>>>>> On Jul 24, 2013, at 16:40, Jacob Godin <jacobgodin at gmail.com> wrote:
>>>>>
>>>>> A few things I found were key for I/O performance:
>>>>>
>>>>>    1. Make sure your network can sustain the traffic. We are using a
>>>>>    10G backbone with 2 bonded interfaces per node.
>>>>>    2. Use high speed drives. SATA will not cut it.
>>>>>    3. Look into tuning settings. Razique, thanks for sending these
>>>>>    along to me a little while back. A couple that I found were useful
>>>>>    (rough commands follow this list):
>>>>>       - KVM cache=writeback (a little risky, but WAY faster)
>>>>>       - Gluster write-behind-window-size (set to 4MB in our setup)
>>>>>       - Gluster cache-size (ideal values in our setup were 96MB-128MB)
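>>>>>
>>>>> Roughly, those settings look like the following (assuming a Gluster
>>>>> volume named "vms"; option names can vary between Gluster versions, so
>>>>> check "gluster volume set help"):
>>>>>
>>>>>     # Gluster translator tuning, run once from any node:
>>>>>     gluster volume set vms performance.write-behind-window-size 4MB
>>>>>     gluster volume set vms performance.cache-size 128MB
>>>>>
>>>>>     # KVM writeback caching, either per disk in the libvirt XML:
>>>>>     #     <driver name='qemu' type='qcow2' cache='writeback'/>
>>>>>     # or for all instances via nova.conf on the compute nodes:
>>>>>     #     disk_cachemodes = "file=writeback"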
>>>>>
>>>>> Hope that helps!
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Jul 24, 2013 at 11:32 AM, Razique Mahroua <
>>>>> razique.mahroua at gmail.com> wrote:
>>>>>
>>>>>> I had a lot of performance issues myself with Windows instances and
>>>>>> I/O-demanding instances. Make sure it fits your environment before
>>>>>> deploying it in production.
>>>>>>
>>>>>> Regards,
>>>>>> Razique
>>>>>>
>>>>>> Razique Mahroua - Nuage & Co
>>>>>> razique.mahroua at gmail.com
>>>>>> Tel : +33 9 72 37 94 15
>>>>>>
>>>>>> On Jul 24, 2013, at 16:25, Jacob Godin <jacobgodin at gmail.com> wrote:
>>>>>>
>>>>>> Hi Denis,
>>>>>>
>>>>>> I would take a look at GlusterFS with a distributed, replicated
>>>>>> volume. We have been using it for several months now, and it has been
>>>>>> stable. Nova will need the volume mounted on its instances directory
>>>>>> (default /var/lib/nova/instances), and Cinder has direct support for
>>>>>> Gluster as of Grizzly, I believe.
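>>>>>>
>>>>>> As a very rough sketch (hostnames, brick paths, and the volume name
>>>>>> are placeholders; check the Gluster and Cinder docs for your exact
>>>>>> versions):
>>>>>>
>>>>>>     # create and start a 2x2 distributed-replicated volume
>>>>>>     gluster volume create instances replica 2 \
>>>>>>         server1:/export/brick1 server2:/export/brick1 \
>>>>>>         server1:/export/brick2 server2:/export/brick2
>>>>>>     gluster volume start instances
>>>>>>
>>>>>>     # on each compute node, mount it as the Nova instances directory
>>>>>>     mount -t glusterfs server1:/instances /var/lib/nova/instances
>>>>>>
>>>>>>     # Grizzly-era cinder.conf, to put Cinder on the same cluster:
>>>>>>     #     volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
>>>>>>     #     glusterfs_shares_config = /etc/cinder/shares.conf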
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Jul 24, 2013 at 11:11 AM, Denis Loshakov <dloshakov at gmail.com> wrote:
>>>>>>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> I have an issue with creating shared storage for OpenStack. The main
>>>>>>> idea is to create 100% redundant shared storage from two servers (a
>>>>>>> kind of network RAID across two servers).
>>>>>>> I have two identical servers with many disks inside. What solution can
>>>>>>> anyone suggest for such a scheme? I need shared storage for running
>>>>>>> VMs (so live migration can work) and also for cinder-volumes.
>>>>>>>
>>>>>>> One solution is to install Linux on both servers and use DRBD +
>>>>>>> OCFS2 -- any comments on this?
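>>>>>>>
>>>>>>> Roughly, the DRBD side of that would be a dual-primary resource like
>>>>>>> the sketch below (device names and addresses are made up), with OCFS2
>>>>>>> then created on /dev/drbd0 and mounted on both nodes:
>>>>>>>
>>>>>>>     # /etc/drbd.d/r0.res -- network RAID-1 between the two servers
>>>>>>>     resource r0 {
>>>>>>>         protocol C;                # synchronous replication
>>>>>>>         net {
>>>>>>>             allow-two-primaries;   # needed for OCFS2 on both nodes
>>>>>>>         }
>>>>>>>         on server1 {
>>>>>>>             device    /dev/drbd0;
>>>>>>>             disk      /dev/sdb1;
>>>>>>>             address   10.0.0.1:7788;
>>>>>>>             meta-disk internal;
>>>>>>>         }
>>>>>>>         on server2 {
>>>>>>>             device    /dev/drbd0;
>>>>>>>             disk      /dev/sdb1;
>>>>>>>             address   10.0.0.2:7788;
>>>>>>>             meta-disk internal;
>>>>>>>         }
>>>>>>>     }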
>>>>>>> Also, I've heard about the Quadstor software, which can create a
>>>>>>> network RAID and present it via iSCSI.
>>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>>> P.S. Glance uses Swift and is set up on other servers.
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Joe Topjian
>>> Systems Architect
>>> Cybera Inc.
>>>
>>> www.cybera.ca
>>>
>>> Cybera is a not-for-profit organization that works to spur and support
>>> innovation, for the economic benefit of Alberta, through the use
>>> of cyberinfrastructure.
>>>
>>
>>
>
>
> --
> Joe Topjian
> Systems Architect
> Cybera Inc.
>
> www.cybera.ca
>
> Cybera is a not-for-profit organization that works to spur and support
> innovation, for the economic benefit of Alberta, through the use
> of cyberinfrastructure.
>

