[Openstack-operators] Shared storage HA question

Stephane Boisvert stephane.boisvert at gameloft.com
Wed Jul 24 15:41:55 UTC 2013


No need to 'terminate' them? Just powering them off and back on will do it?



On 13-07-24 11:39 AM, Jacob Godin wrote:
> Hi Stephane,
>
> If you have any existing instances, you will need to completely power 
> them off and back on again for the change to take effect.
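>
> A minimal sketch with the nova CLI (the instance name here is hypothetical):
>
>     nova stop  my-instance    # power off without terminating/deleting it
>     nova start my-instance    # boot again so the new cache mode is applied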
>
>
> On Wed, Jul 24, 2013 at 12:30 PM, Stephane Boisvert 
> <stephane.boisvert at gameloft.com 
> <mailto:stephane.boisvert at gameloft.com>> wrote:
>
>     Thanks for the quick answer. I already did that, but it doesn't seem
>     to be taken into account... I'll test it again and open a new thread
>     if it still fails.
>
>     Thanks Jacob
>
>
>
>     On 13-07-24 11:20 AM, Jacob Godin wrote:
>>     Hi Stephane,
>>
>>     This is actually done in Nova with the config
>>     directive disk_cachemodes="file=writeback"
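>>
>>     For reference, a rough sketch of where that lives (assuming the
>>     libvirt/KVM driver; the file path is the usual default):
>>
>>         # /etc/nova/nova.conf on each compute node
>>         [DEFAULT]
>>         disk_cachemodes="file=writeback"
>>
>>     nova-compute needs a restart afterwards, and already-running
>>     instances only pick it up after a full stop/start.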
>>
>>
>>     On Wed, Jul 24, 2013 at 11:47 AM, Stephane Boisvert
>>     <stephane.boisvert at gameloft.com> wrote:
>>
>>         Sorry to jump into this thread, but I did set cache=true in my
>>         Ceph config... where can I set cache=writeback?
>>
>>
>>         thanks for your help
>>
>>         On 13-07-24 10:40 AM, Jacob Godin wrote:
>>>         A few things I found were key for I/O performance:
>>>
>>>          1. Make sure your network can sustain the traffic. We are
>>>             using a 10G backbone with 2 bonded interfaces per node.
>>>          2. Use high speed drives. SATA will not cut it.
>>>          3. Look into tuning settings. Razique, thanks for sending
>>>             these along to me a little while back. A couple that I
>>>             found were useful:
>>>               * KVM cache=writeback (a little risky, but WAY faster)
>>>               * Gluster write-behind-window-size (set to 4MB in our
>>>                 setup)
>>>               * Gluster cache-size (ideal values in our setup were
>>>                 96MB-128MB; see the CLI sketch below)
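>>>
>>>         For reference, those two options can be applied with the gluster
>>>         CLI (a sketch; "nova-vol" is a hypothetical volume name):
>>>
>>>             gluster volume set nova-vol performance.write-behind-window-size 4MB
>>>             gluster volume set nova-vol performance.cache-size 128MB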
>>>
>>>         Hope that helps!
>>>
>>>
>>>
>>>         On Wed, Jul 24, 2013 at 11:32 AM, Razique Mahroua
>>>         <razique.mahroua at gmail.com> wrote:
>>>
>>>             I had many performance issues myself with Windows instances
>>>             and I/O-demanding instances. Make sure it fits your
>>>             environment first before deploying it in production.
>>>
>>>             Regards,
>>>             Razique
>>>
>>>             Razique Mahroua - Nuage & Co
>>>             razique.mahroua at gmail.com
>>>             Tel: +33 9 72 37 94 15
>>>
>>>
>>>             On 24 Jul 2013, at 16:25, Jacob Godin
>>>             <jacobgodin at gmail.com> wrote:
>>>
>>>>             Hi Denis,
>>>>
>>>>             I would take a look at GlusterFS with a distributed,
>>>>             replicated volume. We have been using it for several
>>>>             months now, and it has been stable. Nova will need the
>>>>             volume mounted at its instances directory
>>>>             (default /var/lib/nova/instances), and Cinder has
>>>>             direct support for Gluster as of Grizzly, I believe.
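>>>>
>>>>             A rough sketch of that layout on two nodes (hostnames,
>>>>             brick paths and the volume name are all hypothetical):
>>>>
>>>>                 # 2x2 distributed-replicated volume, each replica pair spans both nodes
>>>>                 gluster volume create nova-vol replica 2 \
>>>>                     node1:/bricks/b1 node2:/bricks/b1 \
>>>>                     node1:/bricks/b2 node2:/bricks/b2
>>>>                 gluster volume start nova-vol
>>>>
>>>>                 # on each compute node, mount it at the Nova instances directory
>>>>                 mount -t glusterfs node1:/nova-vol /var/lib/nova/instances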
>>>>
>>>>
>>>>
>>>>             On Wed, Jul 24, 2013 at 11:11 AM, Denis Loshakov
>>>>             <dloshakov at gmail.com> wrote:
>>>>
>>>>                 Hi all,
>>>>
>>>>                 I have an issue with creating shared storage for
>>>>                 OpenStack. The main idea is to build 100% redundant
>>>>                 shared storage out of two servers (a kind of network
>>>>                 RAID across the two machines).
>>>>                 I have two identical servers with many disks inside.
>>>>                 What solution can anyone suggest for such a setup? I
>>>>                 need shared storage for running VMs (so live migration
>>>>                 can work) and also for cinder-volumes.
>>>>
>>>>                 One solution is to install Linux on both servers and
>>>>                 use DRBD + OCFS2; any comments on this?
>>>>                 I have also heard about the Quadstor software, which
>>>>                 can create a network RAID and present it via iSCSI.
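>>>>
>>>>                 For what it's worth, a dual-primary DRBD resource is
>>>>                 roughly what OCFS2 would sit on top of (only a sketch;
>>>>                 the device, backing disks and addresses are hypothetical):
>>>>
>>>>                     resource r0 {
>>>>                         net { allow-two-primaries; }   # both nodes active, needed for OCFS2
>>>>                         on server1 {
>>>>                             device    /dev/drbd0;
>>>>                             disk      /dev/sdb1;
>>>>                             address   192.168.1.1:7788;
>>>>                             meta-disk internal;
>>>>                         }
>>>>                         on server2 {
>>>>                             device    /dev/drbd0;
>>>>                             disk      /dev/sdb1;
>>>>                             address   192.168.1.2:7788;
>>>>                             meta-disk internal;
>>>>                         }
>>>>                     }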
>>>>
>>>>                 Thanks.
>>>>
>>>>                 P.S. Glance uses Swift and is set up on separate
>>>>                 servers.
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>>
>
>
>
>


-- 
Stéphane Boisvert
GNS-Shop Technical Coordinator
5800 St-Denis, suite 1001
Montreal (QC), H2S 3L5
MSN: stephane.boisvert at gameloft.com
E-mail: stephane.boisvert at gameloft.com
