<div dir="ltr">Thanks for the information, Mike. I would initially test with GlusterFS.</div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Sep 18, 2014 at 8:19 PM, Mike Smith <span dir="ltr"><<a href="mailto:mismith@overstock.com" target="_blank">mismith@overstock.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<div style="direction:ltr;font-family:Tahoma;color:#000000;font-size:10pt">We run GFS2 for our shared instances volume and it has worked very well for us. We leverage CLVM and fiber-connected SAN LUNs on the backend to do that. It's great for
live migrations, etc.
<div><br>
</div>
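For anyone unfamiliar with that stack, a minimal sketch of what a GFS2-on-CLVM setup looks like (the cluster name, LUN path, and mount point below are placeholders; it assumes a working corosync/pacemaker cluster with DLM and clustered LVM locking already running on all nodes):

```shell
# Assumes a running cluster named "mycluster" and a shared SAN LUN
# visible on every node as /dev/mapper/san_lun0 (placeholder path).

# Create a clustered volume group on the shared LUN
pvcreate /dev/mapper/san_lun0
vgcreate --clustered y vg_shared /dev/mapper/san_lun0
lvcreate -L 500G -n lv_instances vg_shared

# Make a GFS2 filesystem with DLM locking; one journal per node (4 here)
mkfs.gfs2 -p lock_dlm -t mycluster:instances -j 4 /dev/vg_shared/lv_instances

# Mount at the Nova instances path on every compute node
mount -t gfs2 /dev/vg_shared/lv_instances /var/lib/nova/instances
```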
<div><span style="font-size:10pt">I'm told that GFS2 performance degrades significantly once you mount the filesystem on more than about 16 nodes.</span><span style="font-size:10pt"> </span>We are planning on moving to GlusterFS or Ceph in the future for
a couple of reasons:</div>
<div><br>
</div>
<div>- Ceph and Gluster scale out more linearly</div>
<div>- We want to use more commodity-type hardware in remote data centers</div>
<div>- We don't want any cluster/quorum-related issues to take down the shared storage</div>
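As a rough illustration of the commodity-hardware point, a GlusterFS replicated volume needs nothing more than a few nodes running glusterd (hostnames and brick paths below are hypothetical):

```shell
# Hypothetical hosts gluster1..gluster3, each running glusterd,
# each with a local brick directory at /bricks/instances.
gluster peer probe gluster2
gluster peer probe gluster3

# A replica-3 volume avoids the quorum fragility of a 2-node setup
gluster volume create instances replica 3 \
    gluster1:/bricks/instances \
    gluster2:/bricks/instances \
    gluster3:/bricks/instances
gluster volume start instances

# On each compute node, mount via the FUSE client
mount -t glusterfs gluster1:/instances /var/lib/nova/instances
```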
<div><br>
</div>
<div>
<div>
<div>
<div><br>
</div>
<div>
<div>
<div style="font-family:Tahoma;font-size:13px">
<div><br>
</div>
<div>Mike Smith</div>
<div>Principal Engineer, Website Systems</div>
<div>Overstock.com</div>
<div><br>
</div>
<div><br>
</div>
</div>
</div>
<div style="font-family:Times New Roman;color:#000000;font-size:16px">
<hr>
<div style="direction:ltr"><font face="Tahoma" color="#000000"><b>From:</b> yasith tharindu [<a href="mailto:yasithucsc@gmail.com" target="_blank">yasithucsc@gmail.com</a>]<br>
<b>Sent:</b> Thursday, September 18, 2014 4:06 AM<br>
<b>To:</b> Robert van Leeuwen<br>
<b>Cc:</b> <a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a><br>
<b>Subject:</b> Re: [Openstack] What is the best network architecture for Openstack<br>
</font><br>
</div><div><div class="h5">
<div></div>
<div>
<div dir="ltr">Thanks for the reply, Robert.
<div><br>
</div>
<div>Is GFS stable now? Are there OpenStack production setups running with GFS?</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Sep 18, 2014 at 12:17 PM, Robert van Leeuwen <span dir="ltr">
<<a href="mailto:Robert.vanLeeuwen@spilgames.com" target="_blank">Robert.vanLeeuwen@spilgames.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<span>> We have four servers and one SAS data store for the openstack deployment. All servers have SAS<br>
> interfaces. We are going to put one controller node and 3 compute nodes. We need to run the system with<br>
> live migration enabled.<br>
><br>
</span><span>> Are there other options that would let the compute nodes use their SAS interfaces directly while keeping live migration<br>
> enabled?<br>
<br>
</span>The only way to directly access the same data on the SAS data store would be by running a clustered filesystem (e.g. GFS).<br>
Clustered filesystems are quite a bit more complex than regular filesystems, so I'm not sure that would be the preferred route, though.<br>
<br>
<br>
Cheers,<br>
Robert van Leeuwen<br>
<br>
<br>
<br>
<br>
<br>
<br>
</blockquote>
</div>
<br>
<br clear="all">
<div><br>
</div>
-- <br>
Thanks..<br>
Regards...<br>
<br>
Blog: <a href="http://www.yasith.info" target="_blank">http://www.yasith.info</a><br>
Twitter : <a href="http://twitter.com/yasithnd" target="_blank">http://twitter.com/yasithnd</a><br>
LinkedIn : <a href="http://www.linkedin.com/in/yasithnd" target="_blank">http://www.linkedin.com/in/yasithnd</a><br>
<div>GPG Key ID : <b>57CEE66E</b></div>
</div>
</div>
</div></div></div>
</div>
</div>
</div>
</div>
</div>
<br>
<hr>
<font face="Arial" color="Gray" size="1"><br>
CONFIDENTIALITY NOTICE: This message is intended only for the use and review of the individual or entity to which it is addressed and may contain information that is privileged and confidential. If the reader of this message is not the intended recipient, or
the employee or agent responsible for delivering the message solely to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in
error, please notify sender immediately by telephone or return email. Thank you.<br>
</font>
</div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>Thanks..<br>Regards...<br><br>Blog: <a href="http://www.yasith.info" target="_blank">http://www.yasith.info</a><br>Twitter : <a href="http://twitter.com/yasithnd" target="_blank">http://twitter.com/yasithnd</a><br>LinkedIn : <a href="http://www.linkedin.com/in/yasithnd" target="_blank">http://www.linkedin.com/in/yasithnd</a><br><div>GPG Key ID : <b>57CEE66E</b></div>
</div>