+1 :)

On 24 Jul 2013, at 21:08, Joe Topjian <joe.topjian@cybera.ca> wrote:

Hi Jacob,

Are you using SAS or SSD drives for Gluster? Also, do you have one large Gluster volume across your entire cloud, or is it broken up into a few different ones? I've wondered if there's a benefit to the latter, so that distribution activity is isolated to only a few nodes. The downside to that, of course, is that you're limited in which compute nodes instances can migrate to.

I use Gluster for instance storage in all of my "controlled" environments, like internal and sandbox clouds, but I'm hesitant to introduce it into production environments, as I've seen the same issues that Razique describes -- especially with Windows instances. My guess is that it's due to how NTFS writes to disk.

I'm curious whether you could report the results of the following test: in a Windows instance running on Gluster, copy a 3-4 GB file to it from the local network so it comes in at a very high speed. When I do this, the first few gigabytes come in very fast, but the transfer then slows to a crawl and the Gluster processes on all nodes spike.
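
(To make the test reproducible, here is a minimal sketch of it, assuming a Linux host on the same LAN and an SMB share exposed by the Windows guest -- the hostname, share name, and user below are placeholders:)

    # Generate a 4 GB test file on a host on the same local network
    dd if=/dev/zero of=big.img bs=1M count=4096

    # Push it to the Windows instance over SMB at full LAN speed
    # (windows-vm, testshare, and Administrator are placeholders)
    smbclient //windows-vm/testshare -U Administrator -c 'put big.img'

    # Meanwhile, watch the Gluster brick daemons on each storage node
    top -p "$(pgrep -d, glusterfsd)"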

Thanks,
Joe

On Wed, Jul 24, 2013 at 12:37 PM, Jacob Godin <jacobgodin@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Oh really, you've done away with Gluster all together? The fast backbone is definitely needed, but I would think that was the case with any distributed filesystem.<div>

MooseFS looks promising, but apparently it has a few reliability problems.
</div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jul 24, 2013 at 3:31 PM, Razique Mahroua <span dir="ltr"><<a href="mailto:razique.mahroua@gmail.com" target="_blank">razique.mahroua@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word">:-)<div>Actually I had to remove all my instances running on it (especially the windows ones), yah unfortunately my network backbone wasn't fast enough to support the load induced by GFS - especially the numerous operations performed by the self-healing agents :(</div>

I'm currently considering MooseFS; it has the advantage of a pretty long list of companies using it in production.

Take care

On 24 Jul 2013, at 16:40, Jacob Godin <jacobgodin@gmail.com> wrote:
<br><blockquote type="cite"><div><div dir="ltr">A few things I found were key for I/O performance:<div><ol><li>Make sure your network can sustain the traffic. We are using a 10G backbone with 2 bonded interfaces per node.</li>
2. Use high-speed drives. SATA will not cut it.
3. Look into tuning settings. Razique, thanks for sending these along to me a little while back. A couple that I found useful (a sketch of how to apply them follows the list):
   - KVM cache=writeback (a little risky, but WAY faster)
   - Gluster write-behind-window-size (set to 4MB in our setup)
   - Gluster cache-size (ideal values in our setup were 96MB-128MB)
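
(A minimal sketch of applying those three settings, assuming a Gluster volume named "instances" and Nova's libvirt driver -- the volume name is a placeholder, and writeback caching trades crash safety for speed, so test it outside production first:)

    # Gluster translator tuning -- run on any node in the trusted pool
    gluster volume set instances performance.write-behind-window-size 4MB
    gluster volume set instances performance.cache-size 128MB

    # KVM writeback caching can be enabled in nova.conf:
    #   disk_cachemodes="file=writeback"
    # which libvirt renders in the guest XML as:
    #   <driver name='qemu' type='qcow2' cache='writeback'/>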

Hope that helps!

On Wed, Jul 24, 2013 at 11:32 AM, Razique Mahroua <razique.mahroua@gmail.com> wrote:

I had a lot of performance issues myself with Windows instances and I/O-demanding instances. Make sure it fits your environment before deploying it in production.

Regards,
Razique
<span style="border-spacing:0px;text-indent:0px;letter-spacing:normal;font-variant:normal;text-align:-webkit-auto;font-style:normal;font-weight:normal;line-height:normal;border-collapse:separate;text-transform:none;font-size:medium;white-space:normal;font-family:'Lucida Grande';word-spacing:0px"><span style="font-weight:normal;font-family:Helvetica"><b style="color:rgb(19,112,138)">Razique Mahroua</b></span><span style="font-weight:normal;font-family:Helvetica;color:rgb(19,112,138)"><b> - </b></span><span style="font-family:Helvetica"><span style="font-weight:normal;font-family:Helvetica"><b style="color:rgb(19,112,138)">Nuage & Co</b></span><span style="border-collapse:separate;font-family:Helvetica;font-style:normal;font-variant:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;border-spacing:0px;font-size:medium"><span style="border-spacing:0px;text-indent:0px;letter-spacing:normal;font-variant:normal;text-align:-webkit-auto;font-style:normal;font-weight:normal;line-height:normal;border-collapse:separate;text-transform:none;font-size:medium;white-space:normal;font-family:Helvetica;word-spacing:0px"><span style="border-collapse:separate;font-variant:normal;letter-spacing:normal;line-height:normal;text-align:-webkit-auto;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;border-spacing:0px"><div style="font-style:normal;font-size:medium;font-family:Helvetica;font-weight:normal">
<font color="#13708a"><a href="mailto:razique.mahroua@gmail.com" target="_blank">razique.mahroua@gmail.com</a></font></div><div style="font-style:normal;font-size:medium;font-family:Helvetica"><font color="#13708a">Tel : <a href="tel:%2B33%209%2072%2037%2094%2015" value="+33972379415" target="_blank">+33 9 72 37 94 15</a></font></div>
</span></span></span></span></span><br style="text-indent:0px;letter-spacing:normal;font-variant:normal;text-align:-webkit-auto;font-style:normal;font-weight:normal;line-height:normal;text-transform:none;font-size:medium;white-space:normal;font-family:Arial;word-spacing:0px">

On 24 Jul 2013, at 16:25, Jacob Godin <jacobgodin@gmail.com> wrote:

Hi Denis,

I would take a look at GlusterFS with a distributed, replicated volume. We have been using it for several months now, and it has been stable. Nova will need the volume mounted at its instances directory (default /var/lib/nova/instances), and Cinder has direct support for Gluster as of Grizzly, I believe.
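
(For reference, a minimal sketch of that layout, assuming two storage nodes named gfs01 and gfs02 -- hostnames and brick paths are placeholders:)

    # Create and start a two-way replicated volume across two nodes
    gluster volume create nova-instances replica 2 \
        gfs01:/export/brick1 gfs02:/export/brick1
    gluster volume start nova-instances

    # On each compute node, mount it at Nova's instances directory
    mount -t glusterfs gfs01:/nova-instances /var/lib/nova/instances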

On Wed, Jul 24, 2013 at 11:11 AM, Denis Loshakov <dloshakov@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi all,<br>
<br>
I have issue with creating shared storage for Openstack. Main idea is to create 100% redundant shared storage from two servers (kind of network RAID from two servers).<br>
I have two identical servers with many disks inside. What solution can any one provide for such schema? I need shared storage for running VMs (so live migration can work) and also for cinder-volumes.<br>

One solution is to install Linux on both servers and use DRBD + OCFS2 -- any comments on this?
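
(For what it's worth, a minimal sketch of the DRBD half of that setup, assuming a resource named r0 and placeholder hostnames, disks, and addresses; OCFS2 additionally needs the o2cb cluster stack configured on both nodes:)

    # /etc/drbd.d/r0.res -- same file on both nodes (DRBD 8.4 syntax)
    resource r0 {
        net {
            protocol C;                # synchronous replication
            allow-two-primaries yes;   # required for OCFS2 dual-primary
        }
        on node1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.0.1:7788;
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.0.2:7788;
            meta-disk internal;
        }
    }

    # Initialise and bring the resource up on both nodes
    drbdadm create-md r0 && drbdadm up r0

    # Force-promote the first node to start the initial sync, promote
    # the second once connected, then create the filesystem one time
    drbdadm primary --force r0   # first node only
    drbdadm primary r0           # second node
    mkfs.ocfs2 /dev/drbd0        # run once, from either node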

I also heard about the Quadstor software; it can create a network RAID and present it via iSCSI.

Thanks.

P.S. Glance uses Swift and is set up on other servers.

--
Joe Topjian
Systems Architect
Cybera Inc.

www.cybera.ca

Cybera is a not-for-profit organization that works to spur and support innovation, for the economic benefit of Alberta, through the use of cyberinfrastructure.