<div dir="ltr">From some testing I did a couple of months ago, we decided to move to XFS to avoid this issue.<div><br></div><div><div style="font-family:arial,sans-serif;font-size:13px"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
I was poking around after my file system inadvertently filled and found that in ext3/4 all of the inodes in the file system have to be zeroed before mkfs completes (unless the kernel is 2.6.37 or later, in which case the inode table is lazily initialized in the background after the first mount). Initializing the inode table with the current default bytes-per-inode ratio of 16 KiB and an inode size of 256 B results in 16 GiB of inodes per 1 TiB of volume.<br>
</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> </blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
There appear to be two viable options for reducing the <span class="">format</span> time.<br><span style="font-family:arial,sans-serif;font-size:13px">1. We can increase the bytes-per-inode value during mkfs, at the cost of the total number of files the file system can store. Raising bytes-per-inode to 256 KiB, so that only 1 GiB per TiB is initialized, would result in 4 Mi inodes per TiB of disk (down from 64 Mi).</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px">2. We can </span><span class="" style="font-family:arial,sans-serif;font-size:13px">format</span><span style="font-family:arial,sans-serif;font-size:13px"> all non-OS partitions as XFS, which does far less upfront allocation. By my observation, it appears to initialize around 700 MiB per TiB.</span></blockquote>
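The inode-table sizes quoted above follow directly from the bytes-per-inode ratio and the 256-byte inode size; a quick back-of-the-envelope check in shell arithmetic:

```shell
# Sanity-check the inode counts and table sizes quoted above.
TiB=$(( 1 << 40 ))

# Default mkfs.ext4: one 256-byte inode per 16 KiB of disk.
default_inodes=$(( TiB / (16 * 1024) ))
default_table=$(( default_inodes * 256 ))
echo "default: ${default_inodes} inodes ($(( default_table >> 30 )) GiB of inode tables per TiB)"

# With bytes-per-inode raised to 256 KiB:
sparse_inodes=$(( TiB / (256 * 1024) ))
sparse_table=$(( sparse_inodes * 256 ))
echo "sparse:  ${sparse_inodes} inodes ($(( sparse_table >> 30 )) GiB of inode tables per TiB)"
```

This prints 67108864 (64 Mi) inodes and 16 GiB of tables for the default ratio, versus 4194304 (4 Mi) inodes and 1 GiB of tables at 256 KiB bytes-per-inode, matching the numbers above.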
</div><div><br></div></div><div>The performance increase you saw is likely due to the deferred initialization, which will still occur on the first mount of the device (but in the background). However, this won't happen on RHEL 4, 5, or 6, as those kernels don't support it, so they will still sit there and require the pre-allocation. You can also cut down the inode count by making the bytes-per-inode ratio larger, as I initially tested, which could be OK since most images are many MiB and usually many GiB.</div>
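For reference, these are the mkfs knobs the options above correspond to. A sketch only: /dev/sdX1 is a hypothetical device, the commands must run as root, and they destroy any data on the target.

```shell
# Option 1: raise bytes-per-inode to 256 KiB (-i takes bytes),
# so far fewer inode tables are zeroed at format time.
mkfs.ext4 -i 262144 /dev/sdX1

# Variant: defer inode-table zeroing to a background job after the
# first mount (needs kernel >= 2.6.37 and a recent e2fsprogs).
mkfs.ext4 -E lazy_itable_init=1 /dev/sdX1

# Option 2: use XFS, which allocates inodes dynamically and does
# much less upfront work at mkfs time.
mkfs.xfs /dev/sdX1
```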
<div><br></div><div>Just my feedback on what you were seeing; moving to ext4 is preferred over ext3.</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Dec 19, 2013 at 12:30 PM, Sean Dague <span dir="ltr"><<a href="mailto:sean@dague.net" target="_blank">sean@dague.net</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On 12/19/2013 03:21 PM, Robert Collins wrote:<br>
> The default ephemeral filesystem in Nova is ext3 (for Linux). However<br>
> ext3 is IMNSHO a pretty poor choice given ext4's existence. I can<br>
> totally accept that other fs's like xfs might be contentious - but is<br>
> there any reason not to make ext4 the default?<br>
><br>
> I'm not aware of any distro that doesn't have ext4 support - even RHEL<br>
> defaults to ext4 in RHEL5.<br>
><br>
> The reason I'm raising this is that making a 1TB ext3 ephemeral volume<br>
> does (way) over 5GB of writes due to zeroing all the inode tables, but<br>
> an ext4 one does less than 1% of the IO - 14m vs 7seconds in my brief<br>
> testing. (We were investigating why baremetal deploys were slow :)).<br>
><br>
> -Rob<br>
<br>
</div>Seems like a fine change to me. I assume that's all just historical<br>
artifact.<br>
<span class="HOEnZb"><font color="#888888"><br>
-Sean<br>
<br>
--<br>
Sean Dague<br>
Samsung Research America<br>
<a href="mailto:sean@dague.net">sean@dague.net</a> / <a href="mailto:sean.dague@samsung.com">sean.dague@samsung.com</a><br>
<a href="http://dague.net" target="_blank">http://dague.net</a><br>
<br>
</font></span><br>_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<br></blockquote></div><br><br clear="all"><div><br></div>-- <br>If google has done it, Google did it right!
</div>