[Openstack-operators] large high-performance ephemeral storage

Joe Topjian joe at topjian.net
Wed Jun 13 14:59:07 UTC 2018


fio is fine with me. I'll lazily defer to your expertise on the right fio
commands to run for each case. :)

If we're going to test within the guest, that's going to introduce a new
set of variables, right? Should we settle on a standard flavor (maybe two
if we wanted to include both virtio and virtio-scsi) or should the results
make note of what local configuration was used?
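(For the virtio vs virtio-scsi comparison, as far as I know the disk bus is
selected via image properties rather than the flavor, so maybe we just record
the image/flavor combination used. E.g., hypothetically:

  openstack image set --property hw_disk_bus=scsi \
      --property hw_scsi_model=virtio-scsi <image-uuid>

versus leaving hw_disk_bus at the virtio default.)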

On Wed, Jun 13, 2018 at 8:45 AM, Blair Bethwaite <blair.bethwaite at gmail.com>
wrote:

> Hey Joe,
>
> Thanks! So shall we settle on fio as a standard IO micro-benchmarking
> tool? Seems to me the minimum we want is throughput- and IOPS-oriented tests
> for both the guest OS workload profile and some sort of large working
> set application workload. For the latter it is probably best to ignore
> multiple files and focus solely on queue depth for parallelism, some sort
> of mixed block size profile/s, and some sort of r/w mix (where write <= 50%,
> to acknowledge this is ephemeral storage, so hopefully something is using it
> soon after storing). Thoughts?
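>
> Concretely, I'm imagining fio invocations along these lines (just a
> sketch; the file path, sizes and runtimes are placeholders and would need
> tuning so the working set exceeds any caches):
>
>   # guest OS profile: small-block random IO, shallow queue depth
>   fio --name=guest-os --filename=/mnt/ephemeral/fio.dat --size=20G \
>       --direct=1 --ioengine=libaio --rw=randrw --rwmixwrite=30 --bs=4k \
>       --iodepth=4 --runtime=300 --time_based --group_reporting
>
>   # large working set profile: deep queue, mixed block sizes, write <= 50%
>   fio --name=big-workingset --filename=/mnt/ephemeral/fio.dat --size=200G \
>       --direct=1 --ioengine=libaio --rw=randrw --rwmixwrite=50 \
>       --bsrange=4k-1m --iodepth=32 --runtime=600 --time_based --group_reporting
>
>   # plus a streaming run for raw throughput
>   fio --name=seq-write --filename=/mnt/ephemeral/fio.dat --size=200G \
>       --direct=1 --ioengine=libaio --rw=write --bs=1M --iodepth=16 \
>       --runtime=300 --time_based --group_reporting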
>
> Cheers,
> Blair
>
> On Thu., 14 Jun. 2018, 00:24 Joe Topjian, <joe at topjian.net> wrote:
>
>> Yes, you can! The kernel documentation for read/write limits actually
>> uses /dev/null in the examples :)
>>
>> But more seriously: while we have not architected specifically for high
>> performance, for the past few years we have used a zpool of cheap spindle
>> disks plus 1-2 SSDs for caching. We have ZFS configured for
>> deduplication, which helps for the base images but not so much for
>> ephemeral data.
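>>
>> (Purely illustrative -- device names and the redundancy layout below are
>> placeholders, not our exact pool config:
>>
>>   zpool create ephemeral raidz2 sda sdb sdc sdd sde sdf cache nvme0n1
>>   zfs set dedup=on ephemeral
>>
>> ...and then point nova's instances_path at a dataset in the pool.)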
>>
>> If you have a standard benchmark command in mind to run, I'd be happy to
>> post the results. Maybe others could do the same to create some type of
>> matrix?
>>
>> On Wed, Jun 13, 2018 at 8:18 AM, Blair Bethwaite <
>> blair.bethwaite at gmail.com> wrote:
>>
>>> Hi Jay,
>>>
>>> Ha, I'm sure there's some wisdom hidden behind the trolling here?
>>>
>>> Believe me, I have tried to push these sorts of use-cases toward volume
>>> or share storage, but in the research/science domain there is often more
>>> accessible funding available to throw at infrastructure stop-gaps than
>>> software engineering (parallelism is hard). PS: when I say ephemeral I
>>> don't necessarily mean they aren't doing backups or otherwise caring that
>>> they have 100+TB of data on a standalone host.
>>>
>>> PS: I imagine you can set QoS limits on /dev/null these days via CPU
>>> cgroups...
>>>
>>> Cheers,
>>>
>>>
>>> On Thu., 14 Jun. 2018, 00:03 Jay Pipes, <jaypipes at gmail.com> wrote:
>>>
>>>> On 06/13/2018 09:58 AM, Blair Bethwaite wrote:
>>>> > Hi all,
>>>> >
>>>> > Wondering if anyone can share experience with architecting Nova KVM
>>>> > boxes for large capacity high-performance storage? We have some
>>>> > particular use-cases that want both high IOPS and large capacity
>>>> > local storage.
>>>> >
>>>> > In the past we have used bcache with an SSD-based RAID0 as a
>>>> > write-through cache for a hardware (PERC) backed RAID volume. This
>>>> > seemed to work ok, but we never really gave it a hard time. I guess if
>>>> > we followed a similar pattern today we would use lvmcache (or are
>>>> > people still using bcache with confidence?) with a few TB of NVMe and
>>>> > a NL-SAS array with write cache.
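>>>> >
>>>> > Roughly what I'm imagining (only a sketch -- the VG name, devices and
>>>> > sizes are placeholders; lvmcache defaults to writethrough mode):
>>>> >
>>>> >   pvcreate /dev/sdb /dev/nvme0n1            # NL-SAS array + NVMe
>>>> >   vgcreate nova-ephemeral /dev/sdb /dev/nvme0n1
>>>> >   lvcreate -n instances -l 100%PVS nova-ephemeral /dev/sdb
>>>> >   lvcreate --type cache-pool -L 2T -n nvmecache nova-ephemeral /dev/nvme0n1
>>>> >   lvconvert --type cache --cachepool nova-ephemeral/nvmecache \
>>>> >       nova-ephemeral/instances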
>>>> >
>>>> > Is the collective wisdom to use LVM-based instances for these
>>>> > use-cases? Putting a host filesystem with qcow2-based disk images on
>>>> > it can't help performance-wise... Though we have not used LVM-based
>>>> > instance storage before, are there any significant gotchas? And
>>>> > furthermore, is it possible to set IO QoS limits on these?
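>>>> >
>>>> > (By LVM-based instances I mean images_type = lvm in nova.conf, and by
>>>> > QoS limits something like the front-end throttling the libvirt driver
>>>> > can apply from flavor extra specs -- the flavor name and numbers below
>>>> > are only examples:
>>>> >
>>>> >   # nova.conf on the compute hosts
>>>> >   [libvirt]
>>>> >   images_type = lvm
>>>> >   images_volume_group = nova-ephemeral
>>>> >
>>>> >   # per-flavor disk limits
>>>> >   openstack flavor set bigdata.large \
>>>> >       --property quota:disk_read_iops_sec=20000 \
>>>> >       --property quota:disk_write_iops_sec=10000 \
>>>> >       --property quota:disk_read_bytes_sec=1000000000 \
>>>> >       --property quota:disk_write_bytes_sec=500000000
>>>> >
>>>> > ...assuming those quota knobs behave the same for LVM-backed disks.)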
>>>>
>>>> I've found /dev/null to be the fastest ephemeral storage system, bar
>>>> none.
>>>>
>>>> Not sure if you can set QoS limits on it though.
>>>>
>>>> Best,
>>>> -jay
>>>>