[Openstack-operators] large high-performance ephemeral storage

Blair Bethwaite blair.bethwaite at gmail.com
Wed Jun 13 14:18:11 UTC 2018


Hi Jay,

Ha, I'm sure there's some wisdom hidden behind the trolling here?

Believe me, I have tried to push these sorts of use-cases toward volume
or share storage, but in the research/science domain there is often more
accessible funding available to throw at infrastructure stop-gaps than
at software engineering (parallelism is hard). And when I say ephemeral,
I don't necessarily mean the users aren't doing backups or otherwise
caring that they have 100+ TB of data on a standalone host.

PS: I imagine you can set QoS limits on /dev/null these days via CPU
cgroups...
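
For what it's worth, here is the non-joke version: a minimal
libvirt-python sketch of the blkdeviotune interface, which is the same
mechanism Nova's quota:disk_*_iops_sec flavor extra specs drive on the
hypervisor (domain and device names here are hypothetical):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000042")  # hypothetical guest

    # Cap the guest's first virtio disk at 1000 IOPS / 200 MB/s total,
    # applied live to the running domain.
    dom.setBlockIoTune(
        "vda",
        {"total_iops_sec": 1000, "total_bytes_sec": 200 * 1024 * 1024},
        libvirt.VIR_DOMAIN_AFFECT_LIVE,
    )
    conn.close()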

Cheers,

On Thu., 14 Jun. 2018, 00:03 Jay Pipes, <jaypipes at gmail.com> wrote:

> On 06/13/2018 09:58 AM, Blair Bethwaite wrote:
> > Hi all,
> >
> > Wondering if anyone can share experience with architecting Nova KVM
> > boxes for large-capacity, high-performance storage? We have some
> > particular use-cases that want both high IOPS and large-capacity
> > local storage.
> >
> > In the past we have used bcache with an SSD-based RAID0 as a
> > write-through cache for a hardware-backed (PERC) RAID volume. This
> > seemed to work ok, but we never really gave it a hard time. I guess
> > if we followed a similar pattern today we would use lvmcache (or are
> > people still using bcache with confidence?) with a few TB of NVMe in
> > front of an NL-SAS array with write cache.
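> >
> > Something like this, roughly -- a sketch only, with made-up device
> > paths, names and sizes:
> >
> >     # Rough sketch of the lvmcache assembly, driven from Python for
> >     # repeatability; devices, VG/LV names and sizes are placeholders.
> >     import subprocess
> >
> >     def run(*cmd):
> >         print("+", " ".join(cmd))
> >         subprocess.run(cmd, check=True)
> >
> >     run("pvcreate", "/dev/nvme0n1", "/dev/sdb")
> >     run("vgcreate", "nova-vg", "/dev/nvme0n1", "/dev/sdb")
> >     # Big, slow origin LV on the NL-SAS array...
> >     run("lvcreate", "-n", "ephemeral", "-L", "100T",
> >         "nova-vg", "/dev/sdb")
> >     # ...fronted by an NVMe cache pool in writethrough mode.
> >     run("lvcreate", "--type", "cache-pool", "-n", "cpool", "-L", "2T",
> >         "nova-vg", "/dev/nvme0n1")
> >     run("lvconvert", "--type", "cache", "--cachepool", "nova-vg/cpool",
> >         "--cachemode", "writethrough", "-y", "nova-vg/ephemeral")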
> >
> > Is the collective wisdom to use LVM-based instances for these
> > use-cases? Putting qcow2 disk images on a host filesystem can't help
> > performance-wise... Though we have not used LVM-based instance
> > storage before, are there any significant gotchas? And furthermore,
> > is it possible to set IO QoS limits on these?
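> >
> > To frame the question, the knobs we would expect to be using are
> > below -- a sketch only, with a hypothetical endpoint, flavor and
> > values. On the compute node, nova.conf would carry images_type = lvm
> > and images_volume_group under [libvirt], and per-flavor IO limits
> > would go in via flavor extra specs:
> >
> >     # Sketch: setting per-flavor disk QoS via Nova flavor extra specs.
> >     from keystoneauth1 import loading, session as ks_session
> >     from novaclient import client
> >
> >     loader = loading.get_plugin_loader("password")
> >     auth = loader.load_from_options(
> >         auth_url="http://keystone:5000/v3",  # hypothetical endpoint
> >         username="admin", password="secret",
> >         project_name="admin", user_domain_id="default",
> >         project_domain_id="default",
> >     )
> >     nova = client.Client("2", session=ks_session.Session(auth=auth))
> >     flavor = nova.flavors.find(name="r1.large")  # hypothetical flavor
> >     flavor.set_keys({
> >         "quota:disk_total_iops_sec": "2000",
> >         "quota:disk_total_bytes_sec": str(500 * 1024 * 1024),
> >     })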
>
> I've found /dev/null to be the fastest ephemeral storage system, bar none.
>
> Not sure if you can set QoS limits on it though.
>
> Best,
> -jay
>