[Openstack-operators] ElasticSearch on OpenStack

Randall, Nathan X C-Nathan.Randall at charter.com
Tue Sep 6 17:49:08 UTC 2016


Tim,

For the storage backing Elasticsearch data nodes, we have been using one 500GB Cinder volume per data node, backed by a Ceph cluster built from DL380s filled with 1.2TB 10k SAS drives. However, we've found that a VM with 8 vCPU and 64GB RAM can make use of more than 500GB of disk without bottlenecking on CPU or memory, so we are experimenting with 1TB and 1.5TB volumes per data node. We are also moving to a different storage tier that uses an array of SSDs instead of spinning rust, though this change has very little to do with performance and very much to do with the automatic deduplication, compression, and encryption offered by the hardware backend (SolidFire) for that tier. (Not a vendor promo; just letting you know what we're using.)
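For reference, volumes like that are created and attached with the standard OpenStack CLI; this is just a sketch, and the volume type name "solidfire" and server name "es-data-01" are made-up placeholders for whatever your deployment uses:

    # Create a 1TB volume on a dedup/compression-capable volume type
    # and attach it to an Elasticsearch data node VM (names hypothetical):
    openstack volume create --size 1024 --type solidfire es-data-01-vol
    openstack server add volume es-data-01 es-data-01-vol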

We get a lot of duplicated messages in Elasticsearch since we're using it for log monitoring, and JSON documents compress very well, so it actually costs us significantly less to use a storage platform that provides native deduplication and compression. Having SSDs in the mix probably helps reduce latency a bit (due to lower seek times), but honestly we didn't have enough of a latency problem to justify moving away from Ceph-backed volumes.
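Not something we rely on, but if you want compression at the application level instead of (or on top of) the storage layer, Elasticsearch also supports the best_compression codec as a static index setting; the index name below is only an example:

    # Create an index with DEFLATE-based compression (the codec must be
    # set at index creation time; index name is hypothetical):
    curl -XPUT 'http://localhost:9200/logstash-2016.09.06' -d '{
      "settings": { "index.codec": "best_compression" }
    }'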

Guidance from Elastic is always going to advocate using local SSDs when possible, but I'm pretty sure that's not what Elastic uses for their own cloud offering...

Thanks,
Nathan Randall

From: Tim Bell <Tim.Bell at cern.ch>
Date: Saturday, September 3, 2016 at 1:12 AM
To: David Medberry <openstack at medberry.net>
Cc: openstack-operators <openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] ElasticSearch on OpenStack

Thanks. How's the storage handled?

We're seeing slow I/O on local storage (which is also limited in space) and latencies with Ceph for block storage.

Tim

From: <medberry at gmail.com> on behalf of David Medberry <openstack at medberry.net>
Date: Friday 2 September 2016 at 22:18
To: Tim Bell <Tim.Bell at cern.ch>
Cc: openstack-operators <openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] ElasticSearch on OpenStack

Nathan: The page at https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html gives good advice on a maximum size for the Elasticsearch VM's memory.

Nathan: Suggest you pick a flavor with 64GB RAM or less, then base the rest of your sizing on that, i.e. choose a flavor with 64GB of RAM and as many vCPUs as possible for that RAM allocation, then base disk size on testing against your use case.
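A matching flavor could be created along these lines (the flavor name and root disk size are placeholders; adjust the vCPU count to your hosts):

    # 64GB RAM, 8 vCPU flavor for Elasticsearch data nodes
    # (name and root disk size are hypothetical):
    openstack flavor create --ram 65536 --vcpus 8 --disk 40 es-data-node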

Nathan: Give the Java heap 30GB, and leave the rest of the memory to the OS filesystem cache so that Lucene can make the best use of it.
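On ES 2.x that typically means setting the heap in the service environment file (the exact path depends on your packaging):

    # e.g. /etc/default/elasticsearch or /etc/sysconfig/elasticsearch;
    # ES_HEAP_SIZE sets min and max heap to the same value, kept below
    # the ~32GB compressed-oops cutoff the heap-sizing page describes:
    ES_HEAP_SIZE=30g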

Nathan: That's mostly it for tuning. Elastic publishes plenty of other tuning recommendations, but there isn't anything specific to OpenStack besides the flavor choice. I personally chose the CPU count (8 vCPUs) so that all vCPUs for each VM would fit on a single NUMA node, which is a best practice for ESXi, though I'm not sure whether it applies to KVM.
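On KVM/Nova you can at least request the same topology through flavor extra specs; a sketch, reusing the hypothetical flavor name from above:

    # Ask Nova to place all of the flavor's vCPUs and RAM within a
    # single host NUMA node (requires a NUMA-aware scheduler filter):
    openstack flavor set es-data-node --property hw:numa_nodes=1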

(resending for clarity)

On Fri, Sep 2, 2016 at 6:46 AM, David Medberry <openstack at medberry.net> wrote:
Hey Tim,
We've just started this effort. I'll see if the guy running the service can comment today.

On Fri, Sep 2, 2016 at 6:36 AM, Tim Bell <Tim.Bell at cern.ch> wrote:

Has anyone had experience running ElasticSearch on top of OpenStack VMs?

Are there any tuning recommendations?

Thanks
Tim
