[openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results
Michał Dulko
michal.dulko at intel.com
Wed Sep 21 07:57:43 UTC 2016
On 09/20/2016 05:48 PM, John Griffith wrote:
> On Tue, Sep 20, 2016 at 9:06 AM, Duncan Thomas
> <duncan.thomas at gmail.com <mailto:duncan.thomas at gmail.com>> wrote:
>
> On 20 September 2016 at 16:24, Nikita Konovalov
> <nkonovalov at mirantis.com <mailto:nkonovalov at mirantis.com>> wrote:
>
> Hi,
>
> From Sahara (and Hadoop workload in general) use-case the
> reason we used BDD was a complete absence of any overhead on
> compute resources utilization.
>
> The results show that the LVM+Local target perform pretty
> close to BDD in synthetic tests. It's a good sign for LVM. It
> actually shows that most of the storage virtualization
> overhead is not caused by LVM partitions and drivers
> themselves but rather by the iSCSI daemons.
>
> So I would still like to have the ability to attach partitions
> locally bypassing the iSCSI to guarantee 2 things:
> * Make sure that lio processes do not compete for CPU and RAM
> with VMs running on the same host.
> * Make sure that CPU intensive VMs (or whatever else is
> running nearby) are not blocking the storage.
>
>
> So these are, unless we see the effects via benchmarks, completely
> meaningless requirements. Ivan's initial benchmarks suggest
> that LVM+LIO is pretty much close enough to BDD even with iSCSI
> involved. If you're aware of a case where it isn't, the first
> thing to do is to provide proof via a reproducible benchmark.
> Otherwise we are likely to proceed, as John suggests, with the
> assumption that local target does not provide much benefit.
>
> I've a few benchmarks myself that I suspect will find areas where
> getting rid of iSCSI is benefit, however if you have any then you
> really need to step up and provide the evidence. Relying on vague
> claims of overhead is now proven to not be a good idea.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> <http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>
> Honestly we can have both, I'll work up a bp to resurrect the idea of
> a "smart" scheduling feature that lets you request the volume be on
> the same node as the compute node and use it directly, and then if
> it's NOT it will attach a target and use it that way (in other words
> you run a stripped down c-vol service on each compute node).
Don't we have at least scheduling problem solved [1] already?
[1]
https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/instance_locality_filter.py
>
> Sahara keeps insisting on being a snow-flake with Cinder volumes and
> the block driver, it's really not necessary. I think we can
> compromise just a little both ways, give you standard Cinder semantics
> for volumes, but allow you direct acccess to them if/when requested,
> but have those be flexible enough that targets *can* be attached so
> they meet all of the required functionality and API implementations.
> This also means that we don't have to continue having a *special*
> driver in Cinder that frankly only works for one specific use case and
> deployment.
>
> I've pointed to this a number of times but it never seems to
> resonate... but I never learn so I'll try it once again [1]. Note
> that was before the name "brick" was hijacked and now means something
> completely different.
>
> [1]: https://wiki.openstack.org/wiki/CinderBrick
>
> Thanks,
> John
More information about the OpenStack-dev
mailing list