[openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results

John Griffith john.griffith8 at gmail.com
Wed Sep 21 21:23:02 UTC 2016


On Wed, Sep 21, 2016 at 12:57 AM, Michał Dulko <michal.dulko at intel.com>
wrote:

> On 09/20/2016 05:48 PM, John Griffith wrote:
> > On Tue, Sep 20, 2016 at 9:06 AM, Duncan Thomas
> > <duncan.thomas at gmail.com> wrote:
> >
> >     On 20 September 2016 at 16:24, Nikita Konovalov
> >     <nkonovalov at mirantis.com> wrote:
> >
> >         Hi,
> >
> >         From the Sahara (and Hadoop workloads in general) use case,
> >         the reason we used BDD was the complete absence of any
> >         overhead on compute resource utilization.
> >
> >         The results show that the LVM+local target setup performs
> >         pretty close to BDD in synthetic tests. That's a good sign
> >         for LVM: it shows that most of the storage virtualization
> >         overhead is caused not by the LVM partitions and drivers
> >         themselves but by the iSCSI daemons.
> >
> >         So I would still like to have the ability to attach partitions
> >         locally, bypassing iSCSI, to guarantee two things:
> >         * Make sure that LIO processes do not compete for CPU and RAM
> >         with VMs running on the same host.
> >         * Make sure that CPU-intensive VMs (or whatever else is
> >         running nearby) are not blocking the storage.
> >
> >
> >     So these are, unless we see the effects via benchmarks, completely
> >     meaningless requirements. Ivan's initial benchmarks suggest
> >     that LVM+LIO is pretty much close enough to BDD even with iSCSI
> >     involved. If you're aware of a case where it isn't, the first
> >     thing to do is to provide proof via a reproducible benchmark.
> >     Otherwise we are likely to proceed, as John suggests, with the
> >     assumption that a local target does not provide much benefit.
> >
> >     I have a few benchmarks myself that I suspect will find areas
> >     where getting rid of iSCSI is a benefit; however, if you have any
> >     then you really need to step up and provide the evidence. Relying
> >     on vague claims of overhead has now been proven not to be a good
> >     idea.
> >
> > Honestly, we can have both. I'll work up a blueprint to resurrect the
> > idea of a "smart" scheduling feature that lets you request that the
> > volume be on the same node as the compute node and be used directly;
> > if it's NOT, it will attach a target and use it that way (in other
> > words, you run a stripped-down c-vol service on each compute node).
>
> Don't we have at least the scheduling problem solved [1] already?
>
> [1]
> https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/instance_locality_filter.py


Yes, that is a sizeable chunk of the solution.  The remaining pieces are
how to coordinate with Nova (compute nodes) and figuring out whether we
just use c-vol as is, or come up with some form of a pared-down agent.
Just using c-vol as a start might be the best way to go.

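To make the local-vs-target idea a bit more concrete, here is a rough,
self-contained Python sketch of the attach decision I have in mind.  To
be clear, this is illustrative only and not Cinder code: the classes and
helper names are made up, and the only existing pieces it leans on are
the InstanceLocalityFilter and its 'local_to_instance' scheduler hint
for getting the volume placed on the instance's host in the first place.

    # Illustrative sketch only -- not Cinder code.  Everything below is
    # hypothetical except the general idea.
    from dataclasses import dataclass

    @dataclass
    class Volume:
        id: str
        host: str                   # node running the c-vol backend owning the LV
        vg: str = "cinder-volumes"  # assumed LVM volume group name

    @dataclass
    class Instance:
        id: str
        host: str                   # compute node the VM runs on

    def choose_attach_path(volume: Volume, instance: Instance) -> str:
        """Describe which attach path we'd take for this volume/instance pair."""
        if volume.host == instance.host:
            # Co-located (e.g. InstanceLocalityFilter honoured the
            # local_to_instance hint): hand the LV straight to the
            # hypervisor, no LIO/tgt daemon in the data path at all.
            return "local attach of /dev/%s/volume-%s" % (volume.vg, volume.id)
        # Not co-located: fall back to the standard target-based path.
        return "iSCSI attach of volume %s exported from %s" % (volume.id, volume.host)

    if __name__ == "__main__":
        vol = Volume(id="1234", host="compute-01")
        vm = Instance(id="abcd", host="compute-01")
        print(choose_attach_path(vol, vm))
        # -> local attach of /dev/cinder-volumes/volume-1234

For the placement half, I believe you can already exercise it today by
adding InstanceLocalityFilter to scheduler_default_filters in cinder.conf
and creating the volume with the local_to_instance scheduler hint set to
the instance UUID; the sketch above only covers what the attach side
would then do with that placement.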

>
>
> >
> > Sahara keeps insisting on being a snowflake with Cinder volumes and
> > the block driver; it's really not necessary.  I think we can
> > compromise a little both ways: give you standard Cinder semantics for
> > volumes, but allow you direct access to them if/when requested, and
> > have those volumes be flexible enough that targets *can* be attached,
> > so they meet all of the required functionality and API
> > implementations.  This also means that we don't have to keep a
> > *special* driver in Cinder that frankly only works for one specific
> > use case and deployment.
> >
> > I've pointed to this a number of times but it never seems to
> > resonate... but I never learn, so I'll try it once again [1].  Note
> > that this was before the name "brick" was hijacked to mean something
> > completely different.
> >
> > [1]: https://wiki.openstack.org/wiki/CinderBrick
> >
> > Thanks,
> > John