<div dir="ltr"><div class="gmail_default" style="font-family:monospace,monospace"><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Sep 21, 2016 at 12:57 AM, Michał Dulko <span dir="ltr"><<a href="mailto:michal.dulko@intel.com" target="_blank">michal.dulko@intel.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 09/20/2016 05:48 PM, John Griffith wrote:<br>
>> On Tue, Sep 20, 2016 at 9:06 AM, Duncan Thomas <duncan.thomas@gmail.com> wrote:
>>
>>> On 20 September 2016 at 16:24, Nikita Konovalov <nkonovalov@mirantis.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> From the Sahara (and Hadoop workloads in general) use case, the reason we used the BDD was the complete absence of any overhead on compute resource utilization.
>>>>
>>>> The results show that the LVM + local target performs pretty close to the BDD in synthetic tests. That's a good sign for LVM: it shows that most of the storage virtualization overhead is caused not by the LVM volumes and drivers themselves but by the iSCSI daemons.
>>>>
>>>> So I would still like to have the ability to attach the volumes locally, bypassing iSCSI, to guarantee two things:
>>>> * Make sure that the LIO processes do not compete for CPU and RAM with the VMs running on the same host.
>>>> * Make sure that CPU-intensive VMs (or whatever else is running nearby) do not block the storage.
>>>
>>> So these are, unless we see the effects via benchmarks, completely meaningless requirements. Ivan's initial benchmarks suggest that LVM+LIO is pretty much close enough to the BDD even with iSCSI involved. If you're aware of a case where it isn't, the first thing to do is to provide proof via a reproducible benchmark. Otherwise we are likely to proceed, as John suggests, on the assumption that a local target does not provide much benefit.
>>>
>>> I've a few benchmarks myself that I suspect will find areas where getting rid of iSCSI is a benefit; however, if you have any, then you really need to step up and provide the evidence. Relying on vague claims of overhead has now been proven to be a bad idea.
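
To make the "reproducible benchmark" point concrete, here is a rough sketch of the kind of apples-to-apples comparison being asked for. It assumes fio is installed and that the same LV is measured once via its iSCSI-attached device node and once via its local device node; both device paths below are placeholders, not anything taken from this thread.

    # Compare 4k random-write IOPS on the same LV, attached two ways.
    # NOTE: this writes directly to the device and will destroy its data.
    import json
    import subprocess

    def fio_randwrite_iops(device):
        """Run a short 4k random-write fio job against a block device and return IOPS."""
        out = subprocess.check_output([
            'fio', '--name=randwrite', '--filename=' + device,
            '--rw=randwrite', '--bs=4k', '--direct=1', '--ioengine=libaio',
            '--iodepth=32', '--runtime=60', '--time_based',
            '--output-format=json',
        ])
        return json.loads(out)['jobs'][0]['write']['iops']

    print('via iSCSI target:', fio_randwrite_iops('/dev/sdX'))        # placeholder path
    print('via local LV    :', fio_randwrite_iops('/dev/vg/volume'))  # placeholder path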
>> Honestly, we can have both. I'll work up a bp to resurrect the idea of a "smart" scheduling feature that lets you request that the volume land on the same node as the compute node and be used directly; if it's NOT local, it will attach a target and use it that way (in other words, you run a stripped-down c-vol service on each compute node).
>
> Don't we have at least the scheduling problem solved [1] already?
>
> [1] https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/instance_locality_filter.py

Yes, that is a sizeable chunk of the solution. The remaining pieces are how to coordinate with Nova (the compute nodes), and figuring out whether we just use c-vol as is or come up with some form of pared-down agent. Just using c-vol as a start might be the best way to go.
<span class="im HOEnZb"><br>
><br>
> Sahara keeps insisting on being a snow-flake with Cinder volumes and<br>
> the block driver, it's really not necessary. I think we can<br>
> compromise just a little both ways, give you standard Cinder semantics<br>
> for volumes, but allow you direct acccess to them if/when requested,<br>
> but have those be flexible enough that targets *can* be attached so<br>
> they meet all of the required functionality and API implementations.<br>
> This also means that we don't have to continue having a *special*<br>
> driver in Cinder that frankly only works for one specific use case and<br>
> deployment.<br>
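
As a rough illustration (not the proposed implementation) of "direct access when co-located, a target otherwise": Cinder's initialize_connection() already tells the consumer what kind of connection it handed back, so an attach path can simply branch on it with os-brick. Everything named below is an assumption for the sketch, not something agreed in this thread.

    # Attach a volume using whatever connection type Cinder returned:
    # 'local' (a raw device path on the same host) or e.g. 'iscsi'.
    from os_brick.initiator import connector

    def attach(connection_info, root_helper='sudo'):
        protocol = connection_info['driver_volume_type']   # 'local' or 'iscsi'
        conn = connector.InitiatorConnector.factory(protocol, root_helper)
        # For 'local', data carries a device_path; for 'iscsi', the usual
        # target_portal/target_iqn/target_lun details.
        return conn.connect_volume(connection_info['data'])['path']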

>> I've pointed to this a number of times, but it never seems to resonate... but I never learn, so I'll try once again [1]. Note that that was before the name "brick" was hijacked and now means something completely different.
>>
>> [1]: https://wiki.openstack.org/wiki/CinderBrick
>>
>> Thanks,
>> John
</span><div class="HOEnZb"><div class="h5">______________________________<wbr>______________________________<wbr>______________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.<wbr>openstack.org?subject:<wbr>unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/<wbr>cgi-bin/mailman/listinfo/<wbr>openstack-dev</a><br>
</div></div></blockquote></div><br></div></div>