<div dir="ltr">Hi,<div><br></div><div>Thanks all for the information, as for the filter Erlon(<span style="font-size:12.8px">InstanceLocalityFilter</span>) mentioned, this only solves a part of the problem,</div><div>we can create new volumes for existing instances using this filter and then attach to it, but the root volume still cannot</div><div>be guranteed to be on the same host as the compute resource, right?</div><div><br></div><div>The idea here is that all the volumes uses local disks.</div><div>I was wondering if we already have such a plan after the Resource Provider structure has accomplished?</div><div><br></div><div>Thanks</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Sep 24, 2016 at 2:05 AM, Erlon Cruz <span dir="ltr"><<a href="mailto:sombrafam@gmail.com" target="_blank">sombrafam@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Not sure exactly what you mean, but in Cinder using the InstanceLocalityFilter[1], you can  schedule a volume to the same compute node the instance is located. Is this what you need?<div><br></div><div>[1] <a href="http://docs.openstack.org/developer/cinder/scheduler-filters.html#instancelocalityfilter" target="_blank">http://docs.openstack.org/<wbr>developer/cinder/scheduler-<wbr>filters.html#<wbr>instancelocalityfilter</a></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 23, 2016 at 12:19 PM, Jay S. Bryant <span dir="ltr"><<a href="mailto:jsbryant@electronicjungle.net" target="_blank">jsbryant@electronicjungle.net</a><wbr>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div text="#000000" bgcolor="#FFFFFF">
>> Kevin,
>>
>> This is functionality that has been requested in the past but has never been implemented.
>>
>> The best way to proceed would likely be to propose a blueprint/spec for this and start working the idea through that process.
>>
>> -Jay
>>
>> On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:
>>> Hi Novaers and Cinders:
>>>
>>> Quite often, application requirements demand using locally attached disks (or direct-attached disks) for OpenStack compute instances. One such example is running virtual Hadoop clusters via OpenStack.
>>>
>>> We can achieve this today by using the BlockDeviceDriver as the Cinder driver and matching AZs in Nova and Cinder, as illustrated in [1], but this is not very feasible in a large-scale production deployment.
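>>> For concreteness, the setup in [1] boils down to roughly the following sketch (the backend name, device paths and AZ name are made-up examples):
>>>
>>>     # cinder.conf on each compute node, which also runs cinder-volume;
>>>     # the AZ must match the Nova AZ the host belongs to
>>>     [DEFAULT]
>>>     enabled_backends = local
>>>     storage_availability_zone = az-host-1
>>>
>>>     [local]
>>>     volume_driver = cinder.volume.drivers.block_device.BlockDeviceDriver
>>>     available_devices = /dev/sdb,/dev/sdc
>>>     volume_backend_name = local
>>>
>>>     # nova.conf on the same node: keep instances and their volumes
>>>     # in the same AZ
>>>     [cinder]
>>>     cross_az_attach = False
>>>
>>> That means one AZ (and one cinder-volume backend) per host, which is where the scalability problem comes from.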
>>>
>>> Now that Nova is working on resource providers and trying to build a generic resource pool, would it be possible to perform "volume-based scheduling", i.e. build instances according to where their volumes are? This could make it much easier to build instances like the ones mentioned above.
>>>
>>> Or do we have any other ways of doing this?
>>>
>>> References:
>>> [1] http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local-disks-for-instances.html
>>>
>>> Thanks,
>>>
>>> Kevin Zheng

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev