[openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

Huang Zhiteng winston.d at gmail.com
Mon Sep 26 02:56:17 UTC 2016


Hi Zhenyu and all,

If you look at the problem from a different angle, for example treating
local disks on hypervisors as the same kind of resource as GPUs or NICs,
your requirement doesn't necessarily need to involve Cinder.  Local disks
become a resource type associated with a certain group of hypervisors,
scheduling becomes easier, and provisioning is also simpler because Nova
no longer has to talk to another service (Cinder) and coordinate with it.

At eBay, we made an in-house change to Nova so that our big-data use
cases can have physical disks as the ephemeral disks for flavors of this
type.  It has worked well so far.  My 2 cents.
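
For illustration, the flavor side of such an approach could look roughly
like this (the 'local_disk' capability name is invented for this sketch
and is not our actual in-house change; it assumes the hosts report such
a capability for Nova's ComputeCapabilitiesFilter to match):

  # Tag a flavor so the scheduler only places it on hosts that
  # report the hypothetical local_disk capability:
  nova flavor-key bigdata.xlarge set capabilities:local_disk=true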


On Mon, Sep 26, 2016 at 9:35 AM, Zhenyu Zheng <zhengzhenyulixi at gmail.com>
wrote:

> Hi Matt,
>
> Yes, we can only do this using 1:1 AZs mapped to each compute node in the
> deployment, which is not very feasible in a commercial deployment.
> We could either pass some hints to Cinder (in the current code, Cinder's
> "InstanceLocalityFilter" takes an instance UUID as its parameter, so a
> user cannot pass it while booting an instance), or add filters or
> something else to Nova scheduling. And maybe we will have new solutions
> once "Generic-resource-pool" lands?
>
> The implementation may vary, but this seems like a reasonable demand,
> right?
>
> Thanks
>
> On Sun, Sep 25, 2016 at 1:02 AM, Matt Riedemann
> <mriedem at linux.vnet.ibm.com> wrote:
>
>> On 9/23/2016 8:19 PM, Zhenyu Zheng wrote:
>>
>>> Hi,
>>>
>>> Thanks all for the information. As for the filter Erlon mentioned
>>> (InstanceLocalityFilter), this only solves part of the problem:
>>> we can create new volumes for existing instances using this filter and
>>> then attach them, but the root volume still cannot
>>> be guaranteed to be on the same host as the compute resource, right?
>>>
>>> The idea here is that all of the volumes use local disks.
>>> I was wondering whether we already have such a plan for after the
>>> Resource Provider structure is complete?
>>>
>>> Thanks
>>>
>>> On Sat, Sep 24, 2016 at 2:05 AM, Erlon Cruz <sombrafam at gmail.com> wrote:
>>>
>>>     Not sure exactly what you mean, but in Cinder, using the
>>>     InstanceLocalityFilter [1], you can schedule a volume to the same
>>>     compute node the instance is located on. Is this what you need?
>>>
>>>     [1] http://docs.openstack.org/developer/cinder/scheduler-filters.html#instancelocalityfilter
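>>>
>>>     To use the filter, it has to be enabled in cinder.conf, e.g. (the
>>>     first three filters are Cinder's defaults):
>>>
>>>         scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocalityFilter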
>>>
>>>     On Fri, Sep 23, 2016 at 12:19 PM, Jay S. Bryant
>>>     <jsbryant at electronicjungle.net> wrote:
>>>
>>>         Kevin,
>>>
>>>         This is functionality that has been requested in the past but
>>>         has never been implemented.
>>>
>>>         The best way to proceed would likely be to propose a
>>>         blueprint/spec for this and work the idea through that process.
>>>
>>>         -Jay
>>>
>>>
>>>         On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:
>>>
>>>>         Hi Novaers and Cinders:
>>>>
>>>>         Quite often, application requirements demand using
>>>>         locally attached disks (or direct-attached disks) for
>>>>         OpenStack compute instances. One such example is running
>>>>         virtual Hadoop clusters via OpenStack.
>>>>
>>>>         We can achieve this today by using BlockDeviceDriver as the
>>>>         Cinder driver and matching AZs in Nova and Cinder, as
>>>>         illustrated in [1], but that is not very feasible in a
>>>>         large-scale production deployment.
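>>>>
>>>>         For reference, the cinder.conf side of the setup in [1]
>>>>         looks roughly like this (the device paths and the AZ name
>>>>         are examples):
>>>>
>>>>             [DEFAULT]
>>>>             enabled_backends = local
>>>>             storage_availability_zone = <az-matching-this-host>
>>>>
>>>>             [local]
>>>>             volume_driver = cinder.volume.drivers.block_device.BlockDeviceDriver
>>>>             available_devices = /dev/sdb,/dev/sdc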
>>>>
>>>>         Now that Nova is working on resource providers to build a
>>>>         generic resource pool, would it be possible to perform
>>>>         "volume-based scheduling", i.e. build instances according to
>>>>         where their volumes live? That would make it much easier to
>>>>         build instances like those mentioned above.
>>>>
>>>>         Or do we have any other ways of doing this?
>>>>
>>>>         References:
>>>>         [1] http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local-disks-for-instances.html
>>>>
>>>>         Thanks,
>>>>
>>>>         Kevin Zheng
>>>>
>> Are you asking about the scenario where you are creating a server with a
>> source_type=blank/image/snapshot bdm and nova creates the volume to
>> attach to the server? In that case nova doesn't pass enough information to
>> cinder to build the volume on the same host that the server is building on.
>> Nova passes an AZ but that would mean you'd need to have 1:1 AZs mapped for
>> each compute node in the deployment (I think?).
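>>
>> For example (the flavor, image and server names are placeholders):
>>
>>   nova boot --flavor m1.small \
>>     --block-device source=image,id=<image-uuid>,dest=volume,size=10,bootindex=0 \
>>     test-server
>>
>> where nova asks cinder to create the 10 GB root volume during the build.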
>>
>> Maybe you're thinking of something like nova passing a scheduler hint to
>> cinder telling it where to build the volume?
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>


-- 
Regards
Huang Zhiteng