[openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

Erlon Cruz sombrafam at gmail.com
Mon Oct 10 11:22:37 UTC 2016


Kevin,

Now that you have received initial feedback on the idea, as Jay said, the
next step is to write a blueprint/spec so that other folks in Cinder can
better understand what you are proposing, make suggestions, and vote on it.


Erlon

On Sat, Oct 8, 2016 at 12:14 AM, Zhenyu Zheng <zhengzhenyulixi at gmail.com>
wrote:

> So do we like the idea of "volume-based scheduling"?
>
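A minimal, hypothetical sketch of what "volume based scheduling" could look
like, in the shape of a Nova host filter's host_passes(host_state, spec_obj)
hook. The HostState/RequestSpec stand-ins and the volume_backend fields below
are illustrative assumptions, not actual Nova or Cinder attributes:

    # Hypothetical volume-locality filter; every name here is assumed.
    class HostState(object):
        """Stand-in for the scheduler's per-host view of a compute host."""
        def __init__(self, host, volume_backend):
            self.host = host
            # Assumed: which Cinder backend's disks are local to this host.
            self.volume_backend = volume_backend

    class RequestSpec(object):
        """Stand-in for the request being scheduled."""
        def __init__(self, volume_backend):
            # Assumed: backend holding the instance's boot volume.
            self.volume_backend = volume_backend

    class VolumeLocalityFilter(object):
        """Pass only hosts local to the volume's backend."""
        def host_passes(self, host_state, spec_obj):
            if spec_obj.volume_backend is None:
                return True  # No volume involved; any host will do.
            return host_state.volume_backend == spec_obj.volume_backend

    hosts = [HostState('compute1', 'lvm-local-a'),
             HostState('compute2', 'lvm-local-b')]
    spec = RequestSpec('lvm-local-b')
    f = VolumeLocalityFilter()
    print([h.host for h in hosts if f.host_passes(h, spec)])  # ['compute2']
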
> On Tue, Sep 27, 2016 at 11:39 AM, Joshua Harlow <harlowja at fastmail.com>
> wrote:
>
>> Huang Zhiteng wrote:
>>
>>>
>>>
>>> On Tue, Sep 27, 2016 at 12:00 AM, Joshua Harlow <harlowja at fastmail.com>
>>> wrote:
>>>
>>>     Huang Zhiteng wrote:
>>>
>>>
>>>         On Mon, Sep 26, 2016 at 12:05 PM, Joshua Harlow
>>>         <harlowja at fastmail.com> wrote:
>>>
>>>              Huang Zhiteng wrote:
>>>
>>>                  At eBay, we made some in-house changes to Nova so that
>>>                  our big-data type of use case can have physical disks as
>>>                  ephemeral disks for certain flavors.  It has worked well
>>>                  so far.  My 2 cents.
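The eBay patches are unpublished (see below), so as a rough illustration
only: one way such a change could be keyed is off a flavor extra spec. The
'ephemeral_disk:physical' key and the helper below are assumptions, not
anything taken from the actual patches:

    # Hypothetical helper: pick a physical disk for the ephemeral disk
    # when a (made-up) flavor extra spec asks for one.
    def pick_ephemeral_backing(flavor_extra_specs, free_physical_disks):
        """Return a physical disk if the flavor requests one, else None
        (None meaning: fall back to the default file-backed ephemeral)."""
        if flavor_extra_specs.get('ephemeral_disk:physical') != 'true':
            return None
        if not free_physical_disks:
            raise RuntimeError('flavor requires a physical disk, none free')
        return free_physical_disks.pop()

    specs = {'ephemeral_disk:physical': 'true'}
    disks = ['/dev/sdb', '/dev/sdc']
    print(pick_ephemeral_backing(specs, disks))  # -> /dev/sdc
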
>>>
>>>
>>>              Is there a published patch (or patchset) anywhere that
>>>         people can
>>>              look at for said in-house changes?
>>>
>>>
>>>         Unfortunately no, but I think we can publish it if there is
>>>         enough interest.  However, I don't think it can be easily adopted
>>>         into upstream Nova, since it depends on other in-house changes
>>>         we've made to Nova.
>>>
>>>
>>>     Is there a blog post or other write-up that explains the full set of
>>>     changes eBay has made (you've got me curious)?
>>>
>>>     The nice thing about OSS is that if you just get the patchsets out
>>>     (even to GitHub or somewhere), those patches may prompt changes that
>>>     match your use case better, simply because people can read them; but
>>>     if they are never put out there, it's hard to get anything to change.
>>>
>>>
>>>     Is anything stopping a full release of all the in-house changes?
>>>
>>>     Even if they are not 'super great quality', it really doesn't
>>>     matter :)
>>>
>>> Apologies for sidetracking the topic a bit.  While we encourage our
>>> engineers to embrace the community and open source, I think we haven't
>>> done a good job of actually emphasizing that.  'Time to market' is
>>> another factor: a feature requirement usually becomes a deployed service
>>> in 2-3 sprints (4~6 weeks), but you know how much can be done in the
>>> same amount of time in the community, especially with Nova. :)
>>>
>>
>> Ya, sorry for side-tracking.
>>
>> Overall, yes, I do know that getting changes done upstream is not a 4-6
>> week process (though maybe someday it could be). I don't want to turn
>> this into a rant, and thankfully I think there is already a decent LWN
>> article about this kind of situation. You might like it :)
>>
>> https://lwn.net/Articles/647524/ (replace embedded Linux/kernel in it
>> with OpenStack and IMHO it's equally useful/relevant)
>>
>>
>> -Josh
>>