[Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

Joe Topjian joe at topjian.net
Mon Apr 3 14:54:28 UTC 2017


On Mon, Apr 3, 2017 at 8:20 AM, Jay Pipes <jaypipes at gmail.com> wrote:

> On 04/01/2017 08:32 PM, Joe Topjian wrote:
>
>> On Sat, Apr 1, 2017 at 5:21 PM, Matt Riedemann <mriedemos at gmail.com
>> <mailto:mriedemos at gmail.com>> wrote:
>>
>>     On 4/1/2017 8:36 AM, Blair Bethwaite wrote:
>>
>>         Hi all,
>>
>>         The below was suggested for a Forum session but we don't yet have a
>>         submission or name to chair/moderate. I, for one, would certainly be
>>         interested in providing input. Do we have any owners out there?
>>
>>         Resource reservation requirements:
>>         ==
>>         The Blazar project [https://wiki.openstack.org/wiki/Blazar
>>         <https://wiki.openstack.org/wiki/Blazar>] has been
>>         revived following Barcelona and will soon release a new version. Now
>>         is a good time to get involved and share requirements with the
>>         community. Our development priorities are described through
>>         Blueprints
>>         on Launchpad: https://blueprints.launchpad.net/blazar
>>         <https://blueprints.launchpad.net/blazar>
>>
>>         In particular, support for pre-emptible instances could be combined
>>         with resource reservation to maximize utilization on unreserved
>>         resources.
>>
>> +1
>>
>>
>>     Regarding resource reservation, please see this older Nova spec
>>     which is related:
>>
>>     https://review.openstack.org/#/c/389216/
>>     <https://review.openstack.org/#/c/389216/>
>>
>>     And see the points that Jay Pipes makes in that review. Before
>>     spending a lot of time reviving the project, I'd encourage people to
>>     read and digest the points made in that review, and if there are
>>     responses or other use cases then let's discuss them *before*
>>     bringing a service back from the dead and assuming it will be
>>     integrated into the other projects.
>>
>> This is appreciated. I'll describe the way I've seen Blazar used, and I
>> believe it's quite different from the above slot reservation as well as
>> spot instance support, but please let me know if I am incorrect or if
>> there have been other discussions about this use case elsewhere:
>>
>> A research group has a finite amount of specialized hardware and there
>> are more people wanting to use this hardware than what's currently
>> available. Let's use high performance GPUs as an example. The group is
>> OK with publishing the amount of hardware they have available (normally
>> this is hidden as best as possible). By doing this, a researcher can use
>> Blazar as sort of a community calendar, see that there are 3 GPU nodes
>> available for the week of April 3, and reserve them for that time period.
>>
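[To make the "community calendar" use case concrete, here is a hypothetical sketch of what such a reservation request could look like against Blazar's v1 /leases API. The field names follow Blazar's physical-host reservation plugin as documented at the time, but the exact schema and the GPU-matching filter expression are assumptions and should be checked against the deployed version; actually submitting the request would need an authenticated session, which is omitted.]

```python
import json

def gpu_lease_payload(name, start, end, hosts=3):
    """Build a Blazar v1 lease request body reserving `hosts` physical hosts.

    This only constructs the JSON payload; POSTing it to Blazar's /v1/leases
    endpoint (with a Keystone token) is left out. The hypervisor_properties
    filter below is illustrative -- a real deployment would match whatever
    property tags its GPU hypervisors as distinct from ordinary ones.
    """
    return {
        "name": name,
        "start_date": start,   # "YYYY-MM-DD HH:MM", UTC
        "end_date": end,
        "reservations": [{
            "resource_type": "physical:host",
            "min": hosts,
            "max": hosts,
            # Filter expression in the nova-scheduler style (assumed tag).
            "hypervisor_properties": json.dumps(
                ["==", "$hypervisor_type", "QEMU"]),
            "resource_properties": "",
        }],
        "events": [],
    }

payload = gpu_lease_payload("gpu-week", "2017-04-03 00:00", "2017-04-10 00:00")
print(payload["reservations"][0]["min"])  # 3
```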
>
> Yeah, I totally understand this use case.
>
> However, implementing the above in any useful fashion requires that Blazar
> be placed *above* Nova and essentially that the cloud operator turns off
> access to Nova's POST /servers API call for regular users. Because if not,
> the information that Blazar acts upon can be simply circumvented by any
> user at any time.
>
> In other words, your "3 GPU nodes available for the week of April 3" can
> change at any time by a user that goes and launches instances that consumes
> those 3 GPU nodes.
>
> If you have a certain type of OpenStack deployment that isn't multi-user
> and where the only thing that launches instances is an
> automation/orchestration tool (in other words, an NFV MANO system), the
> reservation concept works great -- because you don't have pesky users who
> can sidestep the system and actually launch instances that would impact
> reserved consumables.
>
> However, if you *do* have normal users of your cloud -- as most scientific
> deployments must have -- then I'm afraid the only way to make this work is
> to have users *only* use the Blazar API to reserve instances and
> essentially shut off the normal Nova POST /servers API.
>
> Does that make sense?
>

Ah, yes, indeed it does. Thanks, Jay.
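
[For reference, "shutting off the normal Nova POST /servers API" as Jay describes would typically be done through Nova's policy file rather than code. A minimal sketch: the policy key is Nova's real `os_compute_api:servers:create` rule, while the `reservation` role is a hypothetical role that an operator would grant only to the Blazar service user, so that ordinary users must go through Blazar to obtain instances.]

```json
{
    "os_compute_api:servers:create": "role:reservation"
}
```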