[Openstack-operators] Blazar? (Reservations and/or scheduled termination)

Stig Telfer stig.openstack at telfer.org
Sun Aug 7 07:11:27 UTC 2016


Sorry for arriving late on this, the last Scientific WG discussion on Blazar was here:

http://eavesdrop.openstack.org/meetings/scientific_wg/2016/scientific_wg.2016-07-06-09.00.log.html#l-115

Although there was more substance on OPIE in previous meetings.

Best wishes,
Stig


> On 4 Aug 2016, at 04:51, Blair Bethwaite <blair.bethwaite at gmail.com> wrote:
> 
> We discussed Blazar fairly extensively in a couple of recent
> scientific-wg meetings. I'm having trouble searching out the right IRC
> log to support this, but IIRC the problem with Blazar as-is for the
> typical virtualised cloud (non-Ironic) use case is that it uses an
> old/deprecated Nova API extension to reserve capacity for a
> lease (I think it was something to do with shelve-offload, but I could
> have my wires crossed), whereas with Ironic it manipulates
> host aggregates.
> 
> I like the idea of the positive, "yes I'm still using this",
> confirmation, but it needs to tie into the dashboard. For those of us
> using identity federations to bootstrap users this would also help
> with account cleanup, as users who no longer have access to the
> dashboard (e.g., having left the sector) would not be able to extend
> resource usage (without talking to support etc.).
> 
> Cheers,
> 
> On 4 August 2016 at 03:23, Tim Bell <Tim.Bell at cern.ch> wrote:
>> 
>> When I last looked, Blazar allows you to reserve instances for a given time period. An example would be:
>> 
>> - We are organizing a user training session for 100 physicists from Monday to Friday
>> - Each user needs to be able to create 2 VMs within a single shared project (as the images etc. are set up beforehand)
>> - OpenStack should ensure that these resources are available (or reject the request) and schedule other Blazar requests around it
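A reservation like this boils down to an interval-capacity check: accept a new lease only if, at every moment it covers, the total demand of overlapping reservations stays within capacity. A minimal, hypothetical sketch of that check (not Blazar's actual algorithm; all names and units here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Lease:
    start: int   # e.g. hours since some epoch
    end: int     # exclusive
    vcpus: int   # capacity requested

def fits(existing, new, capacity):
    """Accept `new` only if total demand never exceeds `capacity`.
    Demand only rises at lease start times, so checking each start
    boundary is enough to find every peak."""
    leases = existing + [new]
    boundaries = {l.start for l in leases}
    return all(
        sum(l.vcpus for l in leases if l.start <= t < l.end) <= capacity
        for t in boundaries
    )

# Training session: 100 users x 2 VMs x 2 vCPUs, Monday-Friday (hours 0-120)
training = Lease(start=0, end=120, vcpus=400)
print(fits([], training, capacity=500))           # True: fits
print(fits([training], Lease(24, 48, 200), 500))  # False: would exceed capacity
```

A back-to-back lease (e.g. starting at hour 120) would still be accepted, which is the "schedule other Blazar requests around it" behaviour described above.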
>> 
>> This does need other functionality, though, which is currently not there:
>> 
>> - Spot instances, i.e. give me the resources if they are available but kill them when you have a reservation. Alvaro is proposing a talk at the summit on some work done in the Indigo DataCloud project for this (votes welcome).
>> - Impending-shutdown notification, i.e. save what you want quickly because you're going to be deleted.
>> 
>> We have a use case where we offer each user a quota of 2 VMs for ‘Personal’ use. This gives a good onboarding experience but the problem is that our users forget they asked for resources. Something where we can define a default lifetime for a VM (ideally, a project) and users need to positively confirm they want to extend the lifetime would help identify these forgotten VMs.
>> 
>> I think a good first step would be:
>> 
>> - An agreed metadata structure for a VM with an expiry date
>> - An agreed project metadata key giving the default lifetime of a VM
>> - An OSops script which finds those VMs exceeding the agreed period and goes through a ‘suspend’ for N days followed by a delete M days afterwards (to catch the accidents)
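The suspend-then-delete policy in that last step is simple enough to sketch as a pure decision function. Everything here is an assumption rather than an existing OSops script: the metadata key, the function names, and the commented-out openstacksdk wiring.

```python
from datetime import datetime, timedelta

# Hypothetical metadata key; the whole point of the proposal is to
# agree on names like this, so treat it as a placeholder.
EXPIRY_KEY = "expiry_date"

def action_for(expires_at, now, suspend_days=7, delete_days=7):
    """Decide what the cleanup script should do with one VM:
    'ok' before expiry, 'suspend' during the N+M-day grace window
    after it (to catch the accidents), then 'delete'."""
    if now < expires_at:
        return "ok"
    if now < expires_at + timedelta(days=suspend_days + delete_days):
        return "suspend"
    return "delete"

print(action_for(datetime(2016, 8, 1), datetime(2016, 7, 20)))  # ok
print(action_for(datetime(2016, 8, 1), datetime(2016, 8, 5)))   # suspend

# Wiring sketch only (untested assumption, using openstacksdk):
# import openstack
# conn = openstack.connect(cloud="mycloud")
# for server in conn.compute.servers(all_projects=True):
#     expiry = server.metadata.get(EXPIRY_KEY)
#     ...
```

Keeping the decision separate from the API calls makes the N/M periods easy to test and the script safe to dry-run before it ever suspends anything.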
>> 
>> I think this can be done in Nova as-is if this is not felt to be a ‘standard’ function, but we should agree on the names/concepts.
>> 
>> Some of the needs are covered in https://openstack-in-production.blogspot.fr/2016/04/resource-management-at-cern.html
>> 
>> Tim
>> 
>> 
>> On 03/08/16 18:47, "Jonathan D. Proulx" <jon at csail.mit.edu> wrote:
>> 
>>    Hi All,
>> 
>>    As a private cloud operator who doesn't charge internal users, I'd
>>    really like a way to force users to set an expiration time on their
>>    instances so that if they forget about them they go away.
>> 
>>    I'd thought Blazar was the thing to look at, and Chameleoncloud.org
>>    seems to be using it (any of you around here?), but it also doesn't
>>    look like it's seen substantive work in a long time.
>> 
>>    Does anyone have operational experience with Blazar to share, or other
>>    solutions?
>> 
>>    -Jon
>> 
>>    _______________________________________________
>>    OpenStack-operators mailing list
>>    OpenStack-operators at lists.openstack.org
>>    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> 
>> 
> 
> 
> 
> -- 
> Cheers,
> ~Blairo
> 
