<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">Hi Belmiro,
<br>
<br>
<br>
On 06/30/2014 11:42 PM, Belmiro Moreira wrote:<br>
</div>
<blockquote
cite="mid:CAPkQhnd5gd5dc9z_aVoF0tTxY62W-nayzDBjN6nsnd8hKxSfOw@mail.gmail.com"
type="cite">
<div dir="ltr">Hi Eric,
<div>definitely...</div>
<div><br>
</div>
<div>In my view a "FairShareScheduler" could be a very
interesting option for private clouds that support scientific
communities. Basically, this is the model used by batch systems
to make full use of the available resources.</div>
</div>
</blockquote>
Yes, it is.<br>
<blockquote
cite="mid:CAPkQhnd5gd5dc9z_aVoF0tTxY62W-nayzDBjN6nsnd8hKxSfOw@mail.gmail.com"
type="cite">
<div dir="ltr">
<div> </div>
<div>I'm very curious about the work that you are doing. <br>
</div>
</div>
</blockquote>
You can find more info at the following link:
<br>
<a class="moz-txt-link-freetext" href="https://agenda.infn.it/getFile.py/access?contribId=17&sessionId=3&resId=0&materialId=slides&confId=7915">https://agenda.infn.it/getFile.py/access?contribId=17&sessionId=3&resId=0&materialId=slides&confId=7915</a>
<br>
<blockquote
cite="mid:CAPkQhnd5gd5dc9z_aVoF0tTxY62W-nayzDBjN6nsnd8hKxSfOw@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>Is it available on GitHub?</div>
</div>
</blockquote>
On slide 18 you can find the pointer to the FairShareScheduler.<br>
<br>
Cheers,<br>
Eric.<br>
<br>
<blockquote
cite="mid:CAPkQhnd5gd5dc9z_aVoF0tTxY62W-nayzDBjN6nsnd8hKxSfOw@mail.gmail.com"
type="cite">
<div dir="ltr">
<div><br>
</div>
<div>Belmiro</div>
<div><br>
</div>
<div>
<p class="MsoNormal"
style="font-family:arial,sans-serif;font-size:13px">
----------------------------------</p>
<p class="MsoNormal"
style="font-family:arial,sans-serif;font-size:13px">Belmiro
Moreira</p>
<p class="MsoNormal"
style="font-family:arial,sans-serif;font-size:13px"><span
class="">CERN</span></p>
<p class="MsoNormal"
style="font-family:arial,sans-serif;font-size:13px">Email: <a
moz-do-not-send="true"
href="mailto:belmiro.moreira@cern.ch" target="_blank">belmiro.moreira@<span
class="">cern</span>.ch</a></p>
<p class="MsoNormal"
style="font-family:arial,sans-serif;font-size:13px">
IRC: belmoreira</p>
</div>
</div>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">On Mon, Jun 30, 2014 at 4:05 PM, Eric
Frizziero <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:eric.frizziero@pd.infn.it" target="_blank">eric.frizziero@pd.infn.it</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">Hi All,<br>
<br>
we have analyzed the nova-scheduler component
(FilterScheduler) of our OpenStack installation, which is used by
several scientific teams.<br>
<br>
In our scenario, the cloud resources need to be distributed
among the teams by considering the predefined share (e.g.
quota) assigned to each team, the portion of the resources
currently in use, and the resources already consumed in the past.<br>
<br>
We have observed that:<br>
1) user requests are processed sequentially (FIFO
scheduling), i.e. FilterScheduler doesn't provide any
dynamic priority algorithm;<br>
2) user requests that cannot be satisfied (e.g. because
resources are not available) fail and are lost, i.e. in that
scenario nova-scheduler doesn't provide any queuing of the
requests;<br>
3) OpenStack provides only a static partitioning of
resources among the various projects/teams (via quotas). If
project/team 1 systematically underutilizes its quota over a
period while project/team 2 systematically saturates its own,
the only way to give more resources to project/team 2 is a
manual change of the related quotas by the admin (see the
example just below).<br>
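<br>
For reference, the manual rebalancing mentioned in point 3 boils
down to something like the following nova commands (the tenant IDs
and the new limits are placeholders):<br>
<pre>
# shrink the quota of the underutilizing project, enlarge the saturated one
nova quota-update --cores 40  --instances 20 TEAM1_TENANT_ID
nova quota-update --cores 120 --instances 60 TEAM2_TENANT_ID
</pre>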
<br>
The need for a better approach that enables more effective
scheduling in OpenStack becomes more and more evident as the
number of user requests to be handled increases significantly.
This is a well-known problem which has already been solved in
the past by batch systems.<br>
<br>
In order to solve those issues in our OpenStack usage scenario,
we have developed a prototype of a pluggable scheduler, named
FairShareScheduler, whose objective is to extend the existing
OpenStack scheduler (FilterScheduler) by integrating a
(batch-like) dynamic priority algorithm.<br>
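<br>
Being pluggable, it is enabled simply by pointing the scheduler
driver option in nova.conf at the new class (the class path below
is only indicative of how our prototype is packaged):<br>
<pre>
# /etc/nova/nova.conf, [DEFAULT] section
# default driver shipped with nova:
#   scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
# with the prototype installed (indicative class path):
scheduler_driver = nova.scheduler.fairshare_scheduler.FairShareScheduler
</pre>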
<br>
The architecture of the FairShareScheduler is explicitly
designed for a high level of scalability. Every user request is
assigned a priority value calculated by considering the share
allocated to the user by the administrator and the effective
resource usage consumed in the recent past. All requests are
inserted in a priority queue and processed in parallel by a
configurable pool of workers without violating the priority
order. Moreover, all significant information (e.g. the priority
queue) is stored in a persistence layer in order to provide a
fault-tolerance mechanism, while a proper logging system records
all relevant events, useful for auditing.<br>
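<br>
To give an idea of the calculation involved, the priority follows
the classic batch-system fair-share pattern, roughly along the
lines of the sketch below (simplified, not the actual prototype
code; the weights and the decay applied to past usage are
configuration details omitted here):<br>
<pre>
import math

# Simplified sketch only (not the FairShareScheduler source): a batch-style
# fair-share factor combined with an aging bonus for queued requests.
#   share         - fraction of the cloud assigned to the user/project
#   decayed_usage - its normalized resource consumption in the recent past
#   age_minutes   - how long the request has been waiting in the queue
def fairshare_priority(share, decayed_usage, age_minutes,
                       fairshare_weight=1000.0, age_weight=1.0):
    if share == 0:
        return 0.0
    # 1.0 when nothing has been consumed, 0.5 when exactly the assigned
    # share has been consumed, approaching 0 for heavy over-consumption.
    fairshare_factor = math.pow(2.0, -decayed_usage / share)
    # a small aging term prevents starvation of long-queued requests
    return fairshare_weight * fairshare_factor + age_weight * age_minutes
</pre>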
<br>
In more detail, some features of the FairShareScheduler are:<br>
a) it dynamically assigns the proper priority to every new user
request;<br>
b) the priority of the queued requests is recalculated
periodically using the fair-share algorithm. This feature
guarantees that the usage of the cloud resources is distributed
among users and groups by considering the portion of the cloud
resources allocated to them (i.e. their share) and the resources
they have already consumed;<br>
c) all user requests are inserted in a (persistent) priority
queue and then processed asynchronously by a dedicated process
(filtering + weighting phase) when compute resources are
available;<br>
d) from the client's point of view, the queued requests remain
in the “Scheduling” state until compute resources become
available. No new states are added: this prevents any possible
interaction issues with the OpenStack clients;<br>
e) user requests are dequeued by a configurable pool of
WorkerThreads, i.e. there is no sequential processing of the
requests;<br>
f) requests that fail the filtering + weighting phase may be
re-inserted in the queue up to n times (configurable); a sketch
of points c), e) and f) follows right below.<br>
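<br>
The following toy sketch (again, not the prototype code)
illustrates the mechanics of points c), e) and f): a shared
priority queue drained by a pool of worker threads, with failed
requests re-queued a configurable number of times. The
persistence layer and the real filtering + weighting phase are
replaced by placeholders:<br>
<pre>
import heapq
import itertools
import threading
import time

class NoValidHost(Exception):
    """Placeholder for the scheduler error raised when no compute
    resources are currently available."""

def filter_and_weigh(request):
    """Placeholder for the real filtering + weighting phase."""
    print('scheduling request %s' % request['id'])

class FairShareQueue(object):
    def __init__(self, workers=4, max_retries=3):
        self._heap = []                # entries are (-priority, seq, request)
        self._cond = threading.Condition()
        self._seq = itertools.count()  # tie-breaker for equal priorities
        self.max_retries = max_retries
        for _ in range(workers):       # configurable pool of WorkerThreads
            t = threading.Thread(target=self._worker)
            t.daemon = True
            t.start()

    def put(self, priority, request):
        with self._cond:
            heapq.heappush(self._heap, (-priority, next(self._seq), request))
            self._cond.notify()

    def _worker(self):
        while True:
            with self._cond:
                while not self._heap:
                    self._cond.wait()
                neg_priority, _, request = heapq.heappop(self._heap)
            try:
                filter_and_weigh(request)
            except NoValidHost:
                retries = request.get('retries', 0) + 1
                request['retries'] = retries
                if retries > self.max_retries:
                    continue           # give up after n attempts
                self.put(-neg_priority, request)  # re-queue the request

if __name__ == '__main__':
    q = FairShareQueue(workers=8, max_retries=3)
    q.put(812.5, {'id': 'req-001'})    # priority from the fair-share formula
    time.sleep(0.5)                    # let the daemon workers drain the queue
</pre>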
<br>
We have integrated the FairShareScheduler in our OpenStack
installation (Havana release) and we're now working to adapt it
to the new Icehouse release.<br>
<br>
Does anyone have experience with the issues found in our
cloud scenario?<br>
<br>
Could the FairShareScheduler be useful for the OpenStack
community?<br>
If so, we'll be happy to share our work.<br>
<br>
Any feedback/comment is welcome!<br>
<br>
Cheers,<br>
Eric.<br>
<br>
<br>
</blockquote>
</div>
<br>
</div>
<br>
<br>
<pre wrap="">_______________________________________________
OpenStack-dev mailing list
<a class="moz-txt-link-abbreviated" href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a>
<a class="moz-txt-link-freetext" href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a>
</pre>
</blockquote>
<br>
</body>
</html>