> Step 1: use flavors so nova can tell between the two workloads, and
> configure them differently
>
> Step 2: find capacity for your workload given your current cloud usage
>
> At the moment, most of our solutions involve reserving bits of your
> cloud capacity for different workloads, generally using host
> aggregates.
>
> The issue with claiming back capacity from other workloads is a bit
> trickier. The issue is I don't think you have defined where you get
> that capacity back from. Maybe you want to look at giving some
> workloads a higher priority over the constrained CPU resources? But
> you will probably starve the little people out at random, which seems
> bad. Maybe you want to have a concept of "spot instances" where they
> can use your "spare capacity" until you need it, and you can just kill
> them?
>
> But maybe I am misunderstanding your use case; it's not totally clear to me.

Yes, currently we can only reserve some hosts for particular workloads. But that "reservation" is done by an admin operation, not "on demand", as I understand it. Anyway, this is just speculation based on what I think Alexander's use case is. Or maybe I misunderstand Alexander?
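For context, the aggregate-based reservation John describes comes down to tagging a group of hosts with metadata and letting the scheduler match it against flavor extra specs. A simplified, illustrative sketch of that matching idea (toy code, not the actual Nova filter; the key names are made up):

# Toy illustration of the kind of matching that
# AggregateInstanceExtraSpecsFilter performs. The dict layout and key
# names are assumptions for illustration, not Nova's real internals.

def host_matches_flavor(aggregate_metadata, flavor_extra_specs):
    """A host passes only if every spec the flavor asks for is present,
    with the same value, in the metadata of the host's aggregate."""
    for key, wanted in flavor_extra_specs.items():
        if aggregate_metadata.get(key) != wanted:
            return False
    return True

# Hosts in the "hadoop" aggregate carry workload=hadoop, and only the
# Hadoop flavor requests that key, so other flavors land elsewhere.
hadoop_hosts_metadata = {'workload': 'hadoop'}
hadoop_flavor_specs = {'workload': 'hadoop'}
general_flavor_specs = {'workload': 'general'}

print(host_matches_flavor(hadoop_hosts_metadata, hadoop_flavor_specs))   # True
print(host_matches_flavor(hadoop_hosts_metadata, general_flavor_specs))  # False

In practice both the aggregate metadata and the flavor keys are set up by the admin, which is exactly why this is reservation by admin operation rather than on demand.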
It is interesting to see the development of the CPU entitlement blueprint that Alex mentioned. It was registered in Jan 2013. Any idea whether it is still going on?
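If I read the blueprint's idea correctly, the guarantee would be folded into the same vCPU accounting the CoreFilter already does, but taken per flavor instead of per host. Below is a purely hypothetical sketch of what that could look like; the extra-spec key name and the exact accounting are my assumptions, not the blueprint's actual design:

# Hypothetical sketch in the spirit of the cpu-entitlement blueprint.
# The 'cpu_allocation_ratio' extra-spec key and the way it is applied
# here are assumptions for illustration only.

DEFAULT_CPU_ALLOCATION_RATIO = 16.0  # Nova's default overcommit

def host_passes(host_vcpus, host_vcpus_used, flavor_vcpus, flavor_extra_specs):
    """Accept the host only if it can honour the ratio requested by the
    flavor (1.0 = fully dedicated cores, 16.0 = heavy overcommit)."""
    ratio = float(flavor_extra_specs.get('cpu_allocation_ratio',
                                         DEFAULT_CPU_ALLOCATION_RATIO))
    effective_total = host_vcpus * ratio
    return (effective_total - host_vcpus_used) >= flavor_vcpus

# A 16-core host with 10 vCPUs already placed on it:
print(host_passes(16, 10, 8, {'cpu_allocation_ratio': '1.0'}))  # False: wants dedicated cores
print(host_passes(16, 10, 8, {}))                               # True: default 16x overcommit

The appealing part is that flavors with different guarantees could then share the cloud, and potentially a host, without the admin having to carve out a separate aggregate first, which is what "without partitioning" would mean in practice.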
From: Alex Glikson [mailto:GLIKSON@il.ibm.com]
Sent: Thursday, 14 November 2013 16:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Configure overcommit policy

In fact, there is a blueprint which would enable supporting this scenario without partitioning -- https://blueprints.launchpad.net/nova/+spec/cpu-entitlement
The idea is to annotate flavors with CPU allocation guarantees, and enable differentiation between instances, potentially running on the same host.
The implementation augments the CoreFilter code to factor in the differentiation. Hopefully this will be out for review soon.

Regards,
Alex

From: John Garbutt <john@johngarbutt.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 14/11/2013 04:57 PM
Subject: Re: [openstack-dev] [nova] Configure overcommit policy

On 13 November 2013 14:51, Khanh-Toan Tran
<khanh-toan.tran@cloudwatt.com> wrote:
> Well, I don't know what John means by "modify the over-commit calculation in
> the scheduler", so I cannot comment.

I was talking about this code:
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/core_filter.py#L64

But I am not sure that's what you want.
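Roughly, what the filter does around that line is multiply the host's physical vCPU count by cpu_allocation_ratio and compare what is left against the request. A simplified paraphrase (not the verbatim source):

# Simplified paraphrase of CoreFilter's check (not the verbatim source).
# cpu_allocation_ratio is the scheduler's overcommit knob.

def core_filter_passes(host_vcpus_total, host_vcpus_used,
                       requested_vcpus, cpu_allocation_ratio=16.0):
    if not host_vcpus_total:
        # Hypervisor did not report vCPU data; don't filter the host out.
        return True
    vcpus_total = host_vcpus_total * cpu_allocation_ratio
    return (vcpus_total - host_vcpus_used) >= requested_vcpus

# With the default 16x ratio a fully-used 4-core host still "has room";
# with a ratio of 1.0 it stops accepting new vCPUs.
print(core_filter_passes(4, 4, 2, cpu_allocation_ratio=16.0))  # True
print(core_filter_passes(4, 4, 2, cpu_allocation_ratio=1.0))   # False

Since the ratio is a single configuration value rather than something per workload, tuning it does not by itself distinguish one workload from another, which is why the flavor/aggregate route comes up.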
> The idea of choosing a free host for Hadoop on the fly is rather complicated
> and involves several operations, namely: (1) ensuring the host never gets
> past 100% CPU load; (2) identifying a host that already has a Hadoop VM
> running on it, or already has 100% CPU commitment; (3) releasing the host from
> 100% CPU commitment once the Hadoop VM stops; (4) possibly preventing other
> applications from using the host (to economize the host's resources).
>
> - You'll need (1) because otherwise your Hadoop VM would come up short of
> resources after the host gets overloaded.
> - You'll need (2) because you don't want to restrict a new host while one of
> your 100% CPU committed hosts still has free resources.
> - You'll need (3) because otherwise your host would be forever restricted,
> and that is no longer "on the fly".
> - You may need (4) because otherwise it'd be a waste of resources.
>
> The problem with changing CPU overcommit on the fly is that while your Hadoop
> VM is still running, someone else can add another VM on the same host with a
> higher CPU overcommit (e.g. 200%), violating (1) and thus affecting your
> Hadoop VM as well.
> The idea of putting the host in the aggregate can give you (1) and (2). (4)
> is done by AggregateInstanceExtraSpecsFilter. However, it does not give you
> (3), which can be done with pCloud.

Step 1: use flavors so nova can tell between the two workloads, and
configure them differently

Step 2: find capacity for your workload given your current cloud usage

At the moment, most of our solutions involve reserving bits of your
cloud capacity for different workloads, generally using host
aggregates.

The issue with claiming back capacity from other workloads is a bit
trickier. The issue is I don't think you have defined where you get
that capacity back from. Maybe you want to look at giving some
workloads a higher priority over the constrained CPU resources? But
you will probably starve the little people out at random, which seems
bad. Maybe you want to have a concept of "spot instances" where they
can use your "spare capacity" until you need it, and you can just kill
them?

But maybe I am misunderstanding your use case; it's not totally clear to me.

John

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev