[openstack-hpc] CPU intensive apps on OpenStack

Adam Huffman adam.huffman at gmail.com
Sun Apr 19 06:44:08 UTC 2015


May I humbly ask that good notes be taken for those of us strongly
interested but unable to attend?

Cheers,
Adam

On Sat, Apr 18, 2015 at 3:18 PM, Tim Bell <Tim.Bell at cern.ch> wrote:
>> -----Original Message-----
>> From: Blair Bethwaite [mailto:blair.bethwaite at gmail.com]
>> Sent: 17 April 2015 07:36
>> To: openstack-hpc at lists.openstack.org; Tim Bell
>> Subject: Re: [openstack-hpc] CPU intensive apps on OpenStack
>>
>> Hi Tim,
>>
>> What's your reference on disabling EPT? I can find some fairly old stuff on this,
>> but I thought newer processors had improved performance here to lower page
>> fault overheads... so I guess I'm wondering if you've seen a recent evaluation
>> somewhere?
>>
>
> In High Energy Physics, we have a subset of the SPEC CPU2006 benchmark which has been found to scale similarly to our application code (and is much simpler to run). The benchmark is run 'n' times in parallel, where 'n' is the number of cores in the system, since our applications are high-throughput rather than individually high-performance. With this application set running on a 2-CPU box with Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz and hyperthreading on, we get the following:
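[A minimal sketch of the "n copies in parallel" throughput measurement described above. `run_bench` is a placeholder for the real benchmark binary (the HEP setup uses a SPEC CPU2006 subset); the aggregate score is summed over the copies.]

```shell
#!/bin/sh
# One benchmark copy per (hyper)threaded core, run concurrently.
run_bench() { echo "copy $1 done"; }   # placeholder for the real workload

n=$(nproc)                             # core count, hyperthreads included
for i in $(seq 1 "$n"); do
    run_bench "$i" > "bench_$i.log" 2>&1 &
done
wait                                   # all copies finished; sum their scores
echo "ran $n parallel copies"
```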
>
> - bare metal (CentOS 7, 3.10 kernel): 366
> - single VM, EPT on (CentOS 6 guest on KVM on CentOS 7): 297
> - single VM, EPT off (CentOS 6 guest on KVM on CentOS 7): 316
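[For reference, a sketch of how EPT can be inspected and toggled on a KVM host for this kind of comparison; the module reload requires all guests on the host to be stopped first.]

```shell
# Check whether KVM is currently using EPT (Extended Page Tables):
cat /sys/module/kvm_intel/parameters/ept    # prints Y or N

# To benchmark with EPT off, reload kvm_intel with ept=0, which forces
# shadow paging instead (stop all VMs on the host before doing this):
#   modprobe -r kvm_intel
#   modprobe kvm_intel ept=0
```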
>
> I also had suspected that this was a solved problem.
>
> We're working through the options (NUMA, pinning, huge pages etc.). Some of these we can already do with OpenStack directly through flavour/image flags (or are coming in Juno).
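[An illustrative sketch of the flavor extra specs being referred to; the flavor name is hypothetical, and which keys are honoured depends on the release (guest NUMA topology arrived with the Juno NFV work, dedicated pinning and huge pages somewhat later).]

```shell
# Expose a 2-node NUMA topology to guests of this flavor (Juno):
nova flavor-key hpc.large set hw:numa_nodes=2

# Later NFV-driven knobs for CPU pinning and huge pages:
nova flavor-key hpc.large set hw:cpu_policy=dedicated
nova flavor-key hpc.large set hw:mem_page_size=large
```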
>
>> Re. NUMA etc, it seems like the most important thing if you're running large
>> memory apps inside your guests is to make sure the guest is pinned (both CPU
>> and NUMA wise) and sees a NUMA topology that matches how it is pinned on
>> the host. I haven't had a chance to try it yet but I thought this was all now
>> possible in Juno. Beyond that you probably want the guest kernel to be NUMA
>> sassy as well, so it needs (IIRC) Linux 3.13+ to get the numa_balancing (nee
>> autonuma) goodies. I'm not sure if for such cases you might actually want to
>> disable numa balancing on the host or whether the pinning would effectively
>> remove any overhead there anyway...
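[The numa_balancing knob mentioned above is a sysctl on 3.13+ kernels; a sketch of how one might measure whether it still helps once guests are fully pinned.]

```shell
# Automatic NUMA balancing state on the host (1 = enabled):
cat /proc/sys/kernel/numa_balancing

# With guests pinned CPU- and memory-wise, host-side balancing has little
# left to do; to measure its residual overhead, disable it and re-run:
#   sysctl kernel.numa_balancing=0
```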
>
> This is the kind of experience I would like to share at the summit. In particular, how some standard recommendations from KVM tuning translate into their OpenStack equivalents, and whether the property settings on flavors/images coming from the NFV work are going to allow us all to explore the phase space without hand-editing the libvirt XML.
>
> My expectation is that some things will become default recommendations for all HPC/HTC use cases. Others would be candidates to iterate over, benchmarking your sample workload against the potential settings.
>
> I think we're getting to enough people for the meetup. It will be a smallish set but with a lot to discuss...
>
> Tim
>
>>
>> PS: +1 on the meetup suggestion
>>
>> --
>> Cheers,
>> ~Blairo
> _______________________________________________
> OpenStack-HPC mailing list
> OpenStack-HPC at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-hpc


