[Openstack-operators] Perform MPI program on a cluster of OpenStack instances

Reza Bakhshayeshi reza.b2008 at gmail.com
Thu May 16 22:34:31 UTC 2013


Thanks, all, for your replies.
I'm now trying out your suggestions and will report back with the results
afterwards.
Any further suggestions are welcome :)


On 16 May 2013 20:56, Jacob Liberman <jliberma at redhat.com> wrote:

>  On 05/15/2013 08:08 AM, Reza Bakhshayeshi wrote:
>
>  Hi
>
>  I want to run an MPI program across the instances. I've already done
> this on traditional and virtual clusters, so I'm fairly confident my
> installation is healthy.
>  Unfortunately, I can't run it on a cluster of OpenStack instances.
>  My MPI program is HPCC, and it stops at the beginning of MPIRandomAccess.
>
>  I would be grateful if anyone has had a similar experience or can
> suggest possible causes and solutions.
>
>  Regards,
>  Reza
>
>
>  Does it hang or does it fail with an error? Please send along any errors.
>
> The HPCC RandomAccess test sizes its problem to half the available RAM
> across the whole system.
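>
> For example, with 4 instances of 8 GB each, that is 32 GB in total, so
> RandomAccess will try to build a table of roughly 16 GB spread across
> the cluster.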
>
> I would make sure your memory overcommitment ratio is set to 1.
>
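> A minimal sketch of that, assuming a Grizzly-era nova.conf (option
> names may vary between releases):
>
>     # /etc/nova/nova.conf on each compute node
>     ram_allocation_ratio=1.0
>     # optional: avoid oversubscribing vCPUs as well
>     cpu_allocation_ratio=1.0
>     # restart nova-compute and nova-scheduler to pick this up
>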
> I would also disable hyperthreading and make sure you are running on a
> power-of-2 processor count.
>
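> A quick way to verify both on the compute nodes (assuming a Linux host
> with util-linux installed):
>
>     lscpu | grep -E '^Thread|^Core|^Socket'  # Thread(s) per core should be 1
>     # if hyperthreading cannot be disabled in the BIOS, sibling threads
>     # can be taken offline instead, e.g.:
>     echo 0 > /sys/devices/system/cpu/cpu7/online
>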
> You can start by running the MPI test within a single instance on a single
> host.
>
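> Something along these lines, assuming Open MPI and an hpcc binary baked
> into the image (rank counts and paths are illustrative):
>
>     # inside one instance, with a power-of-2 rank count;
>     # hpccinf.txt must be in the working directory
>     mpirun -np 4 ./hpcc
>     # once that passes, scale out across instances with a hostfile
>     mpirun -np 8 --hostfile hosts ./hpcc
>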
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>