<div class="moz-cite-prefix">On 05/15/2013 08:08 AM, Reza
Bakhshayeshi wrote:<br>
</div>
> Hi,
>
> I want to run an MPI program across my instances. I've already done
> this on both a traditional cluster and a virtual cluster, so I'm
> fairly confident my installation is healthy.
> Unfortunately, I can't run it on a cluster of OpenStack instances.
> The MPI program is HPCC; it stops at the beginning of
> MPIRandomAccess.
>
> I would be grateful if anyone has had a similar experience or can
> suggest possible causes and solutions.
>
> Regards,
> Reza
Does it hang, or does it fail with an error? Please send along any
error output.
The HPCC RandomAccess test sizes its problem to half of the RAM
available across the whole system.
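For a rough sense of scale (my arithmetic, not numbers from your
setup): with 8 instances of 4 GB each, that is 32 GB total, so
RandomAccess will try to build a table of roughly 16 GB, i.e. about
2^31 64-bit words. If the hypervisor cannot actually back that much
guest memory, the run can stall right at startup, which matches where
you see it stop.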
I would make sure your memory overcommitment ratio is set to 1.
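If this is Nova, something like the following in nova.conf on your
compute/scheduler nodes should do it (option names as of Grizzly;
check the defaults for your release, since RAM is overcommitted 1.5x
out of the box):

    # /etc/nova/nova.conf -- disable RAM (and optionally CPU) overcommit
    ram_allocation_ratio = 1.0
    cpu_allocation_ratio = 1.0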
I would also disable hyperthreading and make sure you are running on
a power-of-two processor count.
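A quick way to check what an instance actually sees (assuming a Linux
guest):

    # "Thread(s) per core: 1" means hyperthreading is off or not exposed
    lscpu | grep -E 'Thread\(s\) per core|^CPU\(s\)'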
You can start by running the MPI test within a single instance on a
single host.
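For example (the path and rank count here are placeholders, adjust
for your build; hpcc reads hpccinf.txt from the working directory):

    # inside one instance: 4 local ranks, no inter-instance network involved
    cd /path/to/hpcc && mpirun -np 4 ./hpcc

If that passes but the multi-instance run still stalls, I would start
looking at the MPI traffic between instances rather than at HPCC
itself.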