[Openstack-operators] Perform MPI program on a cluster of OpenStack instances

Brian Schott brian.schott at nimbisservices.com
Wed May 15 14:46:35 UTC 2013


I saw some old references to MPI_Test running out of memory due to fragmentation, if I read the post correctly.  They had to tune the btl_openib_free_list_max MCA parameter.
http://www.open-mpi.org/community/lists/users/2008/05/5601.php
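
If that's the same issue, the parameter can be set on the mpirun command line. A sketch only (the value, process count, and hostfile name here are placeholders, and this only applies if you're running over the openib BTL):

mpirun --mca btl_openib_free_list_max 2048 -np 2 -hostfile ./hosts2 ./hpcc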

On May 15, 2013, at 9:38 AM, Reza Bakhshayeshi <reza.b2008 at gmail.com> wrote:

> Thanks, Lorin, for your reply.
> 
> Yes, instances are able to ping/ssh each other.
> I've tried both Open MPI and MPICH; as you know, they use ssh to launch tasks.
> I'm even able to execute example programs; here is the output of MPICH's cpi on two instances:
> 
> localadmin@ubuntu-benchmark01:~$ mpiexec -f ./hosts2 -n 2 ./cpi
> Process 0 of 2 is on ubuntu-benchmark01
> Process 1 of 2 is on ubuntu-benchmark02
> pi is approximately 3.1415926544231318, Error is 0.0000000008333387
> wall clock time = 0.050358
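> 
> (hosts2 is just a machine file with the instance hostnames, one per line; a sketch, assuming both names resolve from each instance:
> ubuntu-benchmark01
> ubuntu-benchmark02
> )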
> 
> but HPCC is much more complex than these.
> I can run HPCC on a single instance.
> I've set everything up similar to my virtual cluster, but here I think something is preventing the test:
> as soon as the hpcc process starts to execute on the second instance, it is killed unexpectedly.
> 
> 
> 
> On 15 May 2013 17:51, Lorin Hochstein <lorin at nimbisservices.com> wrote:
> Reza:
> 
> You should be able to run MPI programs across OpenStack instances as long as your setup is configured so that instances are allowed to communicate with each other over the fixed IPs, and your security group settings allow traffic across the ports that your MPI implementation is using. 
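> 
> For example, with the nova client (a sketch; the group name and CIDR are placeholders for your setup):
> 
> nova secgroup-add-rule default icmp -1 -1 10.0.0.0/24
> nova secgroup-add-rule default tcp 22 22 10.0.0.0/24
> nova secgroup-add-rule default tcp 1024 65535 10.0.0.0/24
> 
> MPI implementations typically pick ephemeral ports for their connections, hence the wide TCP range.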
> 
> (OpenStack doesn't provide a distributed file system, so if you need to do MPI-IO stuff you have to add the shared file system part yourself)
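> 
> A common quick fix there is NFS between the instances; a minimal sketch, with placeholder hostnames and path:
> 
> # on the exporting instance, in /etc/exports (then run: sudo exportfs -ra):
> /home/localadmin/shared 10.0.0.0/24(rw,sync,no_subtree_check)
> 
> # on each other instance:
> sudo mount ubuntu-benchmark01:/home/localadmin/shared /home/localadmin/shared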
> 
> Are you able to ping/ssh from one instance to another on the fixed IPs?
> 
> Lorin
> 
> 
> On Wed, May 15, 2013 at 9:08 AM, Reza Bakhshayeshi <reza.b2008 at gmail.com> wrote:
> Hi 
> 
> I want to run an MPI program across the instances. I've already done this on both a traditional and a virtual cluster, so I'm fairly confident my installation is healthy.
> Unfortunately, I can't run it on a cluster of OpenStack instances.
> My MPI program is HPCC; it stops at the beginning of MPIRandomAccess.
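> 
> For reference, I launch it the same way as any other MPI job (a sketch, assuming a hosts file like the one above; hpcc reads its hpccinf.txt input from the working directory):
> 
> mpiexec -f ./hosts2 -n 2 ./hpcc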
> 
> I would be grateful if anyone has had a similar experience or can suggest possible causes and solutions.
> 
> Regards,
> Reza
> 
> -- 
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
> 
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
