include("../../include/msg-header.inc"); ?>
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-05-01 09:58:35
For the moment, a possible workaround might be to use plain TCP
sockets (i.e., outside of MPI) to make the initial connection. That
way, you can just have your server blocking in accept().
After the TCP connection is made, use MPI_COMM_JOIN to create an
intercommunicator and proceed with normal MPI communication from there.
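
A minimal server-side sketch of that workaround (the single client
and port 5555 are assumptions for illustration; error checking is
omitted):

/* Server side: block in accept() -- the kernel puts the process to
 * sleep, so there is no CPU spin -- then hand the connected socket
 * to MPI_Comm_join to build an intercommunicator with the client. */
#include <mpi.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    struct sockaddr_in addr;
    MPI_Comm intercomm;
    int listenfd, fd;

    MPI_Init(&argc, &argv);

    listenfd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5555);   /* hypothetical port */
    bind(listenfd, (struct sockaddr *) &addr, sizeof(addr));
    listen(listenfd, 1);

    /* Blocks without polling until a client connects. */
    fd = accept(listenfd, NULL, NULL);

    /* The client calls MPI_Comm_join on its end of the same socket;
     * both calls return a two-process intercommunicator. */
    MPI_Comm_join(fd, &intercomm);

    /* ...normal MPI communication on intercomm from here on... */

    MPI_Comm_disconnect(&intercomm);
    close(fd);
    close(listenfd);
    MPI_Finalize();
    return 0;
}

The client side is symmetric: connect() to the server's address and
port, then call MPI_Comm_join on its end of the connected socket.
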
On Apr 28, 2007, at 1:07 PM, Nuno Sucena Almeida wrote:
> Hi Jeff,
>
> thanks for taking the time to answer this. I actually reached that
> conclusion after trying a simple MPI::Barrier() with both Open MPI and
> LAM/MPI, where both showed the same busy-wait behaviour.
> What I'm trying to achieve is a kind of calculation
> server, where clients can connect through an MPI::Intercomm to the
> server process with rank 0 and transfer data for it to compute on,
> but it seems wasteful to have the server group of processes running
> at 100% CPU while waiting for clients.
> It would be nice to be able to specify this behaviour; or
> would you suggest another approach?
>
> Cheers,
>
> Nuno
>
> On Fri, Apr 27, 2007 at 07:49:04PM -0400, Jeff Squyres wrote:
> | This is actually expected behavior. We make the assumption that MPI
> | processes are meant to exhibit as low a latency as possible, and
> | therefore use active polling for most message passing.
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
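
For contrast, here is a minimal sketch of the pure-MPI connection
pattern Nuno describes, using the MPI-2 dynamic process calls
MPI_Open_port and MPI_Comm_accept. With the polling progress engine
described above, the server spins at 100% CPU inside MPI_Comm_accept
while it waits for a client:

/* Server-side sketch: MPI_Comm_accept blocks until a client calls
 * MPI_Comm_connect with the same port string, but the wait is an
 * active poll rather than a kernel sleep. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Init(&argc, &argv);

    /* Get a port string clients can use (print it, or publish it
     * with MPI_Publish_name and a name server). */
    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("server port: %s\n", port_name);

    /* Rank 0 waits here for a client -- this is the busy wait. */
    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);

    /* ...receive data over the intercommunicator, compute, reply... */

    MPI_Comm_disconnect(&client);
    MPI_Close_port(port_name);
    MPI_Finalize();
    return 0;
}
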
-- 
Jeff Squyres
Cisco Systems