$subject_val = "Re: [OMPI users] Openmpi and processor affinity"; include("../../include/msg-header.inc"); ?>
Subject: Re: [OMPI users] Openmpi and processor affinity
From: JACOB_LIBERMAN_at_[hidden]
Date: 2009-06-03 11:31:22
Hi Jeff,
Yes, this technique is particularly helpful for multi-threaded HPL and works consistently across the various MPIs I test.
Thanks, jacob
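
For concreteness, here is a minimal sketch of the kind of numactl wrapper script described in the quoted message below. The variable names, the two-socket node layout, and the use of OMPI_COMM_WORLD_LOCAL_RANK to derive a per-node rank are illustrative assumptions, not the script actually used:

#!/bin/bash
# numa_wrap.sh -- launched as: mpirun -np <N> ./numa_wrap.sh ./xhpl <args>
# Assumes a two-socket node with one NUMA node per socket and an MPI that
# exports a per-node rank (OMPI_COMM_WORLD_LOCAL_RANK is an Open MPI
# variable used here as an assumption; substitute whatever your MPI provides).

local_rank=${OMPI_COMM_WORLD_LOCAL_RANK:-0}

# Put even local ranks on NUMA node 0 and odd local ranks on NUMA node 1,
# binding CPUs and memory to the same node so each process and its threads
# allocate from the local memory controller.
cpu_bind=$(( local_rank % 2 ))
mem_bind=$cpu_bind

# Final line mirrors the one quoted below: bind, then run the real program.
exec numactl --cpunodebind=$cpu_bind --membind=$mem_bind "$@"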
> -----Original Message-----
> From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On
> Behalf Of Jeff Squyres
> Sent: Wednesday, June 03, 2009 10:27 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Openmpi and processor affinity
>
> On Jun 3, 2009, at 10:48 AM, <JACOB_LIBERMAN_at_[hidden]> wrote:
>
> > For HPL, try writing a bash script that pins processes to their
> > local memory controllers using numactl before kicking off HPL. This
> > is particularly helpful when spawning more than 1 thread per
> > process. The last line of your script should look like "numactl -c
> > $cpu_bind -m $mem_bind $*".
> >
> > Believe it or not, I hit 94.5% HPL efficiency using this tactic on a
> > 16-node cluster. Using processor affinity (various MPIs) my results
> > were inconsistent and ranged from 88% to 93%.
> >
>
> If you're using multi-threaded HPL, that might be useful. But if
> you're not, I'd be surprised if you got any different results than
> Open MPI binding itself. If there really is a difference, we should
> figure out why. More specifically, calling numactl yourself should be
> pretty much exactly what we do in OMPI (via API, not via calling
> numactl).
>
> --
> Jeff Squyres
> Cisco Systems
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
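
For comparison with the wrapper approach, the binding Jeff describes Open MPI doing internally can be requested directly at launch time. A minimal sketch, assuming an Open MPI 1.3-era installation where the mpi_paffinity_alone MCA parameter is available (later releases expose mpirun options such as --bind-to-core instead):

# Let Open MPI bind each rank itself; no numactl wrapper involved.
mpirun --mca mpi_paffinity_alone 1 -np 16 ./xhpl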