Subject: Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband
From: Noam Bernstein (noam.bernstein_at_[hidden])
Date: 2009-06-24 09:37:35


On Jun 23, 2009, at 6:19 PM, Gus Correa wrote:

> Hi Jim, list
>
> On my OpenMPI 1.3.2 ompi_info -config gives:
>
> Wrapper extra LIBS: -lrdmacm -libverbs -ltorque -lnuma -ldl -Wl,--export-dynamic -lnsl -lutil -lm -ldl
>
> Yours doesn't seem to have the IB libraries: -lrdmacm -libverbs
>
> So, I would guess your OpenMPI 1.3.2 build doesn't have IB support.

The second of these statements doesn't follow from the first: OpenMPI normally builds its InfiniBand support as an MCA plugin that's dlopen'd at runtime, so -lrdmacm and -libverbs need not show up on the wrapper link line at all.
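If the components were built as dynamic plugins (the default), a more direct check is to look for the openib component on disk; assuming an installation prefix of /usr/local:

    ls /usr/local/lib/openmpi/mca_btl_openib.so

If that file is there, IB support was built even though the wrapper link line never mentions the verbs libraries.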

My "ompi_info -config" returns

ompi_info -config | grep LIBS
               Build LIBS: -lnsl -lutil -lm
       Wrapper extra LIBS: -ldl -Wl,--export-dynamic -lnsl -lutil -lm -ldl

But it does have the openib BTL:

ompi_info | grep openib
                  MCA btl: openib (MCA v2.0, API v2.0, Component v1.3.2)
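To confirm it's actually being used at runtime, not just built, you can force the openib BTL (hypothetical command line; substitute your own executable and host names):

    mpirun --mca btl openib,self -np 2 -host node1,node2 ./osu_bibw

With TCP left out of the BTL list, a run between two nodes will abort if openib can't be brought up, rather than silently falling back to Ethernet.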

and osu_bibw returns

# OSU MPI Bi-Directional Bandwidth Test v3.0
# Size Bi-Bandwidth (MB/s)
4194304 1717.43

which it's certainly not getting over Ethernet: gigabit Ethernet tops out around 125 MB/s in each direction, nowhere near 1717 MB/s. I think Jeff Squyres' test (ompi_info | grep openib) is the more definitive one.
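The complementary runtime check (again a hypothetical invocation) is to disable only the TCP BTL and re-run the benchmark:

    mpirun --mca btl ^tcp -np 2 ./osu_bibw

If the bandwidth stays in IB territory, the traffic can't be going over Ethernet.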

                                                                                Noam