From: Mike Tsai (bibibobo_at_[hidden])
Date: 2007-05-09 20:21:43


Greetings everyone,

My name is Mike, and I recently downloaded OMPI v1.2.1 and decided to run
the OSU bandwidth benchmark. However, I noticed a few odd things during my
runs.

Btw, I am using FreeBSD 6.2.

The OSU bandwidth test basically pre-posts many MPI_Isend and MPI_Irecv
calls and tries to measure the maximum sustainable bandwidth.
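
For anyone not familiar with the benchmark, the inner loop is roughly the
following (a simplified sketch from memory, not the actual OSU source; the
real test varies the window size and iteration counts):

----------------------------------
/*
 * Simplified sketch of the osu_bw inner loop. Rank 0 pre-posts a window
 * of non-blocking sends, rank 1 a window of non-blocking receives; both
 * then wait for the whole window to complete. Window size, tags, and the
 * ack handshake are illustrative.
 */
#include <mpi.h>

#define WINDOW 64

static double run_bw(int rank, char *buf, int size, int iters)
{
    MPI_Request req[WINDOW];
    char ack = 0;
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            for (int w = 0; w < WINDOW; w++)
                MPI_Isend(buf, size, MPI_CHAR, 1, 100, MPI_COMM_WORLD, &req[w]);
            MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
            /* wait for the receiver's ack before starting the next window */
            MPI_Recv(&ack, 1, MPI_CHAR, 1, 101, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            for (int w = 0; w < WINDOW; w++)
                MPI_Irecv(buf, size, MPI_CHAR, 0, 100, MPI_COMM_WORLD, &req[w]);
            MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
            MPI_Send(&ack, 1, MPI_CHAR, 0, 101, MPI_COMM_WORLD);
        }
    }

    /* bandwidth in MB/s: total bytes moved divided by elapsed seconds */
    return (size / 1e6) * iters * WINDOW / (MPI_Wtime() - t0);
}
----------------------------------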

Here is some output (I didn't let the run finish, but it should be
sufficient to show the problem I am seeing):

Quick system info:
Two-node test (each node: Intel P4 Xeon 3.2 GHz, Hyper-Threading disabled,
1024 MB RAM).
Three 1-gigabit NICs per node, all Intel PRO/1000 (em driver): em0 and em2
are the private interfaces (10.1.x.x), while em1 is the public interface.

----------------------------------

[myct_at_netbed21 ~/mpich/osu_benchmarks]$ mpirun --mca btl_tcp_if_include em0
--hostfile ~/mpd.hosts.private --mca btl tcp,self --mca btl_tcp_sndbuf
233016 --mca btl_tcp_rcvbuf 233016 -np 2 ./osu_bw
# OSU MPI Bandwidth Test (Version 2.3)
# Size Bandwidth (MB/s)
1 0.12
2 0.26
4 0.53
8 1.06
16 2.12
32 4.22
64 8.26
128 14.61
256 28.06
512 51.27
1024 82.59
2048 102.21
4096 110.53
8192 114.58
16384 118.16
32768 120.71
65536 33.23
131072 41.75
262144 70.42
524288 82.96
^Cmpirun: killing job...

------------------------------

The rendezvous threshold is set to 64 KB by default.

It seems that as soon as the rendezvous protocol kicks in, performance
drops tremendously.
Btw, this is an out-of-the-box run; I have not tweaked anything except the
socket buffer sizes at runtime.
Is there something obvious that I am not doing correctly?
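
In case it matters, I believe the threshold I'm describing corresponds to
the btl_tcp_eager_limit MCA parameter (please correct me if I have that
wrong). It can be inspected and overridden at runtime, so if the drop is
really tied to the rendezvous switch, the drop-off point should move with
it, e.g.:

ompi_info --param btl tcp | grep eager
mpirun --mca btl_tcp_eager_limit 131072 [same options as above] ./osu_bw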

I have also attached the "ompi_info" output.

Thanks in advance for any help,

Mike