$subject_val = "Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2forcing all MPI traffic over Ethernet instead of using Infiniband"; include("../../include/msg-header.inc"); ?>
Subject: Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband
From: Jim Kress (jimkress_58_at_[hidden])
Date: 2009-06-23 15:31:56
OK. I'll try that, too.
Also,
> BTW: did you set that mpi_show_mca_params option to ensure
> the app is actually seeing these params?
I'm working to get to a point where I can find some time to try that.
Hopefully it will be before 5PM EDT.
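(For reference, my understanding is that this can go either on the
mpirun command line or in the same conf file, something like

    mpirun --mca mpi_show_mca_params all -np 8 ./my_app

or, equivalently, the line

    mpi_show_mca_params=all

in openmpi-mca-params.conf; the process count and executable above are
just placeholders for my actual ORCA job.)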
Jim
> -----Original Message-----
> From: users-bounces_at_[hidden]
> [mailto:users-bounces_at_[hidden]] On Behalf Of Ralph Castain
> Sent: Tuesday, June 23, 2009 2:43 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] 50% performance reduction due to
> OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead
> of using Infiniband
>
> Assuming you aren't oversubscribing your nodes, set
> mpi_paffinity_alone=1.
>
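> To be concrete (a minimal sketch, assuming the job is launched with
> mpirun; the process count and executable are placeholders):
>
>     mpirun --mca mpi_paffinity_alone 1 -np 8 ./my_app
>
> or just put mpi_paffinity_alone=1 in your openmpi-mca-params.conf.
>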
> BTW: did you set that mpi_show_mca_params option to ensure
> the app is actually seeing these params?
>
>
>
> On Tue, Jun 23, 2009 at 12:35 PM, Jim Kress
> <jimkress_58_at_[hidden]> wrote:
>
>
> I assume you are referring to the openmpi-mca-params.conf file.
>
> As I indicated previously, my first run was with the line
>
> btl=self,openib
>
> as the only entry in the openmpi-mca-params.conf file. This is my
> default setting; it is what I used, and it worked well, for v 1.2.8.
>
> Then I tried
>
> btl=self,openib
> mpi_yield_when_idle=0
>
> as the only entries in the openmpi-mca-params.conf file. No
> difference in the results.
>
> Then I tried
>
> btl=self,openib
> mpi_yield_when_idle=0
>
> as the only entries in the openmpi-mca-params.conf file, and also set
> the environment variable OMPI_MCA_mpi_leave_pinned=0. No difference in
> the results.
>
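> (To be precise, the variable was exported in the shell before the run,
> e.g. under bash:
>
>     export OMPI_MCA_mpi_leave_pinned=0
>
> the OMPI_MCA_ prefix being the usual way to hand an MCA parameter to
> Open MPI through the environment.)
>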
> What else can I provide?
>
> By the way, did you read the message where I retracted
> my assumption about
> MPI traffic being forced over Ethernet?
>
> Jim
>
>
> > -----Original Message-----
> > From: users-bounces_at_[hidden]
> > [mailto:users-bounces_at_[hidden]] On Behalf Of
> > Pavel Shamis (Pasha)
> > Sent: Tuesday, June 23, 2009 7:24 AM
> > To: Open MPI Users
> > Subject: Re: [OMPI users] 50% performance reduction due to
> > OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead
> > of using Infiniband
> >
> > Jim,
> > Can you please share with us your MCA conf file?
> >
> > Pasha.
> > Jim Kress ORG wrote:
> > > When the app I am using, ORCA (a quantum chemistry program), was
> > > compiled using openMPI 1.2.8 and run under 1.2.8 with the following
> > > in the openmpi-mca-params.conf file:
> > >
> > > btl=self,openib
> > >
> > > it ran fine, with no traffic over my Ethernet network and all
> > > traffic over my Infiniband network.
> > >
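> > > (For clarity, that single conf-file line,
> > >
> > >     btl=self,openib
> > >
> > > is meant, as I understand it, to limit the point-to-point transports
> > > to the "self" loopback component and the openib (InfiniBand) BTL, so
> > > the tcp component should not be used at all.)
> > >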
> > > However, now that ORCA has been recompiled with openMPI v1.3.2 and
> > > run under 1.3.2 (using the same openmpi-mca-params.conf file), the
> > > performance has been reduced by 50% and all the MPI traffic is going
> > > over the Ethernet network.
> > >
> > > As a matter of fact, the openMPI v1.3.2 performance now looks
> > > exactly like the performance I get if I use MPICH 1.2.7.
> > >
> > > Anyone have any ideas:
> > >
> > > 1) How could this have happened?
> > >
> > > 2) How can I fix it?
> > >
> > > A 50% reduction in performance is just not acceptable.
> > > Ideas/suggestions would be appreciated.
> > >
> > > Jim
> > >