Frequently Asked Questions
This FAQ is for Open MPI v4.x and earlier. If you are looking for documentation for Open MPI v5.x and later, please visit docs.open-mpi.org.
FAQ categories:
Rollup of ALL FAQ categories and questions

General information about the Open MPI Project
  - General information about the Open MPI Project
  - What kinds of systems / networks / run-time environments does Open MPI support?
  - Contributing to the Open MPI project
  - Developer-level technical information on the internals of Open MPI
  - System administrator-level technical information about Open MPI
  - Fault tolerance for parallel MPI jobs

Building
  - Building Open MPI
  - Removed MPI constructs
  - Compiling MPI applications

Running Jobs with Open MPI
  - Running MPI jobs
  - Troubleshooting building and running MPI jobs
  - Debugging applications in parallel
  - Running jobs under rsh/ssh
  - Running jobs under BProc
  - Running jobs under Torque / PBS Pro
  - Running jobs under Slurm
  - Running jobs under SGE
  - Running on large clusters

Tuning
  - General run-time tuning
  - Tuning the run-time characteristics of MPI shared memory communications
  - Tuning the run-time characteristics of MPI TCP communications
  - Tuning the run-time characteristics of MPI InfiniBand, RoCE, and iWARP communications
  - Tuning the run-time characteristics of MPI Omni-Path communications
  - Performance analysis tools
  - Tuning the OMPIO parallel I/O component
  - Tuning the run-time characteristics of MPI UDAPL communications
  - Tuning the run-time characteristics of MPI Myrinet communications

Platform Specific Questions
  - OS X
  - AIX (unsupported)

Contributed Software
  - VampirTrace Integration

Languages
  - Java

CUDA-aware Support
  - Building CUDA-aware Open MPI
  - Running CUDA-aware Open MPI
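As a quick illustration of what the "Compiling MPI applications" and "Running MPI jobs" categories above cover, a minimal MPI "hello world" program in C can be built with Open MPI's mpicc wrapper compiler and launched with mpirun. This is a sketch only; the file name and process count are example values, not anything the FAQ prescribes.

  /* hello.c -- minimal MPI example (illustrative sketch) */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[]) {
      int rank, size;
      MPI_Init(&argc, &argv);                /* start the MPI runtime */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
      MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */
      printf("Hello from rank %d of %d\n", rank, size);
      MPI_Finalize();                        /* shut down the MPI runtime */
      return 0;
  }

Typical build and launch commands:

  mpicc hello.c -o hello     # compile with the Open MPI wrapper compiler
  mpirun -np 4 ./hello       # launch 4 processes (example count)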
Open MPI is an Associated Project of the Software in the Public Interest non-profit organization.