Subject: Re: [OMPI users] Exit Program Without Calling MPI_Finalize ForSpecial Case
From: Ralph Castain (rhc_at_[hidden])
Date: 2009-06-04 18:01:12
If it helps, note that Open MPI already includes hooks (and we just
added some more) to support this area of research. Open MPI does -not-
kill your job when a process dies or leaves without calling
MPI_Finalize. What it actually does is call an Error Manager (denoted
as "errmgr") in the underlying RTE, which then decides what action to
take in response to that event.
It is true that the default errmgr which ships with Open MPI releases
kills the entire job, but that is by no means a requirement - it is
simply the default. We deliberately designed the errmgr to be an MCA
framework for exactly this reason - to allow anyone to write their own
errmgr component and experiment with alternative fault responses.
You currently have two options you can pursue:
1. if you want to use 1.2.8 or 1.3.2 (the latter is a superior
platform), you can write your own errmgr component and use it. Look at
the orte/mca/errmgr directory and you will see a "base" that contains
some common functions for startup, and a "default" that contains the
default errmgr component. Either add your own component (see the Open
MPI home page for a detailed writeup on how to do this), or modify the
default component to suit your needs.
2. if you want to use the developer's trunk, additional capabilities
to support FT research were just added to it. In particular, we
implemented an ability to register a callback function in the errmgr
so that an application can receive a callback when a specified type of
error occurs - and can then take whatever action it desires. Second,
we added a new "resilient mapper" component that automatically re-maps
failed processes to other available nodes, and then restarts them. You
could use these, for example, to write your own version of a "fault
tolerant mpiexec" - an example of how to do this will be added to the
developer's trunk over the weekend.
Note that, in either case, you will still have to deal with all the
MPI issues mentioned by Dick - all OMPI does for you is provide an
infrastructure so that you don't have to do all the nitty-gritty stuff
of mapping process locations, launching the procs, detecting errors,
etc.
Instead, you get to do the "simple" things, like figure out how to
deal with failures in the middle of a collective! :-)
HTH
Ralph
On Jun 4, 2009, at 7:20 AM, Richard Treumann wrote:
> Tee Wen Kai -
>
> You asked "Just to find out more about the consequences for exiting
> MPI processes without calling MPI_Finalize, will it cause memory
> leak or other fatal problem?"
>
> Be aware that Jeff has offered you an Open MPI implementation-oriented
> answer rather than an MPI-standard-oriented answer.
>
> When there is a communicator involving 2 or more tasks and any task
> involved in that communicator goes down, all other tasks that are
> members of that communicator enter a state the MPI standard says
> cannot be trusted. It is legitimate for the process that manages an
> MPI job as a single entity to recognize that the loss of a member
> task has made the state of all connected tasks untrustworthy and
> bring down all previously connected tasks too.
>
> When you use MPI_Comm_spawn, one result is an intercommunicator
> connecting the task that did the spawn to the task(s) that were
> spawned so the two sides are "connected". If you intend to use MPI
> to communicate between the spawn caller and the spawned tasks they
> must remain connected. You can explicitly disconnect them, and then a
> failure of the spawned task is harmless to the task that spawned it,
> but doing the disconnect costs you the communication path.
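> 
> A minimal sketch of that pattern in C (the program name "worker" and
> the process count are made up for illustration, not taken from this
> thread):
> 
>     #include <mpi.h>
> 
>     int main(int argc, char **argv)
>     {
>         MPI_Comm intercomm;
> 
>         MPI_Init(&argc, &argv);
> 
>         /* Spawn two copies of "worker"; the resulting intercommunicator
>            connects the parent to the children, so the two sides are
>            now "connected" in the MPI sense. */
>         MPI_Comm_spawn("worker", MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
>                        MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);
> 
>         /* ... exchange messages with the workers over intercomm ... */
> 
>         /* Explicitly sever the connection: after this, a failure of a
>            worker no longer affects the parent, but the communication
>            path through intercomm is gone. */
>         MPI_Comm_disconnect(&intercomm);
> 
>         MPI_Finalize();
>         return 0;
>     }
> 
> (MPI_Comm_disconnect is collective, so the workers would make the
> matching call on the intercommunicator they get from
> MPI_Comm_get_parent.)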
>
> The MPI standard does not require that connected tasks be brought
> down but it is a valid MPI implementation behavior. This makes some
> sense when you consider the fact that there is no MPI mechanism by
> which the other tasks can see that the communicator involving the
> lost task is now broken and there is no way a collective
> communication can work "correctly" on a communicator that has lost a
> member task.
>
> For example, what would it mean to call MPI_Reduce on MPI_COMM_WORLD
> when a member of MPI_COMM_WORLD has been lost (especially if it is
> the root that was lost)? If you had an MPI application that computed
> for hours between the loss of one task and the next collective call
> on MPI_COMM_WORLD, would you prefer to pay for hours of computation
> and then deadlock at the collective call, or just abort ASAP after
> the job is recognizably broken?
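> 
> A toy program (not from this thread) that makes the failure mode
> concrete: rank 1 leaves without calling MPI_Finalize, and the
> surviving ranks then reach a collective call.
> 
>     #include <mpi.h>
>     #include <stdlib.h>
> 
>     int main(int argc, char **argv)
>     {
>         int rank;
>         double local = 1.0, sum = 0.0;
> 
>         MPI_Init(&argc, &argv);
>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> 
>         /* Simulate a lost member: rank 1 exits without MPI_Finalize. */
>         if (rank == 1)
>             exit(0);
> 
>         /* With Open MPI's default errmgr the whole job is killed once
>            the loss is detected; under a "keep running" policy the
>            survivors would block here forever, because MPI_Reduce needs
>            every rank of MPI_COMM_WORLD to participate. */
>         MPI_Reduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, 0,
>                    MPI_COMM_WORLD);
> 
>         MPI_Finalize();
>         return 0;
>     }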
>
> There is a fault tolerance working group trying to define something
> for MPI 3.0 but at this stage they are still trying to work out a
> proposal to bring before the MPI Forum. You might be interested in
> getting involved in that effort. They try to address questions like:
> - how would a task know it should not make collective calls on the
> broken communicator?
> - should the communicator still support point to point
> communications with remaining tasks?
> - If a task has posted a receive and the expected sender is then
> lost, how should the posted receive act?
> - is there a clean way to "repair" the broken communicator by
> spawning a replacement task?
> - is there a clean way to "shrink" the broken communicator
>
> The Fault Tolerance Working Group has taken on a very tough problem.
> The list above is just a tiny sample of the challenges in making MPI
> fault tolerant.
>
> Dick
>
>
> Dick Treumann - MPI Team
> IBM Systems & Technology Group
> Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
> Tele (845) 433-7846 Fax (845) 433-8363
>
>
> From: Jeff Squyres <jsquyres_at_[hidden]>
> To: "Open MPI Users" <users_at_[hidden]>
> Date: 06/04/2009 07:32 AM
> Subject: Re: [OMPI users] Exit Program Without Calling MPI_Finalize ForSpecial Case
> Sent by: users-bounces_at_[hidden]
>
>
>
> On Jun 4, 2009, at 2:16 AM, Tee Wen Kai wrote:
>
> > Just to find out more about the consequences for exiting MPI
> > processes without calling MPI_Finalize, will it cause memory leak or
> > other fatal problem?
>
> If you're exiting the process, you won't cause any kind of problems --
> the OS will clean up everything.
>
> However, we might also have the orted clean up some things when MPI
> processes unexpectedly die (e.g., filesystem temporary files in
> /tmp). So you might want to leave those around to clean themselves up
> and die naturally.
>
> --
> Jeff Squyres
> Cisco Systems
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users