hadoop-mapreduce-issues mailing list archives

From "Aaron Kimball (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1119) When tasks fail to report status, show task's stack dump before killing
Date Tue, 10 Nov 2009 00:13:32 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12775200#action_12775200 ]

Aaron Kimball commented on MAPREDUCE-1119:
------------------------------------------

To address all three reviews above:

* I'll rename the constant to SIGQUIT_TASK_JVM; understandability overrides consistency. "CORE_DUMP_JVM"
is also a misnomer; if anything, the proper alternative would be "STACK_DUMP_JVM".
* I'll adapt the logic to go SIGQUIT, SIGKILL instead of SIGQUIT, SIGTERM, SIGKILL, reusing
the existing {{sleepTimeBeforeSigkill}} constant for the delay.
* {{ProcessTree}} and {{DefaultTaskController}} both contain a lot of duplicated code; if
people are okay with it, I'm +1 on condensing that logic. The initial version of this patch
didn't modify anything extraneous, but since the earlier versions of this patch make its intent
clear, I'll write a broader one that also cleans up the other code "in the neighborhood."
* I'll rename the current logic in {{finishTask()}} to {{sendSignal()}}; we can preserve a
method named {{finishTask()}} that specifically sends SIGKILL using {{sendSignal()}}.
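The escalation and renaming described above (SIGQUIT for a stack dump, a {{sleepTimeBeforeSigkill}} delay, then SIGKILL, with {{finishTask()}} preserved as a thin wrapper over {{sendSignal()}}) might be sketched roughly as follows. This is illustrative only: the class and method shapes here are hypothetical, not the actual TaskController API.

```java
import java.io.IOException;

// Sketch only: the real logic lives in ProcessTree / DefaultTaskController;
// names and structure here are hypothetical, not Hadoop's actual API.
class SignalSketch {
    // Delay between SIGQUIT (stack dump) and SIGKILL, mirroring the
    // existing sleepTimeBeforeSigkill constant (value assumed here).
    static final long SLEEP_TIME_BEFORE_SIGKILL = 5000L;

    // Build the kill(1) command line for a given pid and signal name.
    static String[] buildKillCommand(int pid, String signal) {
        return new String[] { "kill", "-s", signal, String.valueOf(pid) };
    }

    // Generic signalling entry point (what finishTask() would be renamed to).
    static void sendSignal(int pid, String signal) throws IOException {
        new ProcessBuilder(buildKillCommand(pid, signal)).start();
    }

    // finishTask() is preserved as a wrapper that specifically SIGKILLs.
    static void finishTask(int pid) throws IOException {
        sendSignal(pid, "KILL");
    }

    // SIGQUIT first so the JVM writes a thread dump to stdout, then
    // SIGKILL after the usual grace period.
    static void dumpStackThenKill(int pid)
            throws IOException, InterruptedException {
        sendSignal(pid, "QUIT");
        Thread.sleep(SLEEP_TIME_BEFORE_SIGKILL);
        finishTask(pid);
    }
}
```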


| This currently causes stack traces for all killed tasks, right? I don't personally have
a problem with that, but the description of the JIRA indicates that only those due to failing
to report status will dump their stack, and it's worth noting the difference.

I've tried tracing backwards through the code to figure out what triggers JVM kills, but a
lengthy chain of methods is involved. What other operations (besides task timeout) wind up
killing the task? {{TaskController.destroyTaskJVM()}} is only called from {{JvmManager.kill()}},
which itself receives no information about why it's killing the JVM in question. (The method
takes no arguments; there's no flag to indicate that the JVM under management has experienced
a timeout.)

I claim that actually receiving a SIGKILL implies that something has gone wrong in the process.
Tasks that clean up "politely" do not get the SIGKILL, and thus also do not get the
SIGQUIT/stack dump. So if something's gone wrong, then in my mind we should capture the stack
trace for debugging purposes.

Otherwise, we would need to modify the JvmManager API to be more precise about the nature of
kills. I could live with that, but someone would need to point me to where that information
lives. As it stands, that approach would add complexity and, in my mind, be less useful.

* As for {{sigQuitProcessGroup()}}: I modeled the logic in {{dumpTaskStack()}} after that of
the other signalling methods. It's conceivable that a user's task hangs because it spawns
subprocesses and then deadlocks on IPC between the task and its subprocesses. I think the
better question is: is there a good reason *not* to send SIGQUIT to the entire child process
group (e.g., substantial overhead, especially overhead that blocks the TT)? I don't think
signalling the process group costs the TT any more than sending SIGTERM to the process group,
which we already do. If we keep the same logical structure in {{dumpTaskStack()}} as in
{{killTask()}}, etc., then we can refactor this code into a considerably more condensed form
with better code sharing. Otherwise, {{dumpTaskStack()}} will have to remain a special case.
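At the kill(1) level, signalling the whole process group rather than a single pid is a small difference: a negative pid operand targets the group. A hypothetical {{sigQuitProcessGroup()}} might look like the sketch below, assuming (as ProcessTree arranges) that the task JVM was started as a process-group leader; the class and method names are illustrative, not Hadoop's actual code.

```java
import java.io.IOException;

// Sketch only: the real sigQuitProcessGroup() would sit alongside the
// other ProcessTree signalling helpers; names here are hypothetical.
class ProcessGroupSketch {
    // kill(1) treats a negative pid operand as "the entire process group";
    // "--" keeps the negative number from being parsed as an option. This
    // lets SIGQUIT reach the task JVM and any subprocesses it spawned.
    static String[] buildGroupKillCommand(int pgrpId, String signal) {
        return new String[] { "kill", "-s", signal, "--", "-" + pgrpId };
    }

    static void sigQuitProcessGroup(int pgrpId) throws IOException {
        new ProcessBuilder(buildGroupKillCommand(pgrpId, "QUIT")).start();
    }
}
```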




> When tasks fail to report status, show task's stack dump before killing
> ------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-1119
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1119
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: tasktracker
>    Affects Versions: 0.22.0
>            Reporter: Todd Lipcon
>            Assignee: Aaron Kimball
>         Attachments: MAPREDUCE-1119.2.patch, MAPREDUCE-1119.patch
>
>
> When the TT kills tasks that haven't reported status, it should somehow gather a stack
dump for the task. This could be done either by sending a SIGQUIT (so the dump ends up in
stdout) or perhaps something like JDI to gather the stack directly from Java. This may be
somewhat tricky since the child may be running as another user (so the SIGQUIT would have
to go through LinuxTaskController). This feature would make debugging these kinds of failures
much easier, especially if we could somehow get it into the TaskDiagnostic message

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

