hadoop-hdfs-issues mailing list archives

From "Suresh Srinivas (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-2911) Gracefully handle OutOfMemoryErrors
Date Fri, 31 Aug 2012 05:39:07 GMT

    [ https://issues.apache.org/jira/browse/HDFS-2911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13445688#comment-13445688 ]

Suresh Srinivas commented on HDFS-2911:

I actually thought about it. But given that the title says "gracefully handle", and killing
the process is not graceful, I decided to close the bug :)

Feel free to change the title and reopen. Or perhaps a new Jira.
> Gracefully handle OutOfMemoryErrors
> -----------------------------------
>                 Key: HDFS-2911
>                 URL: https://issues.apache.org/jira/browse/HDFS-2911
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, name-node
>    Affects Versions: 0.23.0, 1.0.0
>            Reporter: Eli Collins
> We should gracefully handle j.l.OutOfMemoryError exceptions in the NN or DN. We should
> catch them in a high-level handler, cleanly fail the RPC (vs sending back the OOM stack trace)
> or background thread, and shut down the NN or DN. Currently the process is left in a
> not-well-tested state (it continuously fails RPCs and internal threads, may or may not
> recover, and doesn't shut down gracefully).
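
The high-level handler described above could be sketched roughly as follows. This is a minimal illustration, not the actual HDFS code: it installs a JVM-wide default uncaught-exception handler that detects OutOfMemoryError and flags the daemon for shutdown instead of letting threads die silently. The class and field names are hypothetical, and a real daemon would trigger an orderly shutdown (e.g. via Runtime.getRuntime().halt) rather than just setting a flag.

```java
// Minimal sketch (not the actual HDFS implementation): route OutOfMemoryError
// from any thread into a shutdown path instead of leaving the process in a
// half-broken state.
public class OomHandlerSketch {
    // Hypothetical flag; a real daemon would invoke its shutdown routine.
    static volatile boolean shutdownRequested = false;

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, t) -> {
            if (t instanceof OutOfMemoryError) {
                // Avoid allocating on this path; the heap may be exhausted.
                shutdownRequested = true;
            } else {
                System.err.println("Uncaught exception in " + thread.getName());
            }
        });
    }

    public static void main(String[] args) throws Exception {
        install();
        // Simulate an OOM escaping a background thread.
        Thread worker = new Thread(() -> { throw new OutOfMemoryError("simulated"); });
        worker.start();
        worker.join();
        System.out.println("shutdownRequested=" + shutdownRequested);
    }
}
```

Because the handler is installed as the JVM-wide default, it also covers internal threads that were never wrapped in a try/catch, which is the failure mode the description calls out.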

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
