hadoop-hdfs-issues mailing list archives

From "Tsz Wo (Nicholas), SZE (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-2911) Gracefully handle OutOfMemoryErrors
Date Wed, 08 Feb 2012 00:43:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-2911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203037#comment-13203037 ]

Tsz Wo (Nicholas), SZE commented on HDFS-2911:

OutOfMemoryError is a subclass of Error, which indicates serious problems that a reasonable
application *should not try to catch*, according to the [javadoc|http://docs.oracle.com/javase/6/docs/api/java/lang/Error.html].

It is hard to handle OutOfMemoryError.  One problem is that more OutOfMemoryErrors can be
thrown while the first one is being handled.
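
To illustrate the cascade problem (this is not from the issue): the handler for the first OutOfMemoryError usually allocates, for example to build a log message, and can itself hit a second OutOfMemoryError. A minimal sketch of one common mitigation, pre-allocating a reserve buffer that is released before the handler does anything else; the class name and buffer size are hypothetical:

{code:java}
import java.util.ArrayList;
import java.util.List;

public class OomReserve {
    // Hypothetical reserve size; released to give the handler headroom.
    private static byte[] reserve = new byte[4 * 1024 * 1024];

    public static void main(String[] args) {
        List<byte[]> hog = new ArrayList<>();
        try {
            while (true) {
                hog.add(new byte[1024 * 1024]);  // deliberately exhaust the heap
            }
        } catch (OutOfMemoryError e) {
            hog = null;      // drop references to the filled heap
            reserve = null;  // release the headroom before doing anything else
            // Even this println allocates; without the freed reserve it
            // could throw a second OutOfMemoryError inside the handler.
            System.err.println("Out of memory, shutting down");
            Runtime.getRuntime().halt(1);  // state may be corrupt; fail fast
        }
    }
}
{code}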
> Gracefully handle OutOfMemoryErrors
> -----------------------------------
>                 Key: HDFS-2911
>                 URL: https://issues.apache.org/jira/browse/HDFS-2911
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, name-node
>    Affects Versions: 0.23.0, 1.0.0
>            Reporter: Eli Collins
> We should gracefully handle j.l.OutOfMemoryError in the NN or DN. We should catch it in a
> high-level handler, cleanly fail the RPC (vs. sending back the OOM stack trace) or background
> thread, and shut down the NN or DN. Currently the process is left in a not well-tested state
> (it continuously fails RPCs and internal threads, may or may not recover, and doesn't shut
> down gracefully).
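
A minimal sketch of the kind of high-level handler the description asks for, assuming fail-fast is the desired behavior; the class name and log message are hypothetical, not the actual NN/DN code:

{code:java}
public class FailFastOnError {
    // Install a process-wide, last-resort handler: any Error escaping a
    // thread (e.g. OutOfMemoryError) terminates the process instead of
    // leaving other threads running in an undefined state.
    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((t, e) -> {
            if (e instanceof Error) {
                // Keep this path allocation-light; a second OOME here
                // would kill the handler itself.
                System.err.println("Fatal error in thread " + t.getName());
                Runtime.getRuntime().halt(1);  // skip shutdown hooks; state may be corrupt
            } else {
                e.printStackTrace();
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        install();
        // Demo: an Error thrown on any thread now brings the process down.
        Thread worker = new Thread(() -> { throw new OutOfMemoryError("simulated"); });
        worker.start();
        worker.join();
    }
}
{code}

Runtime.halt is used rather than System.exit so that shutdown hooks, which may allocate memory or touch corrupted state, are skipped entirely.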


