hadoop-hdfs-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5822) InterruptedException to thread sleep ignored
Date Tue, 10 Nov 2015 19:48:11 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14999204#comment-14999204 ]

Chris Nauroth commented on HDFS-5822:
-------------------------------------

Swallowing {{InterruptedException}} is a well-known anti-pattern in Java, well documented
in multiple sources, and we do it all over the Hadoop codebase.  Without restoring the
interrupted status, there is a risk that logic running later on that same thread, which
depends on seeing the interrupted status for timely shutdown, won't see it.  This is
something I'd like to see us clean up.  I don't think logging is necessary, but I do think
restoring the interrupted status with a call to {{Thread.currentThread().interrupt()}} is
necessary.
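
A minimal sketch of the pattern I mean, using a hypothetical {{sleepQuietly}} helper rather
than actual DataXceiverServer code:

{noformat}
// Hypothetical helper illustrating the pattern; not taken from the Hadoop source.
private static void sleepQuietly(long millis) {
  try {
    Thread.sleep(millis);
  } catch (InterruptedException e) {
    // Restore the interrupted status so code running later on this thread,
    // e.g. a shutdown loop checking Thread.currentThread().isInterrupted(),
    // still observes the interrupt.
    Thread.currentThread().interrupt();
  }
}
{noformat}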

HDFS-4328 is an example of a real bug, with noticeable symptoms, caused by swallowing an
{{InterruptedException}}.

https://issues.apache.org/jira/browse/HDFS-4328?focusedCommentId=13550470&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13550470

> InterruptedException to thread sleep ignored
> --------------------------------------------
>
>                 Key: HDFS-5822
>                 URL: https://issues.apache.org/jira/browse/HDFS-5822
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.2.0
>            Reporter: Ding Yuan
>         Attachments: hdfs-5822.patch
>
>
> In org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java, there is the following code snippet in the run() method:
> {noformat}
>       } catch (OutOfMemoryError ie) {
>         IOUtils.cleanup(null, peer);
>         // DataNode can run out of memory if there is too many transfers.
>         // Log the event, Sleep for 30 seconds, other transfers may complete by
>         // then.
>         LOG.warn("DataNode is out of memory. Will retry in 30 seconds.", ie);
>         try {
>           Thread.sleep(30 * 1000);
>         } catch (InterruptedException e) {
>           // ignore
>         }
>       }
> {noformat}
> Note that InterruptedException is completely ignored. This might not be safe, since any events that led to the InterruptedException are lost.
> More info on why InterruptedException shouldn't be ignored: http://stackoverflow.com/questions/1087475/when-does-javas-thread-sleep-throw-interruptedexception
> Thanks,
> Ding



