hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3633) Uncaught exception in DataXceiveServer
Date Tue, 24 Jun 2008 23:57:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12607809#action_12607809

Konstantin Shvachko commented on HADOOP-3633:

There are 2 issues here, imo:
# Why is there an OutOfMemoryError? Probably a memory leak.
# DataXceiveServer.run() should catch all exceptions, like any server, not only IOExceptions,
and shut down the data-node.
Otherwise it is not clear that there is a problem with this node: it appears to be happily
sending heartbeats, but in fact cannot
do any data processing because the server thread is dead.
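A minimal sketch of the fix suggested above, using a simplified accept loop rather than Hadoop's actual DataNode code (the class, field, and method names here are illustrative stand-ins): catch Throwable instead of only IOException, so that an error like OutOfMemoryError also terminates the node instead of silently killing the server thread.

```java
// Illustrative sketch only: a simplified server loop showing why catching
// Throwable (not just IOException) matters. Not Hadoop's real DataXceiveServer.
public class AcceptLoopSketch {
    static volatile boolean shouldRun = true;
    static boolean shutDown = false; // stand-in for DataNode.shutdown() being called

    // Stand-in for DataXceiveServer.run(); the real loop accepts sockets
    // and spawns a DataXceiver thread per connection.
    static void run(Runnable acceptOnce) {
        try {
            while (shouldRun) {
                // Thread.start() here can throw OutOfMemoryError
                // ("unable to create new native thread"), which is an Error,
                // not an IOException.
                acceptOnce.run();
            }
        } catch (Throwable t) { // was effectively: catch (IOException e)
            System.err.println("Exiting server loop: " + t);
            shutDown = true; // shut the node down so it stops heartbeating
        }
    }

    public static void main(String[] args) {
        // Simulate the thread-creation failure from the attached stack trace.
        run(() -> {
            throw new OutOfMemoryError("unable to create new native thread");
        });
        System.out.println("shutDown=" + shutDown);
    }
}
```

With a plain `catch (IOException e)`, the simulated OutOfMemoryError would propagate out of run() and kill only the server thread, leaving the rest of the process (including heartbeats) running, which is exactly the silent-failure mode described in this issue.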

> Uncaught exception in DataXceiveServer
> --------------------------------------
>                 Key: HADOOP-3633
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3633
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: Koji Noguchi
>         Attachments: jstack-H3633.txt
> Observed dfsclients timing out to some datanodes.
> Datanode's  '.out' file had 
> {noformat}
> Exception in thread "org.apache.hadoop.dfs.DataNode$DataXceiveServer@82d37" java.lang.OutOfMemoryError:
unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:597)
>   at org.apache.hadoop.dfs.DataNode$DataXceiveServer.run(DataNode.java:906)
>   at java.lang.Thread.run(Thread.java:619)
> {noformat}
> Datanode was still running but not much activity besides verification.
> Jstack showed no DataXceiveServer running.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
