hadoop-common-dev mailing list archives

From "Christian Kunz (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4517) unstable dfs when running jobs on 0.18.1
Date Fri, 24 Oct 2008 19:19:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642521#action_12642521 ]

Christian Kunz commented on HADOOP-4517:
----------------------------------------

Datanode Exception:
2008-10-23 17:30:21,703 INFO org.apache.hadoop.dfs.DataNode: writeBlock blk_4622591820056312866_6335826
received exception java.io.EOFException: while trying to read 65625 bytes
2008-10-23 17:30:21,703 ERROR org.apache.hadoop.dfs.DataNode: DatanodeRegistration(xxx.yyy.zzz.uuu:50010,
storageID=DS-1873593537-xx.yyy.zzz.uuu-vvvv-1216412543041, infoPort=50075, ipcPort=50020):DataXceiver:
java.io.EOFException: while trying to read 65625 bytes
        at org.apache.hadoop.dfs.DataNode$BlockReceiver.readToBuf(DataNode.java:2464)
        at org.apache.hadoop.dfs.DataNode$BlockReceiver.readNextPacket(DataNode.java:2508)
        at org.apache.hadoop.dfs.DataNode$BlockReceiver.receivePacket(DataNode.java:2572)
        at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2698)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1283)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:1045)
        at java.lang.Thread.run(Thread.java:619)
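
The "EOFException: while trying to read 65625 bytes" pattern above typically means the writer closed its socket partway through a packet, so the datanode's blocking read hit end-of-stream before the expected byte count arrived. As a minimal sketch (hypothetical class and method names, not the actual BlockReceiver code), Java's DataInputStream.readFully reproduces the same failure mode on a truncated stream:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class TruncatedRead {
    // Hypothetical helper: try to read exactly `len` bytes from `data`.
    // Returns true if the stream ended early, i.e. the read hit EOF
    // mid-packet, analogous to the datanode's readToBuf failure.
    static boolean readTruncates(byte[] data, int len) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        byte[] buf = new byte[len];
        try {
            // readFully throws EOFException if fewer than `len` bytes remain.
            in.readFully(buf, 0, len);
            return false;
        } catch (EOFException e) {
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        // A "packet" whose header promised 65625 bytes, but the sender
        // (e.g. a dying client or upstream datanode) stopped after 100.
        System.out.println(readTruncates(new byte[100], 65625)); // prints true
        // A complete packet reads cleanly.
        System.out.println(readTruncates(new byte[8], 8));       // prints false
    }
}
```

So the exception itself points at the peer disconnecting mid-transfer rather than at corruption on the receiving datanode's disk.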



> unstable dfs when running jobs on 0.18.1
> ----------------------------------------
>
>                 Key: HADOOP-4517
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4517
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.1
>         Environment: hadoop-0.18.1 plus patches HADOOP-4277 HADOOP-4271 HADOOP-4326 HADOOP-4314 HADOOP-3914 HADOOP-4318 HADOOP-4351 HADOOP-4395
>            Reporter: Christian Kunz
>         Attachments: datanode.out
>
>
> Two attempts of a job using 6000 maps and 19000 reduces:
> 1st attempt: failed during the reduce phase after 22 hours with 31 dead datanodes, most of which became unresponsive due to an exception; dfs lost blocks.
> 2nd attempt: failed during the map phase after 5 hours with 5 dead datanodes due to the exception; dfs lost blocks, which caused the job failure.
> I will post a typical datanode exception and attach a thread dump.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

