hadoop-common-dev mailing list archives

From "Tsz Wo (Nicholas), SZE (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3504) Reduce task hangs after java.net.SocketTimeoutException
Date Fri, 13 Jun 2008 02:30:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12604724#action_12604724 ]

Tsz Wo (Nicholas), SZE commented on HADOOP-3504:
------------------------------------------------

Lohit and I are finally able to reproduce the problem.  We ran randomwriter on a 500-node
cluster and got a lot of map task failures.  The problem is due to some datanodes running
out of disk space.  When a dfs client writes a block to the datanode pipeline and a datanode
in the pipeline runs out of disk space, it throws
{noformat}
2008-06-13 01:01:58,909 ERROR org.apache.hadoop.dfs.DataNode: DatanodeRegistration(xx.xx.xx.xx:52961, storageID=DS-yyyyyyy, infoPort=58576, ipcPort=50020):DataXceiver: 
org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: No space left on device
        at org.apache.hadoop.dfs.DataNode.checkDiskError(DataNode.java:590)
        at org.apache.hadoop.dfs.DataNode.access$1400(DataNode.java:82)
        at org.apache.hadoop.dfs.DataNode$BlockReceiver.receivePacket(DataNode.java:2611)
        at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2664)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1267)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:1027)
        at java.lang.Thread.run(Thread.java:619)
{noformat}
However, the datanode does not report the exception back to the client, so the client
eventually times out.  That's why we see SocketTimeoutException on the client side.
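The failure mode can be sketched as follows.  This is illustrative code, not the actual DataNode sources; the method and constant names (receivePacketBuggy, OP_STATUS_ERROR, etc.) are assumptions for the sketch.  The point is that catching the disk error and only logging it leaves zero ack bytes on the wire, so the upstream client blocks in a read until its socket timeout, whereas writing an error status first lets the client fail fast:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Stand-in for org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException.
class DiskOutOfSpaceException extends IOException {
    DiskOutOfSpaceException(String msg) { super(msg); }
}

public class AckSketch {
    static final short OP_STATUS_ERROR = 1;  // hypothetical wire status code

    // Buggy pattern: the exception is logged locally and never reaches the
    // wire, so the client blocks on read() until SocketTimeoutException.
    static void receivePacketBuggy(DataOutputStream ackOut) {
        try {
            throw new DiskOutOfSpaceException("No space left on device");
        } catch (DiskOutOfSpaceException e) {
            System.err.println("DataXceiver: " + e);  // logged on the datanode only
            // no ack written back to the client
        }
    }

    // Fixed pattern: report the failure downstream before giving up, so the
    // client sees an error ack immediately instead of timing out.
    static void receivePacketFixed(DataOutputStream ackOut) throws IOException {
        try {
            throw new DiskOutOfSpaceException("No space left on device");
        } catch (DiskOutOfSpaceException e) {
            ackOut.writeShort(OP_STATUS_ERROR);  // 2-byte error ack on the wire
            ackOut.flush();
            throw e;
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buggy = new ByteArrayOutputStream();
        receivePacketBuggy(new DataOutputStream(buggy));
        System.out.println("buggy ack bytes: " + buggy.size());  // 0 -> client hangs

        ByteArrayOutputStream fixed = new ByteArrayOutputStream();
        try {
            receivePacketFixed(new DataOutputStream(fixed));
        } catch (DiskOutOfSpaceException expected) {
            // the datanode still fails, but only after telling the client
        }
        System.out.println("fixed ack bytes: " + fixed.size());  // 2 -> client fails fast
    }
}
```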

> Reduce task hangs after java.net.SocketTimeoutException
> -------------------------------------------------------
>
>                 Key: HADOOP-3504
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3504
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>         Environment: Linux
>            Reporter: Mukund Madhugiri
>            Assignee: Tsz Wo (Nicholas), SZE
>            Priority: Blocker
>             Fix For: 0.18.0
>
>
> When running gridmix, I saw 11 reduce tasks hanging. I manually failed the tasks and they re-ran and then finished.
> Here is the task tracker logs:
> syslog logs
> al on-disk merge with 14 files
> 2008-06-05 19:02:49,804 INFO org.apache.hadoop.mapred.Merger: Merging 14 sorted segments
> 2008-06-05 19:03:03,663 INFO org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 14 segments left of total size: 1476315198 bytes
> 2008-06-05 19:03:03,731 WARN org.apache.hadoop.fs.FileSystem: "hostname:56007" is a deprecated filesystem name. Use "hdfs://hostname:56007/" instead.
> 2008-06-05 19:03:27,301 INFO org.apache.hadoop.streaming.PipeMapRed: R/W/S=1/0/0 in:0=1/2626 [rec/s] out:0=0/2626 [rec/s]
> 2008-06-05 19:03:27,347 INFO org.apache.hadoop.streaming.PipeMapRed: R/W/S=10/0/0 in:0=10/2626 [rec/s] out:0=0/2626 [rec/s]
> 2008-06-05 19:03:27,578 INFO org.apache.hadoop.streaming.PipeMapRed: R/W/S=100/0/0 in:0=100/2627 [rec/s] out:0=0/2627 [rec/s]
> 2008-06-05 19:03:28,380 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=226/1
> 2008-06-05 19:03:35,276 INFO org.apache.hadoop.streaming.PipeMapRed: R/W/S=1000/842/0 in:0=1000/2634 [rec/s] out:0=842/2634 [rec/s]
> 2008-06-05 19:03:38,667 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=2434/2274
> 2008-06-05 19:03:45,301 INFO org.apache.hadoop.streaming.PipeMapRed: R/W/S=10000/9892/0 in:3=10000/2644 [rec/s] out:3=9892/2644 [rec/s]
> 2008-06-05 19:03:48,716 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=15057/14957
> 2008-06-05 19:03:59,056 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=24946/24887
> 2008-06-05 19:04:11,742 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=34653/34433
> 2008-06-05 19:04:22,548 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=42930/42803
> 2008-06-05 19:04:32,635 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=57737/57686
> 2008-06-05 19:04:42,662 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=76224/76063
> 2008-06-05 19:04:52,666 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=99423/99307
> 2008-06-05 19:04:52,802 INFO org.apache.hadoop.streaming.PipeMapRed: R/W/S=100000/99795/0 in:36=100000/2712 [rec/s] out:36=99795/2712 [rec/s]
> 2008-06-05 19:05:02,754 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=127265/127145
> 2008-06-05 19:05:12,758 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=185310/185202
> 2008-06-05 19:05:15,858 INFO org.apache.hadoop.streaming.PipeMapRed: R/W/S=200000/199974/0 in:73=200000/2735 [rec/s] out:73=199974/2735 [rec/s]
> 2008-06-05 19:05:22,772 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=218164/218082
> 2008-06-05 19:05:55,316 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=242591/242411
> 2008-06-05 19:06:13,678 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=242591/242412
> 2008-06-05 19:07:23,173 WARN org.apache.hadoop.dfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_-3463507617208131068_33273java.net.SocketTimeoutException: 69000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/NNN.NNN.NNN.125:59802 remote=/NNN.NNN.NNN.125:57834]
> 	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:162)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:150)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:123)
> 	at java.io.DataInputStream.readFully(DataInputStream.java:178)
> 	at java.io.DataInputStream.readLong(DataInputStream.java:399)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2045)
> 2008-06-05 19:11:22,535 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=243534/243374
> 2008-06-05 19:11:22,536 WARN org.apache.hadoop.dfs.DFSClient: Error Recovery for block blk_-3463507617208131068_33273 bad datanode[0] NNN.NNN.NNN.125:57834
> 2008-06-05 19:11:24,388 WARN org.apache.hadoop.dfs.DFSClient: Error Recovery for block blk_-3463507617208131068_33273 in pipeline NNN.NNN.NNN.125:57834, NNN.NNN.NNN.107:58706, NNN.NNN.NNN.122:52897: bad datanode NNN.NNN.NNN.125:57834

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

