hadoop-mapreduce-issues mailing list archives

From "Devaraj K (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-2481) SocketTimeoutException is coming in the reduce task when the data size is very high
Date Tue, 10 May 2011 13:50:47 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13031188#comment-13031188 ]

Devaraj K commented on MAPREDUCE-2481:
--------------------------------------

Many other fields have already been read by this point, so a network disconnect is unlikely; the failure
always occurs while reading the same field (the isMap property of the TaskCompletionEvent object).

{code:title=TaskCompletionEvent.java|borderStyle=solid}
  public void readFields(DataInput in) throws IOException {
    taskId.readFields(in); 
    idWithinJob = WritableUtils.readVInt(in);
    isMap = in.readBoolean();   // <-- the read that raises SocketTimeoutException
{code}
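The failure pattern above (earlier fields deserialize fine, then the boolean read times out because the peer stalls mid-record) can be reproduced outside Hadoop. The following is a minimal, self-contained sketch, not Hadoop code: the class name, field stand-ins, and the 500 ms timeout are illustrative assumptions, loosely analogous to the 60000 ms IPC timeout in the log below.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class StalledReadDemo {

    // Returns a short description of how far deserialization got
    // before the socket read timed out.
    static String probe() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            // Writer thread: sends the first two fields, then stalls
            // without ever sending the final boolean byte.
            Thread writer = new Thread(() -> {
                try (Socket peer = server.accept();
                     DataOutputStream out = new DataOutputStream(peer.getOutputStream())) {
                    out.writeInt(42);        // stand-in for taskId
                    out.writeInt(7);         // stand-in for idWithinJob
                    out.flush();
                    Thread.sleep(5_000);     // stall: the isMap byte is never sent
                } catch (Exception ignored) { }
            });
            writer.start();

            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 DataInputStream in = new DataInputStream(client.getInputStream())) {
                client.setSoTimeout(500);    // illustrative; the real IPC timeout is 60000 ms
                int a = in.readInt();        // succeeds: bytes are already on the wire
                int b = in.readInt();        // succeeds
                try {
                    in.readBoolean();        // blocks, then times out, like isMap above
                    return "read all fields";
                } catch (SocketTimeoutException e) {
                    return "timeout on isMap after " + a + "," + b;
                }
            } finally {
                writer.interrupt();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(probe());
    }
}
```

Run as a plain `main`; the client reads the two buffered fields and then reports the timeout on the missing boolean, matching the shape of the stack trace in the issue description.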

> SocketTimeoutException is coming in the reduce task when the data size is very high
> -----------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-2481
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2481
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: task
>    Affects Versions: 0.20.2
>            Reporter: Devaraj K
>
> A SocketTimeoutException occurs when the reduce task tries to read a MapTaskCompletionEventsUpdate
object from the task tracker. It is able to read the reset flag and the TaskCompletionEvent.taskId
and TaskCompletionEvent.idWithinJob properties, but fails while reading the isMap property of
TaskCompletionEvent, which is of type boolean. This exception occurs multiple times.
> {code}
> 2011-04-20 15:58:03,037 FATAL mapred.TaskTracker (TaskTracker.java:fatalError(2812))
- Task: attempt_201104201115_0010_r_000002_0 - Killed : java.io.IOException:  Tried for the
max ping retries On TimeOut :1
> 	at org.apache.hadoop.ipc.Client.checkPingRetries(Client.java:1342)
> 	at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:402)
> 	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> 	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
> 	at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
> 	at org.apache.hadoop.mapred.TaskCompletionEvent.readFields(TaskCompletionEvent.java:230)
> 	at org.apache.hadoop.mapred.MapTaskCompletionEventsUpdate.readFields(MapTaskCompletionEventsUpdate.java:64)
> 	at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:245)
> 	at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:69)
> 	at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:698)
> 	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:593)
> Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel
to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/127.0.0.1:45798
remote=/127.0.0.1:35419]
> 	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:165)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
> 	at java.io.FilterInputStream.read(FilterInputStream.java:116)
> 	at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:397)
> 	... 9 more
> {code}
>  org.mortbay.jetty.EofException also appears many times in the logs, as described in
MAPREDUCE-5.
> {code}
> 2011-04-20 15:57:20,748 WARN  mapred.TaskTracker (TaskTracker.java:doGet(3164)) - getMapOutput(attempt_201104201115_0010_m_000038_0,4)
failed :
> org.mortbay.jetty.EofException
> 	at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:787)
> {code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
