hadoop-common-dev mailing list archives

From "Amar Kamat (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3198) ReduceTask should handle rpc timeout exception for getting recordWriter
Date Tue, 08 Apr 2008 07:55:25 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12586698#action_12586698 ]

Amar Kamat commented on HADOOP-3198:
------------------------------------

Some comments.
1) Declare a _private static final_ for {{MAX_DFS_RETRIES}} and initialize it to 10. Use this constant in the for loop.
2) Remove the extra spaces after {{reporter}} (line 14 of the patch).
3) The choice of a 1-second sleep needs to be justified. Also, a log message is required before waiting.
4) Some extra code slipped in (regarding the log message).
5) After 10 retries we should throw the exception rather than silently falling out of the loop (which leads to a null pointer exception later). See the sketch below.
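For concreteness, here is a minimal sketch of what points 1), 3) and 5) add up to. The helper name {{getRecordWriterWithRetries}} and the exact signature are made up for illustration and are not taken from the patch:

{code:java}
// Point 1: a named constant instead of a bare literal in the for loop.
private static final int MAX_DFS_RETRIES = 10;

// Hypothetical helper wrapping the retry loop discussed above.
private RecordWriter getRecordWriterWithRetries(FileSystem fs, JobConf job,
    String finalName, Reporter reporter) throws IOException {
  IOException lastFailure = null;
  for (int i = 0; i < MAX_DFS_RETRIES; i++) {
    try {
      return job.getOutputFormat().getRecordWriter(fs, job, finalName, reporter);
    } catch (IOException ioe) {
      lastFailure = ioe;
      // Point 3: log before sleeping so the retries are visible in the task logs.
      LOG.info("Failed to get RecordWriter (attempt " + (i + 1) + " of "
          + MAX_DFS_RETRIES + "), retrying", ioe);
      try {
        Thread.sleep(1000);   // the 1-second wait that still needs to be justified
      } catch (InterruptedException ie) {
        throw new IOException("Interrupted while waiting to retry getRecordWriter");
      }
    }
  }
  // Point 5: once the retries are exhausted, throw instead of silently
  // falling out of the loop and dereferencing a null writer later.
  throw lastFailure;
}
{code}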

+_Points to ponder_+
Could we instead use a timeout-based approach, where we wait for up to _shuffle-run-time / 2_ before bailing out, with multiple retries within that timeout? That would help make sure we don't kill the reducer too early.
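Roughly what that could look like (a sketch only; {{shuffleRunTime}}, the helper name and the 1-second pause are placeholders, not anything from the patch):

{code:java}
// Hypothetical timeout-based variant: retry until half of the measured
// shuffle run time has elapsed, instead of a fixed number of attempts.
private RecordWriter getRecordWriterWithinBudget(FileSystem fs, JobConf job,
    String finalName, Reporter reporter, long shuffleRunTime)
    throws IOException {
  long deadline = System.currentTimeMillis() + shuffleRunTime / 2;
  while (true) {
    try {
      return job.getOutputFormat().getRecordWriter(fs, job, finalName, reporter);
    } catch (IOException ioe) {
      if (System.currentTimeMillis() >= deadline) {
        // Only fail the reducer once the time budget is spent, so the
        // shuffle and sort work is not thrown away too early.
        throw ioe;
      }
      LOG.info("Failed to get RecordWriter, retrying until time budget expires", ioe);
      try {
        Thread.sleep(1000);
      } catch (InterruptedException ie) {
        throw new IOException("Interrupted while waiting to retry getRecordWriter");
      }
    }
  }
}
{code}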

> ReduceTask should handle rpc timeout exception for getting recordWriter
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-3198
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3198
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Runping Qi
>         Attachments: patch-3198.txt
>
>
> After shuffling and sorting, the reduce task is ready for the final phase --- reduce.
> The first thing is to create a record writer.
> That call may fail due to rpc timeout. 
> java.net.SocketTimeoutException: timed out waiting for rpc response 
> at org.apache.hadoop.ipc.Client.call(Client.java:559) 
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212) 
> at org.apache.hadoop.dfs.$Proxy1.getFileInfo(Unknown Source) 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> at org.apache.hadoop.dfs.$Proxy1.getFileInfo(Unknown Source)
> at org.apache.hadoop.dfs.DFSClient.getFileInfo(DFSClient.java:548)
> at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:380)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:598)
> at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:106)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:366)
> at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2126)
> Then the whole reduce task failed, and all the work of shuffling and sorting is gone!
> The reduce task should handle this case better. It is worthwhile to try a few times before it gives up.
> The stakes are too high to give up after the first try.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

