hadoop-common-dev mailing list archives

From "Runping Qi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3198) ReduceTask should handle rpc timeout exception for getting recordWriter
Date Wed, 09 Apr 2008 12:54:24 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12587181#action_12587181 ]

Runping Qi commented on HADOOP-3198:
------------------------------------


The HDFS client already has a retry on exists.
It is likely that it tried and failed several times.
That is perhaps fine for an exists call in general.

However, for this particular call in getRecordWriter in the reduce task, the cost of failure is too high.
Thus, the reduce task has to do something special.
In this sense, I think it is the reduce task's responsibility to retry further.

I am open to any suggestions to fix the problem.
However, I am not convinced that retrying at the RPC level is the right answer.
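
To make that concrete, below is a rough sketch of the kind of bounded retry the reduce task could put around getRecordWriter. It is only an illustration, not the attached patch-3198.txt: the helper class and method names are hypothetical, and the attempt count and sleep interval are made-up values.

import java.io.IOException;
import java.net.SocketTimeoutException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.OutputFormat;
import org.apache.hadoop.mapred.RecordWriter;
import org.apache.hadoop.util.Progressable;

// Hypothetical helper for illustration only; not from the attached patch.
public class RecordWriterRetry {

  private static final int MAX_ATTEMPTS = 3;       // assumed value
  private static final long SLEEP_MS = 10 * 1000L; // assumed value

  // Retry getRecordWriter a few times before letting the reduce task fail.
  public static <K, V> RecordWriter<K, V> getWithRetry(
      OutputFormat<K, V> outputFormat, FileSystem fs, JobConf job,
      String name, Progressable progress) throws IOException {
    IOException last = null;
    for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
      try {
        return outputFormat.getRecordWriter(fs, job, name, progress);
      } catch (SocketTimeoutException e) {
        // RPC timeout talking to the namenode: the shuffle and sort work is
        // too expensive to throw away, so wait and try again.
        last = e;
        try {
          Thread.sleep(SLEEP_MS);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          break;
        }
      }
    }
    throw last;
  }
}

With something like this, a transient namenode timeout costs a short wait instead of throwing away the whole shuffle and sort.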



> ReduceTask should handle rpc timeout exception for getting recordWriter
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-3198
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3198
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Runping Qi
>         Attachments: patch-3198.txt
>
>
> After shuffling and sorting, the reduce task is ready for the final phase: reduce.
> The first thing is to create a record writer.
> That call may fail due to an RPC timeout:
> java.net.SocketTimeoutException: timed out waiting for rpc response
> at org.apache.hadoop.ipc.Client.call(Client.java:559)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
> at org.apache.hadoop.dfs.$Proxy1.getFileInfo(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> at org.apache.hadoop.dfs.$Proxy1.getFileInfo(Unknown Source)
> at org.apache.hadoop.dfs.DFSClient.getFileInfo(DFSClient.java:548)
> at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:380)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:598)
> at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:106)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:366)
> at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2126)
> Then the whole reduce task fails, and all the work of shuffling and sorting is gone!
> The reduce task should handle this case better. It is worthwhile to try a few times before it gives up.
> The stakes are too high to give up at the first try.
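
For reference, the RetryInvocationHandler frames in the trace come from the client-side retry wrapper in org.apache.hadoop.io.retry that the comment above refers to ("HDFS client has a retry on exists"). Below is a rough sketch of how such a proxy-level policy is typically attached; the interface, class names, and retry counts are hypothetical, not the actual DFSClient wiring.

import java.io.IOException;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;
import org.apache.hadoop.io.retry.RetryProxy;

public class RetryProxySketch {

  // Hypothetical RPC interface standing in for the real client protocol.
  public interface NamenodeOps {
    boolean exists(String path) throws IOException;
  }

  // Wrap a raw proxy so each call is retried a bounded number of times.
  // The policy values here are made up for illustration.
  public static NamenodeOps wrap(NamenodeOps rawProxy) {
    RetryPolicy policy =
        RetryPolicies.retryUpToMaximumCountWithFixedSleep(5, 1, TimeUnit.SECONDS);
    // RetryProxy builds the dynamic proxy whose RetryInvocationHandler
    // frames appear in the stack trace above.
    return (NamenodeOps) RetryProxy.create(NamenodeOps.class, rawProxy, policy);
  }
}

The comment above argues that, regardless of this proxy-level retry, the reduce task itself should retry further, because the cost of failing at this point is so high.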

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

