hadoop-common-dev mailing list archives

From "Owen O'Malley (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-3198) ReduceTask should handle rpc timeout exception for getting recordWriter
Date Thu, 17 Apr 2008 16:43:21 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Owen O'Malley updated HADOOP-3198:

    Resolution: Won't Fix
        Status: Resolved  (was: Patch Available)

This will lead to very unmaintainable code. We absolutely do not want to have nested retries
for different contexts.

> ReduceTask should handle rpc timeout exception for getting recordWriter
> -----------------------------------------------------------------------
>                 Key: HADOOP-3198
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3198
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Runping Qi
>         Attachments: patch-3198.txt
> After shuffling and sorting, the reduce task is ready for the final phase --- reduce.
> The first thing is to create a record writer. 
> That call may fail due to rpc timeout. 
> java.net.SocketTimeoutException: timed out waiting for rpc response
> at org.apache.hadoop.ipc.Client.call(Client.java:559)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
> at org.apache.hadoop.dfs.$Proxy1.getFileInfo(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> at org.apache.hadoop.dfs.$Proxy1.getFileInfo(Unknown Source)
> at org.apache.hadoop.dfs.DFSClient.getFileInfo(DFSClient.java:548)
> at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:380)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:598)
> at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:106)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:366)
> at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2126)
> Then the whole reduce task fails, and all the work of shuffling and sorting is gone!
> The reduce task should handle this case better. It is worthwhile to try a few times before
> it gives up.
> The stakes are too high to give up on the first try.
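The retry-before-giving-up behavior the reporter asks for can be sketched roughly as below. This is a hypothetical illustration, not the actual patch-3198.txt: the `withRetries` helper and the simulated flaky call are invented for the example, and only `SocketTimeoutException` (the failure seen in the trace above) is treated as retryable.

```java
// Hypothetical sketch of retrying a transient RPC timeout a few times
// before failing the task. NOT the actual patch-3198.txt.
import java.net.SocketTimeoutException;
import java.util.concurrent.Callable;

public class RetryOnTimeout {
    // Run the action up to maxAttempts times, retrying only on
    // SocketTimeoutException; any other exception propagates immediately.
    static <T> T withRetries(Callable<T> action, int maxAttempts) throws Exception {
        SocketTimeoutException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (SocketTimeoutException e) {
                last = e; // transient RPC timeout: try again
            }
        }
        throw last; // all attempts timed out
    }

    public static void main(String[] args) throws Exception {
        // Simulated flaky record-writer creation: times out twice, then succeeds.
        final int[] calls = {0};
        String writer = withRetries(() -> {
            if (++calls[0] < 3) {
                throw new SocketTimeoutException("timed out waiting for rpc response");
            }
            return "recordWriter";
        }, 4);
        System.out.println(writer + " after " + calls[0] + " attempts");
    }
}
```

Note that the stack trace above already shows `RetryInvocationHandler` wrapping the DFS proxy call, so Hadoop's RPC layer retries at its own level; adding another loop like this around `getRecordWriter` is exactly the "nested retries for different contexts" that the resolution comment objects to.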

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
