hadoop-common-dev mailing list archives

From "Runping Qi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3198) ReduceTask should handle rpc timeout exception for getting recordWriter
Date Wed, 09 Apr 2008 13:26:24 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12587185#action_12587185 ]
Runping Qi commented on HADOOP-3198:

BTW, why does the output format class bother to check the existence of the output dir (see https://issues.apache.org/jira/browse/HADOOP-3218)?

> ReduceTask should handle rpc timeout exception for getting recordWriter
> -----------------------------------------------------------------------
>                 Key: HADOOP-3198
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3198
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Runping Qi
>         Attachments: patch-3198.txt
> After shuffling and sorting, the reduce task is ready for the final phase --- reduce.
> The first thing is to create a record writer.
> That call may fail due to rpc timeout:
> java.net.SocketTimeoutException: timed out waiting for rpc response
> at org.apache.hadoop.ipc.Client.call(Client.java:559)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
> at org.apache.hadoop.dfs.$Proxy1.getFileInfo(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> at org.apache.hadoop.dfs.$Proxy1.getFileInfo(Unknown Source)
> at org.apache.hadoop.dfs.DFSClient.getFileInfo(DFSClient.java:548)
> at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:380)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:598)
> at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:106)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:366)
> at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2126)
> Then the whole reduce task fails, and all the work of shuffling and sorting is gone!
> The reduce task should handle this case better. It is worthwhile to retry a few times before giving up.
> The stakes are too high to give up after the first try.
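The retry behaviour the issue asks for can be sketched as a small wrapper that catches the timeout and tries again with a short backoff. This is a minimal, hypothetical sketch and not Hadoop code: RecordWriterFactory, MAX_ATTEMPTS, and createWithRetries are illustrative names invented here, not Hadoop APIs.

```java
import java.io.IOException;

// Sketch of retrying record-writer creation instead of failing the whole
// reduce task on the first SocketTimeoutException. All names here are
// hypothetical; the attached patch-3198.txt is the actual fix.
public class RetryOnTimeout {

    // Hypothetical stand-in for OutputFormat.getRecordWriter(...).
    interface RecordWriterFactory<T> {
        T create() throws IOException;
    }

    static final int MAX_ATTEMPTS = 3;   // illustrative retry budget
    static final long BACKOFF_MS = 10;   // short backoff for illustration

    static <T> T createWithRetries(RecordWriterFactory<T> factory)
            throws IOException, InterruptedException {
        IOException last = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                return factory.create();
            } catch (IOException e) {    // e.g. SocketTimeoutException from the rpc layer
                last = e;
                if (attempt < MAX_ATTEMPTS) {
                    Thread.sleep(BACKOFF_MS * attempt); // linear backoff between tries
                }
            }
        }
        throw last;                      // all attempts exhausted; fail the task
    }

    public static void main(String[] args) throws Exception {
        // Simulate a factory that times out twice, then succeeds on the third try.
        final int[] calls = {0};
        String writer = createWithRetries(() -> {
            if (++calls[0] < 3) {
                throw new java.net.SocketTimeoutException(
                        "timed out waiting for rpc response");
            }
            return "record-writer";
        });
        System.out.println(writer + " created after " + calls[0] + " attempts");
    }
}
```

The point of the wrapper is that a transient rpc timeout no longer discards the completed shuffle and sort work; only after the retry budget is exhausted does the task fail as before.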

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
