hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2087) Errors for subsequent requests for file creation after original DFSClient goes down..
Date Tue, 23 Oct 2007 18:38:50 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12537109 ]

Raghu Angadi commented on HADOOP-2087:

> So the first question to answer is whether the retry framework does what it is intended
to do, that is, retry.

Retry after one minute applies only when there is a timeout. Otherwise it retries after a
few milliseconds. Comment from http://issues.apache.org/jira/browse/HADOOP-1263#action_12504135 :

I am planning to use this framework for some new RPCs I am adding. I just want to confirm
that my understanding is correct: this patch adds a random exponential back-off timeout, starting
at 400 milliseconds, for 5 retries. Across all 5 retries, this adds a maximum of about 12 seconds. Since the client
RPC timeout is 60 seconds, such an RPC takes between 300 and 312 seconds to fail over
6 attempts. Is this expected? Because that is not exponential back-off but essentially a constant
timeout of around 60 seconds per retry.
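The arithmetic in that comment can be checked with a short sketch. This is not Hadoop's actual RetryPolicies code; it just reproduces the worst-case numbers under the stated assumptions (400 ms base back-off doubling each retry, a 60-second client RPC timeout, and 6 attempts of which the first 5 time out):

```java
public class BackoffEstimate {
    // Max total back-off across `retries` waits, starting at baseMillis
    // and doubling each time: base * (2^retries - 1).
    static long maxBackoffMillis(long baseMillis, int retries) {
        return baseMillis * ((1L << retries) - 1);
    }

    public static void main(String[] args) {
        long base = 400;          // ms, first back-off window (from the comment)
        int retries = 5;          // retries after the initial attempt
        long rpcTimeout = 60_000; // ms, client RPC timeout

        long backoff = maxBackoffMillis(base, retries); // 12,400 ms ~ 12 s
        // Following the comment's arithmetic: 5 attempts each hit the
        // 60 s timeout before the final failure, plus up to ~12 s of back-off.
        long minFailMillis = 5 * rpcTimeout;           // 300,000 ms = 300 s
        long maxFailMillis = minFailMillis + backoff;  // 312,400 ms ~ 312 s
        System.out.println(minFailMillis + " " + maxFailMillis);
    }
}
```

As the sketch shows, the back-off term (~12 s) is dwarfed by the per-attempt RPC timeout, which is why the total time-to-failure is effectively a constant 60 s per attempt rather than an exponential schedule.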

> Errors for subsequent requests for file creation after original DFSClient goes down..
> -------------------------------------------------------------------------------------
>                 Key: HADOOP-2087
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2087
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Gautam Kowshik
>             Fix For: 0.15.0
> Task task_200710200555_0005_m_000725_0 started writing a file and the node went down,
so all subsequent attempts to create that file were rejected with AlreadyBeingCreatedException.
> I think the dfs should handle the case where, if a dfsclient goes down during file creation,
subsequent creates of the same file could be allowed. 
> 2007-10-20 06:23:51,189 INFO org.apache.hadoop.mapred.TaskInProgress: Error from task_200710200555_0005_m_000725_0:
Task task_200710200555_0005_m_000725_0 failed to report status for 606 seconds. Killing!
> 2007-10-20 06:23:51,189 INFO org.apache.hadoop.mapred.JobTracker: Removed completed task
'task_200710200555_0005_m_000725_0' from '[tracker_address]:/'
> 2007-10-20 06:23:51,209 INFO org.apache.hadoop.mapred.JobInProgress: Choosing normal
task tip_200710200555_0005_m_000725
> 2007-10-20 06:23:51,209 INFO org.apache.hadoop.mapred.JobTracker: Adding task 'task_200710200555_0005_m_000725_1'
to tip tip_200710200555_0005_m_000725, for tracker '[tracker_address]:/'
> 2007-10-20 06:28:54,991 INFO org.apache.hadoop.mapred.TaskInProgress: Error from task_200710200555_0005_m_000725_1:
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.dfs.AlreadyBeingCreatedException:
failed to create file /benchmarks/TestDFSIO/io_data/test_io_825 for DFSClient_task_200710200555_0005_m_000725_1
on client, because this file is already being created by DFSClient_task_200710200555_0005_m_000725_0
>         at org.apache.hadoop.dfs.FSNamesystem.startFileInternal(FSNamesystem.java:881)
>         at org.apache.hadoop.dfs.FSNamesystem.startFile(FSNamesystem.java:806)
>         at org.apache.hadoop.dfs.NameNode.create(NameNode.java:276)
>         at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:379)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:596)
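The behavior the reporter asks for could, in the meantime, be approximated on the client side by retrying the create until the namenode releases the dead client's hold on the file. The sketch below is purely illustrative: the exception class and the create callback are stand-ins, not Hadoop's actual API, and it assumes the server eventually expires the original client's claim.

```java
import java.util.concurrent.Callable;

public class CreateRetry {
    // Stand-in for the server-side "file is already being created" error.
    static class AlreadyBeingCreatedException extends Exception {}

    // Retry `attempt` up to maxRetries extra times, sleeping between tries,
    // as long as the file is still held by the previous (dead) client.
    static <T> T retryCreate(Callable<T> attempt, int maxRetries, long sleepMillis)
            throws Exception {
        for (int i = 0; ; i++) {
            try {
                return attempt.call();
            } catch (AlreadyBeingCreatedException e) {
                if (i >= maxRetries) throw e; // give up after maxRetries retries
                Thread.sleep(sleepMillis);
            }
        }
    }
}
```

This is only a workaround; the real fix the issue asks for is on the dfs side, so that the namenode itself notices the original DFSClient is gone and permits the new create.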

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
