hadoop-common-dev mailing list archives

From "Tsz Wo (Nicholas), SZE (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-4318) distcp fails
Date Fri, 03 Oct 2008 06:20:44 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-4318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE updated HADOOP-4318:
-------------------------------------------

    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

I just committed this.

> distcp fails
> ------------
>
>                 Key: HADOOP-4318
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4318
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: tools/distcp
>    Affects Versions: 0.17.2
>            Reporter: Christian Kunz
>            Assignee: Tsz Wo (Nicholas), SZE
>            Priority: Blocker
>             Fix For: 0.17.3
>
>         Attachments: 4318_20081001_0.17.patch, 4318_20081001_0.17b.patch, 4318_20081002.patch, 4318_20081002_0.17.patch
>
>
> We run distcp between two clusters running 0.17.2, using HDFS.
> Once one of the tasks fails after opening a file for writing (which almost always
> happens), subsequent retries of that task always fail with the following exception. We did
> not see this with 0.16.3, so it appears to be a regression:
> 2008-09-30 22:54:49,430 INFO org.apache.hadoop.util.CopyFiles: FAIL 3169 : org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.dfs.AlreadyBeingCreatedException: failed to create file xxx/_distcp_tmp_dvml74/3169
> for DFSClient_task_200809121811_0034_m_001085_1 on client xxx.yyy.zzz.uuu because current
> leaseholder is trying to recreate file.
> 	at org.apache.hadoop.dfs.FSNamesystem.startFileInternal(FSNamesystem.java:1010)
> 	at org.apache.hadoop.dfs.FSNamesystem.startFile(FSNamesystem.java:967)
> 	at org.apache.hadoop.dfs.NameNode.create(NameNode.java:269)
> 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:597)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:446)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:896)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:557)
> 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
> 	at org.apache.hadoop.dfs.$Proxy1.create(Unknown Source)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:597)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> 	at org.apache.hadoop.dfs.$Proxy1.create(Unknown Source)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:2192)
> 	at org.apache.hadoop.dfs.DFSClient.create(DFSClient.java:479)
> 	at org.apache.hadoop.dfs.DistributedFileSystem.create(DistributedFileSystem.java:138)
> 	at org.apache.hadoop.util.CopyFiles$CopyFilesMapper.create(CopyFiles.java:317)
> 	at org.apache.hadoop.util.CopyFiles$CopyFilesMapper.copy(CopyFiles.java:369)
> 	at org.apache.hadoop.util.CopyFiles$CopyFilesMapper.map(CopyFiles.java:493)
> 	at org.apache.hadoop.util.CopyFiles$CopyFilesMapper.map(CopyFiles.java:268)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:47)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:219)
> 	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2122)
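The failure mode above can be sketched as a toy model: the namenode refuses a create() on a path whose write lease is still held by another (possibly dead) client, so a retried task attempt is rejected until the stale temp file, and with it the lease, is removed. All class and method names below are illustrative only; this is not Hadoop's actual API, just a minimal simulation of the lease check in FSNamesystem.startFileInternal.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the HDFS lease conflict reported here: a retried distcp
// attempt cannot recreate a file whose lease is still held by the
// previous (dead) attempt. Names are hypothetical, not Hadoop's API.
public class LeaseRetrySketch {
    static class AlreadyBeingCreatedException extends RuntimeException {
        AlreadyBeingCreatedException(String msg) { super(msg); }
    }

    // path -> client name currently holding the write lease
    private final Map<String, String> leases = new HashMap<>();

    // Mirrors the namenode-side check: refuse to create a file whose
    // lease belongs to a different client.
    public void create(String path, String client) {
        String holder = leases.get(path);
        if (holder != null && !holder.equals(client)) {
            throw new AlreadyBeingCreatedException(
                "failed to create " + path + " for " + client
                + " because current leaseholder is " + holder);
        }
        leases.put(path, client);
    }

    // Deleting the half-written temp file also releases its lease,
    // which is what allows a retried attempt to start cleanly.
    public void delete(String path) {
        leases.remove(path);
    }

    public static void main(String[] args) {
        LeaseRetrySketch fs = new LeaseRetrySketch();
        // First attempt opens the temp file, then dies mid-write.
        fs.create("/_distcp_tmp/3169", "attempt_0");
        boolean rejected = false;
        try {
            // Naive retry: rejected, the dead attempt still holds the lease.
            fs.create("/_distcp_tmp/3169", "attempt_1");
        } catch (AlreadyBeingCreatedException e) {
            rejected = true;
        }
        // Removing the stale file releases the lease; the retry succeeds.
        fs.delete("/_distcp_tmp/3169");
        fs.create("/_distcp_tmp/3169", "attempt_1");
        System.out.println(rejected
            ? "retry rejected, then recovered after delete"
            : "unexpected");
    }
}
```

Under this reading, the fix is for the retried mapper to remove (or otherwise reclaim) its stale temp file before the create, rather than calling create directly.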

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

