Return-Path: 
Delivered-To: apmail-lucene-hadoop-dev-archive@locus.apache.org
Received: (qmail 25056 invoked from network); 22 Oct 2007 19:59:12 -0000
Received: from hermes.apache.org (HELO mail.apache.org) (140.211.11.2) by minotaur.apache.org with SMTP; 22 Oct 2007 19:59:12 -0000
Received: (qmail 65375 invoked by uid 500); 22 Oct 2007 19:59:00 -0000
Delivered-To: apmail-lucene-hadoop-dev-archive@lucene.apache.org
Received: (qmail 65014 invoked by uid 500); 22 Oct 2007 19:58:59 -0000
Mailing-List: contact hadoop-dev-help@lucene.apache.org; run by ezmlm
Precedence: bulk
List-Help: 
List-Unsubscribe: 
List-Post: 
List-Id: 
Reply-To: hadoop-dev@lucene.apache.org
Delivered-To: mailing list hadoop-dev@lucene.apache.org
Received: (qmail 65005 invoked by uid 99); 22 Oct 2007 19:58:59 -0000
Received: from athena.apache.org (HELO athena.apache.org) (140.211.11.136) by apache.org (qpsmtpd/0.29) with ESMTP; Mon, 22 Oct 2007 12:58:59 -0700
X-ASF-Spam-Status: No, hits=-100.0 required=10.0 tests=ALL_TRUSTED
X-Spam-Check-By: apache.org
Received: from [140.211.11.4] (HELO brutus.apache.org) (140.211.11.4) by apache.org (qpsmtpd/0.29) with ESMTP; Mon, 22 Oct 2007 19:59:11 +0000
Received: from brutus (localhost [127.0.0.1]) by brutus.apache.org (Postfix) with ESMTP id BF153714209 for ; Mon, 22 Oct 2007 12:58:50 -0700 (PDT)
Message-ID: <12034878.1193083130780.JavaMail.jira@brutus>
Date: Mon, 22 Oct 2007 12:58:50 -0700 (PDT)
From: "dhruba borthakur (JIRA)" 
To: hadoop-dev@lucene.apache.org
Subject: [jira] Commented: (HADOOP-2050) distcp failed due to problem in creating files
In-Reply-To: <6357089.1192246430774.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-Virus-Checked: Checked by ClamAV on apache.org

    [ https://issues.apache.org/jira/browse/HADOOP-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12536812 ]

dhruba borthakur commented on HADOOP-2050:
------------------------------------------

Hi Runping,

You said that "this problem happened in the 2nd, 3rd, 4th attempts, after the first attempt failed." Can you please look at the logs and verify that the timestamps of the 2nd, 3rd, and 4th attempts were 1 minute apart? This information would help us figure out whether the bug is in the client or the server.
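One plausible reading of the one-minute question: the namenode refuses to create a file that is still covered by an active lease, and only after a soft lease limit on the order of a minute can a dead writer's lease be reclaimed. The sketch below is a minimal, hypothetical Java illustration of that timing check; the class, fields, exception type, and exact logic are invented for illustration and are not the actual FSNamesystem.startFileInternal() code, which also distinguishes whether the new request comes from the same lease holder.

    import java.util.HashMap;
    import java.util.Map;

    public class LeaseCheckSketch {

      // Assumed soft lease limit of roughly one minute; this matches the
      // spacing the comment above asks Runping to verify in the task logs.
      static final long SOFT_LIMIT_MS = 60 * 1000;

      static class Lease {
        final String holder;       // e.g. "DFSClient_task_..._m_000456_1"
        final long lastRenewedMs;  // last time the writer renewed its lease
        Lease(String holder, long lastRenewedMs) {
          this.holder = holder;
          this.lastRenewedMs = lastRenewedMs;
        }
      }

      private final Map<String, Lease> leasesByPath = new HashMap<String, Lease>();

      // Simplified stand-in for the create-file check; the real namenode also
      // checks whether the new holder is the same client as the current one.
      void startFile(String path, String newHolder, long nowMs) {
        Lease existing = leasesByPath.get(path);
        if (existing != null && nowMs - existing.lastRenewedMs <= SOFT_LIMIT_MS) {
          // Attempts spaced less than ~1 minute apart land here, which is why
          // the timestamps of the 2nd/3rd/4th attempts matter.
          throw new IllegalStateException("failed to create file " + path
              + " for " + newHolder + ": previous lease held by "
              + existing.holder + " has not expired yet");
        }
        // Soft limit expired (or no lease yet): reclaim and grant a new lease.
        leasesByPath.put(path, new Lease(newHolder, nowMs));
      }

      public static void main(String[] args) {
        LeaseCheckSketch namenode = new LeaseCheckSketch();
        namenode.startFile("/xxxxx/part-00007", "attempt_1", 0L);
        try {
          // Retry 30 seconds later: still inside the soft limit, so it fails.
          namenode.startFile("/xxxxx/part-00007", "attempt_2", 30 * 1000L);
        } catch (IllegalStateException expected) {
          System.out.println(expected.getMessage());
        }
        // Retry 90 seconds later: past the soft limit, so it succeeds.
        namenode.startFile("/xxxxx/part-00007", "attempt_3", 90 * 1000L);
      }
    }

Under that reading, attempts spaced more than a minute apart that still hit AlreadyBeingCreatedException would point at the server side (the lease is not being released), while attempts closer together would make the failure the expected client-visible behavior.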
> distcp failed due to problem in creating files
> ----------------------------------------------
>
>                 Key: HADOOP-2050
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2050
>             Project: Hadoop
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.15.0
>            Reporter: Runping Qi
>
> When I ran a distcp program to copy files from one dfs to another, my job failed with
> the mappers throwing the following exception:
>
> org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.dfs.AlreadyBeingCreatedException: failed to create file /xxxxx/part-00007 for DFSClient_task_200710122302_0002_m_000456_2 on client 72.30.43.23 because current leaseholder is trying to recreate file.
>   at org.apache.hadoop.dfs.FSNamesystem.startFileInternal(FSNamesystem.java:850)
>   at org.apache.hadoop.dfs.FSNamesystem.startFile(FSNamesystem.java:806)
>   at org.apache.hadoop.dfs.NameNode.create(NameNode.java:333)
>   at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:379)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:596)
>   at org.apache.hadoop.ipc.Client.call(Client.java:482)
>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:184)
>   at org.apache.hadoop.dfs.$Proxy1.create(Unknown Source)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>   at org.apache.hadoop.dfs.$Proxy1.create(Unknown Source)
>   at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:1432)
>   at org.apache.hadoop.dfs.DFSClient.create(DFSClient.java:376)
>   at org.apache.hadoop.dfs.DistributedFileSystem.create(DistributedFileSystem.java:121)
>   at org.apache.hadoop.util.CopyFiles$FSCopyFilesMapper.copy(CopyFiles.java:284)
>   at org.apache.hadoop.util.CopyFiles$FSCopyFilesMapper.map(CopyFiles.java:352)
>   at org.apache.hadoop.util.CopyFiles$FSCopyFilesMapper.map(CopyFiles.java:217)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:195)
>   at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1750)
>
> It seems that this problem happened in the 2nd, 3rd, 4th attempts,
> after the first attempt failed.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.