Subject: Re: Job fails while re attempting the task in multiple outputs case
From: Jiayu Ji <jiayu.ji@gmail.com>
To: user@hadoop.apache.org
Date: Mon, 30 Dec 2013 13:57:03 -0600

I think that if a task fails, the output belonging to that task attempt is
cleaned up before the second attempt. My guess is that you are getting this
exception because two reducers tried to write to the same file. One thing to
be aware of is that all data that is supposed to end up in the same file
should go to the same reducer. Say data1 is on reducer1 and data2 is on
reducer2 and both are supposed to be stored in fileAll: then you will end up
with exactly that exception.
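As a rough illustration of that last point (a sketch only, not your code; the
key layout and class name are made up), a custom Partitioner can route every
record that belongs to one output file to the same reducer, so only a single
task ever creates that file:

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Hypothetical: keys are assumed to look like "<fileKey>\t<rest>", where
    // fileKey is also used as the MultipleOutputs base name. Partitioning on
    // fileKey alone sends all records destined for one file to one reducer.
    public class OutputFilePartitioner extends Partitioner<Text, Text> {
      @Override
      public int getPartition(Text key, Text value, int numPartitions) {
        String fileKey = key.toString().split("\t", 2)[0];
        return (fileKey.hashCode() & Integer.MAX_VALUE) % numPartitions;
      }
    }

It would be registered with job.setPartitionerClass(OutputFilePartitioner.class).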
On Mon, Dec 30, 2013 at 11:22 AM, AnilKumar B <akumarb2010@gmail.com> wrote:

> Thanks Harsh.
>
> @Are you using the MultipleOutputs class shipped with Apache Hadoop or
> one of your own?
> I am using Apache Hadoop's MultipleOutputs.
>
> But as you can see in the stack trace, it is not appending the attempt id
> to the file name; the name only contains the task id.
>
>
> Thanks & Regards,
> B Anil Kumar.
>
>
> On Mon, Dec 30, 2013 at 7:42 PM, Harsh J <harsh@cloudera.com> wrote:
>
>> Are you using the MultipleOutputs class shipped with Apache Hadoop or
>> one of your own?
>>
>> If it's the latter, please take a look at the gotchas to take care of,
>> described at
>> http://wiki.apache.org/hadoop/FAQ#Can_I_write_create.2Fwrite-to_hdfs_files_directly_from_map.2Freduce_tasks.3F
>>
>> On Mon, Dec 30, 2013 at 4:22 PM, AnilKumar B <akumarb2010@gmail.com> wrote:
>> > Hi,
>> >
>> > I am using multiple outputs in our job, so whenever any reduce task fails,
>> > all of its subsequent task attempts fail with a file-exists exception.
>> >
>> >
>> > The output file name should also include the task attempt, right? But it
>> > only appends the task id. Is this a bug, or is something wrong on my side?
>> >
>> > Where should I look in the source code? I went through the code in
>> > FileOutputFormat.getTaskOutputPath(), but there it only considers the
>> > task id.
>> >
>> >
>> > Exception Trace:
>> > 13/12/29 09:13:00 INFO mapred.JobClient: Task Id :
>> > attempt_201312162255_60465_r_000008_0, Status : FAILED
>> > 13/12/29 09:14:42 WARN mapred.JobClient: Error reading task output
>> > http://localhost:50050/tasklog?plaintext=true&attemptid=attempt_201312162255_60465_r_000008_0&filter=stdout
>> > 13/12/29 09:14:42 WARN mapred.JobClient: Error reading task output
>> > http://localhost:50050/tasklog?plaintext=true&attemptid=attempt_201312162255_60465_r_000008_0&filter=stderr
>> > 13/12/29 09:15:04 INFO mapred.JobClient:  map 100% reduce 93%
>> > 13/12/29 09:15:23 INFO mapred.JobClient:  map 100% reduce 96%
>> > 13/12/29 09:17:31 INFO mapred.JobClient:  map 100% reduce 97%
>> > 13/12/29 09:19:34 INFO mapred.JobClient: Task Id :
>> > attempt_201312162255_60465_r_000008_1, Status : FAILED
>> > org.apache.hadoop.ipc.RemoteException: java.io.IOException: failed to create
>> > /x/y/z/2013/12/29/04/o_2013_12_29_03-r-00008.gz on client 10.103.10.31
>> > either because the filename is invalid or the file exists
>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1672)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1599)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:732)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:711)
>> >         at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >         at java.lang.reflect.Method.invoke(Method.java:597)
>> >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1448)
>> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1444)
>> >         at java.security.AccessController.doPrivileged(Native Method)
>> >         at javax.security.auth.Subject.doAs(Subject.java:396)
>> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>> >         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1442)
>> >
>> >         at org.apache.hadoop.ipc.Client.call(Client.java:1118)
>> >         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>> >         at $Proxy7.create(Unknown Source)
>> >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >         at java.lang.reflect.Method.invoke(Method.java:597)
>> >         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>> >         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>> >         at $Proxy7.create(Unknown Source)
>> >         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:3753)
>> >         at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:937)
>> >         at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:207)
>> >         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:555)
>> >         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:536)
>> >         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:443)
>> >         at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:131)
>> >         at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.getRecordWriter(MultipleOutputs.java:411)
>> >         at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.write(MultipleOutputs.java:370)
>> >         at com.team.hadoop.mapreduce1$Reducer1.reduce(MapReduce1.java:254)
>> >         at com.team.hadoop.mapreduce1$Reducer1.reduce(MapReduce1.java:144)
>> >         at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:177)
>> >         at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
>> >         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:418)
>> >         at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>> >         at java.security.AccessController.doPrivileged(Native Method)
>> >         at javax.security.auth.Subject.doAs(Subject.java:396)
>> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>> >         at org.apache.hadoop.mapred.Child.main(Child.java:249)
>> >
>> > 13/12/29 09:19:35 WARN mapred.JobClient: Error reading task output
>> > http://localhost:50050/tasklog?plaintext=true&attemptid=attempt_201312162255_60465_r_000008_1&filter=stdout
>> > 13/12/29 09:19:35 WARN mapred.JobClient: Error reading task output
>> > http://localhost:50050/tasklog?plaintext=true&attemptid=attempt_201312162255_60465_r_000008_1&filter=stderr
>> > 13/12/29 09:19:36 INFO mapred.JobClient:  map 100% reduce 89%
>> > 13/12/29 09:19:54 INFO mapred.JobClient:  map 100% reduce 92%
>> > 13/12/29 09:20:11 INFO mapred.JobClient:  map 100% reduce 96%
>> > 13/12/29 09:22:52 INFO mapred.JobClient:  map 100% reduce 97%
>> > 13/12/29 09:23:48 INFO mapred.JobClient: Task Id :
>> > attempt_201312162255_60465_r_000008_2, Status : FAILED
>> > org.apache.hadoop.ipc.RemoteException: java.io.IOException: failed to create
>> > /x/y/z/2013/12/29/04/o_2013_12_29_03-r-00008.gz on client 10.103.7.33 either
>> > because the filename is invalid or the file exists
>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1672)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1599)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:732)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:711)
>> >         at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >         at java.lang.reflect.Method.invoke(Method.java:597)
>> >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1448)
>> >
>> >
>> > Thanks & Regards,
>> > B Anil Kumar.
>>
>>
>> --
>> Harsh J
>
>

--
Jiayu (James) Ji,

Cell: (312)823-7393
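The gotcha behind the FAQ entry Harsh linked above is that any file a task
creates at a fixed HDFS path survives a failed attempt, so the re-attempt
finds it already present and fails exactly as in the trace. Below is a minimal
sketch of the pattern the FAQ recommends (class and file names are
hypothetical; new mapreduce API): create task-side files under the attempt's
work directory, which the OutputCommitter discards on failure and promotes on
success.

    import java.io.IOException;

    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Sketch only: "side_output" is a made-up name. The file lives under
    // getWorkOutputPath(), i.e. the task attempt's temporary directory, so a
    // retried attempt never collides with a file left by a failed attempt.
    public class SideFileReducer extends Reducer<Text, Text, Text, Text> {
      @Override
      protected void setup(Context context) throws IOException, InterruptedException {
        // Resolves to something like <job output dir>/_temporary/<attempt id>/
        Path workDir = FileOutputFormat.getWorkOutputPath(context);
        Path sideFile = new Path(workDir, "side_output");
        FileSystem fs = sideFile.getFileSystem(context.getConfiguration());
        FSDataOutputStream out = fs.create(sideFile, false); // no overwrite needed
        out.writeUTF("written by " + context.getTaskAttemptID());
        out.close();
      }
    }

A hedged note on MultipleOutputs itself: the baseOutputPath handed to write()
is normally resolved under this same work directory, but an absolute path such
as /x/y/z/... may resolve outside it, which would match both the missing
attempt id in the file name and the collision seen in the trace above.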