hadoop-mapreduce-dev mailing list archives

From Pedro Costa <psdc1...@gmail.com>
Subject Re: DFSClient.closeInternal:3231 Could not complete file
Date Wed, 11 Apr 2012 12:58:02 GMT
I modified the algorithm that decides whether a job should end successfully or
not. I don't know if I'm removing the temporary directories before the last
reduce task saves its result to HDFS.
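
As a rough sketch of the ordering I have in mind (this is not the real
OutputCommitter code, and the output path and attempt name below are only
illustrative), the reduce output has to be promoted out of the _temporary
directory before any job-level cleanup deletes it:

[code]
// Illustrative only: shows the commit-before-cleanup ordering, not Hadoop's
// actual OutputCommitter implementation. Paths and attempt IDs are made up.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CommitOrderSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path outputDir = new Path("/user/output");                // hypothetical job output dir
    Path attemptDir = new Path(outputDir,
        "_temporary/_attempt_201204100000_0001_r_000000_0");  // hypothetical attempt dir

    // 1. Commit: move the reduce task's files into the final output directory.
    for (FileStatus status : fs.listStatus(attemptDir)) {
      fs.rename(status.getPath(), new Path(outputDir, status.getPath().getName()));
    }

    // 2. Cleanup: only after every task has been committed may _temporary be
    //    deleted. Deleting it earlier drops output that is still being written.
    fs.delete(new Path(outputDir, "_temporary"), true);
  }
}
[/code]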

But I think this error means that the reduce task couldn't save its data to
HDFS.
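
From the location in the log line (DFSClient.closeInternal), the retrying
happens while the client is closing an output stream for that file: it keeps
asking the NameNode to complete the file until all of its blocks are accounted
for. A minimal sketch of that situation (my own illustration with a made-up
path, not code from the job):

[code]
// Minimal illustration: close() does not return until the NameNode can mark the
// file complete. If the blocks never reach minimum replication (for example
// because the DataNodes they were written to are unavailable), the client keeps
// retrying and logging "Could not complete file ... retrying".
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CloseRetrySketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(new Path("/user/output/part-r-00000")); // hypothetical path
    out.writeBytes("reduce output\n");
    out.close(); // blocks here, retrying, while "Could not complete file ... retrying" is logged
  }
}
[/code]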

On 10 April 2012 18:56, Harsh J <harsh@cloudera.com> wrote:

> Pedro,
>
> Could we also know what was modified, since you claim it happens only
> in the modified build?
>
> On Tue, Apr 10, 2012 at 9:15 PM, Pedro Costa <psdc1978@gmail.com> wrote:
> > When I'm executing a MapReduce example on my modified Hadoop MapReduce,
> > sometimes the reduce task gives me this error and the example doesn't
> > finish:
> >
> > [code]
> > 2012-04-10 11:32:38,110 INFO hdfs.DFSClient.closeInternal:3231 Could not
> > complete file /user/output//part-r-00000 retrying....
> > [/code]
> >
> > This normally happens when I'm executing an example after restarting the JT
> > and TT without restarting the NameNode and the DataNode.
> >
> > What does this error mean?
> >
> > Please note that this happens only with my modified version, not the
> > official one. I modified version 0.20.1.
> >
> > --
> > Best regards,
>
>
>
> --
> Harsh J
>



-- 
Best regards,
