hadoop-common-user mailing list archives

From Jeff Zhang <zjf...@gmail.com>
Subject Re: Cleanup Attempt in Map Task
Date Thu, 28 Jan 2010 10:24:31 GMT
One easy way is to increase the timeout by setting mapred.task.timeout in your job configuration.
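
This can be raised per-job or cluster-wide. A minimal sketch of the cluster-wide form, assuming a 2-minute limit is acceptable for the slow copy described below (the value is in milliseconds, and 0 disables the timeout entirely):

```xml
<!-- mapred-site.xml: raise the task timeout so a slow
     copyFromLocal() no longer kills the attempt.
     Value is in milliseconds; 0 disables the timeout. -->
<property>
  <name>mapred.task.timeout</name>
  <value>120000</value>
</property>
```

The same key can also be set programmatically on the job's configuration before submission, which avoids a cluster-wide change and limits the longer timeout to the one job that needs it.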

On Thu, Jan 28, 2010 at 5:59 PM, #YONG YONG CHENG# <
aarnchng@pmail.ntu.edu.sg> wrote:

> Good Day,
>
> Is there any way to control the cleanup attempt of a failed map task
> without changing the Hadoop platform? I mean doing it in my MapReduce
> application.
>
> I discovered that FileSystem.copyFromLocal() sometimes takes a long time.
> Is there any other method in the Hadoop API that I can use to transfer my
> file to HDFS more swiftly?
>
> Situation: Each map task in my job executes very fast, in under 5 secs.
> But it normally hangs at FileSystem.copyFromLocal(), which can take more
> than 55 secs. As the machine timeout is 5 secs and the task timeout is
> 1 min, the task fails, and subsequent attempts also fail at
> FileSystem.copyFromLocal().
>
> Thanks. I welcome any solutions.

Best Regards

Jeff Zhang
