hadoop-common-user mailing list archives

From "#YONG YONG CHENG#" <aarnc...@pmail.ntu.edu.sg>
Subject Cleanup Attempt in Map Task
Date Thu, 28 Jan 2010 09:59:44 GMT
Good Day,
 
Is there any way to control the cleanup attempt of a failed map task without changing the
Hadoop platform itself, i.e. purely from within my MapReduce application?
 
I have found that FileSystem.copyFromLocal() sometimes takes a long time. Is there any
other method in the Hadoop API that I can use to transfer my file to HDFS more quickly?
 
Situation: each map task in my job executes very fast, in under 5 secs. But it normally hangs
in FileSystem.copyFromLocal(), which can take more than 55 secs. Since the machine timeout is
5 secs and the task timeout is 1 min, the task fails, and each subsequent attempt also fails
at the same FileSystem.copyFromLocal() call.
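If the timeout itself is the problem, one common pattern is to report liveness from a side thread while the long copy runs. A minimal sketch of that shape, with nothing Hadoop-specific: java.nio.Files.copy stands in for the HDFS copy, and a plain java.util.Timer stands in for periodically calling the task's progress hook (Reporter.progress() in the old API, context.progress() in the new one); the class and method names below are illustrative, not from the original message.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Timer;
import java.util.TimerTask;

public class KeepAliveCopy {
    // Copy src to dst while invoking keepAlive once a second on a daemon
    // timer thread. In a real map task, keepAlive would be the Hadoop
    // progress call that resets the task-timeout clock.
    static long copyWithKeepAlive(Path src, Path dst, Runnable keepAlive)
            throws IOException {
        Timer timer = new Timer(true); // daemon thread, dies with the JVM
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override public void run() { keepAlive.run(); }
        }, 0L, 1000L); // ping well inside the 1-minute task timeout
        try (InputStream in = Files.newInputStream(src)) {
            // Files.copy(InputStream, Path, ...) returns the byte count.
            return Files.copy(in, dst, StandardCopyOption.REPLACE_EXISTING);
        } finally {
            timer.cancel(); // stop pinging once the copy finishes or fails
        }
    }

    public static void main(String[] args) throws Exception {
        Path src = Files.createTempFile("src", ".dat");
        Files.write(src, new byte[]{1, 2, 3});
        Path dst = Files.createTempFile("dst", ".dat");
        long copied = copyWithKeepAlive(src, dst,
                () -> { /* stand-in for reporter.progress() */ });
        System.out.println("copied=" + copied + " bytes");
    }
}
```

This does not make the copy faster, but it keeps the tracker from killing an attempt that is merely slow rather than stuck.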
 
Thanks. Any solutions are welcome.
