hadoop-mapreduce-dev mailing list archives

From "Zhang Bingjun (Eddy)" <eddym...@gmail.com>
Subject too many 100% mapper does not complete / finish / commit
Date Mon, 02 Nov 2009 08:32:46 GMT
Dear hadoop fellows,

We have been using Hadoop-0.20.1 MapReduce to crawl some web data. In this
case we only have mappers, which crawl the data and save it into HDFS in a
distributed way. No reducer is specified in the job configuration.
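For reference, the job setup looks roughly like the sketch below (a minimal
map-only configuration; class names such as CrawlJob/CrawlMapper and the
placeholder crawl logic are illustrative, not our actual code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CrawlJob {

    // Placeholder mapper: fetches each input URL and emits (url, page content).
    public static class CrawlMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text url, Context context)
                throws java.io.IOException, InterruptedException {
            // ... fetch the URL here; the fetched content below is a stand-in ...
            context.write(url, new Text("fetched-content-placeholder"));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "web-crawl");
        job.setJarByClass(CrawlJob.class);
        job.setMapperClass(CrawlMapper.class);

        // Map-only job: with zero reducers, mapper output is committed
        // directly to HDFS by the output committer.
        job.setNumReduceTasks(0);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}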

The problem is that in every job, about one third of the mappers get stuck at
100% progress and never complete. If we look at the tasktracker logs of those
mappers, the last entry is the INFO line for the key input, and no further
log lines are output after that.

From the stdout log of a specific attempt of one of those mappers, we can
see that the mapper's map function has finished completely, so control of the
execution should be somewhere in the MapReduce framework itself.

Does anyone have any clue about this problem? Is it because we didn't use
any reducers? Since two thirds of the mappers complete successfully and
commit their output data into HDFS, I suspect the stuck mappers have
something to do with the MapReduce framework code.

Any input will be appreciated. Thanks a lot!

Best regards,
Zhang Bingjun (Eddy)

E-mail: eddymier@gmail.com, bingjun@nus.edu.sg, bingjun@comp.nus.edu.sg
Tel No: +65-96188110 (M)
