hadoop-common-user mailing list archives

From "Lili Wu" <lil...@gmail.com>
Subject OOM error with large # of map tasks
Date Wed, 30 Apr 2008 20:39:27 GMT
We are using Hadoop 0.16 and are seeing a consistent problem: out-of-memory
errors when we have a large number of map tasks.
The specifics of what is submitted when we reproduce this:

Three large jobs:
1. 20,000 map tasks and 10 reduce tasks
2. 17,000 map tasks and 10 reduce tasks
3. 10,000 map tasks and 10 reduce tasks
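
For reference, here is roughly how one of these jobs is wired up with the
old mapred API. This is a minimal sketch rather than our actual driver:
the job name, paths, and identity mapper/reducer are placeholders, and
setNumMapTasks() is only a hint (the real map count comes from the input
splits).

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class BigJobDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(BigJobDriver.class);
    conf.setJobName("big-job-1");

    // 0.16-era path setters on JobConf (later moved to
    // FileInputFormat/FileOutputFormat); the paths are placeholders.
    conf.setInputPath(new Path(args[0]));
    conf.setOutputPath(new Path(args[1]));

    // The framework derives the actual number of map tasks from the
    // input splits; setNumMapTasks() is only a hint.
    conf.setNumMapTasks(20000);
    conf.setNumReduceTasks(10);

    conf.setMapperClass(IdentityMapper.class);
    conf.setReducerClass(IdentityReducer.class);
    conf.setOutputKeyClass(LongWritable.class);
    conf.setOutputValueClass(Text.class);

    JobClient.runJob(conf);
  }
}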

These run at normal priority, and periodically we swap the priorities
around so that each job gets some tasks started and completed.
Other, smaller jobs come and go every hour or so (no more than 200 map
tasks, 4-10 reducers).

Our cluster consists of 23 nodes, with capacity for 69 concurrent map tasks
and 69 concurrent reduce tasks (three of each per node).
Eventually we see consistent OOM errors in the task logs, and the
TaskTracker itself goes down on as many as 14 of our nodes.

We examined a heap dump after one of these TaskTracker crashes and found
something interesting: there were 572 instances of JobConf, accounting for
940 MB of String objects.  It seems quite odd that there are so many
JobConf instances.  The count appears to correlate with tasks in the
COMMIT_PENDING state, as shown on the status page for a TaskTracker node.
Has anyone observed something like this?  Can anyone explain what would
cause tasks to remain in this state (which also apparently keeps them in
memory rather than serialized to disk)?  In general, what does
COMMIT_PENDING mean?  (Task done, but output not yet committed to DFS?)
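
For a rough sense of how 572 JobConfs could add up to that much String
data, here is a small sketch (not taken from our cluster) of how one might
estimate the configuration strings a single JobConf holds. It assumes the
conf can be iterated as key/value entries, which I believe is true of
Configuration in later releases; 0.16 may differ.

import java.util.Map;

import org.apache.hadoop.mapred.JobConf;

public class JobConfFootprint {
  // Rough estimate of the String data held by one JobConf: sum the
  // characters of every key/value pair and double it (Java chars are
  // 2 bytes each).  Ignores object headers, backing arrays, etc.
  public static long estimateStringBytes(JobConf conf) {
    long chars = 0;
    for (Map.Entry<String, String> entry : conf) {
      chars += entry.getKey().length() + entry.getValue().length();
    }
    return chars * 2;
  }

  public static void main(String[] args) {
    JobConf conf = new JobConf();
    System.out.println("~" + estimateStringBytes(conf)
        + " bytes of String data in one JobConf");
  }
}

At roughly 1.6 MB of strings per retained JobConf on average, 572 instances
would account for the 940 MB we saw, so the real question is why so many
copies stay reachable in the TaskTracker at once.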

Thanks!
