hadoop-common-user mailing list archives

From Steve Lewis <lordjoe2...@gmail.com>
Subject Changing the maximum tasks per node on a per job basis
Date Wed, 22 May 2013 09:25:43 GMT
I have a series of Hadoop jobs to run. One of my jobs requires larger than
standard memory, so I allow each task to use 2GB of memory. When I run some of
these jobs, the slave nodes crash because they run out of swap space. It is not
that a slave could not run one, or even 4, of these jobs, but 8 stresses the
limits.
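(For context, this is roughly how I give that job its larger heap on a per-job
basis -- a minimal sketch using the old mapred API; the class name and the
elided paths/mapper/reducer are placeholders for my real job:)

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

// Sketch of the driver for the large-memory job (MR1 / old mapred API).
public class LargeMemoryJob {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(LargeMemoryJob.class);
    conf.setJobName("large-memory-job");
    // Per-job setting: each child task JVM may use up to a 2GB heap.
    conf.set("mapred.child.java.opts", "-Xmx2048m");
    // ... input/output paths, mapper and reducer classes as usual ...
    JobClient.runJob(conf);
  }
}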
I could cut mapred.tasktracker.reduce.tasks.maximum for the entire cluster,
but this cripples the whole cluster for the sake of one job out of many.
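(As far as I know, that knob lives in mapred-site.xml on every slave and is
only read when the task tracker starts, so the only way to lower it is
something like the following, which then applies to every job on the cluster;
the value 4 is just an example:)

<property>
  <!-- Cluster-wide cap on concurrent reduce tasks per task tracker;
       lowering it here affects every job, not just the 2GB one. -->
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>
</property>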
It seems to be a very bad design
a) to allow the job tracker to keep assigning tasks to a slave that is already
getting low on memory,
b) to allow the user to run jobs capable of crashing nodes on the cluster, and
c) not to allow the user to specify that some jobs need to be limited to a
lower tasks-per-node maximum without imposing that limit on every job.

Are there plans to fix this??
