hadoop-common-user mailing list archives

From lohit <lohit.vijayar...@gmail.com>
Subject Re: Memory based scheduling
Date Tue, 30 Oct 2012 16:08:06 GMT
As far as I recall this is not possible. Per-job or per-user configurations
like these are a little difficult in the existing version.
What you could try is to set the maximum maps per job to, say, half of the
cluster capacity. (This is possible with the FairScheduler; I do not know of
a way to do it with other schedulers.)
For example, if you have 10 nodes with 4 slots each, you would create a pool
and set its max maps to 20.
The JobTracker will try its best to spread tasks across nodes provided there
are empty slots. But again, this is not guaranteed.
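To illustrate the pool setup above, here is a minimal sketch of a FairScheduler allocation file (the pool name "bigmem" is an illustrative assumption; the file location is whatever mapred.fairscheduler.allocation.file points to):

```xml
<?xml version="1.0"?>
<allocations>
  <!-- Cap this pool at 20 concurrent map tasks:
       half of a 10-node x 4-slot cluster -->
  <pool name="bigmem">
    <maxMaps>20</maxMaps>
  </pool>
</allocations>
```

Jobs submitted to that pool (e.g. via mapred.fairscheduler.pool) would then never run more than 20 maps at once, though as noted there is no guarantee they land one per node.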

2012/10/30 Marco Zühlke <mzuehlke@gmail.com>

> Hi,
> on our cluster our jobs are usually satisfied with less than 2 GB of heap
> space.
> So on our 8 GB computers we allow a maximum of 3 maps, and on our 16 GB
> computers a maximum of 4 maps (we only have quad-core CPUs and want to
> keep memory free for the reducers). This works very well.
> But now we have a new kind of job. Each mapper requires at least 4 GB
> of heap space.
> Is it possible to limit the number of tasks (mappers) per computer to 1
> or 2 for
> these kinds of jobs?
> Regards,
> Marco

Have a Nice Day!
