hadoop-common-user mailing list archives

From Ted Dunning <tdunn...@veoh.com>
Subject Re: question on Hadoop configuration for non-CPU-intensive jobs - 0.15.1
Date Tue, 25 Dec 2007 21:56:16 GMT

What are your mappers doing that makes them run out of memory? Or is it
your reducers?

Often, you can write this sort of program so that memory use does not grow
with the split size.
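For example, a mapper that transforms each record as it arrives and keeps no
state across calls needs only constant memory, however large the split. A
minimal sketch against the 0.15-era (non-generic) mapred API; the class and
field names are mine, purely illustrative:

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Emits the length of each input line. Heap use stays flat because
    // nothing is buffered across map() calls.
    public class LineLengthMapper extends MapReduceBase implements Mapper {

        // One reusable output value instead of a fresh object per record.
        private final IntWritable length = new IntWritable();

        public void map(WritableComparable key, Writable value,
                        OutputCollector output, Reporter reporter)
                throws IOException {
            // Derive the output from the current record only; no
            // per-split state accumulates, so a 640 MB split costs
            // no more memory than a 64 MB one.
            length.set(((Text) value).getLength());
            output.collect(key, length);
        }
    }

Trouble usually starts when a mapper collects records into an in-memory
structure (a list or map keyed per split); that is what makes memory scale
with split size.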


On 12/25/07 1:52 PM, "Jason Venner" <jason@attributor.com> wrote:

> We have tried reducing the number of splits by increasing the block
> sizes to 10x and 5x 64 MB, but then we constantly hit out-of-memory
> errors and timeouts. At this point each JVM is getting 768 MB, and I
> can't readily allocate more without dipping into swap.
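For reference, the two knobs in play here are the HDFS block size and the
per-task child JVM heap. An illustrative hadoop-site.xml fragment with
values matching the numbers above (a sketch, not a recommendation):

    <!-- hadoop-site.xml: illustrative values only -->
    <property>
      <name>dfs.block.size</name>
      <value>335544320</value> <!-- 5 x 64 MB: fewer, larger splits -->
    </property>
    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx768m</value> <!-- heap given to each task JVM -->
    </property>

Note the trade-off: each 5x-larger split feeds five times as much data
through a single task JVM, which only helps if the task's memory use is
independent of split size, which is the point above.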

