hadoop-common-user mailing list archives

From Michel Segel <michael_se...@hotmail.com>
Subject Re: Suggestions for swapping issue
Date Wed, 11 May 2011 17:40:51 GMT
You have to do the math...
If you have 2 GB per mapper and run 10 mappers per node, that's 20 GB of memory.
Then you have the TT (TaskTracker) and DN (DataNode) running, which also take memory...
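
To make that arithmetic concrete, here is a rough back-of-the-envelope check (just a sketch; the slot counts, daemon heaps, and per-JVM overhead below are illustrative assumptions, not your actual settings):

# Rough per-node memory budget for a 0.20-era Hadoop worker (illustrative numbers only).
# Slot counts come from mapred.tasktracker.map.tasks.maximum /
# mapred.tasktracker.reduce.tasks.maximum; the child heap from mapred.child.java.opts.
map_slots = 10         # assumed map slot count
reduce_slots = 2       # assumed reduce slot count
child_heap_gb = 2.0    # -Xmx of each task JVM
jvm_overhead_gb = 0.5  # stacks, native buffers, code cache per child JVM (rough guess)
tt_heap_gb = 1.0       # TaskTracker daemon heap (assumption)
dn_heap_gb = 1.0       # DataNode daemon heap (assumption)
physical_gb = 48.0

task_jvms = map_slots + reduce_slots
committed_gb = task_jvms * (child_heap_gb + jvm_overhead_gb) + tt_heap_gb + dn_heap_gb
print(f"{task_jvms} task JVMs -> ~{committed_gb:.1f} GB committed, "
      f"~{physical_gb - committed_gb:.1f} GB left for the OS and page cache")

The point is that the real footprint of each task JVM is larger than its -Xmx value, so the total can blow past physical RAM sooner than the heap settings suggest.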

What did you set as the number of mappers/reducers per node?

What do you see in Ganglia, or when you run top?
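
If top is hard to read with that many JVMs, something like this will count the live task JVMs on a node and add up their resident memory (a quick sketch; it assumes a Linux ps and that task JVMs run org.apache.hadoop.mapred.Child, as they do on 0.20-era clusters):

# Count Hadoop task JVMs on this node and sum their resident set size (RSS).
import subprocess

out = subprocess.check_output(["ps", "-eo", "rss,args"], text=True)
tasks = [line for line in out.splitlines() if "org.apache.hadoop.mapred.Child" in line]
rss_gb = sum(int(line.split(None, 1)[0]) for line in tasks) / (1024 * 1024)  # ps reports RSS in KB
print(f"{len(tasks)} task JVMs using ~{rss_gb:.1f} GB RSS")

Comparing that count against the configured slot totals should show whether the 36 processes you're seeing are orphaned task JVMs that never exited.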

Sent from a remote device. Please excuse any typos...

Mike Segel

On May 11, 2011, at 12:31 PM, Adi <adi.pandit@gmail.com> wrote:

> Hello Hadoop Gurus,
> We are running a 4-node cluster. We just upgraded the RAM to 48 GB. We have
> allocated around 33-34 GB per node for Hadoop processes, leaving the remaining
> 14-15 GB of memory for the OS and as a buffer. There are no other processes
> running on these nodes.
> Most of the lighter jobs run successfully, but one big job is destabilizing
> the cluster. One node starts swapping, runs out of swap space, and goes
> offline. We tracked the processes on that node and noticed that it ends up
> with more Hadoop Java processes than expected.
> The other 3 nodes were running 10 or 11 processes, while this node ends up with
> 36. After killing the job we find these processes still running and have
> to kill them manually.
> We have tried reducing swappiness to 6 but saw the same results. It also
> looks like Hadoop stays well within its allocated memory limits and still
> starts swapping.
> Some other suggestions we have seen are:
> 1) Increase swap size. The current size is 6 GB. The most quoted size is 'tons
> of swap', but we're not sure what that translates to in numbers. Should it be 16
> or 24 GB?
> 2) Increase the overcommit ratio. Not sure if this helps, as a few blog comments
> mentioned it didn't.
> Any other Hadoop or Linux config suggestions are welcome.
> Thanks.
> -Adi
