hadoop-common-user mailing list archives

From Adi <adi.pan...@gmail.com>
Subject Suggestions for swapping issue
Date Wed, 11 May 2011 17:31:19 GMT
Hello Hadoop Gurus,
We are running a 4-node cluster and just upgraded the RAM to 48 GB per node. We
have allocated around 33-34 GB per node for Hadoop processes, leaving the
remaining 14-15 GB for the OS and buffer cache. There are no other processes
running on these nodes.
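For reference, on a 0.20-era cluster a per-node budget like this is usually expressed as task slots times child-JVM heap in mapred-site.xml. The slot counts and heap size below are illustrative assumptions, not our actual settings:

```xml
<!-- mapred-site.xml: illustrative sketch only. The slot counts and heap
     are assumptions chosen so that (map slots + reduce slots) x child heap
     stays near the 33-34 GB budget, e.g. 16 slots x 2 GB = 32 GB. -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>10</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>6</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```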
Most of the lighter jobs run successfully, but one big job is destabilizing the
cluster: one node starts swapping, runs out of swap space, and goes offline. We
tracked the processes on that node and noticed that it ends up with more Hadoop
Java processes than expected. The other 3 nodes were running 10 or 11 processes
each, while this node ended up with 36. Even after killing the job these
processes still show up, and we have to kill them manually.
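A minimal sketch of how the stray task JVMs can be counted and cleaned up (the `org.apache.hadoop.mapred.Child` class name assumes a 0.20-style TaskTracker):

```shell
# Count task JVMs forked by the TaskTracker; the bracketed first letter
# keeps grep from matching its own command line, and '|| true' covers
# grep's non-zero exit status when the count is 0.
ps -ef | grep -c '[o]rg.apache.hadoop.mapred.Child' || true

# Orphaned task JVMs left behind after killing the job can be removed with:
#   pkill -f 'org.apache.hadoop.mapred.Child'
```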
We have tried reducing swappiness to 6 but saw the same result. It also looks
like Hadoop stays well within its allocated memory limits, and yet the node
still starts swapping.
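For completeness, this is how the swappiness change can be inspected and applied (a sketch; the persistence step assumes root and a sysctl.conf-based distro):

```shell
# Show the current value; 60 is the usual default, and lower values make
# the kernel less eager to swap out anonymous (e.g. JVM heap) pages.
cat /proc/sys/vm/swappiness

# Apply a lower value at runtime (needs root):
#   sysctl -w vm.swappiness=6
# Persist it across reboots:
#   echo 'vm.swappiness = 6' >> /etc/sysctl.conf
```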

Some other suggestions we have seen are:
1) Increase the swap size. The current size is 6 GB. The most quoted advice is
'tons of swap', but we are not sure what that translates to in numbers. Should
it be 16 or 24 GB?
2) Increase the overcommit ratio. Not sure if this helps, as a few blog
comments mentioned it didn't.
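Both knobs above can be checked from /proc before changing anything (a sketch; note that vm.overcommit_ratio only takes effect when vm.overcommit_memory is set to 2, strict accounting):

```shell
# 0 = heuristic overcommit (default), 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
# Percentage of physical RAM counted toward CommitLimit (mode 2 only)
cat /proc/sys/vm/overcommit_ratio
# Current swap devices and their sizes
cat /proc/swaps
# In mode 2, CommitLimit = swap + overcommit_ratio% of RAM
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```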

Any other hadoop or linux config suggestions are welcome.


