hadoop-common-user mailing list archives

From Arun C Murthy <...@yahoo-inc.com>
Subject Re: sort failing, help?
Date Tue, 12 Aug 2008 23:43:59 GMT
io.sort.mb and fs.inmemory.size.mb are way too high given you are
using the default of -Xmx200m.

Bump both down to 100-200 and raise -Xmx to 512M via
mapred.child.java.opts.
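
The overrides above could be sketched in hadoop-site.xml roughly as
follows (a minimal illustration, assuming the settings go in the
cluster's hadoop-site.xml; the 128/512 values are one point in the
suggested 100-200 MB / 512M ranges, not prescribed numbers):

```xml
<!-- Hypothetical hadoop-site.xml fragment: sort buffers sized to fit
     inside the task JVM heap, and the heap itself raised to 512M. -->
<property>
  <name>io.sort.mb</name>
  <value>128</value>
</property>
<property>
  <name>fs.inmemory.size.mb</name>
  <value>128</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
```

The point is that both buffer settings must fit comfortably inside the
child JVM heap; with the default -Xmx200m, a 256 MB sort buffer or a
2048 MB in-memory filesystem cannot be allocated without exhausting the
heap.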

Arun

On Aug 12, 2008, at 1:26 PM, James Graham (Greywolf) wrote:

> Environment specifications:
>
> Hadoop 0.16.4 (stable)
> MACHINES: 20 (18 datanodes)
> RAM: 8G
> SWAP: none (most of our production machines do not use swap as it
>    kills response; this may change for the hadoop machines)
> CPU: 4
> OS: Gentoo Linux; kernel 2.6.23
>
> Problem:
> The sort example routine is failing.  The map completes successfully,
> but the reduce fails with a GC out-of-memory/heap problem.
>
> PARAMETERS (in human readable format)
> io.sort.mb = 256
> io.file.buffer.size = 65536
> io.bytes.per.checksum = 4096
> fs.inmemory.size.mb = 2048
> dfs.namenode.handler.count = 128
> dfs.balance.bandwidthPerSec = 131072
> mapred.job.tracker.handler.count = 1
> local.cache.size = 238435456
> mapred.map.tasks = 67
> mapred.reduce.tasks = 23
> mapred.reduce.parallel.copies = 4
> mapred.child.java.opts = default (changing the heap size doesn't  
> seem to help)
> mapred.inmem.merge.threshold = 0 (let the ramfs memory consumption  
> trigger)
> mapred.submit.replication = 5
> tasktracker.http.threads = 128
> ipc.server.listen.queue.size = 128
>
> # all others are default values.
>
> What should I be looking at, here?
>
> -- 
> James Graham (Greywolf)							      |
> 650.930.1138|925.768.4053						      *
> greywolf@searchme.com							      |
> Check out what people are saying about SearchMe! -- click below
> 	http://www.searchme.com/stack/109aa

