hadoop-user mailing list archives

From Abbass MAROUNI <abbass.maro...@virtualscale.fr>
Subject MapReduce Memory Utilization
Date Wed, 18 Jun 2014 16:20:23 GMT
Hi all,

I have a Hadoop cluster with 4 DataNode+NodeManager machines and 1 
NameNode+ResourceManager machine. I'm launching an MR job (identity mapper and 
identity reducer) with the relevant memory settings set to appropriate 
values:
mapreduce.[map|reduce].memory.mb, the JVM child opts 
(mapreduce.[map|reduce].java.opts), the map sort buffer, the reduce buffer, ...
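
For concreteness, here is a sketch of the kind of settings I mean in 
mapred-site.xml (the values below are illustrative only, not my actual 
configuration):

```xml
<!-- Illustrative values only; real settings depend on cluster capacity. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value> <!-- container size requested for each map task -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value> <!-- map JVM heap, kept below the container size -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value> <!-- container size requested for each reduce task -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3276m</value> <!-- reduce JVM heap, kept below the container size -->
</property>
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>512</value> <!-- map-side sort buffer; must fit inside the map heap -->
</property>
```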

Does the framework guarantee that I will not run into an "Out of memory" 
situation, regardless of input dataset size? In other words, are the only 
things that can lead to an "Out of memory" error on mappers or reducers:
- Bad memory settings (for example, map sort buffer > mapreduce.map.memory.mb)
- Bad Mapper/Reducer code (user code)
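
As an example of the first case (illustrative numbers only), the sort buffer 
could be set larger than the map task's memory allotment, so the buffer can 
never fit in the map JVM heap:

```xml
<!-- Misconfigured: the sort buffer exceeds the map container size (and
     therefore the map heap inside it), so map-side sorting would OOM. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>1536</value> <!-- larger than the whole map container above -->
</property>
```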

Best Regards,



