hadoop-common-dev mailing list archives

From Pavan Kulkarni <pavan.babu...@gmail.com>
Subject Setting number of parallel Reducers and Mappers for optimal performance
Date Fri, 10 Aug 2012 20:24:50 GMT

 I was trying to optimize Hadoop-1.0.2 performance by setting the map and reduce
task slot counts such that the entire memory is utilized. The suggested tuning for
these parameters is given as (CPUS > 2) ? (CPUS * 0.50) : 1 for reduce and
(CPUS > 2) ? (CPUS * 0.75) : 1 for map.
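The quoted heuristic can be written out as a small sketch; the method names below are illustrative, not Hadoop APIs, and the heuristic itself is the one quoted above:

```java
// Sketch of the quoted per-CPU slot heuristic.
// defaultMapSlots/defaultReduceSlots are hypothetical helper names,
// not part of any Hadoop class.
public class SlotDefaults {
    // (CPUS > 2) ? (CPUS * 0.75) : 1 for map slots
    static int defaultMapSlots(int cpus) {
        return cpus > 2 ? (int) (cpus * 0.75) : 1;
    }

    // (CPUS > 2) ? (CPUS * 0.50) : 1 for reduce slots
    static int defaultReduceSlots(int cpus) {
        return cpus > 2 ? (int) (cpus * 0.50) : 1;
    }

    public static void main(String[] args) {
        int cpus = 16; // example core count
        System.out.println(defaultMapSlots(cpus) + " map slots, "
                + defaultReduceSlots(cpus) + " reduce slots");
    }
}
```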
I don't quite see how this suggestion was derived. Isn't the setting
dependent on the main memory available?
For example, I have 48 GB of memory, and I split it as 32 for mappers,
12 for reducers, and the remaining 4 for the OS and other processes.
Please correct me if my assumption is wrong, and please suggest a way to get
optimal performance by setting these parameters. Thanks.
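One way to reason about the memory side of the question is to cap the number of concurrent task JVMs by available RAM rather than by CPUs alone. The sketch below is illustrative only: the helper name, the 1 GB per-task heap, and the 4 GB OS reservation are assumptions (the per-task heap would come from mapred.child.java.opts in a real cluster):

```java
// Hypothetical sketch: bound concurrent task JVMs by memory, not just CPUs.
// maxSlotsByMemory is an illustrative helper, not a Hadoop API.
public class MemoryCap {
    // totalGb: node RAM; reservedGb: OS/daemon reserve; perTaskGb: heap per task JVM
    static int maxSlotsByMemory(int totalGb, int reservedGb, int perTaskGb) {
        return Math.max(1, (totalGb - reservedGb) / perTaskGb);
    }

    public static void main(String[] args) {
        // 48 GB node, 4 GB reserved, ~1 GB per task JVM (assumed)
        int slots = maxSlotsByMemory(48, 4, 1);
        System.out.println("At most " + slots + " concurrent task JVMs");
    }
}
```

The effective slot count per tasktracker would then be the smaller of the CPU-based and memory-based limits.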


--
With Regards
Pavan Kulkarni
