hadoop-mapreduce-user mailing list archives

From Rob Stewart <robstewar...@googlemail.com>
Subject Re: Setting #Reducers at runtime
Date Thu, 18 Feb 2010 16:44:02 GMT
OK, thanks for letting me know.

I'll make a tiny change to this code to allow reducers as a parameter, and
rerun my experiments.
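A minimal sketch of that "tiny change" (illustrative only: the `-r` flag and the helper below are my own invention, not part of Hadoop's Sort example — the real driver might parse its arguments differently):

```java
// Hypothetical sketch: pull the reducer count from a command-line flag
// instead of deriving it purely from cluster size.
public class ReducerArg {

    // Scan args for "-r <n>" and return <n>; otherwise fall back to the
    // default the driver computed (e.g. the 0.9 * getMaxReduceTasks()
    // value in the quoted snippet).
    public static int parseReducers(String[] args, int defaultReduces) {
        for (int i = 0; i < args.length - 1; i++) {
            if ("-r".equals(args[i])) {
                return Integer.parseInt(args[i + 1]);
            }
        }
        return defaultReduces;
    }

    // In the driver, the last line of the quoted snippet would then become:
    //   jobConf.setNumReduceTasks(parseReducers(args, num_reduces));
}
```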


Thanks Eric,

Rob



On 18 February 2010 16:37, E. Sammer <eric@lifeless.net> wrote:

> On 2/18/10 11:24 AM, Rob Stewart wrote:
>
>> Hi Eric, thanks.
>>
>> It appears not:
>> ----------------
>>  JobConf jobConf = new JobConf(getConf(), Sort.class);
>>  jobConf.setJobName("join");
>>
>>  jobConf.setMapperClass(IdentityMapper.class);
>>  jobConf.setReducerClass(IdentityReducer.class);
>>
>>  JobClient client = new JobClient(jobConf);
>>  ClusterStatus cluster = client.getClusterStatus();
>>  int num_maps = cluster.getTaskTrackers() *
>>                 jobConf.getInt("test.sort.maps_per_host", 10);
>>  int num_reduces = (int) (cluster.getMaxReduceTasks() * 0.9);
>>  String sort_reduces = jobConf.get("test.sort.reduces_per_host");
>>  if (sort_reduces != null) {
>>      num_reduces = cluster.getTaskTrackers() *
>>                      Integer.parseInt(sort_reduces);
>>  }
>>
>>  jobConf.setNumReduceTasks(num_reduces);
>>
>> -----------
>>
>> Any idea why my parameter for reduce tasks is being ignored?
>>
>
> Rob:
>
> The code is setting the number of reducers itself. See the line:
>
> jobConf.setNumReduceTasks(num_reduces);
>
> In short, you can't control the number of reducers this code uses from the
> command line.
>
>
> --
> Eric Sammer
> eric@lifeless.net
> http://esammer.blogspot.com
>
