hadoop-common-user mailing list archives

From Subroto <ssan...@datameer.com>
Subject LocalJobRunner is not using the correct JobConf to setup the OutputCommitter
Date Tue, 28 May 2013 09:16:14 GMT
Hi,

I am reusing a JobClient object, which internally holds a LocalJobRunner instance.
When I submit a Job via this JobClient, the LocalJobRunner does not use the correct JobConf
when it calls OutputCommitter.setupJob().
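
To make the scenario concrete, here is roughly the usage pattern (a minimal sketch; the class
name, the paths, and the two-job setup are mine and only illustrate the reuse, they are not my
actual code):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class ReuseJobClientExample {
  public static void main(String[] args) throws Exception {
    // One JobClient, reused for several submissions. In local mode it keeps
    // a single LocalJobRunner built from the conf passed here.
    JobConf clientConf = new JobConf();
    JobClient jobClient = new JobClient(clientConf);

    // First job with its own JobConf (paths are just placeholders).
    JobConf job1 = new JobConf();
    FileInputFormat.setInputPaths(job1, new Path("input1"));
    FileOutputFormat.setOutputPath(job1, new Path("output1"));
    jobClient.submitJob(job1);

    // Second job, again with its own JobConf. Its OutputCommitter.setupJob()
    // is called with a JobContext built from clientConf, not from job2.
    JobConf job2 = new JobConf();
    FileInputFormat.setInputPaths(job2, new Path("input2"));
    FileOutputFormat.setOutputPath(job2, new Path("output2"));
    jobClient.submitJob(job2);
  }
}

With this pattern the LocalJobRunner is created once from clientConf, so setupJob() for both
submissions sees clientConf rather than the conf of the submitted Job.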

Following is a code snippet from org.apache.hadoop.mapred.LocalJobRunner.Job.run():
public void run() {
      JobID jobId = profile.getJobID();
      JobContext jContext = new JobContext(conf, jobId);
      OutputCommitter outputCommitter = job.getOutputCommitter();
      try {
        TaskSplitMetaInfo[] taskSplitMetaInfos =
          SplitMetaInfoReader.readSplitMetaInfo(jobId, localFs, conf, systemJobDir);     
  
        int numReduceTasks = job.getNumReduceTasks();
        if (numReduceTasks > 1 || numReduceTasks < 0) {
          // we only allow 0 or 1 reducer in local mode
          numReduceTasks = 1;
          job.setNumReduceTasks(1);
        }
        outputCommitter.setupJob(jContext);
        status.setSetupProgress(1.0f);
        // Some more code to start map and reduce
}

The JobContext created in the second line of the snippet is built from the JobConf with which
the LocalJobRunner was instantiated; instead, it should be built from the JobConf with which
the Job was instantiated.
The same context is then passed to outputCommitter.setupJob().
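
If I read the code correctly, the change I would expect is something like the following (just a
sketch against the snippet above, assuming "job" is the JobConf built from the submitted job
file in LocalJobRunner.Job's constructor):

// Inside LocalJobRunner.Job.run(): build the context from the Job's own conf
// rather than from the LocalJobRunner's conf.
JobID jobId = profile.getJobID();
JobContext jContext = new JobContext(job, jobId);   // 'job' = the submitted Job's JobConf
OutputCommitter outputCommitter = job.getOutputCommitter();
outputCommitter.setupJob(jContext);                 // setupJob() now sees the submitted Job's settings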

Please let me know if this is a bug or whether there is some specific intention behind it.

Cheers,
Subroto Sanyal