hadoop-hdfs-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: M/R job to a cluster?
Date Mon, 29 Apr 2013 18:15:52 GMT
To check whether your job is running locally, look for the classname
"LocalJobRunner" in the runtime output.

Configs are sourced either from the classpath (if a dir or jar on the
classpath has the *-site.xml files at its root, they're read), from code
(conf.set("mapred.job.tracker", "foo:349");), or via -D parameters
if you use Tool.

The tool + classpath way is usually the best thing to do, for flexibility.
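A minimal sketch of the Tool + classpath pattern described above. The class and job names are placeholders, and the jobtracker address (jt-host:8021) is an assumed example, not a real endpoint; the block also won't compile without the Hadoop jars on the classpath:

```java
// Sketch of a driver using the old (mapred) API, as in the quoted question.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // getConf() already holds any *-site.xml found on the classpath,
    // plus any -D overrides that ToolRunner parsed from the command line.
    JobConf conf = new JobConf(getConf(), WordCountDriver.class);
    conf.setJobName("wordcount");
    // ... set mapper/reducer classes and input/output paths here ...
    JobClient.runJob(conf);
    return 0;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner strips generic options (e.g. -D key=value) before run().
    System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
  }
}
```

With this pattern, the same jar can target the cluster without a recompile, e.g. (assumed host/port):

    hadoop jar wordcount.jar WordCountDriver -D mapred.job.tracker=jt-host:8021 in out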

On Sat, Apr 27, 2013 at 2:29 AM,  <rkevinburton@charter.net> wrote:
> I suspect that my MapReduce job is being run locally. I don't have any
> evidence but I am not sure how the specifics of my configuration are
> communicated to the Java code that I write. Based on the text that I have
> read online basically I start with code like:
> JobClient client = new JobClient();
> JobConf conf = new JobConf(WordCount.class);
> . . . . .
> Where do I communicate the configuration information so that the M/R job
> runs on the cluster and not locally? Or is the configuration location
> "magically determined"?
> Thank you.

Harsh J
