hadoop-common-user mailing list archives

From Jun Young Kim <juneng...@gmail.com>
Subject Re: is there more smarter way to execute a hadoop cluster?
Date Fri, 25 Feb 2011 04:47:24 GMT

I found the cause of my problem.

When I submit a job via the shell,

conf.get("fs.default.name") returns "hdfs://localhost".

When I submit a job directly from a Java application,

conf.get("fs.default.name") returns "file://localhost",
so I couldn't read any files from HDFS.

I think my Java application isn't picking up the *-site.xml 
configuration files properly.
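
A minimal sketch of what fixed it for me. The config directory path is an 
assumption (adjust it to your installation); the point is just that 
addResource() makes the standalone client see the cluster's fs.default.name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ConfCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Explicitly load the cluster config files, since the standalone
        // client does not have them on its classpath.
        // The path below is an assumption; point it at your conf directory.
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));

        // Should now print hdfs://... instead of the local file:// default.
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));
    }
}

An alternative is to put the Hadoop conf directory on the Java application's 
classpath so Configuration loads core-site.xml and hdfs-site.xml the same way 
the shell launcher does.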

Junyoung Kim (juneng603@gmail.com)

On 02/24/2011 06:41 PM, Harsh J wrote:
> Hey,
> On Thu, Feb 24, 2011 at 2:36 PM, Jun Young Kim <juneng603@gmail.com> wrote:
>> How am I going to do this?
> In the new API, the 'Job' class also has Job.submit() and
> Job.waitForCompletion(boolean) methods. Please see the API here:
> http://hadoop.apache.org/mapreduce/docs/current/api/org/apache/hadoop/mapreduce/Job.html
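
For reference, a minimal sketch of a driver using that new-API Job class; the 
class name, job name, and input/output paths are placeholders, not from this 
thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "submit-example");
        job.setJarByClass(SubmitExample.class);

        // Placeholder paths; with no mapper/reducer set, the identity
        // mapper and reducer are used.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // waitForCompletion(true) submits the job and blocks until it
        // finishes, printing progress; Job.submit() returns immediately.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}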
