hadoop-user mailing list archives

From Hemanth Yamijala <yhema...@thoughtworks.com>
Subject Re: Submitting a job to a remote cluster
Date Fri, 05 Oct 2012 04:08:12 GMT
Hi,

Could you please share your setup details, i.e. how many slaves, and how many
datanodes and tasktrackers? Also, the configuration, in particular
hdfs-site.xml?

To answer your question: the datanode address is picked up from the property
dfs.datanode.address in hdfs-site.xml (or, failing that, hdfs-default.xml).
This is generally left at its default value, and things will work fine unless
you want to change the port number.
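
For illustration, a minimal hdfs-site.xml entry could look like the following
sketch; 0.0.0.0:50010 is the stock default supplied by hdfs-default.xml, so
you would only set this yourself if you need a different bind address or port:

  <property>
    <name>dfs.datanode.address</name>
    <!-- address:port the datanode listens on for data transfer;
         default is 0.0.0.0:50010, shown here only as an example -->
    <value>0.0.0.0:50010</value>
  </property>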

Thanks
Hemanth

On Fri, Oct 5, 2012 at 1:28 AM, Oleg Zhurakousky
<oleg.zhurakousky@gmail.com> wrote:

> Trying to submit a job to a remote Hadoop instance. Everything seems to
> start fine, but then I am seeing this:
>
> 2012-10-04 15:56:32,617 INFO [org.apache.hadoop.mapred.JobClient] - < map
> 0% reduce 0%>
>
> 2012-10-04 15:56:32,621 INFO [org.apache.hadoop.mapred.MapTask] - <data
> buffer = 79691776/99614720>
>
> 2012-10-04 15:56:32,621 INFO [org.apache.hadoop.mapred.MapTask] - <record
> buffer = 262144/327680>
>
> 2012-10-04 15:56:32,641 WARN [org.apache.hadoop.hdfs.DFSClient] - <Failed
> to connect to /127.0.0.1:50010, add to deadNodes and continue.
> java.net.ConnectException: Connection refused>
>
>
> How can I specify the datanode address to use?
>
> Thanks
>
> Oleg
>
