hadoop-common-user mailing list archives

From hadoop hive <hadooph...@gmail.com>
Subject Re: Datanode denied communication with namenode
Date Sat, 26 Jul 2014 19:47:30 GMT
Did you allow RPC and TCP communication in the security group attached to your
hosts?

Please also check your exclude file. A third thing to try is to increase your
datanode heap size and restart it.
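The three checks above can be sketched roughly like this. The hostnames, ports, and file paths are illustrative (Cloudera RPMs typically keep configuration under /etc/hadoop/conf, and the CDH4 NameNode RPC port defaults to 8020), so adjust them for your layout:

```shell
# 1. Verify the security group allows the NameNode RPC port between nodes
#    ("namenode.internal" is a placeholder for your NN host):
nc -zv namenode.internal 8020

# 2. Check whether an exclude file is configured; a datanode listed in it
#    is refused registration by the namenode:
grep -A1 dfs.hosts.exclude /etc/hadoop/conf/hdfs-site.xml
cat /etc/hadoop/conf/dfs.exclude 2>/dev/null   # path is an example

# 3. Raise the DataNode heap (in /etc/hadoop/conf/hadoop-env.sh) and
#    restart the daemon via the CDH init script:
export HADOOP_DATANODE_OPTS="-Xmx1g $HADOOP_DATANODE_OPTS"
service hadoop-hdfs-datanode restart
```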

Thanks
On Jul 27, 2014 1:01 AM, "Ed Sweeney" <ed.sweeney@falkonry.com> wrote:

> All,
>
> New AWS cluster with Cloudera 4.3 RPMs.
>
> dfs.hosts contains 3 host names; they all resolve from each of the 3 hosts.
>
> the datanode on the same machine as the namenode starts fine (once I
> added its long hostname to the dfs.hosts file).
>
> the 2 remote datanodes both get the error below.
>
> org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
> Datanode denied communication with namenode because hostname cannot be
> resolved (ip=10.0.7.61, hostname=10.0.7.61):
> DatanodeRegistration(0.0.0.0,
> datanodeUuid=de84029d-107b-4c80-b503-c990a3621a40,
>
> It is an AWS VPC, so there is no reverse DNS, and I don't want to add
> anything to the /etc/hosts files - I shouldn't have to, since the long
> and short names all resolve properly.
>
> Seeing that the hostname field in the error message contains the IP, I
> tried setting dfs.client.use.datanode.hostname = true, but nothing changed.
>
> Any help appreciated!
>
> -Ed
>
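A quick way to see what the namenode's registration check is running into: it reverse-resolves the datanode's source IP to a hostname, and in a default VPC that reverse lookup fails, which matches hostname=10.0.7.61 appearing in the error. The names below are placeholders:

```shell
# Forward lookup (name -> IP) - this is what "resolves properly" covers:
getent hosts dn1.example.internal

# Reverse lookup (IP -> name) - this is what the namenode attempts on
# registration, and what a default VPC without reverse DNS cannot answer:
getent hosts 10.0.7.61
```

On Hadoop builds that include HDFS-3990, setting dfs.namenode.datanode.registration.ip-hostname-check to false in the namenode's hdfs-site.xml relaxes this reverse-lookup requirement; whether a given CDH 4.3 build carries that property would need to be checked.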
