hadoop-common-user mailing list archives

From Matt Tanquary <matt.tanqu...@gmail.com>
Subject Re: DisallowedDatanodeException
Date Thu, 09 Sep 2010 16:27:31 GMT
hostname shows the unqualified name of the server.

I didn't have a dfs.include file at all; in my little test
environment I wasn't worried about that, but apparently it made a
difference in this case. I'm not sure why 3 servers worked and 2
didn't, but no worries: adding the includes file did the trick.

So, my final working solution was:

Create an includes file in which I included all of the unqualified server names.
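
For reference, here is roughly what that looks like. This is only a sketch of
my setup: the file path, the dfs.hosts property, and the dev01..dev05 names
are examples you would adjust for your own cluster.

    # conf/dfs.include -- one unqualified name per line, matching what
    # the hostname command returns on each datanode
    dev01
    dev02
    dev03
    dev04
    dev05

    <!-- hdfs-site.xml on the namenode: point dfs.hosts at the include file -->
    <property>
      <name>dfs.hosts</name>
      <value>/home/hadoop/conf/dfs.include</value>
    </property>

Then restart the namenode (or run hadoop dfsadmin -refreshNodes) so it picks
up the list.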

Thanks for the help!

On Wed, Sep 8, 2010 at 2:39 PM, Allen Wittenauer
<awittenauer@linkedin.com> wrote:
>
> On Sep 8, 2010, at 10:00 AM, Harsh J wrote:
>
>> Hosts file or the slaves file? A valid datanode must be in the slaves
>> file. Alternatively you can see if they are 'triggered' to start by
>> start-dfs.sh or not.
>
> No it doesn't.
>
> The slaves file is only used by the start commands.
>
> The hosts file is the proper place for it.
>
> Chances are good we have a DNS issue:
>
>>>>> ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
>>>>> org.apache.hadoop.ipc.RemoteException:
>>>>> org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
>>>>> Datanode denied communication with namenode: dev01:50010
>>>>>
>
> Note that this is unqualified.  Yet:
>
>
>>>>>    <value>hdfs://dev05.mynet.corp:54310</value>
>
> This is qualified.
>
> What form does your dfs.include file take and what is the output of the hostname command?
>
>
>



-- 
Have you thanked a teacher today? ---> http://www.liftateacher.org
