hadoop-hdfs-user mailing list archives

From Jameson Li <hovlj...@gmail.com>
Subject Re: a newly added datanode can't be started with hadoop-daemon.sh start datanode, returns an NPE
Date Fri, 18 Feb 2011 03:37:33 GMT
I have found the problem.
Someone else had configured a global "HADOOP_CONF_DIR" variable in "/etc/profile",
and it pointed to an unexpected (wrong) configuration path.
That is why I got the error: even "hadoop fs -ls" on the newly added machine
listed files from the local file system instead of HDFS.
Once I removed that variable, everything worked normally.
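For reference, this is roughly how I checked and fixed it on the new datanode
(the steps below are just a sketch of the idea, not my exact session):

# see whether a stray value is set and where it comes from
echo $HADOOP_CONF_DIR
grep HADOOP_CONF_DIR /etc/profile

# remove or comment out the "export HADOOP_CONF_DIR=..." line in /etc/profile,
# then clear it in the current shell (or log in again)
unset HADOOP_CONF_DIR

# verify the client now talks to HDFS instead of the local file system,
# then start the datanode locally
hadoop fs -ls /
hadoop-daemon.sh start datanode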

Thanks.

2011/2/17 James Ram <hadooprj@gmail.com>

> Hi
>
> Try formatting the new datanode and then run hadoop-daemon.sh.
>
> With Regards,
> RJ
>
>
> On Wed, Feb 16, 2011 at 3:50 PM, Jameson Li <hovlj.ei@gmail.com> wrote:
>
>> Hi,
>>
>> My newly added datanodes are working well.
>>
>> But I can only start them from the namenode, using start-dfs.sh or
>> hadoop-daemons.sh start datanode. When I run hadoop-daemon.sh start datanode
>> on the new datanode itself to start it locally, it fails with a null
>> pointer exception.
>>
>> I have read the mail-archive thread "New Datanode won't start, null pointer
>> exception" reported by Scott, at:
>> http://www.mail-archive.com/hdfs-user@hadoop.apache.org/msg00271.html
>>
>> But it never reached a real resolution. In the end Scott just said: "A
>> stop-all/start-all on the cluster got it to start."
>>
>> Thanks,
>> Jameson.
>>
>
>
