hadoop-user mailing list archives

From Brahma Reddy Battula <brahmareddy.batt...@huawei.com>
Subject RE: fsck only working on namenode
Date Thu, 08 Nov 2012 11:46:53 GMT
Wherever you are running the fsck command, it is not picking up dfs.http.address (there may be other configuration files on the classpath in which dfs.http.address is not configured).

Please check the classpath.
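One way to follow this advice is to inspect the *-site.xml files that are actually on the classpath and see whether the property is set in them. The sketch below is not Hadoop code; it is a minimal, standalone parser for a Hadoop-style configuration file, with an illustrative `get_conf_value` helper and an inline sample document standing in for a real hdfs-site.xml:

```python
# Minimal sketch (not part of Hadoop): parse a Hadoop-style *-site.xml
# and report whether a given property is configured.  The helper name
# and the sample XML are illustrative assumptions.
import xml.etree.ElementTree as ET

def get_conf_value(xml_text, name):
    """Return the <value> of the <property> whose <name> matches, else None."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

sample = """<configuration>
  <property>
    <name>dfs.http.address</name>
    <value>namenode-host:50070</value>
  </property>
</configuration>"""

print(get_conf_value(sample, "dfs.http.address"))  # found: namenode-host:50070
print(get_conf_value(sample, "dfs.http.adress"))   # misspelled key: None
```

Running this against each configuration directory on the classpath would show which copy (if any) lacks the property.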
From: 梁李印 [liyin.liangly@aliyun-inc.com]
Sent: Thursday, November 08, 2012 4:58 PM
To: user@hadoop.apache.org
Subject: Re: fsck only working on namenode

Spelling mistake? dfs.http.adress --> dfs.http.address
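With the corrected spelling, the entry in hdfs-site.xml would look like the fragment below (hostname:50070 is a placeholder for the actual namenode address; `<final>` reflects the poster's note that the property was marked final):

```xml
<!-- hdfs-site.xml: note the double "d" in "address" -->
<property>
  <name>dfs.http.address</name>
  <value>hostname:50070</value>
  <final>true</final>
</property>
```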

Liyin Liang
From: Sebastian.Lehrack [mailto:Sebastian.Lehrack@physik.uni-muenchen.de]
Sent: Thursday, November 08, 2012 0:48
To: user@hadoop.apache.org
Subject: fsck only working on namenode


I've installed Hadoop 1.0.3 on a cluster of about 25 nodes, and until now
it has been working fine.
Recently I had to use fsck in a map process, which leads to a
connection-refused error.
I read about this error and checked firewalls, config files, etc.
The command only works on the namenode.
If I use the browser for the command, it works (the request is also
refused there, but because of the web user's permissions).
I can use telnet to connect to the namenode.
In hdfs-site.conf, I set dfs.http.adress to hostname:50070. I tried
both the IP address and the hostname, and I marked the property as final.
I'm still getting this connection-refused error when using fsck on a
node other than the namenode.

Any further suggestions would be great. The fsck command is used to check
the number of blocks in which a file is stored on HDFS. Maybe
there's another possibility?
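One possible alternative to shelling out to fsck: the block count of a file follows from its length and its block size, both of which the HDFS Java API exposes on FileStatus (getLen(), getBlockSize()). The sketch below shows only that arithmetic, in Python for brevity; the file sizes are made-up illustration values, and 64 MB is the Hadoop 1.x default block size:

```python
# Sketch of the arithmetic only: HDFS stores a file in
# ceil(file_length / block_size) blocks.  In Java, file_len and
# block_size could come from FileStatus.getLen() and
# FileStatus.getBlockSize(); the numbers here are illustrative.

def num_blocks(file_len, block_size):
    """Number of HDFS blocks needed to store file_len bytes."""
    if file_len == 0:
        return 0
    return (file_len + block_size - 1) // block_size  # ceiling division

block_size = 64 * 1024 * 1024                      # 64 MB, Hadoop 1.x default
print(num_blocks(200 * 1024 * 1024, block_size))   # 200 MB file -> 4 blocks
```

This avoids the HTTP round-trip to the namenode that fsck needs, which is what fails here from the non-namenode hosts.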

