hadoop-common-user mailing list archives

From "Sebastian.Lehrack" <Sebastian.Lehr...@physik.uni-muenchen.de>
Subject fsck only working on namenode
Date Wed, 07 Nov 2012 16:48:24 GMT
Hi,

I've installed Hadoop 1.0.3 on a cluster of about 25 nodes, and until now
it has been working fine.
Recently, I had to run fsck from within a map task, which leads to a
"connection refused" error.
I read about this error, which suggested checking firewalls, the proper
config files, etc.
The command only works on the namenode.
If I run the command through the browser, it works (the request is also
refused there, but because of the web user's permissions).
I can use telnet to connect to the namenode.
In hdfs-site.xml, I set dfs.http.address to hostname:50070. I tried both
the IP address and the hostname, and I marked the property as final.
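For reference, the entry looks roughly like this:

  <property>
    <name>dfs.http.address</name>
    <value>hostname:50070</value>
    <final>true</final>
  </property>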
I'm still getting this "connection refused" error when using fsck on any
node other than the namenode.

Any further suggestions would be great. I use the fsck command to check
the number of blocks in which a file is stored on HDFS. Maybe there's
another way to get that?
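If it helps, one alternative I'm considering (just a sketch, assuming
getFileBlockLocations() returns one BlockLocation per block of the file)
would be to ask the FileSystem API directly from the map task instead of
shelling out to fsck:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.BlockLocation;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class BlockCount {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);
      Path file = new Path(args[0]);
      FileStatus status = fs.getFileStatus(file);
      // one BlockLocation per block, over the whole file length
      BlockLocation[] blocks =
          fs.getFileBlockLocations(status, 0, status.getLen());
      System.out.println(file + ": " + blocks.length + " block(s)");
    }
  }

As far as I understand, that would go through the namenode's RPC port
that the job already uses, rather than the HTTP port (dfs.http.address)
that fsck needs.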

Greetings
