hadoop-common-user mailing list archives

From MalikHusain <ma...@planetmalik.com>
Subject Re: getting HDFS to rack-aware mode
Date Fri, 04 Jun 2010 17:35:09 GMT

Were you able to resolve this? I am running into a similar issue. I am
currently evaluating a 3-node cluster. When I run "hadoop fsck /" on the
namenode or on one of the datanodes, it successfully reports the status as
healthy. However, on the third machine (a datanode) the fsck command
stopped working after a reboot; it throws an exception with "Connection
refused". Can you please let me know how I can resolve this? Everything
else on the datanode works except for the fsck command.

Error on the third node:
hadoop fsck /
Exception in thread "main" java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
        at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:193)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
        at java.net.Socket.connect(Socket.java:525)
        at java.net.Socket.connect(Socket.java:475)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
        at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
        at sun.net.www.http.HttpClient.New(HttpClient.java:306)
        at sun.net.www.http.HttpClient.New(HttpClient.java:323)
        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:860)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:801)
        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:726)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1049)
        at org.apache.hadoop.dfs.DFSck.run(DFSck.java:116)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.dfs.DFSck.main(DFSck.java:137)
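
The trace shows DFSck opening an HttpURLConnection, so fsck talks to the
NameNode's embedded web server (dfs.http.address) rather than the RPC port,
which is why the datanode daemon can keep working while fsck alone fails.
A few checks worth running from the failing node, assuming the default
dfs.http.address port of 50070 and a NameNode host named "namenode"
(substitute your own hostname and config path):

    # Can this node reach the NameNode web UI at all?
    curl -s http://namenode:50070/ > /dev/null && echo reachable

    # Which NameNode address is the local config pointing at?
    grep -A 1 fs.default.name $HADOOP_HOME/conf/hadoop-site.xml

    # Did the reboot change name resolution, e.g. the NameNode hostname
    # now resolving to 127.0.0.1 via /etc/hosts?
    getent hosts namenode

If the hostname resolves differently after the reboot, or the local
hadoop-site.xml was changed, that would explain fsck failing on this one
node while everything else keeps working.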




imcaptor wrote:
> 
> On the master, this command executes fine.
> 
> -bash-3.00$ ./bin/hadoop fsck /
> .
> /tmp/hadoop-hadoop/mapred/system/job_200810100944_0001/job.jar:  Under replicated blk_6972591866335308074_1001. Target Replicas is 10 but found 2 replica(s).
> ....Status: HEALTHY
>  Total size:    2798816 B
>  Total dirs:    10
>  Total files:   5
>  Total blocks (validated):      5 (avg. block size 559763 B)
>  Minimally replicated blocks:   5 (100.0 %)
>  Over-replicated blocks:        0 (0.0 %)
>  Under-replicated blocks:       1 (20.0 %)
>  Mis-replicated blocks:         0 (0.0 %)
>  Default replication factor:    2
>  Average block replication:     2.0
>  Corrupt blocks:                0
>  Missing replicas:              8 (80.0 %)
>  Number of data-nodes:          2
>  Number of racks:               1
> 
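
The "Under replicated" warning above is expected on a cluster this small:
the MapReduce framework writes job.jar with its own replication factor,
mapred.submit.replication, which defaults to 10, so two datanodes can never
satisfy it. One option is to lower that setting, or to reduce the file's
replication by hand; the value 2 below is just an example matching this
two-datanode cluster:

    hadoop dfs -setrep -w 2 /tmp/hadoop-hadoop/mapred/system/job_200810100944_0001/job.jar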
> imcaptor wrote:
>> I get this error:
>>
>> -bash-3.00$ ./bin/hadoop fsck /
>> Exception in thread "main" java.net.ConnectException: Connection refused
>> at java.net.PlainSocketImpl.socketConnect(Native Method)
>> at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
>> at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:193)
>> at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
>> at java.net.Socket.connect(Socket.java:519)
>> at java.net.Socket.connect(Socket.java:469)
>> at sun.net.NetworkClient.doConnect(NetworkClient.java:157)
>> at sun.net.www.http.HttpClient.openServer(HttpClient.java:382)
>> at sun.net.www.http.HttpClient.openServer(HttpClient.java:509)
>> at sun.net.www.http.HttpClient.<init>(HttpClient.java:231)
>> at sun.net.www.http.HttpClient.New(HttpClient.java:304)
>> at sun.net.www.http.HttpClient.New(HttpClient.java:316)
>> at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:813)
>> at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:765)
>> at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:690)
>> at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:934)
>> at org.apache.hadoop.dfs.DFSck.run(DFSck.java:116)
>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>> at org.apache.hadoop.dfs.DFSck.main(DFSck.java:137)
>>
>> Yi-Kai Tsai wrote:
>>> hi Sriram
>>>
>>> Running "hadoop fsck /" will give you a summary of the current HDFS
>>> status, including some useful information:
>>>
>>> Minimally replicated blocks: 51224 (100.0 %)
>>> Over-replicated blocks: 0 (0.0 %)
>>> Under-replicated blocks: 0 (0.0 %)
>>> Mis-replicated blocks: 7 (0.013665469 %)
>>> Default replication factor: 3
>>> Average block replication: 3.0
>>> Missing replicas: 0 (0.0 %)
>>> Number of data-nodes: 83
>>> Number of racks: 6
>>>
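
After switching on rack awareness, "Mis-replicated blocks" and "Number of
racks" in that summary are the figures to watch: once re-replication has
spread the data, mis-replicated blocks should fall to 0 and the rack count
should match your topology. A sketch of the raise-then-lower approach
Sriram describes below, with hypothetical replication values around a
default factor of 3 (note this generates a lot of cluster traffic):

    hadoop dfs -setrep -R -w 4 /   # extra replicas get placed per the
                                   # rack-aware policy; -w waits for it
    hadoop dfs -setrep -R -w 3 /   # drop back to the normal factor
    hadoop fsck /                  # watch "Mis-replicated blocks"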
>>>> Hi,
>>>>
>>>> We have a cluster where we are running HDFS in non-rack-aware mode. Now,
>>>> we want to switch HDFS to run in rack-aware mode. Apart from the
>>>> config changes (and restarting HDFS), to rackify the existing data, we
>>>> were thinking of increasing/decreasing replication level a few times
>>>> to get the data spread. Are there any tools that will enable us to
>>>> know when we are "done"?
>>>>
>>>> Sriram
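
For reference, the "config changes" amount to pointing
topology.script.file.name (in hadoop-site.xml) at a script that maps each
datanode address to a rack path, then restarting HDFS. A minimal sketch,
with hypothetical hostnames and rack IDs:

    #!/bin/sh
    # Hadoop passes one or more IPs/hostnames as arguments and expects
    # one rack path per argument on stdout.
    for host in "$@"; do
      case $host in
        node1|node2) echo /rack1 ;;
        node3|node4) echo /rack2 ;;
        *)           echo /default-rack ;;
      esac
    done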

-- 
View this message in context: http://old.nabble.com/getting-HDFS-to-rack-aware-mode-tp19980091p28758811.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.

