hadoop-hdfs-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 cluster
Date Tue, 31 May 2011 17:30:43 GMT
Hello RX,

On Tue, May 31, 2011 at 9:05 PM, Xu, Richard <richard.xu@citi.com> wrote:
> Running on namenode(hostname: loanps4d):
> :/opt/hadoop-install/hadoop-0.20.2/bin:59 > hadoop dfsadmin -report
> Configured Capacity: 0 (0 KB)
> Present Capacity: 3072 (3 KB)
> DFS Remaining: 0 (0 KB)
> DFS Used: 3072 (3 KB)
> DFS Used%: 100%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> -------------------------------------------------
> Datanodes available: 1 (1 total, 0 dead)
>
> Name: 169.193.181.213:50010
> Decommission Status : Normal
> Configured Capacity: 0 (0 KB)
> DFS Used: 3072 (3 KB)
> Non DFS Used: 0 (0 KB)
> DFS Remaining: 0(0 KB)
> DFS Used%: 100%
> DFS Remaining%: 0%
> Last contact: Tue May 31 11:30:37 EDT 2011

Yup, for some reason the DN's not picking up any space stats on your platform.

Could you give me the local command outputs of the following from both
your Solaris and Linux systems?

$ df -k /opt/hadoop-install/hadoop-0.20.2/hadoop-data
$ du -sk /opt/hadoop-install/hadoop-0.20.2/hadoop-data

FWIW, the code I'm reading says that the DU and DF util classes have
only been tested on Cygwin, Linux and FreeBSD. I think Solaris may
need a bit of tweaking, but I'm not aware of a resource for this off
the top of my head.
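To illustrate where this can go wrong: Hadoop's DF utility works by shelling out to `df -k` and parsing the columnar output, so any platform whose `df` prints a different layout (or wraps long device names onto a second line) can yield the zeroed capacity numbers seen above. Below is a minimal, hypothetical sketch of that parsing approach; `DfProbe` is not Hadoop's actual class, just an assumption-laden illustration of the technique.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Hypothetical sketch (NOT Hadoop's real org.apache.hadoop.fs.DF class):
// shell out to `df -k <path>` and parse the capacity/available columns.
// Assumes the common Linux layout:
//   Filesystem 1K-blocks Used Available Use% Mounted on
// Solaris labels its header differently (kbytes/used/avail/capacity),
// which is the kind of variation that can break a parser like this.
public class DfProbe {
    /** Returns {capacityKb, availableKb} for the filesystem holding path. */
    public static long[] probe(String path) throws Exception {
        Process p = new ProcessBuilder("df", "-k", path).start();
        BufferedReader r =
            new BufferedReader(new InputStreamReader(p.getInputStream()));
        r.readLine(); // skip the header line
        // GNU df may wrap a long device name onto its own line, so join
        // all remaining lines before splitting on whitespace.
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = r.readLine()) != null) {
            sb.append(line).append(' ');
        }
        p.waitFor();
        String[] f = sb.toString().trim().split("\\s+");
        // Expected fields: device, 1K-blocks, used, available, use%, mount
        long capacityKb = Long.parseLong(f[1]);
        long availableKb = Long.parseLong(f[3]);
        return new long[] { capacityKb, availableKb };
    }

    public static void main(String[] args) throws Exception {
        long[] stats = probe(args.length > 0 ? args[0] : "/");
        System.out.println("capacity_kb=" + stats[0]
                + " available_kb=" + stats[1]);
    }
}
```

Comparing the raw `df -k` output from the Solaris box against this expected column layout should show whether the parser is simply misreading the fields there.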

-- 
Harsh J
