hadoop-common-user mailing list archives

From A Df <abbey_dragonfor...@yahoo.com>
Subject Re: Hadoop Cluster setup - no datanode
Date Sat, 13 Aug 2011 01:48:34 GMT

I ran more tests, and now I notice that only 3 nodes have datanodes running while the others
do not. I ran the dfsadmin report tool and the result is below. Where do I configure the capacity?

 bin/hadoop dfsadmin -report

Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: �%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

Datanodes available: 1 (1 total, 0 dead)

Decommission Status : Normal
Configured Capacity: 0 (0 KB)
DFS Used: 0 (0 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 0(0 KB)
DFS Used%: 100%
DFS Remaining%: 0%
Last contact: Sat Aug 13 02:39:39 BST 2011
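
(For context on the question above: the configured capacity in this report is the sum of
the usable space under each datanode's dfs.data.dir, so a capacity of 0 usually means no
datanode has checked in with a usable data directory. A minimal sketch of the relevant
property, assuming 0.20.x with conf/hdfs-site.xml on every node -- the path below is only
an example for this layout, not the required value:)

```xml
<!-- conf/hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <!-- example path only; the directory must exist and be writable
         by the hadoop user on every slave node -->
    <value>/home/my-user/hadoop-0.20.2_cluster/dfs/data</value>
  </property>
</configuration>
```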

A Df

>From: A Df <abbey_dragonforest@yahoo.com>
>To: "common-user@hadoop.apache.org" <common-user@hadoop.apache.org>; Harsh J <harsh@cloudera.com>
>Sent: Saturday, 13 August 2011, 0:19
>Subject: Hadoop Cluster setup - no datanode
>Hello Mates:
>Thanks to everyone for their help so far. I have learnt a lot and have now set up
single-node and pseudo-distributed mode. I now have a Hadoop cluster, but when I ran jps
on the master node and a slave node, not all processes were started:
>22160 NameNode
>22716 Jps
>22458 JobTracker
>32195 Jps
>I also checked the logs and I see files for all the datanodes, jobtracker, namenode, secondarynamenode,
and tasktracker, although one slave node's tasktracker log is missing. The namenode formatted
correctly. I set the values below, so I'm not sure if I need more. My cluster has 11 nodes
(1 master, 10 slaves). I do not have root access, only my own directory, so Hadoop is
installed there. I can ssh to the slaves properly.
>    * fs.default.name, dfs.name.dir, dfs.data.dir, mapred.job.tracker, mapred.system.dir
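
(For reference, in 0.20.x these properties are split across three files under conf/ on
every node. A sketch of the usual placement -- the host names, ports, and paths below are
only placeholders, not values from this cluster:)

```xml
<!-- conf/core-site.xml -->
<configuration>
  <property><name>fs.default.name</name><value>hdfs://master:9000</value></property>
</configuration>

<!-- conf/hdfs-site.xml -->
<configuration>
  <property><name>dfs.name.dir</name><value>/home/my-user/hadoop-0.20.2_cluster/dfs/name</value></property>
  <property><name>dfs.data.dir</name><value>/home/my-user/hadoop-0.20.2_cluster/dfs/data</value></property>
</configuration>

<!-- conf/mapred-site.xml -->
<configuration>
  <property><name>mapred.job.tracker</name><value>master:9001</value></property>
  <property><name>mapred.system.dir</name><value>/hadoop/mapred/system</value></property>
</configuration>
```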
>It also gave errors regarding:
>    * it cannot find the hadoop-daemon.sh file, even though I can see it:
>/home/my-user/hadoop-0.20.2_cluster/bin/hadoop-daemon.sh: line 40: cd: /home/my-user/hadoop-0.20.2_cluster/bin:
No such file or directory
>    * it has the wrong path for hadoop-config.sh, so which parameter sets this path?
>/home/my-user/hadoop-0.20.2_cluster/bin/hadoop-daemon.sh: line 42: /home/my-user/hadoop-0.20.2_cluster/hadoop-config.sh:
No such file or directory
>    * it is not able to create the log directory on the same slave node that is missing
its tasktracker; which parameter should be used to set the log directory?
>The same slave node that is giving problems also prints:
> Usage: hadoop-daemon.sh [--config <conf-dir>] [--hosts hostlistfile] (start|stop)
<hadoop-command> <args...>
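
(A note on the "line 40" error quoted above: it comes from the way hadoop-daemon.sh
locates its own directory. A minimal sketch of that resolution logic, reconstructed from
the 0.20.x script -- running something like this on the failing slave shows what path the
script actually computes. If the `cd` fails, `pwd` falls back to the current working
directory, which would explain why hadoop-config.sh is then looked up one level too high:)

```shell
# Sketch of the directory resolution at the top of bin/hadoop-daemon.sh (0.20.x).
bin=`dirname "$0"`       # directory part of the path the script was invoked with
bin=`cd "$bin"; pwd`     # canonicalize; this is the step that fails on the bad slave
echo "would source: $bin/hadoop-config.sh"
```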
>Thanks for your help.