hadoop-common-user mailing list archives

From C G <parallel...@yahoo.com>
Subject Re: problem getting started with hadoop
Date Tue, 04 Sep 2007 19:40:26 GMT
  1.  I would suggest changing /tmp/hadoop-${user.name} to something concrete like:

                /tmp/hadoop

      Otherwise, make sure that user.name is actually defined (see the sketch after this list).
   
  2.  You are trying to run a single node, but you have dfs.replication set to 8.  It should be 1.
   
  3.  Does your machine resolve localhost?  Can you ping localhost?  If not, either fix your
      /etc/hosts file or use the actual hostname of the machine.
   
  4.  Do you have only the single machine listed in both the conf/masters and conf/slaves files?
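
  As a sketch of points 1 and 2, a minimal single-node hadoop-site.xml might look like the
  following (the ports are taken from your config; /tmp/hadoop is just an example value):

       <?xml version="1.0"?>
       <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

       <configuration>
         <!-- Point 1: a concrete temp dir, so nothing depends on ${user.name} expanding -->
         <property>
           <name>hadoop.tmp.dir</name>
           <value>/tmp/hadoop</value>
         </property>

         <property>
           <name>fs.default.name</name>
           <value>localhost:54310</value>
         </property>

         <property>
           <name>mapred.job.tracker</name>
           <value>localhost:54311</value>
         </property>

         <!-- Point 2: a single node can hold only one replica of each block -->
         <property>
           <name>dfs.replication</name>
           <value>1</value>
         </property>
       </configuration>

  And per point 4, conf/masters and conf/slaves should each list just the one machine
  (localhost is fine if it resolves correctly).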
   
  When I did my first tests using a single node I ran into the same sorts of problems you had.
 My issue turned out to be hostname resolution confusion.  I made changes to the way my system
was configured (/etc/hosts, etc.) so that the various APIs which resolve hostnames and IP
addresses could all agree.  With that complete, things worked great.  Note that if you're renting
time on a hosted server someplace, you are almost guaranteed to have to spend time sorting
out whatever OS configuration they happened to stick on the machine.
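
  For example, a sane single-box /etc/hosts might look like this ("mybox" and the 192.168.x
  address are placeholders for your real hostname and IP):

       127.0.0.1      localhost
       192.168.1.10   mybox.example.com   mybox

  The point is that the name the daemons bind to and the name clients connect to resolve to
  the same machine.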
   
  Hope this helps...
  Chris

chenfren@post.tau.ac.il wrote:
  Hi,
I've tried setting up Hadoop on a single computer, and I'm 
experiencing a problem with the datanode. When I run the start-all.sh 
script it seems to run smoothly, including setting up the datanode. 
The problem occurs when I try to use HDFS, for example running 
"bin/hadoop dfs -put ".
It gives me the following error:

put: java.io.IOException: Failed to create file 
/user/chenfren/mytest/.slaves.crc on client 127.0.0.1 because there 
were not enough datanodes available. Found 0 datanodes but 
MIN_REPLICATION for the cluster is configured to be 1.

I'm not sure if the "/user/chenfren/mytest/" refers to HDFS or 
not. If not, then "/user/chenfren" doesn't exist, and I don't have 
write permissions to /usr/ anyway. So if this is the case, how do I 
change this dir?
This is the hadoop-site.xml I use:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop-${user.name}</value>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>localhost:54310</value>
  </property>

  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>8</value>
  </property>

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>

</configuration>

Can anyone advise?


