hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Trivial Update of "HowToSetupYourDevelopmentEnvironment" by AlexLoddengaard
Date Thu, 28 Aug 2008 08:29:47 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The following page has been changed by AlexLoddengaard:

  I was getting this error when putting data into the dfs.  The workaround is strange and may
not be reliable: I erased all temporary data along with the namenode, reformatted the namenode,
started everything up, and visited my "cluster's" dfs health page (http://your_host:50070/dfshealth.jsp).
 The last step, visiting the health page, is the only way I could get around the error.  Once
I've visited the page, putting files into and getting files out of the dfs works fine.
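The reset sequence described above might look roughly like this on a 2008-era (0.1x) Hadoop install. The `HADOOP_HOME` location and the temporary-data path are assumptions; the actual directory to erase is whatever your `hadoop.tmp.dir` (and `dfs.name.dir`/`dfs.data.dir`, if set) point to:

```shell
#!/bin/sh
# Sketch of the workaround, assuming HADOOP_HOME and the default
# hadoop.tmp.dir of /tmp/hadoop-${USER} -- adjust both to your setup.
HADOOP_HOME=/usr/local/hadoop

cd "$HADOOP_HOME"

# Stop all daemons before touching any on-disk state.
bin/stop-all.sh

# Erase the temporary data, which includes the namenode's metadata
# and the datanode's blocks under the default layout.
rm -rf /tmp/hadoop-${USER}

# Reformat the namenode (this destroys all filesystem metadata).
bin/hadoop namenode -format

# Bring the cluster back up.
bin/start-all.sh

# Visit the dfs health page -- the step that made puts/gets work again.
# (Or open http://your_host:50070/dfshealth.jsp in a browser.)
curl -s http://localhost:50070/dfshealth.jsp > /dev/null
```

Note that reformatting the namenode deletes everything previously stored in the dfs, so this is only sensible on a fresh or disposable cluster.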
+ == DataNode process appearing then disappearing on slave ==
+ When transitioning from a single-node cluster to a multi-node cluster, one of your nodes
may appear to be up at first and then go down immediately.  Check the datanode logs on
the node that goes down, and look for a connection refused error.  If you see this connection
refused error, it means that your slave is having difficulty finding the master.
 I did two things to solve this problem, and I'm not sure which one, or whether both, solved it.
 First, erase all of your hadoop temporary data and the namenode data on all masters and slaves,
then reformat the namenode.  Second, make sure all of your master and slave hosts in the conf
files (slaves, masters, hadoop-site.xml) refer to fully-qualified host names (ex: host.domain.com
instead of host).
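For the second fix, the relevant conf files might look like the sketch below. The host names `master.domain.com` and `slave1.domain.com` are placeholders; the point is that every reference uses the fully-qualified name, consistently, in all three files:

```xml
<!-- conf/hadoop-site.xml (sketch; ports are the common defaults of the era) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- fully-qualified, not just "master" -->
    <value>hdfs://master.domain.com:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>master.domain.com:9001</value>
  </property>
</configuration>
```

And the plain-text host lists, one fully-qualified name per line:

```
# conf/masters
master.domain.com

# conf/slaves
master.domain.com
slave1.domain.com
```

Mixing short and fully-qualified names is what tends to cause the slave to resolve the master inconsistently, so pick one form (fully-qualified) and use it everywhere.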
