Hi

I checked the jps output on the namenode:
8426 ResourceManager
23861 Jps
23356 SecondaryNameNode
23029 NameNode

And on the datanode:
25104 NodeManager
25408 Jps

Obviously the datanode was not working. Even after I reformatted HDFS with "hadoop namenode -format", the problem remains.


The latest log entry from the file hadoop-root-namenode-node32.log:



2013-12-10 14:56:56,562 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.lang.OutOfMemoryError: Java heap space
        at java.util.jar.Manifest$FastInputStream.<init>(Manifest.java:313)
        at java.util.jar.Manifest$FastInputStream.<init>(Manifest.java:308)
        at java.util.jar.Manifest.read(Manifest.java:176)
        at java.util.jar.Manifest.<init>(Manifest.java:50)
        at java.util.jar.JarFile.getManifestFromReference(JarFile.java:167)
        at java.util.jar.JarFile.getManifest(JarFile.java:148)
        at sun.misc.URLClassPath$JarLoader$2.getManifest(URLClassPath.java:696)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:228)
        at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
        at org.apache.log4j.spi.LoggingEvent.<init>(LoggingEvent.java:165)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.error(Log4JLogger.java:257)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:147)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:93)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:715)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:660)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:267)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:534)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:424)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:386)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:398)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:432)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1128)
2013-12-10 14:56:56,565 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-12-10 14:56:56,567 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node32/11.11.11.32
************************************************************/


I was wondering how I could fix this problem.
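The fatal error in the log above is java.lang.OutOfMemoryError: Java heap space, thrown while the NameNode replays its edit log at startup, so the usual first step is to give the NameNode JVM more heap. A minimal sketch, assuming the default hadoop-env.sh location and that 4 GB is available on the machine (both are assumptions; size the heap to your RAM and to your fsimage/edits size):

```shell
# Sketch: raise the NameNode heap in $HADOOP_HOME/etc/hadoop/hadoop-env.sh
# (path and the 4 GB figure are assumptions -- adjust for your install).
export HADOOP_HEAPSIZE=4096                                   # default daemon heap, in MB
export HADOOP_NAMENODE_OPTS="-Xmx4g ${HADOOP_NAMENODE_OPTS}"  # explicit NameNode max heap
```

After editing, restart the NameNode so the new JVM options take effect.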



2013/12/16 Jiayu Ji <jiayu.ji@gmail.com>
It is possible that your datanode daemon has not started yet. Log on to the datanode and check whether the daemon is running by issuing a jps command.

Another possible reason is that your namenode cannot communicate with the datanode. Try pinging the datanode from the namenode.

The log files are supposed to be in HADOOP_HOME/logs by default.
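To pull the relevant lines out of those logs quickly, a small hypothetical helper like the one below can be used (the function name and the 20-line cutoff are my own; the log naming pattern follows the default HADOOP_HOME/logs layout seen above):

```shell
# Hypothetical helper: show the most recent FATAL/ERROR lines from a Hadoop daemon log.
# Usage: scan_hadoop_log $HADOOP_HOME/logs/hadoop-root-datanode-node33.log
scan_hadoop_log() {
  grep -E 'FATAL|ERROR' "$1" | tail -n 20
}
```

Running it against the datanode log on each slave should surface the startup failure, if any.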


On Mon, Dec 16, 2013 at 5:18 AM, Geelong Yao <geelongyao@gmail.com> wrote:
Where should I find these logs?
I think the problem is mainly on the slaves; where should I find their logs?


2013/12/16 shashwat shriparv <dwivedishashwat@gmail.com>
Did your upgrade finish successfully? Check whether the datanode is able to connect to the namenode, check the datanode logs, and please attach some log output here if you are getting any error while the datanode is running.



Warm Regards,
Shashwat Shriparv
Big-Data Engineer(HPC)
http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9
https://twitter.com/shriparv
https://www.facebook.com/shriparv
http://google.com/+ShashwatShriparv
http://www.youtube.com/user/sShriparv/videos
http://profile.yahoo.com/SWXSTW3DVSDTF2HHSRM47AV6DI/




On Mon, Dec 16, 2013 at 4:04 PM, Geelong Yao <geelongyao@gmail.com> wrote:
Now the datanode is not working
[Inline image 1]


2013/12/16 Geelong Yao <geelongyao@gmail.com>
It is the namenode's problem.
How can I fix this problem?



2013/12/16 Shekhar Sharma <shekhar2581@gmail.com>
It seems like the DataNode is not running or has died.
Regards,
Som Shekhar Sharma
+91-8197243810


On Mon, Dec 16, 2013 at 1:40 PM, Geelong Yao <geelongyao@gmail.com> wrote:
> Hi Everyone
>
> After I upgraded Hadoop to CDH 4.2.0 (Hadoop 2.0.0), I tried to run some
> tests.
> When I try to upload a file to HDFS, an error comes up:
>
>
>
> node32:/software/hadoop-2.0.0-cdh4.2.0 # hadoop dfs -put
> /public/data/carinput1G_BK carinput1G
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
>
> ls: Call From node32/11.11.11.32 to node32:9000 failed on connection
> exception: java.net.ConnectException: Connection refused; For more details
> see:  http://wiki.apache.org/hadoop/ConnectionRefused
>
>
>
> Is something wrong with my settings?
>
> BRs
> Geelong
>
>
> --
> From Good To Great



--
From Good To Great



--
From Good To Great




--
From Good To Great



--
Jiayu (James) Ji,

Cell: (312)823-7393




--
From Good To Great