hadoop-common-user mailing list archives

From Varadharajan Mukundan <srinath...@gmail.com>
Subject Re: exceptions copying files into HDFS
Date Sun, 12 Dec 2010 09:16:13 GMT
Hi,

> jps reports DataNode, NameNode, and SecondayNameNode as running:
>
> rock@ritter:/tmp/hadoop-rock> jps
> 31177 Jps
> 29909 DataNode
> 29751 NameNode
> 30052 SecondaryNameNode

On the master node, the output of `jps` will contain a TaskTracker,
JobTracker, NameNode, SecondaryNameNode, and a DataNode (optional, depending
on your config); on the slaves, `jps` will show a TaskTracker and a DataNode.
If you need more help configuring Hadoop, I recommend taking a look at
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
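As a quick sanity check, you can compare the `jps` output against that
expected daemon list mechanically. The sketch below uses the output quoted
earlier in this thread as sample data; in practice you would set the variable
with `running="$(jps)"`. The daemon names assume the classic (pre-YARN)
Hadoop layout this thread is about.

```shell
# jps output quoted in this thread: DataNode, NameNode and
# SecondaryNameNode are up, but JobTracker/TaskTracker are not.
# In practice, replace the literal with: running="$(jps)"
running="31177 Jps
29909 DataNode
29751 NameNode
30052 SecondaryNameNode"

# Daemons expected on a single-node (pre-YARN) Hadoop master.
missing=""
for daemon in NameNode SecondaryNameNode DataNode JobTracker TaskTracker; do
    if printf '%s\n' "$running" | grep -qw "$daemon"; then
        echo "$daemon: running"
    else
        echo "$daemon: MISSING"
        missing="$missing $daemon"
    fi
done
```

Run against the sample above, this flags JobTracker and TaskTracker as
missing, which would explain why MapReduce jobs (though not plain HDFS
copies) fail.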




> Here is the contents of the Hadoop node tree.  The only thing that looks
> like a log file are the dncp_block_verification.log.curr files, and those
> are empty.
> Note the presence of the in_use.lock files, which suggests that this node
> is indeed being used.


The logs will be in the "logs" directory under $HADOOP_HOME (the Hadoop home
directory). Are you looking for the logs in that directory?
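To sketch where to look: in a classic Hadoop install, each daemon writes a
log named hadoop-&lt;user&gt;-&lt;daemon&gt;-&lt;host&gt;.log in that directory. The
/usr/local/hadoop default below is only an assumption; substitute your
actual install path.

```shell
# Build the expected per-daemon log paths (Hadoop 0.20/1.x naming:
# hadoop-<user>-<daemon>-<host>.log). /usr/local/hadoop is an assumed
# default; point HADOOP_HOME at your real install.
HADOOP_HOME="${HADOOP_HOME:-/usr/local/hadoop}"
user="$(id -un)"
host="$(hostname -s)"
for daemon in namenode secondarynamenode datanode; do
    echo "$HADOOP_HOME/logs/hadoop-$user-$daemon-$host.log"
done
```

Tailing the DataNode log (e.g. `tail -f "$HADOOP_HOME"/logs/hadoop-*-datanode-*.log)
while retrying the copy is usually the fastest way to see why a block write
into HDFS failed.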


-- 
Thanks,
M. Varadharajan

------------------------------------------------

"Experience is what you get when you didn't get what you wanted"
               -By Prof. Randy Pausch in "The Last Lecture"

My Journal :- www.thinkasgeek.wordpress.com
