hadoop-common-user mailing list archives

From alex@apics.co.uk
Subject Hadoop not responding on port
Date Tue, 17 Nov 2009 19:35:15 GMT
Hi All,
        I hope this is the correct list for this query; if not, I
apologise in advance.

        I'm trying to get Hadoop running on Debian Lenny, working from the
cluster setup guide on the Hadoop site, but I've drawn a blank on the
following problem. When I start Hadoop it appears to come up, and the
namenode begins listening on port 50070, but if I connect to that port no
data is ever returned: a "wget http://hadoop1:50070" just sits there
forever, and nothing appears in the logs, so I suspect the web server
isn't actually starting correctly. All the daemons behave the same way:
secondary namenode, jobtracker and datanode.
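
        For what it's worth, a raw probe along the following lines (a
hypothetical sketch using netcat; these are not commands from the setup
guide) should separate "connection refused" from "accepted but silent",
the latter being what the wget behaviour suggests:

hadoop@hadoop1:~$ nc -v -w 5 hadoop1 50070   # does the TCP connect succeed at all?
hadoop@hadoop1:~$ printf 'GET / HTTP/1.0\r\n\r\n' | nc -w 5 hadoop1 50070   # are any HTTP bytes ever sent back?
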
        Any help is greatly appreciated. Please find the configuration
details below:

hadoop@hadoop1:~$ dpkg -l|grep java   (I put a version of Java from sid
on, just in case it made any difference.... it didn't)
ii  java-common     0.30      Base of all Java packages
ii  sun-java6-bin   6-16-1    Sun Java(TM) Runtime Environment (JRE) 6 (ar
ii  sun-java6-jdk   6-16-1    Sun Java(TM) Development Kit (JDK) 6
ii  sun-java6-jre   6-16-1    Sun Java(TM) Runtime Environment (JRE) 6 (ar

hadoop@hadoop1:~/conf$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop1/</value>
</property>

</configuration>
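
(Side note on the config above: with no port in the fs.default.name URI,
the namenode RPC falls back to the default of 8020, which matches the log
further down. The web UI port 50070 comes from a separate setting,
dfs.http.address in hdfs-site.xml; the 0.20.x default is shown below
purely for reference, I have not set it anywhere:)

<!-- 0.20.x default, for reference only; not present in my config -->
<property>
  <name>dfs.http.address</name>
  <value>0.0.0.0:50070</value>
</property>
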

hadoop@hadoop1:~/conf$ grep -v "#" hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_HEAPSIZE=768
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true      <Tried with and without this option>
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
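
To check whether the preferIPv4Stack flag actually takes effect, I believe
a check along these lines (standard net-tools, suggested rather than taken
from the guide) would show whether the Jetty listener ends up on an IPv4
or an IPv6 socket:

hadoop@hadoop1:~$ sudo netstat -tlnp | grep 50070   # "tcp" = IPv4 socket, "tcp6" = IPv6 socket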



hadoop@hadoop1:~/logs$ cat hadoop-hadoop-namenode-hadoop1.log
2009-11-17 17:25:06,889 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop1.home.apics.co.uk/192.168.99.151
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
************************************************************/
2009-11-17 17:25:07,238 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=8020
2009-11-17 17:25:07,254 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: hadoop1.home.apics.co.uk/192.168.99.151:8020
2009-11-17 17:25:07,265 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2009-11-17 17:25:07,276 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2009-11-17 17:25:07,689 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop,hadoop
2009-11-17 17:25:07,690 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2009-11-17 17:25:07,690 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2009-11-17 17:25:07,718 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2009-11-17 17:25:07,721 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2009-11-17 17:25:07,838 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2009-11-17 17:25:07,846 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2009-11-17 17:25:07,846 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 96 loaded in 0 seconds.
2009-11-17 17:25:07,873 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /var/hadoop/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2009-11-17 17:25:08,215 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 96 saved in 0 seconds.
2009-11-17 17:25:08,964 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 1613 msecs
2009-11-17 17:25:08,967 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2009-11-17 17:25:08,967 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2009-11-17 17:25:08,967 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2009-11-17 17:25:08,967 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2009-11-17 17:25:08,967 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 1 secs.
2009-11-17 17:25:08,968 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2009-11-17 17:25:08,968 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2009-11-17 17:25:11,090 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2009-11-17 17:25:11,704 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2009-11-17 17:25:11,780 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2009-11-17 17:25:11,780 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2009-11-17 17:25:11,783 INFO org.mortbay.log: jetty-6.1.14      <<Just sits here, until I shut it back down>>
2009-11-17 17:29:03,471 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1.home.apics.co.uk/192.168.99.151
************************************************************/
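
Since the log stops dead at the jetty-6.1.14 line, a thread dump of the
namenode while it hangs ought to show where Jetty is stuck. A rough sketch
(jps and jstack ship with the Sun JDK 6 packages listed above; <pid> is a
placeholder for whatever jps reports):

hadoop@hadoop1:~$ jps                            # find the NameNode PID
hadoop@hadoop1:~$ jstack <pid> > nn-threads.txt  # dump every thread's stack to a file
hadoop@hadoop1:~$ grep -A 12 '"main"' nn-threads.txt   # startup (incl. the web server) runs on the "main" thread, I believe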



Kind Regards & Thanks in advance.

Alex

alex@apics.co.uk

Homer Simpson: Facts are meaningless. You could use facts to prove 
anything that's even remotely true!