hadoop-common-user mailing list archives

From Pankil Doshi <forpan...@gmail.com>
Subject Datanodes fail to start
Date Fri, 15 May 2009 01:43:31 GMT
Hello Everyone,

I had a cluster that was up and running. I stopped it because I wanted to
format HDFS, but now I can't start it back up.

1) When I run "start-dfs.sh", I see the following on screen:

starting namenode, logging to
/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-namenode-hadoopmaster.out
slave1.local: starting datanode, logging to
/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave1.out
slave3.local: starting datanode, logging to
/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave3.out
slave4.local: starting datanode, logging to
/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave4.out
slave2.local: starting datanode, logging to
/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave2.out
slave5.local: starting datanode, logging to
/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave5.out
slave6.local: starting datanode, logging to
/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave6.out
slave9.local: starting datanode, logging to
/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave9.out
slave8.local: starting datanode, logging to
/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave8.out
slave7.local: starting datanode, logging to
/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave7.out
slave10.local: starting datanode, logging to
/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave10.out
hadoopmaster.local: starting secondarynamenode, logging to
/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-secondarynamenode-hadoopmaster.out


2) The log file "hadoop-hadoop-namenode-hadoopmaster.log" shows the
following:



2009-05-14 20:28:23,515 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoopmaster/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.18.3
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r 736250;
compiled by 'ndaley' on Thu Jan 22 23:12:08 UTC 2009
************************************************************/
2009-05-14 20:28:23,717 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
Initializing RPC Metrics with hostName=NameNode, port=9000
2009-05-14 20:28:23,728 INFO org.apache.hadoop.dfs.NameNode: Namenode up at:
hadoopmaster.local/192.168.0.1:9000
2009-05-14 20:28:23,733 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=NameNode, sessionId=null
2009-05-14 20:28:23,743 INFO org.apache.hadoop.dfs.NameNodeMetrics:
Initializing NameNodeMeterics using context
object:org.apache.hadoop.metrics.spi.NullContext
2009-05-14 20:28:23,856 INFO org.apache.hadoop.fs.FSNamesystem:
fsOwner=hadoop,hadoop,adm,dialout,fax,cdrom,floppy,tape,audio,dip,video,plugdev,fuse,lpadmin,admin,sambashare
2009-05-14 20:28:23,856 INFO org.apache.hadoop.fs.FSNamesystem:
supergroup=supergroup
2009-05-14 20:28:23,856 INFO org.apache.hadoop.fs.FSNamesystem:
isPermissionEnabled=true
2009-05-14 20:28:23,883 INFO org.apache.hadoop.dfs.FSNamesystemMetrics:
Initializing FSNamesystemMeterics using context
object:org.apache.hadoop.metrics.spi.NullContext
2009-05-14 20:28:23,885 INFO org.apache.hadoop.fs.FSNamesystem: Registered
FSNamesystemStatusMBean
2009-05-14 20:28:23,964 INFO org.apache.hadoop.dfs.Storage: Number of files
= 1
2009-05-14 20:28:23,971 INFO org.apache.hadoop.dfs.Storage: Number of files
under construction = 0
2009-05-14 20:28:23,971 INFO org.apache.hadoop.dfs.Storage: Image file of
size 80 loaded in 0 seconds.
2009-05-14 20:28:23,972 INFO org.apache.hadoop.dfs.Storage: Edits file edits
of size 4 edits # 0 loaded in 0 seconds.
2009-05-14 20:28:23,974 INFO org.apache.hadoop.fs.FSNamesystem: Finished
loading FSImage in 155 msecs
2009-05-14 20:28:23,976 INFO org.apache.hadoop.fs.FSNamesystem: Total number
of blocks = 0
2009-05-14 20:28:23,988 INFO org.apache.hadoop.fs.FSNamesystem: Number of
invalid blocks = 0
2009-05-14 20:28:23,988 INFO org.apache.hadoop.fs.FSNamesystem: Number of
under-replicated blocks = 0
2009-05-14 20:28:23,988 INFO org.apache.hadoop.fs.FSNamesystem: Number of
over-replicated blocks = 0
2009-05-14 20:28:23,988 INFO org.apache.hadoop.dfs.StateChange: STATE*
Leaving safe mode after 0 secs.
2009-05-14 20:28:23,989 INFO org.apache.hadoop.dfs.StateChange: STATE*
Network topology has 0 racks and 0 datanodes
2009-05-14 20:28:23,989 INFO org.apache.hadoop.dfs.StateChange: STATE*
UnderReplicatedBlocks has 0 blocks
2009-05-14 20:28:29,128 INFO org.mortbay.util.Credential: Checking Resource
aliases
2009-05-14 20:28:29,243 INFO org.mortbay.http.HttpServer: Version
Jetty/5.1.4
2009-05-14 20:28:29,244 INFO org.mortbay.util.Container: Started
HttpContext[/static,/static]
2009-05-14 20:28:29,245 INFO org.mortbay.util.Container: Started
HttpContext[/logs,/logs]
2009-05-14 20:28:29,750 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.servlet.WebApplicationHandler@7fcebc9f
2009-05-14 20:28:29,838 INFO org.mortbay.util.Container: Started
WebApplicationContext[/,/]
2009-05-14 20:28:29,843 INFO org.mortbay.http.SocketListener: Started
SocketListener on 0.0.0.0:50070
2009-05-14 20:28:29,843 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.Server@61acfa31
2009-05-14 20:28:29,843 INFO org.apache.hadoop.fs.FSNamesystem: Web-server
up at: 0.0.0.0:50070
2009-05-14 20:28:29,843 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2009-05-14 20:28:29,844 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 9000: starting
2009-05-14 20:28:29,865 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 0 on 9000: starting
2009-05-14 20:28:29,876 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 1 on 9000: starting
2009-05-14 20:28:29,877 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 2 on 9000: starting
2009-05-14 20:28:29,877 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 3 on 9000: starting
2009-05-14 20:28:29,878 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 4 on 9000: starting
2009-05-14 20:28:29,879 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 5 on 9000: starting
2009-05-14 20:28:29,879 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 6 on 9000: starting
2009-05-14 20:28:29,881 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 7 on 9000: starting
2009-05-14 20:28:29,881 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 8 on 9000: starting
2009-05-14 20:28:29,882 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 9 on 9000: starting
2009-05-14 20:33:35,774 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit
Log from 192.168.0.1
2009-05-14 20:33:35,775 INFO org.apache.hadoop.fs.FSNamesystem: Number of
transactions: 0 Total time for transactions(ms): 0 Number of syncs: 0
SyncTimes(ms): 0
2009-05-14 20:33:36,310 INFO org.apache.hadoop.fs.FSNamesystem: Roll FSImage
from 192.168.0.1
2009-05-14 20:33:36,311 INFO org.apache.hadoop.fs.FSNamesystem: Number of
transactions: 0 Total time for transactions(ms): 0 Number of syncs: 0
SyncTimes(ms): 0



3) My hadoop-site.xml, for reference:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoopmaster.local:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoopmaster.local:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/Hadoop/Temp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/Hadoop/Data,/data/Hadoop</value>
  <description>Determines where on the local filesystem a DFS data node
  should store its blocks.  If this is a comma-delimited
  list of directories, then data will be stored in all named
  directories, typically on different devices.
  Directories that do not exist are ignored.
  </description>
</property>
</configuration>
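One thing worth noting about the dfs.data.dir setting above: since directories
that do not exist are silently ignored, a datanode where neither entry is
present on disk would have nowhere to store blocks. A minimal sketch of a check
to run on each slave (the two paths are the ones from my hadoop-site.xml;
adjust to your layout):

```shell
# Split the comma-delimited dfs.data.dir value (taken from the
# hadoop-site.xml above) and report whether each directory exists.
dirs="/Hadoop/Data,/data/Hadoop"
echo "$dirs" | tr ',' '\n' | while read -r d; do
  if [ -d "$d" ]; then
    echo "$d exists"
  else
    echo "$d does not exist"
  fi
done
```

If a directory is missing or unwritable by the hadoop user on a given slave,
that datanode may come up with reduced (or no) usable storage.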


The main thing I find in the log is: "2009-05-14 20:28:23,989 INFO
org.apache.hadoop.dfs.StateChange: STATE* Network topology has 0 racks and 0
datanodes", which means that no datanodes have registered with the namenode.
But why? All of my datanodes are listed in the slaves file, and start-dfs.sh
detects them at startup, as shown above.
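One hypothesis (not confirmed by the logs above, so just a guess): reformatting
the namenode assigns it a new namespaceID, and datanodes whose dfs.data.dir
still carries the old namespaceID in current/VERSION will refuse to register.
A self-contained simulation of that check, with made-up paths and IDs:

```shell
# Simulated namespaceID comparison (paths and IDs here are invented for
# illustration): after a namenode format, compare the namespaceID in the
# namenode's VERSION file with the one in a datanode's VERSION file.
mkdir -p /tmp/nnscratch/name/current /tmp/nnscratch/data/current
echo "namespaceID=123456" > /tmp/nnscratch/name/current/VERSION
echo "namespaceID=999999" > /tmp/nnscratch/data/current/VERSION

nn_id=$(grep '^namespaceID=' /tmp/nnscratch/name/current/VERSION | cut -d= -f2)
dn_id=$(grep '^namespaceID=' /tmp/nnscratch/data/current/VERSION | cut -d= -f2)

if [ "$nn_id" != "$dn_id" ]; then
  echo "namespaceID mismatch: namenode=$nn_id datanode=$dn_id"
fi
```

On a real cluster the datanode's own .log file (not the .out file) would show
whether it actually died with an error after start-dfs.sh launched it.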


Can anyone shed some light on my problem?

Thanks
Pankil
