hbase-user mailing list archives

From Jean-Daniel Cryans <jdcry...@apache.org>
Subject Re: data node stops on slave
Date Mon, 26 Apr 2010 13:42:51 GMT
Looks like your two nodes share the same storage (an NFS share or SAN?): both
DataNodes are reporting the same storageID, and only one DN can serve a given
storage (else it would be unmanageable).
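You can check this directly: in 0.20.x each DataNode records its storageID in
${dfs.data.dir}/current/VERSION. A minimal sketch of the comparison, using
throwaway /tmp paths in place of the real data directories from the logs
(the storageID value is the one both nodes report above):

```shell
# Simulate the VERSION files of two DataNodes that were pointed at the
# same storage; in a real check you would cat each node's own
# ${dfs.data.dir}/current/VERSION instead.
mkdir -p /tmp/dn1/current /tmp/dn2/current
echo 'storageID=DS-463609775-127.0.1.1-50010-1271833984369' > /tmp/dn1/current/VERSION
echo 'storageID=DS-463609775-127.0.1.1-50010-1271833984369' > /tmp/dn2/current/VERSION

# Extract and compare the storageID lines.
id1=$(grep '^storageID=' /tmp/dn1/current/VERSION)
id2=$(grep '^storageID=' /tmp/dn2/current/VERSION)
if [ "$id1" = "$id2" ]; then
  # Matching IDs mean the nodes are serving the same storage, which is
  # exactly the conflict the NameNode log complains about.
  echo "conflict: both DataNodes claim the same storage"
fi
```

If the IDs match, give each DataNode its own local dfs.data.dir so each one
generates and serves its own storage.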

J-D

On Mon, Apr 26, 2010 at 3:03 AM, Muhammad Mudassar <mudassark7@gmail.com> wrote:
> I posted this problem to common-user but got no reply, so I am sending it
> here in the hope of getting some help.
>
> ---------- Forwarded message ----------
> From: Muhammad Mudassar <mudassark7@gmail.com>
> Date: Fri, Apr 23, 2010 at 4:59 PM
> Subject: data node stops on slave
> To: common-user@hadoop.apache.org
>
>
> Hi
>
> I am following the tutorial "Running Hadoop On Ubuntu Linux (Multi-Node
> Cluster)":
> http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster)
> to configure a 2-node cluster, but the DataNode on the slave machine goes
> down after some time. Below are the DataNode log from the slave machine and
> the NameNode log from the master machine. Kindly help me resolve the issue.
>
> *Log file of data node on slave machine*
>
> 2010-04-23 17:37:17,690 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = hadoop-desktop/127.0.1.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2010-04-23 17:37:19,115 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: master/10.3.31.221:54310. Already tried 0 time(s).
> 2010-04-23 17:37:25,303 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
> FSDatasetStatusMBean
> 2010-04-23 17:37:25,305 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
> 2010-04-23 17:37:25,307 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
> 1048576 bytes/s
> 2010-04-23 17:37:30,777 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2010-04-23 17:37:30,833 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
> Opening the listener on 50075
> 2010-04-23 17:37:30,833 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50075
> webServer.getConnectors()[0].getLocalPort() returned 50075
> 2010-04-23 17:37:30,833 INFO org.apache.hadoop.http.HttpServer: Jetty bound
> to port 50075
> 2010-04-23 17:37:30,833 INFO org.mortbay.log: jetty-6.1.14
> 2010-04-23 17:37:31,242 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50075
> 2010-04-23 17:37:31,279 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=DataNode, sessionId=null
> 2010-04-23 17:37:36,608 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=DataNode, port=50020
> 2010-04-23 17:37:36,610 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2010-04-23 17:37:36,610 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 50020: starting
> 2010-04-23 17:37:36,610 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 50020: starting
> 2010-04-23 17:37:36,610 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 50020: starting
> 2010-04-23 17:37:36,611 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 50020: starting
> 2010-04-23 17:37:36,611 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
> DatanodeRegistration(hadoop-desktop:50010,
> storageID=DS-463609775-127.0.1.1-50010-1271833984369, infoPort=50075,
> ipcPort=50020)
> 2010-04-23 17:37:36,639 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
> 10.3.31.220:50010, storageID=DS-463609775-127.0.1.1-50010-1271833984369,
> infoPort=50075, ipcPort=50020)In DataNode.run, data =
> FSDataset{dirpath='/home/hadoop/Desktop/dfs/datahadoop/dfs/data/current'}
> 2010-04-23 17:37:36,639 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL
> of 3600000msec Initial delay: 0msec
> 2010-04-23 17:37:36,653 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 17 blocks
> got processed in 6 msecs
> 2010-04-23 17:37:36,665 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block
> scanner.
> 2010-04-23 17:37:39,641 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action:
> DNA_REGISTER
> 2010-04-23 17:37:42,645 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is shutting down:
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data node
> 10.3.31.220:50010 is attempting to report storage ID
> DS-463609775-127.0.1.1-50010-1271833984369. Node 10.3.31.221:50010 is
> expected to serve this storage.
>    at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:3920)
>    at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:2891)
>    at
> org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:715)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>    at org.apache.hadoop.ipc.Client.call(Client.java:740)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>    at $Proxy4.blockReport(Unknown Source)
>    at
> org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:756)
>    at
> org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1186)
>    at java.lang.Thread.run(Thread.java:619)
>
> 2010-04-23 17:37:42,648 INFO org.apache.hadoop.ipc.Server: Stopping server
> on 50020
> 2010-04-23 17:37:42,648 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server listener on 50020
> 2010-04-23 17:37:42,648 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 50020: exiting
> 2010-04-23 17:37:42,649 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
> exit, active threads is 1
> 2010-04-23 17:37:42,649 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 50020: exiting
> 2010-04-23 17:37:42,649 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 50020: exiting
> 2010-04-23 17:37:42,649 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server Responder
> 2010-04-23 17:37:42,649 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
> 10.3.31.220:50010, storageID=DS-463609775-127.0.1.1-50010-1271833984369,
> infoPort=50075, ipcPort=50020):DataXceiveServer:
> java.nio.channels.AsynchronousCloseException
>    at
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
>    at
> sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
>    at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
>    at
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
>    at java.lang.Thread.run(Thread.java:619)
>
> 2010-04-23 17:37:42,680 INFO
> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Exiting
> DataBlockScanner thread.
> 2010-04-23 17:37:43,649 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
> exit, active threads is 0
> 2010-04-23 17:37:43,761 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
> 10.3.31.220:50010, storageID=DS-463609775-127.0.1.1-50010-1271833984369,
> infoPort=50075, ipcPort=50020):Finishing DataNode in:
> FSDataset{dirpath='/home/hadoop/Desktop/dfs/datahadoop/dfs/data/current'}
> 2010-04-23 17:37:43,761 INFO org.apache.hadoop.ipc.Server: Stopping server
> on 50020
> 2010-04-23 17:37:43,761 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
> exit, active threads is 0
> 2010-04-23 17:37:43,762 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at hadoop-desktop/127.0.1.1
> ************************************************************/
>
> *
> log file of name node on master machine*
>
> 2010-04-23 16:37:17,736 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = mudassar-desktop/127.0.1.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2010-04-23 16:37:18,547 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=54310
> 2010-04-23 16:37:18,558 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: master/
> 10.3.31.221:54310
> 2010-04-23 16:37:18,560 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2010-04-23 16:37:18,560 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing
> NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2010-04-23 16:37:18,634 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> fsOwner=hadoop,hadoop,admin
> 2010-04-23 16:37:18,634 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2010-04-23 16:37:18,634 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2010-04-23 16:37:18,648 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2010-04-23 16:37:18,649 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2010-04-23 16:37:18,775 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files = 40
> 2010-04-23 16:37:18,784 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files under construction = 0
> 2010-04-23 16:37:18,784 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 4763 loaded in 0 seconds.
> 2010-04-23 16:37:18,784 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Edits file /home/hadoop/Desktop/dfs/datahadoop/dfs/name/current/edits of
> size 4 edits # 0 loaded in 0 seconds.
> 2010-04-23 16:37:18,806 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 4763 saved in 0 seconds.
> 2010-04-23 16:37:18,835 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> FSImage in 230 msecs
> 2010-04-23 16:37:18,851 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe
> mode ON.
> The ratio of reported blocks 0.0000 has not reached the threshold 0.9990.
> Safe mode will be turned off automatically.
> 2010-04-23 16:37:24,338 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2010-04-23 16:37:24,389 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
> Opening the listener on 50070
> 2010-04-23 16:37:24,389 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50070
> webServer.getConnectors()[0].getLocalPort() returned 50070
> 2010-04-23 16:37:24,389 INFO org.apache.hadoop.http.HttpServer: Jetty bound
> to port 50070
> 2010-04-23 16:37:24,390 INFO org.mortbay.log: jetty-6.1.14
> 2010-04-23 16:37:24,890 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50070
> 2010-04-23 16:37:24,891 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2010-04-23 16:37:24,903 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2010-04-23 16:37:24,905 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 54310: starting
> 2010-04-23 16:37:24,910 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: starting
> 2010-04-23 16:37:24,911 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: starting
> 2010-04-23 16:37:24,911 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: starting
> 2010-04-23 16:37:24,911 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: starting
> 2010-04-23 16:37:24,911 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: starting
> 2010-04-23 16:37:24,911 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: starting
> 2010-04-23 16:37:24,917 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: starting
> 2010-04-23 16:37:24,927 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: starting
> 2010-04-23 16:37:24,935 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: starting
> 2010-04-23 16:37:24,937 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: starting
> 2010-04-23 16:37:36,600 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node registration from
> 10.3.31.220:50010storage DS-463609775-127.0.1.1-50010-1271833984369
> 2010-04-23 16:37:36,602 INFO org.apache.hadoop.net.NetworkTopology: Adding a
> new node: /default-rack/10.3.31.220:50010
> 2010-04-23 16:37:36,615 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.processReport: block blk_-6150136027661787226_1027 on
> 10.3.31.220:50010 size 518 does not belong to any file.
> 2010-04-23 16:37:36,615 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.addToInvalidates: blk_-6150136027661787226 is added to invalidSet
> of 10.3.31.220:50010
> 2010-04-23 16:37:36,615 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.processReport: block blk_-4836367697260863721_1028 on
> 10.3.31.220:50010 size 458 does not belong to any file.
> 2010-04-23 16:37:36,615 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.addToInvalidates: blk_-4836367697260863721 is added to invalidSet
> of 10.3.31.220:50010
> 2010-04-23 16:37:36,615 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.processReport: block blk_8159389050888609678_1025 on
> 10.3.31.220:50010 size 4 does not belong to any file.
> 2010-04-23 16:37:36,615 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.addToInvalidates: blk_8159389050888609678 is added to invalidSet
> of 10.3.31.220:50010
> 2010-04-23 16:37:38,641 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node registration from
> 10.3.31.221:50010storage DS-463609775-127.0.1.1-50010-1271833984369
> 2010-04-23 16:37:38,641 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node 10.3.31.220:50010 is replaced by
> 10.3.31.221:50010 with the same storageID
> DS-463609775-127.0.1.1-50010-1271833984369
> 2010-04-23 16:37:38,642 INFO org.apache.hadoop.net.NetworkTopology: Removing
> a node: /default-rack/10.3.31.220:50010
> 2010-04-23 16:37:38,642 INFO org.apache.hadoop.net.NetworkTopology: Adding a
> new node: /default-rack/10.3.31.221:50010
> 2010-04-23 16:37:38,669 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe
> mode extension entered.
> The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe
> mode will be turned off automatically in 29 seconds.
> 2010-04-23 16:37:39,604 FATAL org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.getDatanode: Data node 10.3.31.220:50010 is attempting to report
> storage ID DS-463609775-127.0.1.1-50010-1271833984369. Node
> 10.3.31.221:50010 is expected to serve this storage.
> 2010-04-23 16:37:39,605 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node registration from
> 10.3.31.220:50010storage DS-463609775-127.0.1.1-50010-1271833984369
> 2010-04-23 16:37:39,605 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node 10.3.31.221:50010 is replaced by
> 10.3.31.220:50010 with the same storageID
> DS-463609775-127.0.1.1-50010-1271833984369
> 2010-04-23 16:37:39,605 INFO org.apache.hadoop.net.NetworkTopology: Removing
> a node: /default-rack/10.3.31.221:50010
> 2010-04-23 16:37:39,605 INFO org.apache.hadoop.net.NetworkTopology: Adding a
> new node: /default-rack/10.3.31.220:50010
> 2010-04-23 16:37:41,644 FATAL org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.getDatanode: Data node 10.3.31.221:50010 is attempting to report
> storage ID DS-463609775-127.0.1.1-50010-1271833984369. Node
> 10.3.31.220:50010 is expected to serve this storage.
> 2010-04-23 16:37:41,645 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node registration from
> 10.3.31.221:50010storage DS-463609775-127.0.1.1-50010-1271833984369
> 2010-04-23 16:37:41,645 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node 10.3.31.220:50010 is replaced by
> 10.3.31.221:50010 with the same storageID
> DS-463609775-127.0.1.1-50010-1271833984369
> 2010-04-23 16:37:41,645 INFO org.apache.hadoop.net.NetworkTopology: Removing
> a node: /default-rack/10.3.31.220:50010
> 2010-04-23 16:37:41,646 INFO org.apache.hadoop.net.NetworkTopology: Adding a
> new node: /default-rack/10.3.31.221:50010
> 2010-04-23 16:37:42,606 FATAL org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.getDatanode: Data node 10.3.31.220:50010 is attempting to report
> storage ID DS-463609775-127.0.1.1-50010-1271833984369. Node
> 10.3.31.221:50010 is expected to serve this storage.
> 2010-04-23 16:37:42,607 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310, call blockReport(DatanodeRegistration(10.3.31.220:50010,
> storageID=DS-463609775-127.0.1.1-50010-1271833984369, infoPort=50075,
> ipcPort=50020), [J@1320a41) from 10.3.31.220:33776: error:
> org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data node
> 10.3.31.220:50010 is attempting to report storage ID
> DS-463609775-127.0.1.1-50010-1271833984369. Node 10.3.31.221:50010 is
> expected to serve this storage.
> org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data node
> 10.3.31.220:50010 is attempting to report storage ID
> DS-463609775-127.0.1.1-50010-1271833984369. Node 10.3.31.221:50010 is
> expected to serve this storage.
>    at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:3920)
>    at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:2891)
>    at
> org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:715)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> 2010-04-23 16:37:58,671 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe
> mode ON.
> The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe
> mode will be turned off automatically in 9 seconds.
> 2010-04-23 16:38:08,673 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
> = 15
> 2010-04-23 16:38:08,673 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
> blocks = 0
> 2010-04-23 16:38:08,673 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> under-replicated blocks = 8
> 2010-04-23 16:38:08,673 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> over-replicated blocks = 0
> 2010-04-23 16:38:08,673 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Leaving safe mode after 50 secs.
> 2010-04-23 16:38:08,673 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe
> mode is OFF.
> 2010-04-23 16:38:08,673 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Network topology has 1 racks and 1 datanodes
> 2010-04-23 16:38:08,673 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> UnderReplicatedBlocks has 8 blocks
> 2010-04-23 16:38:09,856 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask
> 10.3.31.221:50010 to delete  blk_-6150136027661787226_1027
> blk_8159389050888609678_1025 blk_-4836367697260863721_1028
> 2010-04-23 16:42:37,019 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from
> 10.3.31.221
> 2010-04-23 16:42:37,019 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
> 0 Total time for transactions(ms): 0Number of transactions batched in Syncs:
> 0 Number of syncs: 0 SyncTimes(ms): 0
> 2010-04-23 16:42:37,398 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from
> 10.3.31.221
> 2010-04-23 16:42:37,398 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
> 0 Total time for transactions(ms): 0Number of transactions batched in Syncs:
> 0 Number of syncs: 1 SyncTimes(ms): 1
>
> --
> Regards
>
> Muhammad Mudassar
>
>
