hbase-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: hbase/hadoop crashing while running nutch
Date Thu, 02 Jan 2014 15:33:35 GMT
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/9b4a1a1b189a78ef83d7b7c00ca791d0/recovered.edits/0000000000000007585.temp could only be replicated to 0 nodes, instead of 1

Looks like there was an issue in HDFS. Have you checked the namenode log
for this time period?
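Something like the following would pull the relevant entries (a minimal sketch; the log path is hypothetical -- substitute wherever your namenode actually writes its log):

  # Hypothetical namenode log location -- adjust for your install.
  NN_LOG=/var/log/hadoop/hadoop-hadoop-namenode-localhost.log

  # Look for replication / datanode trouble around the time of the failed writes.
  grep '2014-01-02 15:2' "$NN_LOG" | grep -iE 'replicat|datanode|exception'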

bq. Number of data-nodes:          1

Can you deploy more data nodes?
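With a single datanode and replication factor 1, "could only be replicated to 0 nodes" usually means the lone datanode was down, unreachable, or out of disk at that moment. A quick sanity check, assuming a standard Hadoop 1.x setup (the data directory below is a placeholder -- use your own dfs.data.dir):

  # Confirm the datanode is live and see remaining DFS capacity.
  hadoop dfsadmin -report

  # Check free disk on the datanode host; a full dfs.data.dir triggers the same error.
  df -h /path/to/dfs/data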


On Thu, Jan 2, 2014 at 7:28 AM, Law-Firms-In.com <webmaster@law-firms-in.com> wrote:

> Hi,
>
> after a few rounds of spidering with Nutch (2.2.1), HBase (0.90.6)
> constantly crashes with the error message below.
>
> I have already tried changing quite a lot, but with no success. Any idea
> what I could adjust to make HBase run stably on Hadoop?
>
> Further below are all the WARN/ERROR/FATAL logs from the most recent
> Nutch spider run, which ended like this:
>
> 0/50 spinwaiting/active, 49429 pages, 1429 errors, 26.6 0 pages/s, 15782
> 0 kb/s, 2500 URLs in 397 queues
> Aborting with 50 hung threads.
>
> Please keep in mind that hbase/nutch works for a couple of rounds
> but then crashes suddenly.
>
> ===============
>
>
> 2014-01-02 14:03:51,493 INFO org.apache.hadoop.hbase.master.HMaster: Stopping infoServer
> 2014-01-02 14:03:51,493 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
> 2014-01-02 14:03:51,493 INFO org.apache.hadoop.hbase.master.LogCleaner: master-localhost:60000.oldLogCleaner exiting
> 2014-01-02 14:03:51,494 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010
> 2014-01-02 14:03:51,609 DEBUG org.apache.hadoop.hbase.catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@329b5cdd
> 2014-01-02 14:03:51,609 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: The connection to hconnection-0x14352e9bb820003 has been closed.
> 2014-01-02 14:03:51,609 INFO org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x14352e9bb820003
> 2014-01-02 14:03:51,614 INFO org.apache.zookeeper.ZooKeeper: Session: 0x14352e9bb820003 closed
> 2014-01-02 14:03:51,614 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: The connection to null has been closed.
> 2014-01-02 14:03:51,614 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
> 2014-01-02 14:03:51,614 INFO org.apache.hadoop.hbase.master.AssignmentManager$TimeoutMonitor: localhost:60000.timeoutMonitor exiting
> 2014-01-02 14:03:51,620 INFO org.apache.zookeeper.ZooKeeper: Session: 0x14352e9bb820000 closed
> 2014-01-02 14:03:51,620 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
> 2014-01-02 14:03:51,620 INFO org.apache.hadoop.hbase.master.HMaster: HMaster main thread exiting
> 2014-01-02 14:03:51,642 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /hbase/3_webpage/9b4a1a1b189a78ef83d7b7c00ca791d0/recovered.edits/0000000000000007585.temp
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/9b4a1a1b189a78ef83d7b7c00ca791d0/recovered.edits/0000000000000007585.temp could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
>         at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1113)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>         at com.sun.proxy.$Proxy6.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>         at com.sun.proxy.$Proxy6.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
> 2014-01-02 14:03:51,655 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /hbase/3_webpage/5d7f60dbcb95c4b08e9a5c09a0ed4808/recovered.edits/0000000000000007587.temp
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/5d7f60dbcb95c4b08e9a5c09a0ed4808/recovered.edits/0000000000000007587.temp could only be replicated to 0 nodes, instead of 1
>         [stack trace identical to the one above]
> 2014-01-02 14:03:51,656 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /hbase/3_webpage/2de53e798b85e736fa5dd86431746732/recovered.edits/0000000000000007586.temp
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/2de53e798b85e736fa5dd86431746732/recovered.edits/0000000000000007586.temp could only be replicated to 0 nodes, instead of 1
>         [stack trace identical to the one above]
>
>
>
>
> 2014-01-02 15:29:08,328 WARN org.apache.hadoop.hbase.util.FSUtils: Running on HDFS without append enabled may result in data loss
> 2014-01-02 15:29:08,334 WARN org.apache.hadoop.hbase.util.FSUtils: Running on HDFS without append enabled may result in data loss
> 2014-01-02 15:29:08,340 WARN org.apache.hadoop.hbase.util.FSUtils: Running on HDFS without append enabled may result in data loss
> 2014-01-02 15:29:08,345 WARN org.apache.hadoop.hbase.util.FSUtils: Running on HDFS without append enabled may result in data loss
> 2014-01-02 15:29:08,352 WARN org.apache.hadoop.hbase.util.FSUtils: Running on HDFS without append enabled may result in data loss
> 2014-01-02 15:29:08,370 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/.META./1028785192/recovered.edits/0000000000000005582.temp could only be replicated to 0 nodes, instead of 1
> 2014-01-02 15:29:08,370 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
> 2014-01-02 15:29:08,370 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/.META./1028785192/recovered.edits/0000000000000005582.temp" - Aborting...
>
>
> 2014-01-02 15:29:08,122 ERROR org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Error in log splitting write thread
> 2014-01-02 15:29:08,127 ERROR org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Error in log splitting write thread
> 2014-01-02 15:29:08,135 ERROR org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Error in log splitting write thread
> 2014-01-02 15:29:08,371 ERROR org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Couldn't close log at hdfs://localhost:9000/hbase/.META./1028785192/recovered.edits/0000000000000005582.temp
> 2014-01-02 15:29:08,404 ERROR org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Couldn't close log at hdfs://localhost:9000/hbase/3_webpage/7072d995482044f41db1121a8c2f4b30/recovered.edits/0000000000000005589.temp
> 2014-01-02 15:29:08,405 ERROR org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Couldn't close log at hdfs://localhost:9000/hbase/3_webpage/ab9bcbedd658fc21eed5794c1d5f2a8c/recovered.edits/0000000000000005591.temp
> 2014-01-02 15:29:09,461 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /hbase/3_webpage/ab9bcbedd658fc21eed5794c1d5f2a8c/recovered.edits/0000000000000005591.temp
> 2014-01-02 15:29:09,461 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /hbase/3_webpage/7072d995482044f41db1121a8c2f4b30/recovered.edits/0000000000000005589.temp
> 2014-01-02 15:29:09,462 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /hbase/.META./1028785192/recovered.edits/0000000000000005582.temp
>
>
>
> 2014-01-02 15:29:08,082 FATAL org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-2 Got while writing log entry to log
> 2014-01-02 15:29:08,124 FATAL org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-1 Got while writing log entry to log
> 2014-01-02 15:29:08,133 FATAL org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-0 Got while writing log entry to log
> 2014-01-02 15:29:08,409 FATAL org.apache.hadoop.hbase.master.MasterFileSystem: Failed splitting hdfs://localhost:9000/hbase/.logs/localhost,60020,1388670608167
> 2014-01-02 15:29:08,410 FATAL org.apache.hadoop.hbase.master.HMaster: Shutting down HBase cluster: file system not available
>
>
>
>
> 2014-01-02 15:29:03,493 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server serverName=localhost,60020,1388670608167, load=(requests=178, regions=20, usedHeap=594, maxHeap=10667): Replay of HLog required. Forcing server shutdown
>
>
> 2014-01-02 15:28:49,681 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/3_webpage/6f16b76b2f93a85c3c9fb0dc203f44d1/splits/f967df14eca3c7def943cc53fbf9b8a9/ol/5290547552864737575.6f16b76b2f93a85c3c9fb0dc203f44d1" - Aborting...
> 2014-01-02 15:28:49,680 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/3_webpage/6f16b76b2f93a85c3c9fb0dc203f44d1/splits/f967df14eca3c7def943cc53fbf9b8a9/f/7286765113971143335.6f16b76b2f93a85c3c9fb0dc203f44d1" - Aborting...
> 2014-01-02 15:28:49,681 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/3_webpage/6f16b76b2f93a85c3c9fb0dc203f44d1/splits/f967df14eca3c7def943cc53fbf9b8a9/mk/8373295057322606514.6f16b76b2f93a85c3c9fb0dc203f44d1" - Aborting...
> 2014-01-02 15:28:51,815 WARN org.apache.hadoop.hbase.regionserver.wal.HLog: IPC Server handler 5 on 60020 took 1878 ms appending an edit to hlog; editcount=293, len~=411.9k
> 2014-01-02 15:28:56,693 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/f/1050327747216370479.d2e85aa82fc5ac7a31fcb384c8a19b28 could only be replicated to 0 nodes, instead of 1
> 2014-01-02 15:28:56,693 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
> 2014-01-02 15:28:56,694 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/f/1050327747216370479.d2e85aa82fc5ac7a31fcb384c8a19b28" - Aborting...
> 2014-01-02 15:28:56,695 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/ol/2034350034778366131.d2e85aa82fc5ac7a31fcb384c8a19b28 could only be replicated to 0 nodes, instead of 1
> 2014-01-02 15:28:56,696 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
> 2014-01-02 15:28:56,696 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/ol/2034350034778366131.d2e85aa82fc5ac7a31fcb384c8a19b28" - Aborting...
> 2014-01-02 15:28:56,697 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/h/2539869235400177968.d2e85aa82fc5ac7a31fcb384c8a19b28 could only be replicated to 0 nodes, instead of 1
> 2014-01-02 15:28:56,702 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
> 2014-01-02 15:28:56,702 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/h/2539869235400177968.d2e85aa82fc5ac7a31fcb384c8a19b28" - Aborting...
> 2014-01-02 15:28:56,702 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/mtdt/2511732635073603338.d2e85aa82fc5ac7a31fcb384c8a19b28 could only be replicated to 0 nodes, instead of 1
> 2014-01-02 15:28:56,702 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
> 2014-01-02 15:28:56,702 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/mtdt/2511732635073603338.d2e85aa82fc5ac7a31fcb384c8a19b28" - Aborting...
> 2014-01-02 15:28:56,702 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/mtdt/7051003187947316265.d2e85aa82fc5ac7a31fcb384c8a19b28 could only be replicated to 0 nodes, instead of 1
> 2014-01-02 15:28:56,703 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
> 2014-01-02 15:28:56,703 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/mtdt/7051003187947316265.d2e85aa82fc5ac7a31fcb384c8a19b28" - Aborting...
> 2014-01-02 15:28:56,738 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/h/7734322749783373345.d2e85aa82fc5ac7a31fcb384c8a19b28 could only be replicated to 0 nodes, instead of 1
> 2014-01-02 15:28:56,741 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/mk/5913845763479035889.d2e85aa82fc5ac7a31fcb384c8a19b28 could only be replicated to 0 nodes, instead of 1
> 2014-01-02 15:28:56,744 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
> 2014-01-02 15:28:56,744 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/mk/5913845763479035889.d2e85aa82fc5ac7a31fcb384c8a19b28" - Aborting...
> 2014-01-02 15:28:56,744 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/mk/3685985664085252295.d2e85aa82fc5ac7a31fcb384c8a19b28 could only be replicated to 0 nodes, instead of 1
> 2014-01-02 15:28:56,741 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
> 2014-01-02 15:28:56,745 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
> 2014-01-02 15:28:56,745 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/f/2668655577841396270.d2e85aa82fc5ac7a31fcb384c8a19b28 could only be replicated to 0 nodes, instead of 1
> 2014-01-02 15:28:56,746 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/mk/3685985664085252295.d2e85aa82fc5ac7a31fcb384c8a19b28" - Aborting...
> 2014-01-02 15:28:56,745 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/h/7734322749783373345.d2e85aa82fc5ac7a31fcb384c8a19b28" - Aborting...
> 2014-01-02 15:28:56,746 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
> 2014-01-02 15:28:56,746 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/3_webpage/d2e85aa82fc5ac7a31fcb384c8a19b28/splits/07f7c06f7dd374934160a226cbd0cd36/f/2668655577841396270.d2e85aa82fc5ac7a31fcb384c8a19b28" - Aborting...
> 2014-01-02 15:29:03,492 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/3_webpage/c0b264674a5940ac91c851f567244dca/.tmp/3847601395962189673 could only be replicated to 0 nodes, instead of 1
> 2014-01-02 15:29:03,492 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
> 2014-01-02 15:29:03,492 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/3_webpage/c0b264674a5940ac91c851f567244dca/.tmp/3847601395962189673" - Aborting...
>
>
>
> 2014-01-02 15:03:53,894 WARN org.apache.zookeeper.server.NIOServerCnxn: EndOfStreamException: Unable to read additional data from client sessionid 0x1435337da24001d, likely client has closed socket
> 2014-01-02 15:03:53,894 WARN org.apache.zookeeper.server.NIOServerCnxn: EndOfStreamException: Unable to read additional data from client sessionid 0x1435337da240020, likely client has closed socket
> 2014-01-02 15:03:53,895 WARN org.apache.zookeeper.server.NIOServerCnxn: EndOfStreamException: Unable to read additional data from client sessionid 0x1435337da24001c, likely client has closed socket
> 2014-01-02 15:03:53,896 WARN org.apache.zookeeper.server.NIOServerCnxn: EndOfStreamException: Unable to read additional data from client sessionid 0x1435337da24001a, likely client has closed socket
> 2014-01-02 15:03:53,896 WARN org.apache.zookeeper.server.NIOServerCnxn: EndOfStreamException: Unable to read additional data from client sessionid 0x1435337da24001f, likely client has closed socket
> 2014-01-02 15:04:35,351 WARN org.apache.zookeeper.server.NIOServerCnxn: EndOfStreamException: Unable to read additional data from client sessionid 0x1435337da24002c, likely client has closed socket
> 2014-01-02 15:23:00,374 WARN org.apache.zookeeper.server.NIOServerCnxn: EndOfStreamException: Unable to read additional data from client sessionid 0x1435337da24002d, likely client has closed socket
>
>
>
> ===
>
>
> localhost:# hadoop fsck /hbase -openforwrite
> FSCK started by root from /127.0.0.1 for path /hbase at Thu Jan 02 15:59:11 CET 2014
> /hbase/.META./1028785192/recovered.edits/0000000000000005582.temp 0 bytes, 0 block(s), OPENFORWRITE:
> /hbase/3_webpage/7072d995482044f41db1121a8c2f4b30/recovered.edits/0000000000000005589.temp 0 bytes, 0 block(s), OPENFORWRITE:
> /hbase/3_webpage/ab9bcbedd658fc21eed5794c1d5f2a8c/recovered.edits/0000000000000005591.temp 0 bytes, 0 block(s), OPENFORWRITE:
> /hbase/3_webpage/c0b264674a5940ac91c851f567244dca/.tmp/3847601395962189673 0 bytes, 0 block(s), OPENFORWRITE:
> Status: HEALTHY
>  Total size:    6080372900 B
>  Total dirs:    231
>  Total files:   233
>  Total blocks (validated):      272 (avg. block size 22354312 B)
>  Minimally replicated blocks:   272 (100.0 %)
>  Over-replicated blocks:        0 (0.0 %)
>  Under-replicated blocks:       0 (0.0 %)
>  Mis-replicated blocks:         0 (0.0 %)
>  Default replication factor:    1
>  Average block replication:     1.0
>  Corrupt blocks:                0
>  Missing replicas:              0 (0.0 %)
>  Number of data-nodes:          1
>  Number of racks:               1
> FSCK ended at Thu Jan 02 15:59:11 CET 2014 in 70 milliseconds
>
>
> The filesystem under path '/hbase' is HEALTHY
>
>
