hbase-user mailing list archives

From marjana <mivko...@us.ibm.com>
Subject Re: MasterProcWALs not cleaning up
Date Mon, 11 Sep 2017 17:40:40 GMT
I do have logs, and I see a lot of RegionServer crashes (connection issues to
ZooKeeper). Comparing the logs on the two clusters, I see both have this
warning (the other cluster is prod and has not had its logs cleaned up since
Aug 5th):

2017-09-08 14:28:58,962 WARN 
[wdc01is-ja-prod-hbase5:16000.activeMasterManager] master.SplitLogManager:
Returning success without actually splitting and deleting all the log files
in path
hdfs://hbase5:9000/hbase/WALs/wdc01is-ja-prod-hbase9.adm01.com,16020,1504096956944-splitting:
[FileStatus{path=hdfs://hbase5:9000/hbase/WALs/wdc01is-ja-prod-hbase9.adm01.com,16020,1504096956944-splitting/wdc01is-ja-prod-hbase9.adm01.com%2C16020%2C1504096956944.default.1504165359326;
isDirectory=false; length=973; replication=3; blocksize=134217728;
modification_time=1504167419585; access_time=1504165359327; owner=hadoop;
group=supergroup; permission=rw-r--r--; isSymlink=false}]
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.PathIsNotEmptyDirectoryException):
`/hbase/WALs/wdc01is-ja-prod-hbase9.adm01.com,16020,1504096956944-splitting
is non empty': Directory is not empty
        at
org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:85)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3718)
        at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:947)
        at
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:611)
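
For what it's worth, here is a minimal sketch for listing what is still left in
that -splitting directory (assumptions: an HDFS client configuration is on the
classpath, and the NameNode URI and path are simply copied from the warning
above):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListSplittingDir {
    public static void main(String[] args) throws Exception {
        // NameNode URI and -splitting directory copied from the SplitLogManager warning
        URI nn = URI.create("hdfs://hbase5:9000");
        Path splitting = new Path(
            "/hbase/WALs/wdc01is-ja-prod-hbase9.adm01.com,16020,1504096956944-splitting");

        try (FileSystem fs = FileSystem.get(nn, new Configuration())) {
            // Show the WAL files the master reported it could not split/delete
            for (FileStatus st : fs.listStatus(splitting)) {
                System.out.printf("%s  len=%d  mtime=%d%n",
                    st.getPath(), st.getLen(), st.getModificationTime());
            }
        }
    }
}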


On the non-prod cluster, I see this too:
2017-09-08 02:19:21,783 WARN 
[wdc01is-ja-prod-hbase5:16000.activeMasterManager] hdfs.DFSClient: Failed to
connect to /10.125.19.39:50010 for block, add to deadNodes and continue.
java.io.EOFException: Premature EOF: no length prefix available
java.io.EOFException: Premature EOF: no length prefix available
        at
org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2000)
        at
org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:408)
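
To rule out plain network trouble, a quick sketch that just checks whether the
DataNode data transfer port from that warning is reachable from the master host
(a plain socket connect, nothing HDFS-specific; the address is taken from the
log line above):

import java.net.InetSocketAddress;
import java.net.Socket;

public class ProbeDataNode {
    public static void main(String[] args) {
        // DataNode address taken from the DFSClient warning above
        InetSocketAddress addr = new InetSocketAddress("10.125.19.39", 50010);
        try (Socket s = new Socket()) {
            s.connect(addr, 5000);  // 5 second connect timeout
            System.out.println("TCP connect to " + addr + " succeeded");
        } catch (Exception e) {
            System.out.println("TCP connect to " + addr + " failed: " + e);
        }
    }
}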


The prod cluster also has some warnings related to meta table inconsistencies
(suggesting to run hbck to fix them).
Under what conditions would these MasterProcWALs not clean up?
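
In case it helps narrow things down, this is a minimal sketch for checking how
much has piled up in the procedure WAL directory (assuming the default layout,
i.e. MasterProcWALs directly under the /hbase rootdir on the same NameNode as
the paths above):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckMasterProcWals {
    public static void main(String[] args) throws Exception {
        // Assumption: default <hbase.rootdir>/MasterProcWALs location, rootdir is /hbase
        URI nn = URI.create("hdfs://hbase5:9000");
        Path procWals = new Path("/hbase/MasterProcWALs");

        try (FileSystem fs = FileSystem.get(nn, new Configuration())) {
            FileStatus[] files = fs.listStatus(procWals);
            System.out.println(files.length + " procedure WAL files in " + procWals);
            for (FileStatus st : files) {
                System.out.println(st.getPath().getName()
                    + "  mtime=" + st.getModificationTime());
            }
        }
    }
}
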
Thanks



--
Sent from: http://apache-hbase.679495.n3.nabble.com/HBase-User-f4020416.html
