hbase-issues mailing list archives

From "Brahma Reddy Battula (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-11625) Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum
Date Tue, 10 Nov 2015 07:27:10 GMT

    [ https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998095#comment-14998095 ]

Brahma Reddy Battula commented on HBASE-11625:
----------------------------------------------

[~apurtell]/[~syuanjiang]/[~ndimiduk]/[~qian wang]/[~ram_krish]

Any update on this? We hit the same issue in a production environment running HBase 1.0
and Hadoop 2.6.

The HFile's replication factor is 2: one good replica and one corrupted replica. We noticed the following.

 *HDFS client:*  When a client on an HDFS-checksummed connection reads HFile A from the bad DataNode (where the data
is corrupted), the DataNode reports a checksum error, the
HDFS client retries the read from the other DataNode, and the read succeeds.
--- DFSClient with HDFS checksum

 *RegionServer:*  The RegionServer closes those HDFS clients and instead reads HFile A with a client that has HDFS
checksums disabled (relying on HBase-level checksums), so the corrupted replica is not detected and retried.
--- DFSClient without HDFS checksum

Please correct me if I am wrong.
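The ordering problem described above can be sketched as follows. This is a minimal, hedged illustration, not HBase's actual code: method names like {{parseMagic}} and {{readBlock}} are invented for this sketch. The point is that the block magic is parsed from the header before any checksum validation runs, so a corrupted header throws "Invalid HFile block magic" before the code ever reaches the point where it could fall back to an HDFS-checksummed re-read.

```java
import java.io.IOException;
import java.util.Arrays;

// Hedged sketch of the failure mode discussed in HBASE-11625.
// Names are illustrative, not the real HFileBlock API.
public class BlockReadSketch {
    // "DATABLK*" is the data-block magic used by HFile.
    static final byte[] DATA_MAGIC = "DATABLK*".getBytes();

    // Parses the block type from the header. A corrupted (e.g. zeroed)
    // header throws here -- BEFORE any checksum validation could run.
    static String parseMagic(byte[] header) throws IOException {
        byte[] magic = Arrays.copyOfRange(header, 0, 8);
        if (Arrays.equals(magic, DATA_MAGIC)) {
            return "DATA";
        }
        throw new IOException("Invalid HFile block magic: " + Arrays.toString(magic));
    }

    // Simplified read path: the magic is parsed first, so the checksum
    // fallback (which would switch to an HDFS-checksummed stream) is
    // never reached when the header itself is corrupt.
    static String readBlock(byte[] header) throws IOException {
        String type = parseMagic(header);   // throws on a corrupt header
        // validateChecksum(header);        // fallback point, reached too late
        return type;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readBlock("DATABLK*........".getBytes())); // good header
        try {
            readBlock(new byte[16]);        // zeroed header, as in the trace below
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

Under this reading, the fix would need to validate the checksum (or at least tolerate a bad header) before trusting the parsed magic, so the read can still fall back to the good replica via HDFS checksums.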

By the way, the following is the trace. *Major compaction failed.* 
{noformat}
2015-09-16 13:03:07,307 | INFO  | regionserver21302-smallCompactions-1441158778210 | Starting
compaction of 6 file(s) in d of TB_HTTP_201509,820,1441041221494.1c80ce3eddc7b463b1f9525d2f440798.
into tmpdir=hdfs://hacluster/hbase/data/default/TB_HTTP_201509/1c80ce3eddc7b463b1f9525d2f440798/.tmp,
totalSize=22.6 M | org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1077)
2015-09-16 13:03:07,311 | ERROR | regionserver21302-smallCompactions-1441158778210 | Compaction
failed Request = regionName=TB_HTTP_201509,820,1441041221494.1c80ce3eddc7b463b1f9525d2f440798.,
storeName=d, fileCount=6, fileSize=22.6 M (3.7 M, 3.8 M, 3.8 M, 3.8 M, 3.8 M, 3.8 M), priority=-1701,
time=10101672953052926 | org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:510)
java.io.IOException: Could not seek StoreFileScanner[HFileScanner for reader reader=hdfs://hacluster/hbase/data/default/TB_HTTP_201509/1c80ce3eddc7b463b1f9525d2f440798/d/63cc160d266748599d338b7c9e390a23,
compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false]
[cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false][prefetchOnOpen=false],
firstKey=8200005291_460000132556183_9223372035413272744_1919/d:aa/1441530258037/Put, lastKey=8249864621_460020161249953_9223372035413273715_6377/d:di/1441530155261/Put,
avgKeyLen=65, avgValueLen=5, entries=319088, length=3964840, cur=null] to key /d:/LATEST_TIMESTAMP/DeleteFamily/vlen=0/mvcc=0
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:164)
	at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:317)
	at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:240)
	at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:202)
	at org.apache.hadoop.hbase.regionserver.compactions.Compactor.createScanner(Compactor.java:257)
	at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
	at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:109)
	at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1086)
	at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1480)
	at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:495)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Invalid HFile block magic: \x00\x00\x00\x00\x00\x00\x00\x00
	at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
	at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:165)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:239)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1486)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1314)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:392)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1090)
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
	... 12 more
{noformat}

> Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum

> -----------------------------------------------------------------------------------------
>
>                 Key: HBASE-11625
>                 URL: https://issues.apache.org/jira/browse/HBASE-11625
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile
>    Affects Versions: 0.94.21, 0.98.4, 0.98.5
>            Reporter: qian wang
>         Attachments: 2711de1fdf73419d9f8afc6a8b86ce64.gz
>
>
> When using HBase checksums, readBlockDataInternal() in HFileBlock.java may hit file
> corruption, but it can only switch to an HDFS-checksummed input stream once
> validateBlockChecksum() runs. If the data block's header is corrupted, b = new HFileBlock()
> throws "Invalid HFile block magic" before that point, and the RPC call fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
