hbase-user mailing list archives

From Vladimir Rodionov <vladrodio...@gmail.com>
Subject Re: Could not reseek StoreFileScanner
Date Sat, 27 Jun 2015 18:03:32 GMT
Check your NameNode/DataNode log files. There will be some additional bits
of information there.
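
For example, something like the following can narrow it down (a rough sketch: the path is taken from the stack trace below, and the DataNode log location is an assumption that varies by distribution):

# Map the affected HFile to its HDFS blocks and the DataNodes holding them
# (use "hadoop fsck" on older releases)
hdfs fsck /hbase2/data/default/table1/d52beedee15de2e7bb380f14bb0929fb/c2/daa0269a1f1c44f3811a25976b9278c8_SeqId_95_ \
    -files -blocks -locations

# On the DataNodes reported above, look for read/checksum errors around the failure time
grep -iE 'checksum|corrupt|exception' /var/log/hadoop-hdfs/*datanode*.log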

-Vlad

On Sat, Jun 27, 2015 at 6:14 AM, Ted Yu <yuzhihong@gmail.com> wrote:

> Please provide a bit more information:
>
> the hbase / hadoop release you use
> the type of data block encoding for the table
>
> How often did this happen?
>
> thanks
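
A quick way to pull those details off the cluster (a sketch only; 'table1' is the table name from the log below):

# hbase / hadoop release
hbase version
hadoop version

# DATA_BLOCK_ENCODING is listed among the column-family attributes
echo "describe 'table1'" | hbase shell
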
>
>
> On Sat, Jun 27, 2015 at 3:44 AM, غلامرضا <g.reza@chmail.ir> wrote:
>
> > Hi,
> >
> > I got this exception in a reduce task when the task tries to increment a table.
> >
> > Jun 27 12:42:56 10.3.72.94 [INFO]-2015/06/27
> > 12:42:56-AsyncProcess.logAndResubmit(713) - #10486, table=table1,
> > attempt=10/35 failed 4 ops, last exception: java.io.IOException:
> > java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for
> > reader reader=hdfs://m2/hbase2/data/default/table1/d52beedee15de2e7bb380f14bb0929fb/c2/daa0269a1f1c44f3811a25976b9278c8_SeqId_95_,
> > compression=snappy, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> > [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> > [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> > [cacheDataCompressed=false] [prefetchOnOpen=false], ...
> > firstKey=\x00KEY1\x013yQ/c2:\x03\x00\x03^D\xA9\xC4/1435136203460/Put,
> > lastKey=\x00KEYN\x013yS/c2:\x03\x00\x02\xAE~A\xE0/1435136896864/Put,
> > avgKeyLen=36, avgValueLen=68, entries=15350817, length=466678923,
> > cur=\x00KEY2\x013yT/c2:/OLDEST_TIMESTAMP/Minimum/vlen=0/mvcc=0] to key
> > \x00KEY3\x013yT/c2:\x00fhamrah/LATEST_TIMESTAMP/Maximum/vlen=0/mvcc=0
> >     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:184)
> >     at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> >     at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
> >     at org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
> >     at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:741)
> >     at org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:729)
> >     at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:546)
> >     at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
> >     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:4103)
> >     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4183)
> >     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4061)
> >     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:4030)
> >     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:4017)
> >     at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:5010)
> >     at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:5611)
> >     at org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:4452)
> >     at org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3673)
> >     at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3607)
> >     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:30954)
> >     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2093)
> >     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> >     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> >     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> >     at java.lang.Thread.run(Thread.java:745)
> > Caused by: java.io.IOException: Failed to read compressed block at
> > 302546975, onDiskSizeWithoutHeader=67898, preReadHeaderSize=33,
> > header.length=33, header bytes:
> > DATABLK*\x00\x00D\x16\x00\x01\x00Q\x00\x00\x00\x00\x01}\x1D\x98\x01\x00\x00@ \x00\x00\x00D/
> >     at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1549)
> >     at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1413)
> >     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:394)
> >     at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> >     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:539)
> >     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:587)
> >     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
> >     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
> >     ... 23 more
> > Caused by: java.io.IOException: Invalid HFile block magic:
> > \x00\x00\x00\x00\x00\x00\x00\x00
> >     at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> >     at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:165)
> >     at org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:252)
> >     at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1546)
> >     ... 30 more
> >  on n129,60020,1434448472071, tracking started Sat Jun 27 12:42:28 IRDT
> > 2015, retrying after 10056 ms, replay 4 ops.
> >
> > But when I checked the store file with "hbase hfile", it looks fine:
> >
> > hbase hfile -v -f /hbase2/data/default/table1/d52beedee15de2e7bb380f14bb0929fb/c2/daa0269a1f1c44f3811a25976b9278c8_SeqId_95_
> > Scanning -> /hbase2/data/default/table1/d52beedee15de2e7bb380f14bb0929fb/c2/daa0269a1f1c44f3811a25976b9278c8_SeqId_95_
> > 2015-06-27 14:02:39,241 INFO  [main] util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
> > 2015-06-27 14:02:39,392 WARN  [main] snappy.LoadSnappy: Snappy native library is available
> > 2015-06-27 14:02:39,394 INFO  [main] util.NativeCodeLoader: Loaded the native-hadoop library
> > 2015-06-27 14:02:39,394 INFO  [main] snappy.LoadSnappy: Snappy native library loaded
> > 2015-06-27 14:02:39,397 INFO  [main] compress.CodecPool: Got brand-new decompressor
> > Scanned kv count -> 15350817
> >
>
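
For what it's worth, the -v scan quoted above walks every data block, but the same tool can also dump the file's metadata and block index, which occasionally surfaces damage a key/value scan does not. A rough sketch using HFilePrettyPrinter's -m (print meta) and -b (print block index) options:

hbase hfile -m -b -f \
    /hbase2/data/default/table1/d52beedee15de2e7bb380f14bb0929fb/c2/daa0269a1f1c44f3811a25976b9278c8_SeqId_95_

Since the CLI scan may simply have read a healthy replica while the region server kept hitting a bad one, the fsck output and DataNode logs for this file (per the suggestion above) remain the more telling place to look.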
