hbase-dev mailing list archives

From "Alex Newman (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-1495) IllegalArgumentException in halfhfilereader#next
Date Tue, 09 Jun 2009 00:12:07 GMT

    [ https://issues.apache.org/jira/browse/HBASE-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12717493#action_12717493
] 

Alex Newman commented on HBASE-1495:
------------------------------------

computation.
Regions On FS: 20 (number of regions on FileSystem; rough count)
servers: 7, requests=0, regions=13
 
 
node 5
2009-06-07 20:14:59,702 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_8054294304707090727_6626586
from any node:  java.io.IOException: No live nodes contain current block
2009-06-07 20:15:23,007 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -8896201081780238142
lease expired
2009-06-07 20:15:27,879 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 1596079652788274335
lease expired
2009-06-07 20:15:40,059 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -2733198405792644962
lease expired
2009-06-07 20:16:10,248 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -7860104214020620019
lease expired
2009-06-07 20:16:25,149 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_8054294304707090727_6626586
from any node:  java.io.IOException: No live nodes contain current block
2009-06-07 20:16:40,440 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 6217323943013460033
lease expired
2009-06-07 20:17:10,680 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -6711331116393695890
lease expired
2009-06-07 20:17:40,835 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 3871771505311058479
lease expired
2009-06-07 20:18:10,069 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_8054294304707090727_6626586
from any node:  java.io.IOException: No live nodes contain current block
2009-06-07 20:18:11,037 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -8973011642755732600
lease expired
2009-06-07 20:18:41,201 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 3807906792818606834
lease expired
2009-06-07 20:19:13,095 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 579239649970047854
lease expired
2009-06-07 20:19:14,773 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException:
Could not obtain block: blk_8054294304707090727_6626586 file=/hbase/.META./1028785192/info/8853829816968996247
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1757)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1585)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1712)
        at java.io.DataInputStream.read(DataInputStream.java:132)
        at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:99)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:950)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:906)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1222)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1105)
        at org.apache.hadoop.hbase.regionserver.StoreFileGetScan.getStoreFile(StoreFileGetScan.java:80)
        at org.apache.hadoop.hbase.regionserver.StoreFileGetScan.get(StoreFileGetScan.java:65)
        at org.apache.hadoop.hbase.regionserver.Store.get(Store.java:1480)
        at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1037)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1706)
        at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:643)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:913)
....
node 1
2009-06-07 20:22:33,342 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException:
Cannot open filename /hbase/t3/601977017/block/4848245505122296259
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1444)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1769)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1585)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1712)
        at java.io.DataInputStream.read(DataInputStream.java:132)
        at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:99)
        at org.apache.hadoop.io.compress.DecompressorStream.getCompressedData(DecompressorStream.java:96)
        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:86)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:74)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:950)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:906)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.next(HFile.java:1082)
        at org.apache.hadoop.hbase.io.HalfHFileReader$1.next(HalfHFileReader.java:108)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:52)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:79)
        at org.apache.hadoop.hbase.regionserver.MinorCompactingStoreScanner.next(MinorCompactingStoreScanner.java:101)
        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:849)
        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:714)
        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:766)
        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:723)
        at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:105)
 
2009-06-07 20:22:33,367 ERROR org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction
failed for region t3,*******************,1244420117045
java.lang.IllegalArgumentException
        at java.nio.Buffer.position(Buffer.java:218)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.next(HFile.java:1072)
        at org.apache.hadoop.hbase.io.HalfHFileReader$1.next(HalfHFileReader.java:108)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:52)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:79)
        at org.apache.hadoop.hbase.regionserver.MinorCompactingStoreScanner.next(MinorCompactingStoreScanner.java:101)
        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:849)
        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:714)
        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:766)
        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:723)
        at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:105)
2009-06-07 20:22:33,368 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction
on region t3,,1244420449037
2009-06-07 20:22:33,368 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction
on region t3,,1244420449037
2009-06-07 20:22:33,373 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2009-06-07 20:25:01,766 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: compactions
no longer limited
2009-06-07 20:27:47,938 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction completed
on region t3,,1244420449037 in 5mins, 14sec
2009-06-07 20:27:47,938 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting split
of region  t3,,1244420449037
2009-06-07 20:27:47,942 INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed t3,,1244420449037
2009-06-07 20:27:48,319 INFO org.apache.hadoop.hbase.regionserver.HRegion: region t3,,1244420867941/2082155757
available; sequence id is 9633520
 
2009-06-07 20:27:48,319 INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed t3,,1244420867941
2009-06-07 20:27:48,465 INFO org.apache.hadoop.hbase.regionserver.HRegion: region t3,******************,1244420867941/1057048264
available; sequence id is 9633521
2009-06-07 20:27:48,465 INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed t3,U*****************,1244420867941
2009-06-07 20:27:48,470 INFO org.apache.hadoop.hbase.regionserver.CompactSplitThread: region
split, META updated, and report to master all successful. Old region=REGION => {NAME =>
't3,,1244420449037', STARTKEY => '', ENDKEY => '*****************', ENCODED => 2120511499,
OFFLINE => true, SPLIT => true, TABLE => {{NAME => 't3', FAMILIES => [{NAME
=> '*****************', COMPRESSION => 'GZ', VERSIONS => '3', TTL => '2147483647',
BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}}, new regions:
t3,,1244420867941, t3,*****************,1244420867941. Split took 0sec
2009-06-07 20:27:48,470 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction
on region t3,*****************,1244420449037
2009-06-07 20:27:58,106 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_7797131986764001105_6627592
from any node:  java.io.IOException: No live nodes contain current block
2009-06-07 20:28:01,111 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_7797131986764001105_6627592
from any node:  java.io.IOException: No live nodes contain current block
2009-06-07 20:28:04,113 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException:
Cannot open filename /hbase/t3/82566562/block/7165913658187924750
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1444)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1769)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1585)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1712)
        at java.io.DataInputStream.read(DataInputStream.java:132)
        at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:99)
        at org.apache.hadoop.io.compress.DecompressorStream.getCompressedData(DecompressorStream.java:96)
        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:86)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:74)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:950)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:906)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.next(HFile.java:1082)
        at org.apache.hadoop.hbase.io.HalfHFileReader$1.next(HalfHFileReader.java:108)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:52)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:79)
        at org.apache.hadoop.hbase.regionserver.MinorCompactingStoreScanner.next(MinorCompactingStoreScanner.java:101)
        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:849)
        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:714)
        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:766)
        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:723)
        at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:105)
 
2009-06-07 20:28:04,124 ERROR org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction
failed for region t3,*****************,1244420449037
java.lang.IllegalArgumentException
        at java.nio.Buffer.position(Buffer.java:218)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.next(HFile.java:1072)
        at org.apache.hadoop.hbase.io.HalfHFileReader$1.next(HalfHFileReader.java:108)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:52)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:79)
        at org.apache.hadoop.hbase.regionserver.MinorCompactingStoreScanner.next(MinorCompactingStoreScanner.java:101)
        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:849)
        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:714)
        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:766)
        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:723)
        at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:105)
2009-06-07 20:07:36,330 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=*****,*****
      ip=/***.**.**.110       cmd=mkdirs      src=/hbase/.META./1028785192/historian  dst=null
       perm=ts:ticker:rwxr-xr-x
2009-06-07 20:07:36,335 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=*****,*****
      ip=/***.**.**.110       cmd=create      src=/hbase/.META./1028785192/historian/3542463731429695625
     dst=null        perm=ts:ticker:rw-r--r--
2009-06-07 20:07:36,340 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock:
/hbase/.META./1028785192/historian/3542463731429695625. blk_-4350004983998257175_6626584
2009-06-07 20:07:36,348 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.111:50010 is added to blk_7773188474590228904_6626583 size 778360
2009-06-07 20:07:36,348 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.107:50010 is added to blk_7773188474590228904_6626583 size 778360
2009-06-07 20:07:36,349 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.112:50010 is added to blk_7773188474590228904_6626583 size 778360
2009-06-07 20:07:36,354 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=*****,*****
      ip=/***.**.**.112       cmd=create      src=/hbase/.logs/*************,60020,1244418229161/hlog.dat.1244419656351
dst=null
        perm=ts:ticker:rw-r--r--
2009-06-07 20:07:36,365 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.111:50010 is added to blk_-4350004983998257175_6626584 size 3190
2009-06-07 20:07:36,365 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.107:50010 is added to blk_-4350004983998257175_6626584 size 3190
2009-06-07 20:07:36,365 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.110:50010 is added to blk_-4350004983998257175_6626584 size 3190
2009-06-07 20:07:36,373 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=*****,*****
      ip=/***.**.**.110       cmd=open        src=/hbase/.META./1028785192/historian/3542463731429695625
     dst=null        perm=null
2009-06-07 20:07:36,379 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=*****,*****
      ip=/***.**.**.110       cmd=mkdirs      src=/hbase/.META./1028785192/info       dst=null
       perm=ts:ticker:rwxr-xr-x
2009-06-07 20:07:36,382 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=*****,*****
      ip=/***.**.**.110       cmd=create      src=/hbase/.META./1028785192/info/8853829816968996247
  dst=null        perm=ts:ticker:rw-r--r--
2009-06-07 20:07:36,384 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock:
/hbase/.META./1028785192/info/8853829816968996247. blk_8054294304707090727_6626586
2009-06-07 20:07:36,387 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.107:50010 is added to blk_8054294304707090727_6626586 size 7296
2009-06-07 20:07:36,387 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.111:50010 is added to blk_8054294304707090727_6626586 size 7296
2009-06-07 20:07:36,388 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.110:50010 is added to blk_8054294304707090727_6626586 size 7296
2009-06-07 20:07:36,392 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=*****,*****
      ip=/***.**.**.110       cmd=open        src=/hbase/.META./1028785192/info/8853829816968996247
  dst=null        perm=null
2009-06-07 20:07:36,409 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock:
/hbase/.logs/**************,60020,1244418229161/hlog.dat.1244419656351. blk_-249701538603762950_6626586
2009-06-07 20:07:37,238 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.110:50010 is added to blk_-249701538603762950_6626586 size 783411
2009-06-07 20:07:37,239 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.107:50010 is added to blk_-249701538603762950_6626586 size 783411
2009-06-07 20:07:37,239 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.112:50010 is added to blk_-249701538603762950_6626586 size 783411
2009-06-07 20:07:37,244 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=*****,*****
      ip=/***.**.**.112       cmd=create      src=/hbase/.logs/************,60020,1244418229161/hlog.dat.1244419657242
dst=null
        perm=ts:ticker:rw-r--r--
2009-06-07 20:07:37,354 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock:
/hbase/.logs/************,60020,1244418229161/hlog.dat.1244419657242. blk_-4367384643419626092_6626587
2009-06-07 20:07:37,830 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.29:50010 is added to blk_4530508892235409778_6626487 size 778943
2009-06-07 20:07:37,830 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.112:50010 is added to blk_4530508892235409778_6626487 size 778943
2009-06-07 20:07:37,831 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: ***.**.**.109:50010 is added to blk_4530508892235409778_6626487 size 778943
2009-06-07 20:07:37,834 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=*****,*****
      ip=/***.**.**.109       cmd=create      src=/hbase/.logs/************,60020,1244418229042/hlog.dat.1244419657833
dst=null
        perm=ts:ticker:rw-r--r--
 less /home/fds/ts/logs/*datanode*.log.2009-06-07
node 6
2009-06-07 20:07:36,384 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block
blk_8054294304707090727_6626586 src: /***.**.**.110:57200 dest: /***.**.**.111:50010
2009-06-07 20:07:36,387 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace:
src: /***.**.**.110:57200, dest: /***.**.**.111:50010, bytes: 7296, op: HDFS_WRITE, cliID:
DFSClient_-930264054, srvID: DS-118466857-***.**.**.111-50010-1234138704820, blockid: blk_8054294304707090727_6626586
2009-06-07 20:07:36,387 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
1 for block blk_8054294304707090727_6626586 terminating
....
 
 
2009-06-07 20:14:28,313 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace:
src: /***.**.**.111:50010, dest: /***.**.**.110:54059, bytes: 7356, op: HDFS_READ, cliID:
DFSClient_-930264054, srvID: DS-118466857-***.**.**.111-50010-1234138704820, blockid: blk_8054294304707090727_6626586
repeated thousands of times
....
2009-06-07 21:02:02,894 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleting block
blk_8054294304707090727_6626586 file /data/2/hadoop/current/subdir42/blk_8054294304707090727
 
node 5
2009-06-07 20:07:36,384 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block
blk_8054294304707090727_6626586 src: /***.**.**.110:50289 dest: /***.**.**.110:50010
2009-06-07 20:07:36,388 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace:
src: /***.**.**.110:50289, dest: /***.**.**.110:50010, bytes: 7296, op: HDFS_WRITE, cliID:
DFSClient_-930264054, srvID: DS-793422389-***.**.**.110-50010-1234138704958, blockid: blk_8054294304707090727_6626586
2009-06-07 20:07:36,388 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
2 for block blk_8054294304707090727_6626586 terminating
 
2009-06-07 20:07:36,394 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace:
src: /***.**.**.110:50010, dest: /***.**.**.110:50291, bytes: 132, op: HDFS_READ, cliID: DFSClient_-930264054,
srvID: DS-793422389-***.**.**.110-50010-1234138704958, blockid: blk_8054294304707090727_6626586
node 2
2009-06-07 20:07:36,385 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block
blk_8054294304707090727_6626586 src: /***.**.**.111:35284 dest: /***.**.**.107:50010
2009-06-07 20:07:36,386 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace:
src: /***.**.**.111:35284, dest: /***.**.**.107:50010, bytes: 7296, op: HDFS_WRITE, cliID:
DFSClient_-930264054, srvID: DS-274843024-***.**.**.107-50010-1234138705859, blockid: blk_8054294304707090727_6626586
2009-06-07 20:07:36,386 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
0 for block blk_8054294304707090727_6626586 terminating
....
2009-06-07 20:14:03,364 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace:
src: /***.**.**.107:50010, dest: /***.**.**.110:55798, bytes: 7356, op: HDFS_READ, cliID:
DFSClient_-930264054, srvID: DS-274843024-***.**.**.107-50010-1234138705859, blockid: blk_8054294304707090727_6626586
repeated thousands of times
2009-06-07 21:02:07,245 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleting block
blk_8054294304707090727_6626586 file /data/1/hadoop/current/blk_8054294304707090727
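
As an aside on the exception itself (this is context from the Java API, not from the logs above): java.nio.Buffer.position(int) throws IllegalArgumentException whenever the requested position is negative or greater than the buffer's limit. The sketch below is a hypothetical minimal reproduction of that failure mode, not the actual HBase code path; the assumption, which would fit the deleted/unreadable blocks shown above, is that the scanner at HFile.java:1072 seeks to an offset derived from block data that turned out shorter than expected:

```java
import java.nio.ByteBuffer;

// Hypothetical illustration of the exception seen at
// java.nio.Buffer.position(Buffer.java:218) in the compaction traces above.
public class BufferPositionDemo {
    public static void main(String[] args) {
        ByteBuffer block = ByteBuffer.allocate(64);
        block.limit(16); // pretend only 16 bytes of the block were usable

        int nextKeyOffset = 32; // an offset computed from corrupt/short data
        try {
            block.position(nextKeyOffset); // offset > limit => IllegalArgumentException
        } catch (IllegalArgumentException e) {
            System.out.println("IllegalArgumentException: position " + nextKeyOffset
                    + " exceeds limit " + block.limit());
        }
    }
}
```

If something like this is what happens in HalfHFileReader#next, a bounds check before the seek would turn the IllegalArgumentException into a more diagnosable IOException; that said, the root cause here may simply be the blocks being deleted out from under the reader, per the datanode logs.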


> IllegalArgumentException in halfhfilereader#next
> ------------------------------------------------
>
>                 Key: HBASE-1495
>                 URL: https://issues.apache.org/jira/browse/HBASE-1495
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: stack
>             Fix For: 0.20.0
>
>
> From posix4e up on IRC
> {code}
> 2009-06-07 20:22:33,367 ERROR org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction failed for region t3,*******************,1244420117045
> java.lang.IllegalArgumentException
>         at java.nio.Buffer.position(Buffer.java:218)
>         at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.next(HFile.java:1072)
>         at org.apache.hadoop.hbase.io.HalfHFileReader$1.next(HalfHFileReader.java:108)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:52)
>         at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:79)
>         at org.apache.hadoop.hbase.regionserver.MinorCompactingStoreScanner.next(MinorCompactingStoreScanner.java:101)
>         at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:849)
>         at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:714)
>         at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:766)
>         at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:723)
>         at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:105)
> {code}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

