hbase-user mailing list archives

From Andrey Stepachev <oct...@gmail.com>
Subject Re: hbase-0.89/trunk: org.apache.hadoop.fs.ChecksumException: Checksum error
Date Wed, 22 Sep 2010 09:08:49 GMT
But why is it bad? Split/compaction? I made my own RetryResultIterator
which reopens the scanner on timeout. But what is the best way to reopen a scanner?
Can you point me to where I can find all of these exceptions? Or is there
already some sort of recoverable iterator?
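
A minimal sketch of such a retrying iterator, assuming the 0.89-era client API
(the class name RetryingResultIterator and the restart-from-last-row details
below are illustrative only, not the actual implementation discussed here):

import java.io.IOException;
import java.util.Iterator;
import java.util.NoSuchElementException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

// Sketch of a scanner wrapper that reopens the scan from just past the
// last row it handed out whenever next() fails with an IOException.
public class RetryingResultIterator implements Iterator<Result> {
  private final HTable table;
  private final Scan scan;          // mutated on retry to move the start row forward
  private final int maxRetries;
  private ResultScanner scanner;
  private byte[] lastRow;           // last row successfully returned to the caller
  private Result lookahead;         // next result, fetched eagerly

  public RetryingResultIterator(HTable table, Scan scan, int maxRetries) throws IOException {
    this.table = table;
    this.scan = scan;
    this.maxRetries = maxRetries;
    this.scanner = table.getScanner(scan);
    advance();
  }

  private void reopen() throws IOException {
    scanner.close();
    if (lastRow != null) {
      // Smallest row key strictly greater than lastRow: lastRow + 0x00.
      byte[] startRow = new byte[lastRow.length + 1];
      System.arraycopy(lastRow, 0, startRow, 0, lastRow.length);
      scan.setStartRow(startRow);
    }
    scanner = table.getScanner(scan);
  }

  private void advance() throws IOException {
    for (int attempt = 0; ; attempt++) {
      try {
        lookahead = scanner.next();
        return;
      } catch (IOException e) {
        if (attempt >= maxRetries) {
          throw e;                  // give up, surface the original error
        }
        reopen();                   // the old scanner is bad; start a fresh one
      }
    }
  }

  public boolean hasNext() {
    return lookahead != null;
  }

  public Result next() {
    if (lookahead == null) {
      throw new NoSuchElementException();
    }
    Result current = lookahead;
    lastRow = current.getRow();
    try {
      advance();
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
    return current;
  }

  public void remove() {
    throw new UnsupportedOperationException();
  }
}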

2010/9/22 Ryan Rawson <ryanobjc@gmail.com>:
> ah ok, i think i get it... basically at this point your scanner is bad
> and iterating on it again won't work.  the scanner should probably
> close itself so you don't get tons of additional exceptions, but right
> now we don't do that.
>
> there is probably a better fix for this, i'll ponder
>
> On Wed, Sep 22, 2010 at 1:57 AM, Ryan Rawson <ryanobjc@gmail.com> wrote:
>> very strange... it looks like a bad block ended up in your scanner and
>> subsequent next() calls were failing due to that short read.
>>
>> did you have to kill the regionserver or did things recover and
>> continue normally?
>>
>> -ryan
>>
>> On Wed, Sep 22, 2010 at 1:37 AM, Andrey Stepachev <octo47@gmail.com> wrote:
>>> Hi All.
>>>
>>> I get org.apache.hadoop.fs.ChecksumException for a table on heavy
>>> write in standalone mode.
>>> The table tmp.bsn.main was created at 2010-09-22 10:42:28,860, and then
>>> 5 threads write data to it.
>>> At some point the exception is thrown.
>>>
>>> Andrey.
>>>
>>
>
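
The original report quoted above describes five threads writing to tmp.bsn.main
right after it was created. A rough sketch of such a write load, assuming the
0.89-era client API (the column family "f", qualifier "q", and row-key layout
below are invented for illustration):

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical write load against tmp.bsn.main: 5 threads, each with its
// own HTable instance (HTable is not thread-safe).
public class WriteLoad {
  public static void main(String[] args) {
    final HBaseConfiguration conf = new HBaseConfiguration();
    for (int t = 0; t < 5; t++) {
      final int threadId = t;
      new Thread(new Runnable() {
        public void run() {
          try {
            HTable table = new HTable(conf, "tmp.bsn.main");
            for (long i = 0; i < 1000000; i++) {
              Put put = new Put(Bytes.toBytes("row-" + threadId + "-" + i));
              put.add(Bytes.toBytes("f"), Bytes.toBytes("q"),
                      Bytes.toBytes("value-" + i));
              table.put(put);
            }
            table.flushCommits();   // push any buffered writes to the server
          } catch (IOException e) {
            e.printStackTrace();
          }
        }
      }).start();
    }
  }
}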
