hbase-dev mailing list archives

From "Mathias Herberts (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HBASE-1784) Missing rows after medium intensity insert
Date Wed, 26 Aug 2009 07:12:59 GMT

     [ https://issues.apache.org/jira/browse/HBASE-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mathias Herberts updated HBASE-1784:

    Attachment: HBASE-1784.log

I reran my test to try to corner the problem.

My last run *only* lost around 2 million rows out of 866 million. Interestingly, the attached
logs show only one compaction failure.

A side effect I observed was that inserting the rows with the write buffer (WB) disabled was
almost twice as fast as with the WB set to 1 MB.
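To make the comparison concrete, here is a minimal sketch of the two client configurations being compared, using the HBase 0.20 `HTable` API (`setAutoFlush`, `setWriteBufferSize`). The table and column names are made up for illustration, and the snippet assumes a running cluster and the HBase client on the classpath; it is not the actual DataLoad.java attached to the issue.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class WriteBufferSketch {
    public static void main(String[] args) throws Exception {
        HBaseConfiguration conf = new HBaseConfiguration();
        HTable table = new HTable(conf, "TestTable"); // hypothetical table name

        // Run A: write buffer disabled -- each Put is sent to the
        // region server immediately.
        table.setAutoFlush(true);

        // Run B: client-side write buffer of 1 MB -- Puts are batched
        // locally and flushed when the buffer fills.
        // table.setAutoFlush(false);
        // table.setWriteBufferSize(1024 * 1024);

        Put put = new Put(Bytes.toBytes("row-0"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        table.put(put);

        // Push out anything still sitting in the write buffer
        // (a no-op when auto-flush is on).
        table.flushCommits();
    }
}
```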

> Missing rows after medium intensity insert
> ------------------------------------------
>                 Key: HBASE-1784
>                 URL: https://issues.apache.org/jira/browse/HBASE-1784
>             Project: Hadoop HBase
>          Issue Type: Bug
>    Affects Versions: 0.20.0
>            Reporter: Jean-Daniel Cryans
>            Priority: Blocker
>         Attachments: DataLoad.java, HBASE-1784.log
> This bug was uncovered by Mathias in his mail "Issue on data load with 0.20.0-rc2". Basically,
> somehow, after a medium-intensity insert a lot of rows go missing. An easy way to reproduce
> it is PE. Doing a PE scan or randomRead afterwards won't uncover anything, since it doesn't
> check for null rows. Simply do a count in the shell; it's easy to test (I changed my scanner
> caching in the shell to do it faster).
> I tested some light insertions with force flush/compact/split in the shell and it doesn't

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
