hbase-issues mailing list archives

From "Todd Lipcon (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-5074) support checksums in HBase block cache
Date Tue, 20 Dec 2011 08:05:30 GMT

    [ https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13173012#comment-13173012 ]

Todd Lipcon commented on HBASE-5074:

bq. One minor disadvantage of this approach is that checksums would be computed twice, once
by the HBase regionserver and once by the HDFS client. How bad is this CPU overhead?

You mean on write? The native CRC32C implementation in HDFS trunk right now can do somewhere
around 6 GB/sec - I clocked it at about 16% overhead compared to the non-checksummed path a
while ago. So I think the overhead is fairly minimal.

bq. I am proposing that HBase disk format V3 have a 4-byte checksum for every HBase block

A 4-byte checksum for 64KB+ of data seems pretty low. IMO we should continue to do "chunked
checksums" - maybe a CRC32 for every 1KB in the block. This allows people to use larger block
sizes without compromising checksum effectiveness. The reason to choose chunked CRC32 over
a wider hash is that CRC32 has a very efficient hardware implementation in SSE4.2. Plus, we
can share all the JNI code already developed for Hadoop to calculate and verify this style
of checksum :)
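The chunked scheme above can be sketched in plain Java as follows. This is a minimal illustration using java.util.zip.CRC32, not the actual HBase or Hadoop implementation (which goes through JNI/native code for the SSE4.2 path); the class and method names are hypothetical:

```java
import java.util.zip.CRC32;

// Hypothetical sketch of chunked checksums: one 4-byte CRC32 per 1KB
// chunk of a block, so a corrupt chunk can be detected (and re-read)
// without widening the checksum as block sizes grow.
public class ChunkedChecksum {
    static final int CHUNK_SIZE = 1024; // 1KB per chunk, per the proposal

    // Returns one CRC32 value per 1KB chunk of the block.
    public static int[] chunkedCrc32(byte[] block) {
        int numChunks = (block.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
        int[] sums = new int[numChunks];
        CRC32 crc = new CRC32();
        for (int i = 0; i < numChunks; i++) {
            int off = i * CHUNK_SIZE;
            int len = Math.min(CHUNK_SIZE, block.length - off);
            crc.reset();
            crc.update(block, off, len);
            sums[i] = (int) crc.getValue();
        }
        return sums;
    }
}
```

Note the storage cost: a 64KB block carries 64 chunk checksums, i.e. 256 bytes of CRCs (~0.4% overhead), versus a single 4-byte checksum for the whole block.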
> support checksums in HBase block cache
> --------------------------------------
>                 Key: HBASE-5074
>                 URL: https://issues.apache.org/jira/browse/HBASE-5074
>             Project: HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
> The current implementation of HDFS stores the data in one block file and the metadata (checksum)
> in another block file. This means that every read into the HBase block cache actually consumes
> two disk iops, one to the data file and one to the checksum file. This is a major problem for
> scaling HBase, because HBase is usually bottlenecked on the number of random disk iops that
> the storage hardware offers.

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

