hbase-issues mailing list archives

From "Phabricator (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-5074) support checksums in HBase block cache
Date Tue, 07 Feb 2012 05:49:02 GMT

    [ https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13202097#comment-13202097 ]

Phabricator commented on HBASE-5074:
------------------------------------

dhruba has commented on the revision "[jira] [HBASE-5074] Support checksums in HBase block
cache".

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java:160 My choice would be to make
Java's CRC32 the default. PureJavaCrc32 is compatible with Java's CRC32; however, PureJavaCrc32C
is not compatible with either of these.

  Although PureJavaCrc32 is not part of Hadoop 1.0, if and when you move to Hadoop 2.0 you
will automatically get the better-performing algorithm via PureJavaCrc32.

  For the adventurous, one can manually pull PureJavaCrc32C into one's own HBase deployment
by explicitly setting hbase.hstore.checksum.algorithm to "CRC32C" (see the sketch below these
comments).

  Does that sound reasonable?
  src/main/java/org/apache/hadoop/hbase/regionserver/Store.java:257 Sounds good, will make
this change.
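
To make the compatibility point above concrete, here is a minimal sketch (not part of the
D1521 patch) showing that CRC32 and CRC32C produce different checksums for the same bytes,
which is why data written with one algorithm cannot be verified with the other. It uses
java.util.zip.CRC32 and java.util.zip.CRC32C (the latter only exists in JDK 9 and later);
Hadoop's PureJavaCrc32 and PureJavaCrc32C implement the same two algorithms.

    import java.nio.charset.StandardCharsets;
    import java.util.zip.CRC32;
    import java.util.zip.CRC32C; // JDK 9+; Hadoop's PureJavaCrc32C is the same algorithm

    public class ChecksumCompatDemo {
        public static void main(String[] args) {
            byte[] block = "hbase block payload".getBytes(StandardCharsets.UTF_8);

            CRC32 crc32 = new CRC32();    // same polynomial as PureJavaCrc32
            crc32.update(block);

            CRC32C crc32c = new CRC32C(); // Castagnoli polynomial, not interchangeable
            crc32c.update(block);

            // The two values differ, so a reader configured for one algorithm
            // cannot validate checksums written with the other.
            System.out.printf("CRC32  = 0x%08x%n", crc32.getValue());
            System.out.printf("CRC32C = 0x%08x%n", crc32c.getValue());
        }
    }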

REVISION DETAIL
  https://reviews.facebook.net/D1521

                
> support checksums in HBase block cache
> --------------------------------------
>
>                 Key: HBASE-5074
>                 URL: https://issues.apache.org/jira/browse/HBASE-5074
>             Project: HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: D1521.1.patch, D1521.1.patch, D1521.2.patch, D1521.2.patch, D1521.3.patch,
> D1521.3.patch
>
>
> The current implementation of HDFS stores the data in one block file and the metadata (checksum)
> in another block file. This means that every read into the HBase block cache actually consumes
> two disk IOPS: one to the data file and one to the checksum file. This is a major problem for
> scaling HBase, because HBase is usually bottlenecked on the number of random disk IOPS that
> the storage hardware offers.
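
As a rough illustration of the fix this issue proposes (storing the checksum inline with the
data so that a single read fetches both), here is a minimal, hypothetical sketch; the on-disk
layout and method names below are illustrative only, and the real HFileBlock format in D1521
is more involved.

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.CRC32;

    // Hypothetical block layout: [4-byte data length][data bytes][4-byte CRC32 of data].
    // Because the checksum travels with the data, verifying a block needs only the one
    // buffer that a single disk read returned.
    public class InlineChecksumSketch {
        static byte[] writeBlock(byte[] data) {
            CRC32 crc = new CRC32();
            crc.update(data);
            ByteBuffer buf = ByteBuffer.allocate(4 + data.length + 4);
            buf.putInt(data.length);
            buf.put(data);
            buf.putInt((int) crc.getValue()); // store the low 32 bits of the checksum
            return buf.array();
        }

        static byte[] readBlock(byte[] onDisk) {
            ByteBuffer buf = ByteBuffer.wrap(onDisk);
            byte[] data = new byte[buf.getInt()];
            buf.get(data);
            int stored = buf.getInt();
            CRC32 crc = new CRC32();
            crc.update(data);
            if ((int) crc.getValue() != stored) {
                throw new IllegalStateException("checksum mismatch");
            }
            return data;
        }

        public static void main(String[] args) {
            byte[] cell = "some cell data".getBytes(StandardCharsets.UTF_8);
            byte[] roundTripped = readBlock(writeBlock(cell));
            System.out.println(new String(roundTripped, StandardCharsets.UTF_8));
        }
    }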

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
