hbase-issues mailing list archives

From "Phabricator (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-5074) support checksums in HBase block cache
Date Mon, 06 Feb 2012 17:08:01 GMT

    [ https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13201400#comment-13201400 ]

Phabricator commented on HBASE-5074:

tedyu has commented on the revision "[jira] [HBASE-5074] Support checksums in HBase block cache".

  src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java:425 This cast is not safe. See:

  Caused by: java.lang.ClassCastException: org.apache.hadoop.hdfs.DistributedFileSystem cannot
be cast to org.apache.hadoop.hbase.util.HFileSystem
  	at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:425)
  	at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:433)
  	at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplit(LoadIncrementalHFiles.java:407)
  	at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:328)
  	at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:326)
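
  The cast can be guarded rather than assumed. A minimal sketch of the idea, using
  stand-in classes (the real Hadoop/HBase types are not reproduced here): wrap the
  filesystem only when it is not already the expected subtype, instead of downcasting
  blindly and risking the ClassCastException above.

```java
// Stand-ins for org.apache.hadoop.fs.FileSystem and the HBase wrapper type.
class FileSystem {}

class HFileSystem extends FileSystem {
    final FileSystem backing;
    HFileSystem(FileSystem backing) { this.backing = backing; }
}

public class SafeCast {
    // Guard the downcast: reuse the instance if it already is an HFileSystem,
    // otherwise wrap it, so a raw DistributedFileSystem never triggers a
    // ClassCastException.
    static HFileSystem asHFileSystem(FileSystem fs) {
        return (fs instanceof HFileSystem) ? (HFileSystem) fs : new HFileSystem(fs);
    }

    public static void main(String[] args) {
        FileSystem raw = new FileSystem();          // e.g. a DistributedFileSystem
        HFileSystem wrapped = asHFileSystem(raw);
        System.out.println(wrapped.backing == raw); // true: wrapped, not cast
    }
}
```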
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java:160 Should we default to CRC32C?
  src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java:2 No year is needed.
  src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java:59 Shall we name this variable ctor?

  Similar comment applies to other meth variables in this patch.
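
  On the CRC32C question: the patch's ChecksumFactory loads implementations reflectively
  (CRC32C was not in java.util.zip at the time). On a modern JDK (9+) the idea can be
  sketched directly against the common Checksum interface; the factory name and the
  string keys below are illustrative, not the patch's actual API.

```java
import java.util.zip.CRC32;
import java.util.zip.CRC32C;
import java.util.zip.Checksum;

public class ChecksumChoice {
    // Hypothetical factory sketch: select the checksum algorithm by name behind
    // the java.util.zip.Checksum interface. CRC32C is typically cheaper where
    // the CPU provides a hardware instruction for it (e.g. SSE4.2).
    static Checksum newChecksum(String type) {
        switch (type) {
            case "CRC32C": return new CRC32C();
            case "CRC32":  return new CRC32();
            default: throw new IllegalArgumentException("unknown checksum: " + type);
        }
    }

    public static void main(String[] args) {
        byte[] data = "block".getBytes();
        Checksum c = newChecksum("CRC32C");
        c.update(data, 0, data.length);
        System.out.println(Long.toHexString(c.getValue()));
    }
}
```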


> support checksums in HBase block cache
> --------------------------------------
>                 Key: HBASE-5074
>                 URL: https://issues.apache.org/jira/browse/HBASE-5074
>             Project: HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: D1521.1.patch, D1521.1.patch, D1521.2.patch, D1521.2.patch, D1521.3.patch,
> The current implementation of HDFS stores the data in one block file and the metadata
> (checksum) in another block file. This means that every read into the HBase block cache
> actually consumes two disk iops, one to the data file and one to the checksum file. This
> is a major problem for scaling HBase, because HBase is usually bottlenecked on the number
> of random disk iops that the storage hardware offers.
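
The remedy this issue proposes is to store the checksum inline with the data, so a single read fetches both. A self-contained sketch of that idea using java.util.zip.CRC32 (this is an illustration of the principle, not the actual HFile block format): the checksum is appended to the block on write and verified from the same buffer on read.

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class InlineChecksum {
    // Append a CRC32 of the data to the block itself, so one disk read
    // fetches both the data and its checksum (one iop instead of two).
    static byte[] writeBlock(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        ByteBuffer buf = ByteBuffer.allocate(data.length + 8);
        buf.put(data);
        buf.putLong(crc.getValue());
        return buf.array();
    }

    // Verify the trailing checksum using only the already-read block.
    static boolean verifyBlock(byte[] block) {
        int dataLen = block.length - 8;
        CRC32 crc = new CRC32();
        crc.update(block, 0, dataLen);
        return ByteBuffer.wrap(block, dataLen, 8).getLong() == crc.getValue();
    }

    public static void main(String[] args) {
        byte[] block = writeBlock("hbase block data".getBytes());
        System.out.println(verifyBlock(block)); // true
        block[0] ^= 1;                          // corrupt one bit
        System.out.println(verifyBlock(block)); // false
    }
}
```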

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

