hbase-issues mailing list archives

From "Phabricator (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-5074) support checksums in HBase block cache
Date Fri, 10 Feb 2012 09:48:02 GMT

    [ https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13205317#comment-13205317 ]

Phabricator commented on HBASE-5074:

dhruba has commented on the revision "[jira] [HBASE-5074] Support checksums in HBase block cache".

  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1545 This is the initialization
code in the constructor that assumes we always verify hbase checksums. On the next line,
it is set to false if the minor version is an old one. Similarly, if there is an HFileSystem
and the caller has voluntarily cleared hfs.useHBaseChecksum, then we respect the caller's setting.
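A minimal sketch of that initialization order, with illustrative names (the actual constructor
code in the patch may differ):

    // Sketch only: field and constant names here are assumptions.
    boolean useHBaseChecksum = true;                   // assume we verify hbase checksums
    if (minorVersion < MINOR_VERSION_WITH_CHECKSUM) {
      useHBaseChecksum = false;                        // old-format blocks carry no checksums
    }
    if (hfs != null && !hfs.useHBaseChecksum()) {
      useHBaseChecksum = false;                        // respect the caller's choice
    }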
  src/main/java/org/apache/hadoop/hbase/util/HFileSystem.java:1 I do not know of any performance
penalty. For hbase code, this wrapper is traversed only once, when an HFile is opened or an
HLog is created. Since the number of times we open/create a file is minuscule compared to
the number of reads/writes to those files, the overhead (if any) should not show up in any
benchmark. I will validate this on my cluster and report if I see any.
  src/main/java/org/apache/hadoop/hbase/util/HFileSystem.java:1 I do not yet see a package
o.apache.hadoop.hbase.fs. Do you want me to create it? There is a pre-existing class o.a.h.h.utils.FSUtils;
that's why I created HFileSystem inside that package.
  src/main/java/org/apache/hadoop/hbase/util/HFileSystem.java:40 We would create a method
HFileSystem.getLogFs(). The implementation of this method can open a new filesystem object
(for storing transaction logs). Then, HRegionServer will pass HFileSystem.getLogFs() into
the constructor of HLog().
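A rough sketch of that proposal; only the getLogFs() name comes from the comment above, the
body is an assumption:

    // Hypothetical sketch: HFileSystem wraps the regular filesystem, and
    // getLogFs() hands out a separate FileSystem instance for HLog traffic.
    public FileSystem getLogFs() throws IOException {
      if (this.logFs == null) {
        // open a second filesystem object dedicated to transaction logs;
        // 'fs' is the wrapped instance inherited from FilterFileSystem
        this.logFs = FileSystem.newInstance(fs.getUri(), fs.getConf());
      }
      return this.logFs;
    }

    // In HRegionServer (sketch): pass it into the HLog constructor.
    // HLog log = new HLog(hfs.getLogFs(), logDir, oldLogDir, conf);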
  src/main/java/org/apache/hadoop/hbase/util/HFileSystem.java:49 Currently, the only place
HFileSystem is created is inside HRegionServer.
  src/main/java/org/apache/hadoop/hbase/util/HFileSystem.java:107 You will see that readfs
is the filesystem object used to avoid checksum verification inside HDFS.
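For context, a sketch of how such a filesystem object can be obtained; HDFS checksumming is
done client-side, so this is a per-instance setting (readfs is the name from the comment, the
rest is an assumption):

    // Sketch: a FileSystem instance whose reads skip HDFS checksum
    // verification; a separate instance keeps normal reads unaffected.
    FileSystem readfs = FileSystem.newInstance(uri, conf);
    readfs.setVerifyChecksum(false);  // reads no longer consult the checksum file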
  src/main/java/org/apache/hadoop/hbase/util/HFileSystem.java:172 The Hadoop code base recently
introduced the method FileSystem.createNonRecursive, but whoever added it to FileSystem forgot
to add it to FilterFileSystem. Apache Hadoop trunk should roll out a patch for this one soon.
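Until that lands, the wrapper can forward the call itself. A hedged sketch of such an override
(the signature matches FileSystem.createNonRecursive; adding it to HFileSystem is an assumption):

    // Sketch: delegate createNonRecursive to the wrapped filesystem,
    // i.e. the protected 'fs' field inherited from FilterFileSystem.
    @Override
    public FSDataOutputStream createNonRecursive(Path f, FsPermission permission,
        boolean overwrite, int bufferSize, short replication, long blockSize,
        Progressable progress) throws IOException {
      return fs.createNonRecursive(f, permission, overwrite, bufferSize,
          replication, blockSize, progress);
    }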


> support checksums in HBase block cache
> --------------------------------------
>                 Key: HBASE-5074
>                 URL: https://issues.apache.org/jira/browse/HBASE-5074
>             Project: HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: D1521.1.patch, D1521.1.patch, D1521.2.patch, D1521.2.patch, D1521.3.patch,
D1521.3.patch, D1521.4.patch, D1521.4.patch, D1521.5.patch, D1521.5.patch
> The current implementation of HDFS stores the data in one block file and the metadata (checksum)
in another block file. This means that every read into the HBase block cache actually consumes
two disk iops: one to the data file and one to the checksum file. This is a major problem for
scaling HBase, because HBase is usually bottlenecked on the number of random disk iops that
the storage hardware offers.
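To make the motivation concrete, a sketch of the inline-checksum idea: store the checksum with
the data inside the HBase block and verify it in the regionserver, so HDFS-level checksum
verification (and its extra iop) can be switched off for these reads. All names below are
illustrative, not the actual block format from the patch:

    // Illustrative only: assumes the 8 bytes after the payload hold a
    // CRC32 of the preceding data. Not the real HBASE-5074 layout.
    static boolean verifyInlineChecksum(byte[] block, int dataLength) {
      java.util.zip.CRC32 crc = new java.util.zip.CRC32();
      crc.update(block, 0, dataLength);  // checksum the payload in memory
      long stored = java.nio.ByteBuffer.wrap(block, dataLength, 8).getLong();
      return crc.getValue() == stored;   // on mismatch, re-read with HDFS checksums on
    }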

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

