hbase-issues mailing list archives

From "Andrew Purtell (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HBASE-10052) use HDFS advisory caching to avoid caching HFiles that are not going to be read again (because they are being compacted)
Date Wed, 27 Nov 2013 21:50:35 GMT

    [ https://issues.apache.org/jira/browse/HBASE-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834194#comment-13834194 ]

Andrew Purtell edited comment on HBASE-10052 at 11/27/13 9:49 PM:
------------------------------------------------------------------

Looks like we could do this easily with a bit of reflection.

Edit: Hit enter or something, oops. Not sure what you mean about compacted files not being
read again, though. We will open readers on the new file (eventually); in fact we might want
to preload its blocks (HBASE-9857, just as an example). Maybe you meant dropping the data of
the old, discarded HFiles from the blockcache, but the API you point out is for DataOutputStream?
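
A rough sketch of the reflection idea, for the record. Everything below is illustrative rather than actual HBase code; the only real API assumed is {{FSDataOutputStream#setDropBehind(Boolean)}} from HDFS-4817 (Hadoop 2.1+), looked up reflectively so we keep compiling and running against older Hadoop versions:

{code:java}
import java.lang.reflect.Method;

import org.apache.hadoop.fs.FSDataOutputStream;

// Hypothetical helper, not an existing HBase class.
public final class DropBehindUtil {
  private DropBehindUtil() {}

  /**
   * Best-effort call to FSDataOutputStream#setDropBehind(Boolean).
   * Does nothing if the running Hadoop version predates HDFS-4817.
   */
  public static void trySetDropBehind(FSDataOutputStream out, boolean dropBehind) {
    try {
      Method m = out.getClass().getMethod("setDropBehind", Boolean.class);
      m.invoke(out, Boolean.valueOf(dropBehind));
    } catch (NoSuchMethodException e) {
      // Older Hadoop: no advisory caching API, silently skip.
    } catch (Exception e) {
      // The hint is advisory only, so any reflection or HDFS hiccup is non-fatal.
    }
  }
}
{code}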


was (Author: apurtell):
Looks like we could do this easily with a bit of reflection.

> use HDFS advisory caching to avoid caching HFiles that are not going to be read again (because they are being compacted)
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-10052
>                 URL: https://issues.apache.org/jira/browse/HBASE-10052
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Colin Patrick McCabe
>
> HBase can benefit from doing dropbehind during compaction since compacted files are not read again. HDFS advisory caching, introduced in HDFS-4817, can help here. The right API here is {{DataOutputStream#setDropBehind}}.
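
For concreteness, direct (non-reflective) use of the HDFS-4817 API would look roughly like the sketch below; it assumes Hadoop 2.1+, where both {{FSDataOutputStream}} and {{FSDataInputStream}} expose {{setDropBehind(Boolean)}}. The path and stream handling are made up for illustration, not lifted from HBase compaction code.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AdvisoryCachingExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/tmp/advisory-caching-example");

    // Write side: advise HDFS/OS not to keep the written blocks cached behind us.
    FSDataOutputStream out = fs.create(path);
    out.setDropBehind(true);
    out.write(new byte[64 * 1024]);
    out.close();

    // Read side: the same advisory exists for input streams, e.g. when
    // streaming through files that will not be read again.
    FSDataInputStream in = fs.open(path);
    in.setDropBehind(true);
    byte[] buf = new byte[4096];
    while (in.read(buf) != -1) {
      // drain
    }
    in.close();
  }
}
{code}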



--
This message was sent by Atlassian JIRA
(v6.1#6144)
