hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-3649) Separate compression setting for flush files
Date Wed, 16 Mar 2011 17:03:29 GMT

    [ https://issues.apache.org/jira/browse/HBASE-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13007563#comment-13007563 ]

stack commented on HBASE-3649:

I thought the problem was that compression slowed the flush. If the problem is rather the count
of files, then yes, compression doesn't factor in.

bq. I think the better solution would be "merging flushes"?

It's about time we did this (it's only 5 years since it was described in the BT paper).  I made
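The "merging flushes" idea above (a Bigtable-style merging compaction, where several pending memstore snapshots are merged into one flush file instead of each producing its own small file for the compactor to chew through) can be sketched with plain sorted maps. This is an illustration under assumed names, not HBase's actual flush path:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Illustrative sketch only: merge several sorted memstore snapshots into a
// single sorted "flush file" rather than writing one small file per snapshot.
// All names here are hypothetical, not HBase's real classes.
public class MergingFlush {

    // Merge snapshots in arrival order; later snapshots win on key collisions,
    // mimicking newer edits shadowing older ones.
    static TreeMap<String, String> mergeSnapshots(List<TreeMap<String, String>> snapshots) {
        TreeMap<String, String> merged = new TreeMap<>();
        for (TreeMap<String, String> snap : snapshots) {
            merged.putAll(snap); // later snapshots overwrite earlier values
        }
        return merged;
    }

    public static void main(String[] args) {
        TreeMap<String, String> snap1 = new TreeMap<>();
        snap1.put("row1", "v1");
        snap1.put("row2", "v1");

        TreeMap<String, String> snap2 = new TreeMap<>();
        snap2.put("row2", "v2"); // newer edit for row2
        snap2.put("row3", "v1");

        List<TreeMap<String, String>> pending = new ArrayList<>();
        pending.add(snap1);
        pending.add(snap2);

        // One merged output instead of two small files; row2 keeps the newer value.
        TreeMap<String, String> flushFile = mergeSnapshots(pending);
        System.out.println(flushFile); // prints {row1=v1, row2=v2, row3=v1}
    }
}
```

The payoff is fewer, larger store files per flush cycle, so the compactor rewrites the same data fewer times.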

> Separate compression setting for flush files
> --------------------------------------------
>                 Key: HBASE-3649
>                 URL: https://issues.apache.org/jira/browse/HBASE-3649
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Andrew Purtell
>            Assignee: Andrew Purtell
>             Fix For: 0.90.2, 0.92.0
> In this thread on user@hbase: http://search-hadoop.com/m/WUnLM6ojHm1 J-D conjectures that compressing flush files leads to a suboptimal situation where "the puts are sometimes blocked on the memstores which are blocked by the flusher thread which is blocked because there's too many files to compact because the compactor is given too many small files to compact and has to compact the same data a bunch of times."
> We have a separate compression setting already for major compaction vs store files written during minor compaction, for background/archival apps. Add a separate compression setting for flush files, default to none, to avoid the above condition.
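The split the issue proposes (no compression for flush files, the existing settings for compaction output) amounts to choosing the codec by write context rather than by column family alone. A minimal sketch of that selection logic, with hypothetical names throughout (HBase's real configuration keys and classes are not reproduced here):

```java
import java.util.EnumMap;
import java.util.Map;

// Sketch of per-context compression selection. The class, enum, and codec
// names are hypothetical illustrations, not HBase's actual API.
public class CompressionPolicy {
    enum WriteContext { FLUSH, MINOR_COMPACTION, MAJOR_COMPACTION }
    enum Codec { NONE, LZO, GZ }

    private final Map<WriteContext, Codec> byContext = new EnumMap<>(WriteContext.class);

    CompressionPolicy(Codec familyCodec, Codec majorCodec) {
        // Flush files default to NONE so the flusher never stalls on the codec;
        // the data gets rewritten (and then compressed) by the next compaction anyway.
        byContext.put(WriteContext.FLUSH, Codec.NONE);
        byContext.put(WriteContext.MINOR_COMPACTION, familyCodec);
        byContext.put(WriteContext.MAJOR_COMPACTION, majorCodec);
    }

    Codec codecFor(WriteContext ctx) {
        return byContext.get(ctx);
    }

    public static void main(String[] args) {
        CompressionPolicy p = new CompressionPolicy(Codec.LZO, Codec.GZ);
        System.out.println(p.codecFor(WriteContext.FLUSH));            // prints NONE
        System.out.println(p.codecFor(WriteContext.MINOR_COMPACTION)); // prints LZO
        System.out.println(p.codecFor(WriteContext.MAJOR_COMPACTION)); // prints GZ
    }
}
```

The design choice mirrors the precedent the issue cites: HBase already lets major-compaction output use a different codec than minor-compaction output, so adding a third context for flushes is a small extension of an existing pattern.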

This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
