hbase-issues mailing list archives

From "Sean Busbey (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-12324) Improve compaction speed and process for immutable short lived datasets
Date Fri, 24 Oct 2014 19:26:34 GMT

    [ https://issues.apache.org/jira/browse/HBASE-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183342#comment-14183342 ]

Sean Busbey commented on HBASE-12324:

{quote}
The only issue I see with TS is if old data comes late. But in those cases, the data will get
deleted later, which seems the same as running major compaction late.
{quote}

It's actually worse than that, because the clock could adjust and we could have a file timestamp
that is older than the cell timestamps within it. That would result in deleting data that
isn't yet expired. (Presuming the file timestamp is set based on when the server calls close().)

{quote}
Do you mean to say that every file will have the latest timestamp of any cell in it, and we could
use that TS to identify files to delete instead of looking at the file timestamp? That sounds
{quote}
Yes, exactly. We use protobufs and have a bunch of padded space in the fixed trailer, so
we can make optimizations without having to increment the file version. We already track some
other cell stats as we make a file, so adding the info about the timestamps inside
the file should be straightforward.
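The idea above can be sketched in a few lines: record the newest cell timestamp while a store file is written, and persist it into the file's trailer metadata on close(), so expiry decisions can use the data itself rather than the file's creation time, which a clock adjustment could skew. This is an illustrative, self-contained sketch, not HBase's actual writer API; the class and the `MAX_CELL_TS` key are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch (not HBase's real writer): track the newest cell
 * timestamp as cells are appended, then stash it in trailer metadata on
 * close(). Readers can later compare this value against the TTL instead
 * of trusting the file's wall-clock creation time.
 */
public class TimestampTrackingWriter {
    private long maxCellTimestamp = Long.MIN_VALUE;
    private final Map<String, Long> trailerMetadata = new HashMap<>();

    /** Called once per cell as it is appended to the file. */
    public void append(long cellTimestamp) {
        if (cellTimestamp > maxCellTimestamp) {
            maxCellTimestamp = cellTimestamp;
        }
    }

    /** On close(), persist the max timestamp into the trailer's padded space. */
    public void close() {
        trailerMetadata.put("MAX_CELL_TS", maxCellTimestamp);
    }

    /** The newest cell timestamp recorded in the trailer. */
    public long getMaxCellTimestamp() {
        return trailerMetadata.get("MAX_CELL_TS");
    }
}
```

With this in place, even a file whose filesystem timestamp is older than its contents (because the clock stepped backward) would still report the true age of its newest cell.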

> Improve compaction speed and process for immutable short lived datasets
> -----------------------------------------------------------------------
>                 Key: HBASE-12324
>                 URL: https://issues.apache.org/jira/browse/HBASE-12324
>             Project: HBase
>          Issue Type: New Feature
>          Components: Compaction
>    Affects Versions: 0.98.0, 0.96.0
>            Reporter: Sheetal Dolas
>         Attachments: OnlyDeleteExpiredFilesCompactionPolicy.java
> We have seen multiple cases where HBase is used to store immutable data and the data
> lives for a short period of time (a few days).
> On very high volume systems, major compactions become very costly and slow down ingestion.
> In all such use cases (immutable data, a high write rate, moderate read rates, and a short
> TTL), avoiding compactions entirely and just deleting old data brings large performance benefits.
> We should have a compaction policy that can only delete/archive files older than the TTL
> and not compact any files.
> Also attaching a patch that can do so.
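The policy the issue describes reduces to a selection rule: pick only those store files whose newest cell has outlived the TTL, archive them, and never merge-rewrite anything else. The sketch below illustrates that rule with invented names; it is not the attached OnlyDeleteExpiredFilesCompactionPolicy.java patch or HBase's compaction-policy interface.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of a TTL-only compaction policy (names are illustrative, not the
 * attached patch): a file is eligible for deletion only when every cell in
 * it has expired, i.e. its newest cell timestamp is older than the TTL.
 * No file is ever selected for a merge rewrite.
 */
public class ExpiredFilesOnlyPolicy {
    /** A store file summarized by the newest cell timestamp it contains. */
    public static class StoreFile {
        final String name;
        final long maxCellTimestampMs;

        public StoreFile(String name, long maxCellTimestampMs) {
            this.name = name;
            this.maxCellTimestampMs = maxCellTimestampMs;
        }
    }

    /** Return the files safe to archive: all cells inside have passed the TTL. */
    public static List<StoreFile> selectExpired(List<StoreFile> files,
                                                long ttlMs, long nowMs) {
        List<StoreFile> expired = new ArrayList<>();
        for (StoreFile f : files) {
            if (nowMs - f.maxCellTimestampMs > ttlMs) {
                expired.add(f);
            }
        }
        return expired;
    }
}
```

Because files are whole-file deleted rather than rewritten, the policy does no read-merge-write I/O at all, which is where the ingestion-speed benefit for short-lived immutable data comes from.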

This message was sent by Atlassian JIRA
