accumulo-notifications mailing list archives

From: Adam Fuchs <afu...@apache.org>
Subject: Re: [jira] [Commented] (ACCUMULO-2232) Combiners can cause deleted data to come back
Date: Thu, 23 Jan 2014 01:27:47 GMT
Ugh, jira is down, but let me get my thoughts out while I'm having them:

I agree with Josh (and everyone else) on both counts: the performance
implications will be huge, and this is enough rope for people to hang
themselves with. However, I think a lot of people use combiners on tables
that are append-only and never delete (at least not a record at a time).
The *warning unsafe doom will ensue* bypass is pretty important to support
those uses, but I also think it is best to default to accuracy while we
implement a better fix.

It seems like the right way to fix this in the long run is to keep track of
timestamp ranges of files and calculate two properties on the set of files
being compacted:
1. Is the time range contiguous, or are there other files not being
compacted that overlap the range?
2. Are there any files with an older timestamp?
This way we can run combiners on any compactions that satisfy property #1,
and preserve the most recent deletes in any compaction that satisfies
property #2. This generally makes minor compactions safe for running
combiners (assuming Accumulo sets the timestamps and there is no bulk
loading), although the most recent delete needs to be preserved. If I were
to speculate about general major compactions, I would say that when splits
are rare, most other compactions also satisfy property #1.
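
A rough sketch of how those two checks might look (FileInfo and its
timestamp fields are hypothetical placeholders for illustration, not an
existing Accumulo API):

    import java.util.List;

    class CompactionSafety {
      // Hypothetical stand-in for per-file metadata; Accumulo does not
      // currently track a timestamp range per file.
      static final class FileInfo {
        final long minTs, maxTs; // timestamp range covered by the file
        FileInfo(long minTs, long maxTs) { this.minTs = minTs; this.maxTs = maxTs; }
      }

      // Property #1: no file outside the compaction overlaps the combined
      // timestamp range of the files being compacted.
      static boolean timeRangeContiguous(List<FileInfo> compacting, List<FileInfo> others) {
        long min = compacting.stream().mapToLong(f -> f.minTs).min().orElse(Long.MAX_VALUE);
        long max = compacting.stream().mapToLong(f -> f.maxTs).max().orElse(Long.MIN_VALUE);
        return others.stream().noneMatch(f -> f.maxTs >= min && f.minTs <= max);
      }

      // Property #2: some file outside the compaction holds older data, so
      // the most recent delete must survive in the compaction's output.
      static boolean olderDataExists(List<FileInfo> compacting, List<FileInfo> others) {
        long min = compacting.stream().mapToLong(f -> f.minTs).min().orElse(Long.MAX_VALUE);
        return others.stream().anyMatch(f -> f.minTs < min);
      }
    }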

I think we could expose these properties in the iterator environment. We
could even come up with a compaction strategy that biases compactions
towards contiguous time ranges if we were ambitious.
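
Speculatively, the exposed surface might look something like this (purely
hypothetical; Accumulo's IteratorEnvironment has no such methods today):

    // Hypothetical extension an iterator could check before combining.
    interface TimeRangeAwareEnvironment {
      // Property #1: safe to run combiners in this compaction.
      boolean isTimeRangeContiguous();
      // Property #2: older data exists elsewhere, so keep the most
      // recent delete in the compaction output.
      boolean olderDataExists();
    }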

Adam



On Wed, Jan 22, 2014 at 3:45 PM, Josh Elser (JIRA) <jira@apache.org> wrote:

>
>     [
> https://issues.apache.org/jira/browse/ACCUMULO-2232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13879162#comment-13879162]
>
> Josh Elser commented on ACCUMULO-2232:
> --------------------------------------
>
> I'm a little worried about the implications (sorry for using that phrase)
> that running combiners only on full MajCs would have on performance since,
> for heavy combination, you're going to be persisting and later re-reading
> many records instead of just once, potentially for a very long time (if
> you assume that full MajCs are few and far between).
>
> I can't come up with another easy way to fix it, though, for the
> SummingCombiner example, so accuracy is still better than being slow.
> Anything else I can think of would involve persisting deletes across
> non-full compactions, which would require quite a bit more work to get
> correct, I imagine.
>
> > Combiners can cause deleted data to come back
> > ---------------------------------------------
> >
> >                 Key: ACCUMULO-2232
> >                 URL: https://issues.apache.org/jira/browse/ACCUMULO-2232
> >             Project: Accumulo
> >          Issue Type: Bug
> >          Components: client, tserver
> >            Reporter: John Vines
> >
> > The case-
> > 3 files with-
> > * 1 with a key, k, with timestamp 0, value 3
> > * 1 with a delete of k with timestamp 1
> > * 1 with k with timestamp 2, value 2
> > The column of k has a summing combiner set on it. The issue here is
> > that depending on how the major compactions play out, differing values
> > will result. If all 3 files compact, the correct value of 2 will
> > result. However, if 1 & 3 compact first, they will aggregate to 5, and
> > the delete will then sort after the combined value, causing the
> > incorrect value 5 to persist.
> > First and foremost, this should be documented. I think to remedy this,
> > combiners should only be used on full MajC, not non-full ones. This may
> > necessitate a special flag or a new combiner that implements the proper
> > semantics.
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.1.5#6160)
>
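
For anyone who wants to see John's failure mode concretely, here is a tiny
standalone simulation of the quoted three-file scenario (demo code only,
not Accumulo internals; the summing and delete logic is simplified):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class CombinerDeleteDemo {
      // One version of key k: a timestamp and either a delete or a value.
      record Entry(long ts, boolean delete, long value) {}

      // Sum values newest-first, stopping at the first (newest) delete,
      // which hides everything older than it.
      static Entry combine(List<Entry> entries) {
        List<Entry> sorted = new ArrayList<>(entries);
        sorted.sort(Comparator.comparingLong(Entry::ts).reversed());
        long sum = 0;
        for (Entry e : sorted) {
          if (e.delete()) break;
          sum += e.value();
        }
        return new Entry(sorted.get(0).ts(), false, sum);
      }

      public static void main(String[] args) {
        Entry f1 = new Entry(0, false, 3); // k @ ts 0, value 3
        Entry f2 = new Entry(1, true, 0);  // delete of k @ ts 1
        Entry f3 = new Entry(2, false, 2); // k @ ts 2, value 2

        // All three files compact together: the delete hides ts 0, result 2.
        System.out.println(combine(List.of(f1, f2, f3)).value()); // prints 2

        // Files 1 & 3 compact first: the combiner sums 3 + 2 = 5 at ts 2,
        // then the delete at ts 1 sorts below ts 2 and hides nothing, so
        // the incorrect value 5 persists.
        Entry merged = combine(List.of(f1, f3));
        System.out.println(combine(List.of(merged, f2)).value()); // prints 5
      }
    }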
