accumulo-dev mailing list archives

From "Adam Fuchs (JIRA)" <>
Subject [jira] [Commented] (ACCUMULO-652) support block-based filtering within RFile
Date Mon, 09 Jul 2012 20:21:34 GMT


Adam Fuchs commented on ACCUMULO-652:

We uncovered another tricky point today: if we use timestamp ranges to filter out blocks that
contain deletes, we might re-introduce entries that have been deleted. This would break the
established semantics that guarantee a logical, irreversible purge of entries (almost as bad
as crossing the streams). In the same family of problems, a TimestampRangeFilter iterator
would not commute with the VersioningIterator or any Aggregator, because it would lead
to incomplete/inconsistent results.
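To make the resurrection problem concrete, here is a small illustrative sketch (plain Python, not Accumulo code; the block layout and field names are invented for the example) showing how skipping a block that holds only a delete brings a deleted entry back:

```python
# Hypothetical sketch: two blocks covering the same key. Block A holds a
# delete at ts=10; block B holds an older put at ts=5. A timestamp-range
# filter for [0, 7] skips block A (its ts range is [10, 10]), so the delete
# is never seen and the put at ts=5 is wrongly resurrected.

def visible_entries(blocks, ts_min, ts_max):
    """Naive block-level timestamp filter: drop blocks whose ts range
    misses [ts_min, ts_max], then apply delete semantics to the rest."""
    kept = [b for b in blocks
            if not (b["max_ts"] < ts_min or b["min_ts"] > ts_max)]
    # Merge surviving entries, newest first; a delete suppresses older versions.
    entries = sorted((e for b in kept for e in b["entries"]),
                     key=lambda e: -e["ts"])
    result, deleted = [], set()
    for e in entries:
        if e["delete"]:
            deleted.add(e["key"])
        elif e["key"] not in deleted and ts_min <= e["ts"] <= ts_max:
            result.append(e)
    return result

block_a = {"min_ts": 10, "max_ts": 10,
           "entries": [{"key": "row1", "ts": 10, "delete": True}]}
block_b = {"min_ts": 5, "max_ts": 5,
           "entries": [{"key": "row1", "ts": 5, "delete": False}]}

# Without block filtering, the delete wins and row1 is gone:
assert visible_entries([block_a, block_b], 0, 15) == []
# With the filter range [0, 7], block A is skipped and row1 comes back:
assert len(visible_entries([block_a, block_b], 0, 7)) == 1
```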

In the delete case, we need to add an index block stat that keeps track of the greatest timestamp
of any delete entry. Then when we do the filtering we can include any blocks that might have
deletes that are greater than the minimum timestamp in the timestamp range. This is a "must
include" criterion: such blocks cannot be skipped safely.

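That inclusion rule can be sketched roughly as follows (illustrative Python; the stat names are assumptions, not RFile's actual index format):

```python
# Hypothetical per-block index stats:
#   min_ts / max_ts  - timestamp range of all keys in the block
#   max_delete_ts    - greatest timestamp of any delete entry, or None

def must_read_block(stats, ts_min, ts_max):
    """Include the block if its timestamps overlap the query range, or if it
    might hold a delete that masks entries inside the range (conservatively
    including the boundary case)."""
    overlaps = not (stats["max_ts"] < ts_min or stats["min_ts"] > ts_max)
    has_masking_delete = (stats["max_delete_ts"] is not None
                          and stats["max_delete_ts"] >= ts_min)
    return overlaps or has_masking_delete

# A block entirely above the range, but containing a delete at ts=10, must
# still be read so the delete can suppress entries in [0, 7]:
assert must_read_block({"min_ts": 10, "max_ts": 10, "max_delete_ts": 10}, 0, 7)
# A delete-free block above the range can be skipped safely:
assert not must_read_block({"min_ts": 10, "max_ts": 10, "max_delete_ts": None}, 0, 7)
```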
The second case is a bit trickier. In the general case, we need to pull back any blocks whose
key range includes any of the keys that match the given timestamp range. In the VersioningIterator
case, an alternative solution would be to extend the timestamp range to include anything greater
than the minimum timestamp, ignoring the max timestamp. For now, I think we need to punt on
the general case and simply state, in all caps in the javadocs, that the TimestampRangeFilter
is not compatible with versioning/aggregation iterators. Longer term, this should
be an exemplar when we rewrite the iterator framework.
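The VersioningIterator workaround described above amounts to widening the scanned range; a minimal sketch (illustrative Python, names invented):

```python
def effective_range(ts_min, ts_max, below_versioning_iterator):
    """Sketch of the workaround: when the filter runs beneath a
    VersioningIterator, ignore the requested max timestamp and scan
    [ts_min, +inf) so that version counting stays consistent."""
    if below_versioning_iterator:
        return (ts_min, float("inf"))
    return (ts_min, ts_max)

assert effective_range(5, 9, True) == (5, float("inf"))
assert effective_range(5, 9, False) == (5, 9)
```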
> support block-based filtering within RFile
> ------------------------------------------
>                 Key: ACCUMULO-652
>                 URL:
>             Project: Accumulo
>          Issue Type: Bug
>            Reporter: Adam Fuchs
>            Assignee: Adam Fuchs
> If we keep some stats about what is in an RFile block, we might be able to implement
filters that currently require linear table scans in O(log N) time, with high probability.
Two use cases of this include timestamp range filtering (i.e. give me everything from last
Tuesday) and cell-level security filtering (i.e. give me everything that I can see with my
authorizations).
> For the timestamp range filter, we can keep minimum and maximum timestamps across all
keys used in a block within the index entry for that block. For the cell-level security filter,
we can keep an aggregate label. This could be done using a simplified disjunction of all of
the labels in the block. The extra block statistics information can propagate up the index
hierarchy as well, giving nice performance characteristics for finding the next matching entry
in a file.
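The per-block stats described in the paragraph above can be sketched like this (illustrative Python; the field names and label representation are assumptions, and real visibility expressions are boolean formulas rather than flat label sets):

```python
# Hypothetical per-block index stats: min/max timestamps plus a simplified
# disjunction (union) of all visibility labels seen in the block.

def block_stats(entries):
    """entries: list of (timestamp, set_of_labels_in_visibility)."""
    return {
        "min_ts": min(ts for ts, _ in entries),
        "max_ts": max(ts for ts, _ in entries),
        # Union of all labels: a coarse upper bound on what any scan might see.
        "label_disjunction": set().union(*(labels for _, labels in entries)),
    }

def can_skip(stats, auths, ts_min, ts_max):
    """Skip the block if its timestamps miss the query range, or if none of
    the scanner's authorizations appear in the aggregate label."""
    outside = stats["max_ts"] < ts_min or stats["min_ts"] > ts_max
    invisible = not (stats["label_disjunction"] & auths)
    return outside or invisible

stats = block_stats([(3, {"secret"}), (8, {"public"})])
assert can_skip(stats, {"public"}, 10, 20)     # timestamp range misses block
assert can_skip(stats, {"topsecret"}, 0, 20)   # no label the scanner can see
assert not can_skip(stats, {"public"}, 0, 20)  # block must be read
```

Because these stats aggregate monotonically (min of mins, max of maxes, union of unions), they can propagate up the index hierarchy exactly as the description suggests.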
> In general, this is a heuristic technique that is good if data tends to naturally cluster
in blocks with respect to the way it is queried. Testing its efficacy will require closely
emulating real-world use cases -- tests like the continuous ingest test will not be sufficient.
We will have to test for a few things:
> # The cost of storing the extra stats in the index is not too high.
> # The performance benefit for common use cases is significant.
> # We shouldn't introduce any unacceptable worst-case behavior, like bloating the index
to ridiculous proportions for any data set.
> Eventually this will all need to be exposed through the Iterator API to be useful, which
will be another ticket. 

This message is automatically generated by JIRA.

