accumulo-notifications mailing list archives

From "Mike Drob (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-2827) HeapIterator optimization
Date Fri, 20 Jun 2014 17:11:25 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-2827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039039#comment-14039039 ]

Mike Drob commented on ACCUMULO-2827:
-------------------------------------

Very cool, Keith.

So the worst-case scenario is data with a single column per row, highly interleaved, right?
It shouldn't be hard to generate the absolute worst data set possible for this: create
one file with all odd-numbered rows from 1 to 10M (or whatever) and another file with all
even-numbered rows. This is probably too pedantic, and regardless of the results I don't
think it would be a reason to -1 the patch, since the data set is obviously artificial, but
at this point I'm curious about what happens.
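Something like the following untested sketch captures that layout, with plain Java lists
standing in for the two files (the class name and key format are just illustrative, not
from the attached patch):

    import java.util.ArrayList;
    import java.util.List;

    public class WorstCaseData {
        public static void main(String[] args) {
            int n = 10_000_000; // rows 1..10M, per the suggestion above (shrink if memory is tight)
            List<String> oddRows = new ArrayList<>();  // "file" 1: rows 1, 3, 5, ...
            List<String> evenRows = new ArrayList<>(); // "file" 2: rows 2, 4, 6, ...
            for (int row = 1; row <= n; row++) {
                // Zero-pad so lexicographic key order matches numeric row order.
                String key = String.format("row_%08d", row);
                if (row % 2 == 1) {
                    oddRows.add(key);
                } else {
                    evenRows.add(key);
                }
            }
            // Merging these two sources forces a switch between iterators on
            // every key, so the patch's lookahead assumption fails every time.
        }
    }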

> HeapIterator optimization
> -------------------------
>
>                 Key: ACCUMULO-2827
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-2827
>             Project: Accumulo
>          Issue Type: Improvement
>    Affects Versions: 1.5.1, 1.6.0
>            Reporter: Jonathan Park
>            Assignee: Jonathan Park
>            Priority: Minor
>             Fix For: 1.5.2, 1.6.1, 1.7.0
>
>         Attachments: ACCUMULO-2827-compaction-performance-test.patch, ACCUMULO-2827.0.patch.txt,
>                      accumulo-2827.raw_data, new_heapiter.png, old_heapiter.png, together.png
>
>
> We've been running a few performance tests of our iterator stack and noticed a decent
> amount of time spent in the HeapIterator, specifically in the add/removal operations on
> the heap.
> This may not be a general enough optimization, but we thought we'd see what people think.
> Our assumption is that it's more probable than not that the current "top iterator" will
> supply the next value in the iteration. The current implementation takes the opposite
> assumption by always removing the minimum iterator and then inserting it back into the
> heap. With the binary heap implementation we're using, this can get costly if our
> assumption is wrong, because we pay the log penalty of percolating the iterator up in
> the heap upon insertion and again of percolating it down upon removal.
> We believe our assumption is a fair one to hold: since major compactions create a log
> distribution of file sizes, it's likely that we will see long chains of consecutive
> entries coming from one iterator. Understandably, taking this assumption comes at an
> additional cost in the case that we're wrong. Therefore, we've run a few benchmarking
> tests to see how much of a cost we pay as well as what kind of benefit we see. I've
> attached a potential patch (which includes a test harness) plus images that capture the
> results of our tests. The x-axis represents the number of repeated keys before switching
> to another iterator; the y-axis represents iteration time. The sets of blue and red
> lines vary in the number of iterators present in the heap.
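
To make the quoted idea concrete for anyone following along, here is a rough sketch of the
lazy-reinsertion strategy it describes, written against java.util.PriorityQueue rather than
Accumulo's actual HeapIterator/SortedKeyValueIterator classes (all names below are
illustrative, not from the attached patch):

    import java.util.Iterator;
    import java.util.List;
    import java.util.PriorityQueue;

    // Merges several sorted sources, keeping the current minimum source out
    // of the heap and reinserting it only when its next value is no longer
    // guaranteed to be the overall minimum.
    class LazyHeapMerge<T extends Comparable<T>> {

        // Wraps a sorted iterator with a one-element lookahead.
        private static final class Source<E extends Comparable<E>>
                implements Comparable<Source<E>> {
            private final Iterator<E> it;
            private E head; // next value this source will yield, or null if exhausted

            Source(Iterator<E> it) {
                this.it = it;
                advance();
            }

            void advance() {
                head = it.hasNext() ? it.next() : null;
            }

            @Override
            public int compareTo(Source<E> o) {
                return head.compareTo(o.head);
            }
        }

        private final PriorityQueue<Source<T>> heap = new PriorityQueue<>();
        private Source<T> top; // current minimum source, held out of the heap

        LazyHeapMerge(List<Iterator<T>> sortedSources) {
            for (Iterator<T> it : sortedSources) {
                Source<T> s = new Source<>(it);
                if (s.head != null) {
                    heap.add(s);
                }
            }
            top = heap.poll();
        }

        // Returns the next value in merged sorted order, or null when done.
        T next() {
            if (top == null) {
                return null;
            }
            T result = top.head;
            top.advance();
            Source<T> contender = heap.peek();
            if (top.head == null) {
                // Current source is exhausted; fall back to the heap.
                top = heap.poll();
            } else if (contender != null && contender.head.compareTo(top.head) < 0) {
                // The assumption failed: another source now holds the minimum,
                // so pay the log-cost reinsert + poll.
                heap.add(top);
                top = heap.poll();
            }
            // Common case: top still holds the minimum, and we paid only one
            // compare instead of a remove + insert on the heap.
            return result;
        }
    }

On a long run of consecutive keys from one file, each next() costs a single compare against
heap.peek(); on the fully interleaved data set sketched earlier in the thread, it degrades
to the old remove + insert behavior plus one extra compare.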



