accumulo-notifications mailing list archives

From "Adam Fuchs (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-3067) scan performance degrades after compaction
Date Thu, 21 Aug 2014 19:15:12 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-3067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105814#comment-14105814 ]

Adam Fuchs commented on ACCUMULO-3067:
--------------------------------------

I saw that post, and I don't think it really covers all of the reasons for code being made "not entrant". https://gist.github.com/chrisvest/2932907 contains the following:

{code}
  // What condition caused the deoptimization?
  enum DeoptReason {
    Reason_many = -1,             // indicates presence of several reasons
    Reason_none = 0,              // indicates absence of a relevant deopt.
    Reason_null_check,            // saw unexpected null or zero divisor (@bci)
    Reason_null_assert,           // saw unexpected non-null or non-zero (@bci)
    Reason_range_check,           // saw unexpected array index (@bci)
    Reason_class_check,           // saw unexpected object class (@bci)
    Reason_array_check,           // saw unexpected array class (aastore @bci)
    Reason_intrinsic,             // saw unexpected operand to intrinsic (@bci)
    Reason_unloaded,              // unloaded class or constant pool entry
    Reason_uninitialized,         // bad class state (uninitialized)
    Reason_unreached,             // code is not reached, compiler
    Reason_unhandled,             // arbitrary compiler limitation
    Reason_constraint,            // arbitrary runtime constraint violated
    Reason_div0_check,            // a null_check due to division by zero
    Reason_age,                   // nmethod too old; tier threshold reached
    Reason_LIMIT,
    // Note:  Keep this enum in sync. with _trap_reason_name.
    Reason_RECORDED_LIMIT = Reason_unloaded   // some are not recorded per bc
    // Note:  Reason_RECORDED_LIMIT should be < 8 to fit into 3 bits of
    // DataLayout::trap_bits.  This dependency is enforced indirectly
    // via asserts, to avoid excessive direct header-to-header dependencies.
    // See Deoptimization::trap_state_reason and class DataLayout.
  };
{code}

This also might not be a complete list of reasons to deoptimize, but I think it's more likely that we're hitting some kind of null-check or range-check invariant violation.
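
As a point of reference, here's a minimal, self-contained sketch (plain Java, not Accumulo code; the class and method names are made up) of that kind of deopt: a method gets JIT-compiled while it has only ever seen non-null arguments, and the first null hits an uncommon trap along the lines of Reason_null_check, after which the compiled version can be made "not entrant" and recompiled. Running it with -XX:+PrintCompilation (or -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation, which should record the specific reason in the uncommon_trap entries of the log) makes the transition visible. A range-check violation plays out the same way, just with Reason_range_check.

{code}
// NullCheckDeopt.java -- illustrative sketch only; names are invented.
public class NullCheckDeopt {

  // While every observed argument is non-null, the JIT can compile this with a
  // speculative/implicit null check and no slow path.
  static int length(String s) {
    return s.length();
  }

  public static void main(String[] args) {
    String[] data = {"a", "bb", "ccc"};
    long sum = 0;

    // Warm-up: length() gets compiled having only ever seen non-null input.
    for (int i = 0; i < 2_000_000; i++) {
      sum += length(data[i % data.length]);
    }

    // The first null violates the speculation; the compiled code may be
    // deoptimized ("made not entrant") and recompiled with an explicit check.
    try {
      sum += length(null);
    } catch (NullPointerException expected) {
      // The NPE itself is expected; the interesting part is the deopt in the JIT log.
    }

    System.out.println(sum);
  }
}
{code}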

> scan performance degrades after compaction
> ------------------------------------------
>
>                 Key: ACCUMULO-3067
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-3067
>             Project: Accumulo
>          Issue Type: Bug
>          Components: tserver
>         Environment: Macbook Pro 2.6 GHz Intel Core i7, 16GB RAM, SSD, OSX 10.9.4, single
tablet server process, single client process
>            Reporter: Adam Fuchs
>         Attachments: Screen Shot 2014-08-19 at 4.19.37 PM.png, accumulo_query_perf_test.tar.gz,
jit_log_during_compaction.txt
>
>
> I've been running some scan performance tests on 1.6.0, and I'm running into an interesting situation in which query performance starts at a certain level and then degrades by ~15% after an event. The test roughly follows this scenario:
>  # Single tabletserver instance
>  # Load 100M small (~10byte) key/values into a tablet and let it finish major compacting
>  # Disable the garbage collector (this makes the time to _the event_ longer)
>  # Restart the tabletserver
>  # Repeatedly scan from the beginning to the end of the table in a loop
>  # Something happens on the tablet server, like one of {idle compaction of metadata table, forced flush of metadata table, forced compaction of metadata table, forced flush of trace table}
>  # Observe that scan rates dropped by 15-20%
>  # Observe that restarting the scan will not improve performance back to the original level. Performance only gets better upon restarting the tablet server.
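
A minimal version of the scan loop in step 5 looks roughly like the following sketch against the 1.6 client API (the instance name, zookeepers, credentials, and table name are placeholders, not taken from the attached accumulo_query_perf_test.tar.gz):

{code}
import java.util.Map.Entry;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

public class ScanLoop {
  public static void main(String[] args) throws Exception {
    // Placeholder connection details -- adjust for the local single-tserver setup.
    Connector conn = new ZooKeeperInstance("accumulo", "localhost:2181")
        .getConnector("root", new PasswordToken("secret"));

    while (true) {
      long count = 0;
      long start = System.currentTimeMillis();
      // Full-table scan, beginning to end, with no explicit range set.
      Scanner scanner = conn.createScanner("perftest", Authorizations.EMPTY);
      for (Entry<Key,Value> entry : scanner) {
        count++;
      }
      System.out.printf("%d entries in %d ms%n", count, System.currentTimeMillis() - start);
    }
  }
}
{code}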
> I've been able to prevent this by removing iterators from the iterator tree. It doesn't seem to matter which iterators are removed, but removing a certain number of them both significantly improves performance and eliminates the degradation problem. The default iterator tree includes:

>  * SourceSwitchingIterator
>  ** VersioningIterator
>  *** SynchronizedIterator
>  **** VisibilityFilter
>  ***** ColumnQualifierFilter
>  ****** ColumnFamilySkippingIterator
>  ******* DeletingIterator
>  ******** StatsIterator
>  ********* MultiIterator
>  ********** MemoryIterator
>  ********** ProblemReportingIterator
>  *********** HeapIterator
>  ************ RFile.LocalityGroupReader
> We can eliminate the weird condition by narrowing the set of iterators to:
>  * SourceSwitchingIterator
>  ** VisibilityFilter
>  *** ColumnFamilySkippingIterator
>  **** DeletingIterator
>  ***** StatsIterator
>  ****** MultiIterator
>  ******* MemoryIterator
>  ******* ProblemReportingIterator
>  ******** HeapIterator
>  ********* RFile.LocalityGroupReader
> There are other combinations that also perform much better than the default. I haven't been able to isolate this problem to a single iterator, despite removing each iterator one at a time.
> Anybody know what might be happening here? Best theory so far: the JVM learns that iterators can be used in a different way after a compaction, and some JVM optimization like JIT compilation, branch prediction, or automatic inlining stops happening.
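
If that theory holds, it lines up with the Reason_class_check entry above. The following sketch (plain Java, not Accumulo code; all names are invented) shows the shape of it: a hot call site is monomorphic during warm-up, so its target gets inlined; once a second implementation reaches the same site, roughly what happens under SourceSwitchingIterator when the underlying data source changes, the compiled code can be made "not entrant" and recompiled, and the recompiled version may keep a real dispatch at that site. Running it with -XX:+PrintCompilation shows the recompilations.

{code}
// CallSiteSwitch.java -- illustrative sketch only; names are invented.
public class CallSiteSwitch {

  // Stand-in for the real iterator interface (SortedKeyValueIterator in Accumulo).
  interface Source {
    long next();
  }

  static final class MemorySource implements Source {
    private long i;
    public long next() { return i++; }
  }

  static final class FileSource implements Source {
    private long i;
    public long next() { return i += 2; }
  }

  // The hot call site: during warm-up only MemorySource flows through here,
  // so the JIT can devirtualize and inline s.next().
  static long drain(Source s, int n) {
    long sum = 0;
    for (int i = 0; i < n; i++) {
      sum += s.next();
    }
    return sum;
  }

  public static void main(String[] args) {
    long sum = 0;

    // Warm-up phase: a single implementation, like scanning before the event.
    for (int i = 0; i < 200; i++) {
      sum += drain(new MemorySource(), 100_000);
    }

    // "After the compaction": a different implementation reaches the same call
    // site, which can invalidate the speculative inlining (class_check trap,
    // "made not entrant") and force a recompile.
    for (int i = 0; i < 200; i++) {
      sum += drain(new FileSource(), 100_000);
    }

    System.out.println(sum);
  }
}
{code}

In the real stack the call sites are buried several iterator layers deep, so a single change of the underlying source could plausibly trigger deopts through the whole tree.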



--
This message was sent by Atlassian JIRA
(v6.2#6252)
