spark-reviews mailing list archives

From cloud-fan <...@git.apache.org>
Subject [GitHub] spark pull request #22621: [SPARK-25602][SQL] range metrics can be wrong if ...
Date Wed, 03 Oct 2018 13:52:33 GMT
GitHub user cloud-fan opened a pull request:

    https://github.com/apache/spark/pull/22621

    [SPARK-25602][SQL] range metrics can be wrong if the result rows are not fully consumed

    ## What changes were proposed in this pull request?
    
    This is a long-standing bug. When `Range` is whole-stage codegened, it updates its metrics
    before producing the records of each batch. However, the loop that produces a batch can be
    interrupted, for example when the downstream operator does not fully consume the result rows,
    and then the metrics count rows that were never actually produced.
    
    To fix this bug, this PR updates the `Range` metrics after a batch (or the portion of it that
    was actually produced, if the loop is interrupted) has been consumed.
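    
    To make the ordering concrete, below is a minimal sketch in plain Scala. This is not the
    actual generated code: `numOutputRows`, `consume`, and the batch loop are stand-ins for the
    codegen output. The buggy pattern counts the whole batch up front; the fixed pattern counts
    only the rows that were actually handed to the consumer, even when the loop stops early.
    
    ```scala
    import java.util.concurrent.atomic.AtomicLong
    
    object RangeMetricSketch {
    
      // Buggy pattern: the metric is bumped for the whole batch before any row is
      // produced, so rows that are never consumed are still counted.
      def produceBatchBuggy(batchStart: Long, batchSize: Int, numOutputRows: AtomicLong)
                           (consume: Long => Unit): Unit = {
        numOutputRows.addAndGet(batchSize.toLong)   // counted too early
        var i = 0
        while (i < batchSize) {
          consume(batchStart + i)                   // downstream may stop early (e.g. a limit)
          i += 1
        }
      }
    
      // Fixed pattern: count only the rows that were actually produced, even if the
      // loop is interrupted partway through the batch.
      def produceBatchFixed(batchStart: Long, batchSize: Int, numOutputRows: AtomicLong)
                           (consume: Long => Unit): Unit = {
        var i = 0
        try {
          while (i < batchSize) {
            consume(batchStart + i)
            i += 1
          }
        } finally {
          numOutputRows.addAndGet(i.toLong)         // counted after (partial) consumption
        }
      }
    }
    ```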
    
    Since the bug only affects metrics, the fix is non-trivial, and this is not a regression in
    2.4, this PR targets master only.
    
    ## How was this patch tested?
    
    New tests that verify the `Range` metrics when the result rows are not fully consumed.
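    
    As a rough illustration of the scenario the new tests exercise (the object name and the way
    the metric is read below are assumptions for this sketch, not the PR's actual test code), one
    can run a range query whose rows are not fully consumed and then inspect the physical `Range`
    node's "number of output rows" metric:
    
    ```scala
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.execution.RangeExec
    
    object RangeMetricCheck {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().master("local[1]").appName("range-metric").getOrCreate()
        try {
          // The limit stops consumption early, so only part of the range is produced.
          val df = spark.range(0, 1000, 1, 1).limit(10)
          df.collect()
    
          // Find the physical Range node and read its "number of output rows" metric.
          // Metric updates arrive asynchronously from the executors, so a real test
          // would wait for the relevant listener events before asserting on the value.
          df.queryExecution.executedPlan.collectFirst { case r: RangeExec => r }
            .foreach(r => println(s"numOutputRows = ${r.metrics("numOutputRows").value}"))
        } finally {
          spark.stop()
        }
      }
    }
    ```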


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/cloud-fan/spark range

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/22621.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #22621
    
----
commit 01c1738b934ea79f2ee54fde884501140b9854e4
Author: Wenchen Fan <wenchen@...>
Date:   2018-10-03T05:22:07Z

    range metrics can be wrong if the result rows are not fully consumed

----


---


