hive-issues mailing list archives

From "ASF GitHub Bot (Jira)" <>
Subject [jira] [Work logged] (HIVE-22239) Scale data size using column value ranges
Date Tue, 08 Oct 2019 08:48:02 GMT


ASF GitHub Bot logged work on HIVE-22239:

                Author: ASF GitHub Bot
            Created on: 08/Oct/19 08:47
            Start Date: 08/Oct/19 08:47
    Worklog Time Spent: 10m 
      Work Description: kgyrtkirk commented on pull request #787: HIVE-22239

 File path: ql/src/java/org/apache/hadoop/hive/ql/optimizer/stats/annotation/
 @@ -967,13 +979,23 @@ private long evaluateComparator(Statistics stats, AnnotateStatsProcCtx aspCtx, E
               if (minValue > value) {
                 return 0;
 +              if (uniformWithinRange) {
 +                // Assuming uniform distribution, we can use the range to calculate
 +                // a new estimate for the number of rows
 +                return Math.round(((double) (value - minValue) / (maxValue - minValue)) *
 Review comment:
   I think we will probably hit a divide by zero here when max == min; I don't see any preceding conditionals covering for that (though there could be one elsewhere).
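To make the concern concrete, here is a hedged sketch of the estimate under discussion with the guard the reviewer is asking for. The method shape and the class name are illustrative, not Hive's actual code; only `uniformWithinRange` and the arithmetic follow the diff above, and the 1/3 fallback follows the issue description below.

```java
public class RangeGuard {
    // Illustrative estimate of rows satisfying "col <= value", assuming
    // values are uniformly distributed over [minValue, maxValue].
    static long estimateRows(long numRows, double minValue, double maxValue,
                             double value, boolean uniformWithinRange) {
        if (minValue > value) {
            return 0; // predicate excludes the whole range
        }
        if (uniformWithinRange) {
            if (maxValue == minValue) {
                // Guard for the case flagged in the review comment: a
                // degenerate range (max == min) would otherwise divide by
                // zero below. Every row holds the single value, which
                // satisfies the predicate at this point.
                return numRows;
            }
            if (value >= maxValue) {
                return numRows; // predicate keeps the whole range
            }
            return Math.round(((value - minValue) / (maxValue - minValue)) * numRows);
        }
        // Old heuristic from the issue description: keep 1/3 of the rows.
        return numRows / 3;
    }

    public static void main(String[] args) {
        System.out.println(estimateRows(100, 0, 10, 5, true));  // uniform estimate: 50
        System.out.println(estimateRows(100, 7, 7, 7, true));   // guarded, no divide by zero: 100
        System.out.println(estimateRows(100, 0, 10, 5, false)); // 1/3 heuristic: 33
    }
}
```

With the guard, `max == min` is treated as "all rows share one value", so the estimate degrades gracefully instead of throwing or producing NaN.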
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:

Issue Time Tracking

    Worklog Id:     (was: 324953)
    Time Spent: 2h  (was: 1h 50m)

> Scale data size using column value ranges
> -----------------------------------------
>                 Key: HIVE-22239
>                 URL:
>             Project: Hive
>          Issue Type: Improvement
>          Components: Physical Optimizer
>            Reporter: Jesus Camacho Rodriguez
>            Assignee: Jesus Camacho Rodriguez
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-22239.01.patch, HIVE-22239.02.patch, HIVE-22239.patch
>          Time Spent: 2h
>  Remaining Estimate: 0h
> Currently, min/max values for columns are only used to determine whether a certain range filter falls out of range and thus filters all rows or none at all. If it does not, we just use a heuristic that the condition will filter 1/3 of the input rows. Instead of using that heuristic, we can use another one that assumes that data will be uniformly distributed across that range, and calculate the selectivity for the condition accordingly.
> This patch also includes the propagation of min/max column values from statistics to the optimizer for the timestamp type.

This message was sent by Atlassian Jira
