hive-issues mailing list archives

From "liyunzhang_intel (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-16600) Refactor SetSparkReducerParallelism#needSetParallelism to enable parallel order by in multi_insert cases
Date Tue, 06 Jun 2017 06:47:18 GMT

    [ https://issues.apache.org/jira/browse/HIVE-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038276#comment-16038276 ]

liyunzhang_intel commented on HIVE-16600:
-----------------------------------------

[~lirui]: sorry for the late reply.
bq. I prefer to fall back to the initial method here: to check LIM before RS/FS in all branches, which is simpler and safer.
In HIVE-16600.10.patch, all branches of the multi-insert case are checked (a sketch of the check follows this list):
multi-insert = false
check whether LIMIT appears before the next RS/FS
multi-insert = true
1. if the current branch ends with an FS, check whether LIMIT appears before the jointOperator (i.e. where the branch starts)
2. if the current branch ends with a non-FS, check whether LIMIT appears before the next RS/FS
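
For illustration only, here is a minimal, self-contained sketch of that "LIMIT before the next RS/FS" walk. It uses a hypothetical Op/Kind node type and a limitBeforeNextRsOrFs helper as stand-ins; none of these names come from the actual patch or from Hive's operator classes.
{code}
// Minimal sketch of the "LIMIT before the next RS/FS" check described above.
// Op, Kind and limitBeforeNextRsOrFs are hypothetical stand-ins, not the patch
// code and not Hive's real operator classes.
import java.util.ArrayList;
import java.util.List;

class Op {
  enum Kind { TS, SEL, RS, FS, LIM, OTHER }
  final Kind kind;
  final List<Op> children = new ArrayList<>();
  Op(Kind kind) { this.kind = kind; }
  Op addChild(Op c) { children.add(c); return c; }
}

class BranchCheck {
  // Walk downstream from 'start'; return true if a LIM is reached
  // before the next RS/FS on some path, false otherwise.
  static boolean limitBeforeNextRsOrFs(Op start) {
    for (Op child : start.children) {
      if (child.kind == Op.Kind.LIM) {
        return true;
      }
      if (child.kind == Op.Kind.RS || child.kind == Op.Kind.FS) {
        continue; // this path is terminated by the next RS/FS
      }
      if (limitBeforeNextRsOrFs(child)) {
        return true;
      }
    }
    return false;
  }
}
{code}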

Although I currently cannot find the case you mentioned above in Hive queries:
{noformat}
RS - ... - FS
   - ... - FS
   - ... - Non FS
{noformat}

I created some unit test files (TestSetSparkReduceParallelism_MultiInsertCase.java, Node.java) which mock the different cases (including the case you mentioned).
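
To make that tree shape concrete, here is a rough sketch of how such a mock could be assembled; it reuses the hypothetical Op/BranchCheck types from the sketch above, and the test files attached to the JIRA are the authoritative versions.
{code}
// Builds the RS -> {... - FS, ... - FS, ... - non-FS} shape discussed above.
// Reuses the hypothetical Op/BranchCheck types from the previous sketch;
// the attached TestSetSparkReduceParallelism_MultiInsertCase.java and Node.java
// are the authoritative versions.
class MultiInsertShapeSketch {
  public static void main(String[] args) {
    Op rs = new Op(Op.Kind.RS);
    rs.addChild(new Op(Op.Kind.SEL)).addChild(new Op(Op.Kind.FS));    // branch 1: ... - FS
    rs.addChild(new Op(Op.Kind.SEL)).addChild(new Op(Op.Kind.FS));    // branch 2: ... - FS
    rs.addChild(new Op(Op.Kind.SEL)).addChild(new Op(Op.Kind.OTHER)); // branch 3: ... - non-FS

    // No LIM below the RS, so no branch sees a LIMIT before its next RS/FS.
    System.out.println(BranchCheck.limitBeforeNextRsOrFs(rs)); // prints: false
  }
}
{code}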

> Refactor SetSparkReducerParallelism#needSetParallelism to enable parallel order by in multi_insert cases
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-16600
>                 URL: https://issues.apache.org/jira/browse/HIVE-16600
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: liyunzhang_intel
>            Assignee: liyunzhang_intel
>         Attachments: HIVE-16600.1.patch, HIVE-16600.2.patch, HIVE-16600.3.patch, HIVE-16600.4.patch, HIVE-16600.5.patch, HIVE-16600.6.patch, HIVE-16600.7.patch, HIVE-16600.8.patch, HIVE-16600.9.patch, mr.explain, mr.explain.log.HIVE-16600
>
>
> multi_insert_gby.case.q
> {code}
> set hive.exec.reducers.bytes.per.reducer=256;
> set hive.optimize.sampling.orderby=true;
> drop table if exists e1;
> drop table if exists e2;
> create table e1 (key string, value string);
> create table e2 (key string);
> FROM (select key, cast(key as double) as keyD, value from src order by key) a
> INSERT OVERWRITE TABLE e1
>     SELECT key, value
> INSERT OVERWRITE TABLE e2
>     SELECT key;
> select * from e1;
> select * from e2;
> {code} 
> The parallelism of Sort is 1 even when we enable parallel order by ("hive.optimize.sampling.orderby" is set to "true"). This is not reasonable because the parallelism should be calculated by [Utilities.estimateReducers|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java#L170].
> This is because SetSparkReducerParallelism#needSetParallelism returns false when the [children size of RS|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java#L207] is greater than 1.
> In this case, the children size of {{RS[2]}} is two.
> The logical plan of the case:
> {code}
>    TS[0]-SEL[1]-RS[2]-SEL[3]-SEL[4]-FS[5]
>                             -SEL[6]-FS[7]
> {code}
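
As a side note on the expected behaviour: the estimate the description refers to essentially divides the total input size by hive.exec.reducers.bytes.per.reducer and clamps the result. The sketch below is only an approximation of that heuristic, not the actual Utilities.estimateReducers implementation, and the chosen numbers are illustrative assumptions.
{code}
// Rough approximation of the reducer-count heuristic referenced above:
// ceil(totalInputFileSize / bytesPerReducer), clamped to [1, maxReducers].
// This is NOT the actual Utilities.estimateReducers implementation.
public class ReducerEstimateSketch {
  static int estimateReducers(long totalInputFileSize, long bytesPerReducer, int maxReducers) {
    long reducers = (totalInputFileSize + bytesPerReducer - 1) / bytesPerReducer; // ceiling division
    return (int) Math.max(1, Math.min(maxReducers, reducers));
  }

  public static void main(String[] args) {
    // With hive.exec.reducers.bytes.per.reducer=256 as in the query above,
    // even a small input should yield far more than one reducer.
    System.out.println(estimateReducers(10_000L, 256L, 1009)); // prints: 40
  }
}
{code}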



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
