hive-dev mailing list archives

From "Brock Noland" <br...@cloudera.com>
Subject Re: Review Request 24221: HIVE-7567, support automatic adjusting reducer number for hive on spark job
Date Tue, 05 Aug 2014 03:48:39 GMT

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24221/#review49557
-----------------------------------------------------------


This looks great! I've made a few comments below, all of which are minor.


ql/src/java/org/apache/hadoop/hive/ql/exec/spark/GroupByShuffler.java
<https://reviews.apache.org/r/24221/#comment86698>

    Can this be final?



ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java
<https://reviews.apache.org/r/24221/#comment86699>

    The '= null' initialization is not required.
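    For illustration (this is not the actual SparkTask code, just a generic sketch): Java
    reference fields default to null, so the explicit initializer adds nothing.

        // Illustrative sketch only; the type and field names are made up.
        class Example {
          private Object cachedWork = null;  // redundant explicit initializer
          private Object otherWork;          // equivalent: reference fields default to null
        }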



ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java
<https://reviews.apache.org/r/24221/#comment86700>

    I know this method name came from another section of code. However, shall we rename it
    to determineNumberOfReducers or configureNumberOfReducers, since it's not a setter?
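    For reference, a minimal sketch of the MR-style estimate this method performs (the names,
    signature, and numbers below are illustrative, not the actual patch code): the reducer
    count is the total input size divided by hive.exec.reducers.bytes.per.reducer, capped at
    hive.exec.reducers.max; an explicitly set mapreduce.job.reduces overrides the estimate.

        // Illustrative sketch only; not the actual SparkTask implementation.
        static int determineNumberOfReducers(long totalInputBytes,
                                             long bytesPerReducer,   // hive.exec.reducers.bytes.per.reducer
                                             int maxReducers,        // hive.exec.reducers.max
                                             int constantReducers) { // mapreduce.job.reduces, <= 0 if unset
          if (constantReducers > 0) {
            return constantReducers; // user forced a fixed reducer count
          }
          int estimated = (int) ((totalInputBytes + bytesPerReducer - 1) / bytesPerReducer); // ceiling
          return Math.max(1, Math.min(estimated, maxReducers));
        }

    For example, 10 GB of input with bytes.per.reducer set to 256 MB would estimate 40 reducers,
    unless hive.exec.reducers.max is lower.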



ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java.orig
<https://reviews.apache.org/r/24221/#comment86695>

    Let's remove the .orig file.



ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java
<https://reviews.apache.org/r/24221/#comment86696>

    I think we have these public member variables because this code was copied from Tez?
    However, public member variables are not standard. Can you generate accessors instead?
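    Something along these lines, for example (the field name here is invented for illustration):

        // Illustrative only; the field name is hypothetical.
        private long totalInputSize;

        public long getTotalInputSize() {
          return totalInputSize;
        }

        public void setTotalInputSize(long totalInputSize) {
          this.totalInputSize = totalInputSize;
        }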



ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java
<https://reviews.apache.org/r/24221/#comment86697>

    Can you add a TODO explaining why this is still commented out, and open a JIRA to fix it?


- Brock Noland


On Aug. 5, 2014, 3:43 a.m., chengxiang li wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24221/
> -----------------------------------------------------------
> 
> (Updated Aug. 5, 2014, 3:43 a.m.)
> 
> 
> Review request for hive, Brock Noland, Lars Francke, and Szehon Ho.
> 
> 
> Bugs: HIVE-7567
>     https://issues.apache.org/jira/browse/HIVE-7567
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> Support automatically adjusting the reducer number in the same way as MR does, configured
> through the following 3 parameters:
> In order to change the average load for a reducer (in bytes):
> set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
> set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> set mapreduce.job.reduces=<number>
> 
> 
> Diffs
> -----
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/GroupByShuffler.java abd4718 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SortByShuffler.java f262065 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java 73553ee 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java fb25596 
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java d7e1fbf 
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 75a1033 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java.orig PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 3840318 
> 
> Diff: https://reviews.apache.org/r/24221/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> chengxiang li
> 
>

