hive-dev mailing list archives

From "Lefty Leverenz (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-7567) support automatic calculating reduce task number [Spark Branch]
Date Wed, 06 Aug 2014 03:56:12 GMT

    [ https://issues.apache.org/jira/browse/HIVE-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087210#comment-14087210 ]

Lefty Leverenz commented on HIVE-7567:
--------------------------------------

Should we create a new label, TODOC-Spark?  This seems worth documenting (eventually):

{quote}
Support automatically adjusting the reducer number, the same as in MR, configured through the 3 following parameters:

1.  In order to change the average load for a reducer (in bytes):
{{set hive.exec.reducers.bytes.per.reducer=<number>}}
2.  In order to limit the maximum number of reducers:
{{set hive.exec.reducers.max=<number>}}
3.  In order to set a constant number of reducers:
{{set mapreduce.job.reduces=<number>}}
{quote}
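
For reference, a minimal sketch of how these parameters might be set in a Hive session; the values below are illustrative only and are not taken from this ticket:

{code}
-- Target roughly 256 MB of input per reducer (illustrative value).
set hive.exec.reducers.bytes.per.reducer=268435456;

-- Never launch more than 100 reducers, regardless of input size.
set hive.exec.reducers.max=100;

-- Or bypass the estimate entirely and force exactly 20 reducers.
set mapreduce.job.reduces=20;
{code}

With the first two settings, a job reading about 10 GB of input would be planned with roughly 40 reducers (10 GB / 256 MB), and the cap of 100 only matters for much larger inputs; setting {{mapreduce.job.reduces}} explicitly overrides the automatic estimate.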

Oops, I just created a plain TODOC label by accident.  I'll leave it there for now.

> support automatic calculating reduce task number [Spark Branch]
> ---------------------------------------------------------------
>
>                 Key: HIVE-7567
>                 URL: https://issues.apache.org/jira/browse/HIVE-7567
>             Project: Hive
>          Issue Type: Task
>          Components: Spark
>            Reporter: Chengxiang Li
>            Assignee: Chengxiang Li
>              Labels: TODOC
>             Fix For: spark-branch
>
>         Attachments: HIVE-7567.1-spark.patch, HIVE-7567.2-spark.patch, HIVE-7567.3-spark.patch, HIVE-7567.4-spark.patch, HIVE-7567.5-spark.patch, HIVE-7567.6-spark.patch
>
>
> Hive has its own mechanism to calculate the reduce task number; we need to implement it for Spark jobs.
> NO PRECOMMIT TESTS. This is for spark-branch only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
