hadoop-mapreduce-issues mailing list archives

From "Hadoop QA (Jira)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-7185) Parallelize part files move in FileOutputCommitter
Date Tue, 15 Oct 2019 16:56:01 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-7185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952113#comment-16952113 ]

Hadoop QA commented on MAPREDUCE-7185:
--------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  1s{color} | {color:red} Docker failed to build yetus/hadoop:104ccca9169. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | MAPREDUCE-7185 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958315/MAPREDUCE-7185.patch |
| Console output | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/7674/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Parallelize part files move in FileOutputCommitter
> --------------------------------------------------
>
>                 Key: MAPREDUCE-7185
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7185
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>    Affects Versions: 3.2.0, 2.9.2
>            Reporter: Igor Dvorzhak
>            Assignee: Igor Dvorzhak
>            Priority: Major
>         Attachments: MAPREDUCE-7185.patch
>
>
> If a map task outputs multiple files, moving them from the temporary directory to the output directory can be slow in object stores (GCS, S3, etc.), where each "move" is a copy plus a delete.
> To improve performance, we need to parallelize the move when FileOutputCommitter has more than one file to relocate.
> Repro:
>  Start spark-shell:
> {code}
> spark-shell --num-executors 2 --executor-memory 10G --executor-cores 4 --conf spark.dynamicAllocation.maxExecutors=2
> {code}
> From spark-shell:
> {code}
> val df = (1 to 10000).toList.toDF("value").withColumn("p", $"value" % 10).repartition(50)
> df.write.partitionBy("p").mode("overwrite").format("parquet").options(Map("path" -> s"gs://some/path")).saveAsTable("parquet_partitioned_bench")
> {code}
> With the fix, execution time drops from 130 seconds to 50 seconds.
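The parallelization described above can be sketched as follows. This is a minimal illustration, not the attached MAPREDUCE-7185 patch: it uses plain `java.nio.file` on a local filesystem and a hypothetical `ParallelFileMove` helper, whereas the real committer works against Hadoop's `FileSystem` API. The idea is the same: each part-file move is independent, so a fixed thread pool can issue them concurrently instead of in a sequential loop.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of parallelizing part-file moves (not the actual patch).
public class ParallelFileMove {
    public static void moveAll(List<Path> sources, Path outputDir, int threads)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<?>> futures = new ArrayList<>();
            for (Path src : sources) {
                // Each move targets a distinct file, so the renames are
                // independent and safe to run concurrently. In an object
                // store every move is a copy+delete, which is exactly why
                // issuing them in parallel cuts commit time.
                futures.add(pool.submit(() -> {
                    Files.move(src, outputDir.resolve(src.getFileName()),
                            StandardCopyOption.REPLACE_EXISTING);
                    return null;
                }));
            }
            // Wait for all moves and surface the first failure, if any.
            for (Future<?> f : futures) {
                f.get();
            }
        } finally {
            pool.shutdown();
        }
    }
}
```

With 8 threads and per-file latencies dominated by object-store round trips, the commit time approaches (number of files / 8) times the per-file latency, which matches the rough 130s-to-50s improvement reported above.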



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-help@hadoop.apache.org

