Date: Tue, 10 Mar 2020 23:19:00 +0000 (UTC)
From: "ASF GitHub Bot (Jira)"
To: commits@hudi.apache.org
Reply-To: dev@hudi.apache.org
Subject: [jira] [Updated] (HUDI-656) Write Performance - Driver spends too much time creating Parquet DataSource after writes

     [ https://issues.apache.org/jira/browse/HUDI-656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HUDI-656:
--------------------------------
    Labels: 
pull-request-available  (was: )

> Write Performance - Driver spends too much time creating Parquet DataSource after writes
> -----------------------------------------------------------------------------------------
>
>                 Key: HUDI-656
>                 URL: https://issues.apache.org/jira/browse/HUDI-656
>             Project: Apache Hudi (incubating)
>          Issue Type: Improvement
>          Components: Performance, Spark Integration
>            Reporter: Udit Mehrotra
>            Assignee: Udit Mehrotra
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.6.0
>
>
> h2. Problem Statement
> We have noticed this performance bottleneck at EMR, and it has been reported here as well: [https://github.com/apache/incubator-hudi/issues/1371]
> For writes through the DataSource API, Hudi uses [this|https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/scala/org/apache/hudi/DefaultSource.scala#L85] to create the Spark relation. It uses HoodieSparkSqlWriter to write the dataframe, and afterwards tries to [return|https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/scala/org/apache/hudi/DefaultSource.scala#L92] a relation by creating it through the parquet data source [here|https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/scala/org/apache/hudi/DefaultSource.scala#L72].
> In the process of creating this parquet data source, Spark creates an *InMemoryFileIndex* [here|https://github.com/apache/spark/blob/v2.4.4/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala#L371], as part of which it performs file listing of the base path.
> While the listing itself is [parallelized|https://github.com/apache/spark/blob/v2.4.4/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InMemoryFileIndex.scala#L289], the filter we pass, *HoodieROTablePathFilter*, is applied [sequentially|https://github.com/apache/spark/blob/v2.4.4/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InMemoryFileIndex.scala#L294] on the driver side to the thousands of files returned by the listing. This part is not parallelized by Spark, and it takes a long time, likely because of the filter's logic. As a result, the driver just spends its time filtering: we have seen this process take 10-12 minutes for only 50 partitions in S3, all of it after the write has finished.
> Solving this would significantly reduce write time across all types of writes. This time is essentially wasted, because we do not actually need to return a relation after the write: Spark never uses the returned relation anyway [here|https://github.com/apache/spark/blob/v2.4.4/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SaveIntoDataSourceCommand.scala#L45], and the write path returns an empty set of rows.
> h2. Proposed Solution
> The proposal is to return an empty Spark relation after the write, which cuts out all the unnecessary time spent creating a parquet relation that never gets used.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
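The driver-side bottleneck described in the problem statement can be sketched without Spark. Everything below uses illustrative stand-in names, not the real Hudi/Spark classes: the point is only that the listing may be parallel, while a single `filter` pass over every listed path on the driver is inherently sequential.

```scala
// Illustrative sketch only: stand-ins for Spark's listing result and Hudi's
// path filter. The real HoodieROTablePathFilter does per-path table-metadata
// lookups, which makes each accept() call far more expensive than this.
object SequentialFilterSketch {

  // Stand-in for HoodieROTablePathFilter.accept: keep only files from the
  // latest commit. The real logic resolves the latest file slice per partition.
  def accept(path: String, latestCommit: String): Boolean =
    path.contains(latestCommit)

  // What InMemoryFileIndex effectively does after the (parallel) listing:
  // one sequential pass over every listed path, on the driver.
  def filterOnDriver(listedPaths: Seq[String], latestCommit: String): Seq[String] =
    listedPaths.filter(p => accept(p, latestCommit))

  def main(args: Array[String]): Unit = {
    // 50 partitions x 200 files, loosely mimicking the S3 layout in the report.
    val listed = for {
      part <- 0 until 50
      i    <- 0 until 200
      commit = if (i % 2 == 0) "c2" else "c1"
    } yield s"s3://bucket/table/part=$part/file_${commit}_$i.parquet"

    println(filterOnDriver(listed, latestCommit = "c2").size) // 5000
  }
}
```

In the real code path the per-path cost is dominated by Hudi metadata checks rather than a string comparison, so this sequential pass scales poorly as the file count grows, regardless of how fast the listing itself was.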
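A minimal sketch of the proposed solution, using hypothetical stand-in types rather than Spark's actual API: after the write succeeds, hand back a relation that carries no schema and no rows, so no file listing or path filtering happens at all. The actual change would return a subclass of Spark's `BaseRelation`; the `Relation` trait below is a simplification for illustration.

```scala
// Sketch with stand-in types (not Spark's API). Since SaveIntoDataSourceCommand
// discards the relation returned by createRelation, an empty one loses nothing.
object EmptyRelationSketch {

  // Hypothetical stand-in for Spark's BaseRelation.
  trait Relation {
    def schemaFields: Seq[String]
    def rows: Seq[Seq[Any]]
  }

  // What DefaultSource.createRelation could return after a successful write.
  def emptyRelation: Relation = new Relation {
    val schemaFields: Seq[String] = Seq.empty // no schema resolution needed
    val rows: Seq[Seq[Any]] = Seq.empty       // no listing, no filtering, no I/O
  }

  def main(args: Array[String]): Unit =
    println(emptyRelation.rows.size) // 0
}
```

This trades away a relation nobody reads for the 10-12 minutes of driver-side filtering observed after each write.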