From: "Udit Mehrotra (Jira)"
To: commits@hudi.apache.org
Date: Thu, 5 Mar 2020 01:18:00 +0000 (UTC)
Subject: [jira] [Created] (HUDI-656) Write Performance - Driver spends too much time creating Parquet DataSource after writes

Udit Mehrotra created HUDI-656:
----------------------------------

             Summary: Write Performance - Driver spends too much time creating Parquet DataSource after writes
                 Key: HUDI-656
                 URL: https://issues.apache.org/jira/browse/HUDI-656
             Project: Apache Hudi (incubating)
          Issue Type: Improvement
          Components: Performance, Spark Integration
            Reporter: Udit Mehrotra


h2. Problem Statement

We have noticed this performance bottleneck at EMR, and it has been reported here as well: [https://github.com/apache/incubator-hudi/issues/1371]

For writes through the DataSource API, Hudi uses [this entry point|https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/scala/org/apache/hudi/DefaultSource.scala#L85] to create the Spark relation. It uses HoodieSparkSqlWriter to write the dataframe, and afterwards tries to [return|https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/scala/org/apache/hudi/DefaultSource.scala#L92] a relation by creating it through the parquet data source [here|https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/scala/org/apache/hudi/DefaultSource.scala#L72].

In the process of creating this parquet data source, Spark creates an *InMemoryFileIndex* [here|https://github.com/apache/spark/blob/v2.4.4/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala#L371], as part of which it performs a file listing of the base path.
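For reference, a condensed sketch of this write path (method bodies are simplified and parameter handling is omitted; the exact HoodieSparkSqlWriter.write signature may differ slightly from what is shown):

{code:scala}
import org.apache.spark.sql.{DataFrame, SQLContext, SaveMode}
import org.apache.spark.sql.sources.{BaseRelation, CreatableRelationProvider}
import org.apache.spark.sql.types.StructType

// Condensed sketch of hudi-spark's DefaultSource write path; not the
// exact source.
class DefaultSource extends CreatableRelationProvider {

  override def createRelation(sqlContext: SQLContext,
                              mode: SaveMode,
                              optParams: Map[String, String],
                              df: DataFrame): BaseRelation = {
    // Step 1: the actual write of the incoming dataframe.
    HoodieSparkSqlWriter.write(sqlContext, mode, optParams, df)
    // Step 2: the bottleneck. The return value is produced by re-reading
    // the just-written table through the parquet data source, which forces
    // Spark to build an InMemoryFileIndex and list the entire base path.
    createRelation(sqlContext, optParams, df.schema)
  }

  // Read-path overload: hands off to the parquet data source with
  // HoodieROTablePathFilter set as the path filter.
  def createRelation(sqlContext: SQLContext,
                     optParams: Map[String, String],
                     schema: StructType): BaseRelation = {
    ???  // elided in this sketch
  }
}
{code}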
While the listing itself is [parallelized|https://github.com/apache/spark/blob/v2.4.4/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InMemoryFileIndex.scala#L289], the filter we pass, *HoodieROTablePathFilter*, is applied [sequentially|https://github.com/apache/spark/blob/v2.4.4/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InMemoryFileIndex.scala#L294] on the driver to the thousands of files returned by the listing. Spark does not parallelize this step, and it takes a long time, most likely because of the filter's logic, so the driver ends up spending all its time just filtering. We have seen this take 10-12 minutes for just 50 partitions in S3, and all of that time is spent after the write has finished.

Solving this will significantly reduce the write time across all kinds of writes. The time is essentially wasted, because we do not actually have to return a relation after the write: Spark never uses the returned relation [here|https://github.com/apache/spark/blob/v2.4.4/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SaveIntoDataSourceCommand.scala#L45], and the write command returns an empty set of rows.

h2. Proposed Solution

The proposal is to return an empty Spark relation after the write, which cuts out all the unnecessary time spent creating a parquet relation that never gets used.
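A minimal sketch of what this could look like, assuming we add a trivial BaseRelation (the EmptyRelation name below is illustrative, not an existing class):

{code:scala}
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.sources.BaseRelation
import org.apache.spark.sql.types.StructType

// Illustrative only: a no-op relation that carries the written schema but
// is never scanned. SaveIntoDataSourceCommand discards the relation that
// createRelation returns, so nothing downstream ever reads from it.
class EmptyRelation(val sqlContext: SQLContext,
                    override val schema: StructType) extends BaseRelation

// The write path in DefaultSource.createRelation would then end with:
//
//   HoodieSparkSqlWriter.write(sqlContext, mode, optParams, df)
//   new EmptyRelation(sqlContext, df.schema)
//
// instead of re-reading the table through the parquet data source, skipping
// the InMemoryFileIndex listing and the sequential HoodieROTablePathFilter
// pass entirely.
{code}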