From: KanakaKumar
To: issues@carbondata.apache.org
Reply-To: issues@carbondata.apache.org
Subject: [GitHub] carbondata pull request #1702: [CARBONDATA-1896] Clean files operation impro...
Date: Fri, 22 Dec 2017 11:26:03 +0000 (UTC)

Github user KanakaKumar commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1702#discussion_r158474524

    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala ---
    @@ -178,6 +178,7 @@ case class CarbonLoadDataCommand(
           // First system has to partition the data first and then call the load data
           LOGGER.info(s"Initiating Direct Load for the Table : ($dbName.$tableName)")
           GlobalDictionaryUtil.updateTableMetadataFunc = updateTableMetadata
    +      DataLoadingUtil.deleteLoadsAndUpdateMetadata(isForceDeletion = false, table)
    --- End diff --

    Please add a comment explaining the purpose of moving this deletion call from CarbonDataRDDFactory.scala to here.

---
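A minimal sketch of how this review comment could be addressed at the call site shown in the diff. Only the identifiers quoted above (GlobalDictionaryUtil, DataLoadingUtil.deleteLoadsAndUpdateMetadata, isForceDeletion, table) are taken from the PR; the rationale written in the comment is an assumed illustration, not confirmed by this thread:

    // NOTE: the reason given below is an assumption for illustration only.
    // Clean expired/stale loads here, at the entry point of the LOAD DATA command,
    // instead of inside CarbonDataRDDFactory, so the cleanup runs once per load
    // request and completes before any partitioning or data-loading work starts.
    GlobalDictionaryUtil.updateTableMetadataFunc = updateTableMetadata
    DataLoadingUtil.deleteLoadsAndUpdateMetadata(isForceDeletion = false, table)

If the move has a different motivation, the comment text should state that instead; the point of the reviewer's request is that the reason be recorded next to the relocated call.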