From: jackylk
To: issues@carbondata.apache.org
Reply-To: issues@carbondata.apache.org
Subject: [GitHub] carbondata pull request #1470: [CARBONDATA-1572] Support streaming ingest an...
Date: Mon, 6 Nov 2017 12:36:43 +0000 (UTC)

Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1470#discussion_r149067739

    --- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonScanRDD.scala ---
    @@ -82,8 +84,43 @@ class CarbonScanRDD(
         // get splits
         val splits = format.getSplits(job)
    -    val result = distributeSplits(splits)
    -    result
    +
    +    // separate split
    +    // 1. for batch splits, invoke distributeSplits method to create partitions
    +    // 2. for stream splits, create partition for each split by default
    +    val columnarSplits = new ArrayList[InputSplit]()
    +    val streamSplits = new ArrayBuffer[InputSplit]()
    +    for (i <- 0 until splits.size()) {
    +      val carbonInputSplit = splits.get(i).asInstanceOf[CarbonInputSplit]
    +      if ("row-format".equals(carbonInputSplit.getFormat)) {
    +        streamSplits += splits.get(i)
    +      } else {
    +        columnarSplits.add(splits.get(i))
    +      }
    +    }
    +    val batchPartitions = distributeSplits(columnarSplits)
    +    if (streamSplits.isEmpty) {
    +      batchPartitions
    +    } else {
    +      val index = batchPartitions.length
    +      val streamPartitions: ArrayBuffer[Partition] =
    +        streamSplits.zipWithIndex.map { splitWithIndex =>
    +          val multiBlockSplit =
    +            new CarbonMultiBlockSplit(identifier,
    +              Seq(splitWithIndex._1.asInstanceOf[CarbonInputSplit]).asJava,
    +              splitWithIndex._1.getLocations)
    +          multiBlockSplit.setStream(true)
    --- End diff --
    
    I think you can set the same DATA_FILE_FORMAT enum in `multiBlockSplit`

---
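The reviewer's point is that the string comparison `"row-format".equals(carbonInputSplit.getFormat)` could instead dispatch on a shared file-format enum carried by both split types, which avoids typo-prone string literals. A minimal, self-contained Scala sketch of that idea, using a hypothetical `FileFormat` enumeration and a simplified `Split` stand-in for the real CarbonData split classes:

```scala
// Hypothetical stand-in for a shared file-format enum; the real CarbonData
// classes and enum names may differ.
object FileFormat extends Enumeration {
  val Columnar, Row = Value
}

// Simplified stand-in for an input split that carries its format as an enum
// value instead of a "row-format" string.
case class Split(id: Int, format: FileFormat.Value)

// Separate splits as in the diff: stream (row) splits get one partition each,
// columnar splits go through batch distribution. `partition` returns the
// matching elements first, so stream splits come first here.
def separateSplits(splits: Seq[Split]): (Seq[Split], Seq[Split]) =
  splits.partition(_.format == FileFormat.Row)

val all = Seq(
  Split(0, FileFormat.Columnar),
  Split(1, FileFormat.Row),
  Split(2, FileFormat.Columnar))

val (streamSplits, columnarSplits) = separateSplits(all)
```

With an enum, the compiler can also flag non-exhaustive matches if a new format is added later, which a string comparison cannot do.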