From: jackylk
To: issues@carbondata.apache.org
Reply-To: issues@carbondata.apache.org
Subject: [GitHub] carbondata pull request #1559: [CARBONDATA-1805][Dictionary] Optimize prunin...
Date: Thu, 14 Dec 2017 07:38:20 +0000 (UTC)

Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1559#discussion_r156871399

--- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/GlobalDictionaryUtil.scala ---
@@ -348,36 +347,53 @@ object GlobalDictionaryUtil {
   }
 
   /**
-   * load CSV files to DataFrame by using datasource "com.databricks.spark.csv"
+   * load and prune dictionary Rdd from csv file or input dataframe
    *
-   * @param sqlContext SQLContext
-   * @param carbonLoadModel carbon data load model
+   * @param sqlContext sqlContext
+   * @param carbonLoadModel carbonLoadModel
+   * @param inputDF input dataframe
+   * @param requiredCols names of dictionary column
+   * @param hadoopConf hadoop configuration
+   * @return rdd that contains only dictionary columns
    */
-  def loadDataFrame(sqlContext: SQLContext,
-      carbonLoadModel: CarbonLoadModel,
-      hadoopConf: Configuration): DataFrame = {
-    CommonUtil.configureCSVInputFormat(hadoopConf, carbonLoadModel)
-    hadoopConf.set(FileInputFormat.INPUT_DIR, carbonLoadModel.getFactFilePath)
-    val columnNames = carbonLoadModel.getCsvHeaderColumns
-    val schema = StructType(columnNames.map[StructField, Array[StructField]] { column =>
-      StructField(column, StringType)
-    })
-    val values = new Array[String](columnNames.length)
-    val row = new StringArrayRow(values)
-    val jobConf = new JobConf(hadoopConf)
-    SparkHadoopUtil.get.addCredentials(jobConf)
-    TokenCache.obtainTokensForNamenodes(jobConf.getCredentials,
-      Array[Path](new Path(carbonLoadModel.getFactFilePath)),
-      jobConf)
-    val rdd = new NewHadoopRDD[NullWritable, StringArrayWritable](
-      sqlContext.sparkContext,
-      classOf[CSVInputFormat],
-      classOf[NullWritable],
-      classOf[StringArrayWritable],
-      jobConf).setName("global dictionary").map[Row] { currentRow =>
-      row.setValues(currentRow._2.get())
+  private def loadInputDataAsDictRdd(sqlContext: SQLContext, carbonLoadModel: CarbonLoadModel,
--- End diff --

please move parameters to separate lines, one parameter per line

---
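The requested style (one parameter per line in the method signature) can be sketched with a small self-contained Scala example. The method name and parameter types below are simplified stand-ins, not the actual Spark/CarbonData types (`SQLContext`, `CarbonLoadModel`, etc.) from the diff:

```scala
object DictColumnPruner {
  // Signature formatted one parameter per line, as requested in the review.
  // Header/row/requiredCols are hypothetical stand-ins for the real method's
  // SQLContext, CarbonLoadModel, inputDF, requiredCols and hadoopConf parameters.
  def pruneToDictColumns(
      header: Array[String],
      row: Array[String],
      requiredCols: Array[String]): Array[String] = {
    // Keep only the columns that need dictionary generation.
    val indices = requiredCols.map(header.indexOf)
    indices.map(row(_))
  }
}
```

With multi-parameter signatures, putting each parameter on its own line keeps diffs minimal when a parameter is later added or removed, which is likely the motivation behind the review comment.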