carbondata-commits mailing list archives

From jack...@apache.org
Subject [1/2] incubator-carbondata git commit: Don't persist RDD if it is only used once.
Date Fri, 21 Apr 2017 17:07:50 GMT
Repository: incubator-carbondata
Updated Branches:
  refs/heads/master c208951a1 -> abc807dbd


Don't persist RDD if it is only used once.


Project: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/commit/2295f70b
Tree: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/tree/2295f70b
Diff: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/diff/2295f70b

Branch: refs/heads/master
Commit: 2295f70b3be40a6dd0df49e7ac1a757e9e6cabd0
Parents: c208951
Author: Yadong Qi <qiyadong2010@gmail.com>
Authored: Fri Apr 21 10:05:53 2017 +0800
Committer: jackylk <jacky.likun@huawei.com>
Committed: Fri Apr 21 11:06:39 2017 -0600

----------------------------------------------------------------------
 .../org/apache/carbondata/spark/util/GlobalDictionaryUtil.scala    | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/2295f70b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/GlobalDictionaryUtil.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/GlobalDictionaryUtil.scala
b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/GlobalDictionaryUtil.scala
index 491926c..f690eef 100644
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/GlobalDictionaryUtil.scala
+++ b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/GlobalDictionaryUtil.scala
@@ -630,7 +630,7 @@ object GlobalDictionaryUtil {
     try {
       // read local dictionary file, and spilt (columnIndex, columnValue)
       val basicRdd = sqlContext.sparkContext.textFile(allDictionaryPath)
-        .map(x => parseRecord(x, accumulator, csvFileColumns)).persist()
+        .map(x => parseRecord(x, accumulator, csvFileColumns))
 
       // group by column index, and filter required columns
       val requireColumnsList = requireColumns.toList
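
The change above removes a `.persist()` call on an RDD that is consumed by only one downstream stage, so caching its blocks would occupy executor memory without ever being re-read. The sketch below (plain Spark, not CarbonData code; the file path and the simplified `parseRecord` are hypothetical stand-ins) illustrates the pattern: leave a once-used RDD unpersisted, and reserve `persist()` for RDDs that feed several actions.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object PersistOnceExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("persist-once").setMaster("local[*]"))

    // Hypothetical stand-in for parseRecord: split "index,value" lines.
    def parseRecord(line: String): (Int, String) = {
      val parts = line.split(",", 2)
      (parts(0).toInt, parts(1))
    }

    // Used only once downstream: persisting here would cache blocks that are
    // never read again, so the mapped RDD is left unpersisted (as in this commit).
    val parsed = sc.textFile("/tmp/dictionary.txt").map(parseRecord)
    val grouped = parsed.groupByKey()
    println(grouped.count())

    // By contrast, an RDD that feeds several actions is worth caching:
    //   val reused = parsed.persist(StorageLevel.MEMORY_AND_DISK)
    //   reused.count(); reused.distinct().count(); reused.unpersist()

    sc.stop()
  }
}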

