carbondata-commits mailing list archives

From chenliang...@apache.org
Subject [1/2] incubator-carbondata git commit: in case of group by queries, we'll get the node locality as 0
Date Sun, 01 Jan 2017 05:58:48 GMT
Repository: incubator-carbondata
Updated Branches:
  refs/heads/master 20a0b9ec5 -> 7508cba29


in case of group by queries, we'll get the node locality as 0


Project: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/commit/f902f8db
Tree: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/tree/f902f8db
Diff: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/diff/f902f8db

Branch: refs/heads/master
Commit: f902f8dbca2efead36cfea39ac887c28970da89e
Parents: 20a0b9e
Author: vincentchenfei <vincent.chenfei@huawei.com>
Authored: Sat Dec 31 00:30:53 2016 +0530
Committer: sraghunandan <carbondatacontributions@gmail.com>
Committed: Sat Dec 31 00:30:53 2016 +0530

----------------------------------------------------------------------
 .../main/scala/org/apache/spark/sql/hive/DistributionUtil.scala  | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/f902f8db/integration/spark-common/src/main/scala/org/apache/spark/sql/hive/DistributionUtil.scala
----------------------------------------------------------------------
diff --git a/integration/spark-common/src/main/scala/org/apache/spark/sql/hive/DistributionUtil.scala
b/integration/spark-common/src/main/scala/org/apache/spark/sql/hive/DistributionUtil.scala
index 8950862..63a3b8f 100644
--- a/integration/spark-common/src/main/scala/org/apache/spark/sql/hive/DistributionUtil.scala
+++ b/integration/spark-common/src/main/scala/org/apache/spark/sql/hive/DistributionUtil.scala
@@ -141,7 +141,9 @@ object DistributionUtil {
     val nodesOfData = nodeMapping.size()
     val confExecutors: Int = getConfiguredExecutors(sparkContext)
     LOGGER.info(s"Executors configured : $confExecutors")
-    val requiredExecutors = if (nodesOfData < 1 || nodesOfData > confExecutors) {
+    val requiredExecutors = if (nodesOfData < 1) {
+      1
+    } else if (nodesOfData > confExecutors) {
       confExecutors
     } else if (confExecutors > nodesOfData) {
       var totalExecutorsToBeRequested = nodesOfData
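
The behavioral change in the hunk above can be sketched as a standalone before/after comparison. This is an illustration only: the class and method names are hypothetical, and the final branch of the real Scala code (truncated in the diff) is simplified here to returning `nodesOfData`.

```java
// Hypothetical sketch of the requiredExecutors logic in DistributionUtil,
// before and after commit f902f8db. Names are illustrative, not the real API.
public class RequiredExecutorsSketch {

    // Before the fix: nodesOfData < 1 (no node locality information, e.g. for
    // group by queries) fell into the same branch as nodesOfData > confExecutors,
    // so all configured executors were requested even with zero known data nodes.
    static int requiredExecutorsBefore(int nodesOfData, int confExecutors) {
        if (nodesOfData < 1 || nodesOfData > confExecutors) {
            return confExecutors;
        }
        return nodesOfData; // simplified stand-in for the remaining branch
    }

    // After the fix: zero known data nodes requests a single executor; only a
    // genuine excess of data nodes is capped at confExecutors.
    static int requiredExecutorsAfter(int nodesOfData, int confExecutors) {
        if (nodesOfData < 1) {
            return 1;
        } else if (nodesOfData > confExecutors) {
            return confExecutors;
        }
        return nodesOfData; // simplified stand-in for the remaining branch
    }

    public static void main(String[] args) {
        System.out.println(requiredExecutorsBefore(0, 4)); // 4 (old behavior)
        System.out.println(requiredExecutorsAfter(0, 4));  // 1 (fixed behavior)
        System.out.println(requiredExecutorsAfter(6, 4));  // 4 (cap still applies)
    }
}
```

With locality unavailable (`nodesOfData == 0`), the old code requested every configured executor; the fix requests one, leaving the cap at `confExecutors` only for the case where data spans more nodes than executors.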

