hive-issues mailing list archives

From "Peter Vary (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-17270) Qtest results show wrong number of executors
Date Thu, 10 Aug 2017 15:05:01 GMT

    [ https://issues.apache.org/jira/browse/HIVE-17270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121768#comment-16121768 ]

Peter Vary commented on HIVE-17270:
-----------------------------------

Thanks [~lirui] for the pointers. Based on that I was finally able to identify the source of the problem: {{yarn.scheduler.minimum-allocation-mb}}. By default it is 1024 (MB), so the FairScheduler will use it as the allocation increment. If we set it to 512, then we will have 4 reducers as expected.
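
For reference, a minimal sketch of that override in yarn-site.xml (yarn.scheduler.minimum-allocation-mb is a standard YARN scheduler property; the surrounding test configuration is not shown here):
{code:title=yarn-site.xml (sketch)}
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>
</property>
{code}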

[~xuefuz]: If I understand the code correctly, when {{spark.dynamicAllocation.enabled}} is set to {{true}}, then {{SetSparkReducerParallelism.getSparkMemoryAndCores()}} will set {{SetSparkReducerParallelism.sparkMemoryAndCores}} to {{null}}. And if this is null, then the number of reducers is based only on the size of the data. So everything should be fine there.
In code:
{code:title=SetSparkReducerParallelism.process()}
[..]
          // Divide it by 2 so that we can have more reducers
          long bytesPerReducer = context.getConf().getLongVar(HiveConf.ConfVars.BYTESPERREDUCER) / 2;
          int numReducers = Utilities.estimateReducers(numberOfBytes, bytesPerReducer,
              maxReducers, false);

          getSparkMemoryAndCores(context);  // <-- this will return null if dynamicAllocation is enabled
          if (sparkMemoryAndCores != null &&
              sparkMemoryAndCores.getFirst() > 0 && sparkMemoryAndCores.getSecond() > 0) {
            // warn the user if bytes per reducer is much larger than memory per task
            if ((double) sparkMemoryAndCores.getFirst() / bytesPerReducer < 0.5) {
              LOG.warn("Average load of a reducer is much larger than its available memory. " +
                  "Consider decreasing hive.exec.reducers.bytes.per.reducer");
            }

            // If there are more cores, use the number of cores
            numReducers = Math.max(numReducers, sparkMemoryAndCores.getSecond());
          }
          numReducers = Math.min(numReducers, maxReducers);
          LOG.info("Set parallelism for reduce sink " + sink + " to: " + numReducers +
              " (calculated)");
          desc.setNumReducers(numReducers);
[..]
{code}

So I will provide a patch to use the configuration values when dynamicAllocation is turned off and {{SparkSessionImpl.getExecutorCount()}} returns a negative number.
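
To make the idea concrete, here is a minimal, hypothetical sketch of that fallback decision (this is *not* the actual patch; the class, method, and parameter names are made up for illustration, and the real change would live inside {{SetSparkReducerParallelism}}):
{code:title=ExecutorCountFallbackSketch.java (illustrative only)}
// Illustrative only: the real patch would plug into SetSparkReducerParallelism.
public final class ExecutorCountFallbackSketch {

  /**
   * @param dynamicAllocationEnabled value of spark.dynamicAllocation.enabled
   * @param liveExecutorCount        what the Spark session currently reports
   *                                 (may be negative while the cluster is still starting)
   * @param configuredInstances      value of spark.executor.instances
   * @return the executor count to feed into the reducer-parallelism estimate,
   *         or -1 meaning "unknown, size the reducers from the data only"
   */
  static int effectiveExecutorCount(boolean dynamicAllocationEnabled,
                                    int liveExecutorCount,
                                    int configuredInstances) {
    if (dynamicAllocationEnabled) {
      return -1;                 // data size alone decides, as in the current code
    }
    if (liveExecutorCount > 0) {
      return liveExecutorCount;  // the cluster already reports a usable number
    }
    return configuredInstances;  // fall back to the static configuration
  }

  public static void main(String[] args) {
    // The qtest case: dynamic allocation off, cluster not fully up, 2 configured executors.
    System.out.println(effectiveExecutorCount(false, -1, 2)); // prints 2
  }
}
{code}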

One more question remains - or, after [~lirui]'s comment, we have a better option:
How to handle the misconfiguration in the tests (a config sketch for option 1 follows below):
# Change the {{spark.executor.instances}} to 1 in the hive-site.xml, or
# Change the {{yarn.scheduler.minimum-allocation-mb}} to 512, so we really will have 2 executors
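
For option 1, the change would be a single property in the test configuration, roughly like this (a sketch; the actual property block in data/conf/spark/yarn-client/hive-site.xml may be formatted differently):
{code:title=hive-site.xml (sketch, option 1)}
<property>
  <name>spark.executor.instances</name>
  <value>1</value>
</property>
{code}
Option 2 is the {{yarn.scheduler.minimum-allocation-mb}} override already sketched above.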

Thanks for your help [~lirui], [~xuefuz]!

Peter


> Qtest results show wrong number of executors
> --------------------------------------------
>
>                 Key: HIVE-17270
>                 URL: https://issues.apache.org/jira/browse/HIVE-17270
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 3.0.0
>            Reporter: Peter Vary
>            Assignee: Peter Vary
>
> The hive-site.xml shows that the TestMiniSparkOnYarnCliDriver uses 2 cores and 2 executor instances to run the queries. See: https://github.com/apache/hive/blob/master/data/conf/spark/yarn-client/hive-site.xml#L233
> When reading the log files for the query tests, I see the following:
> {code}
> 2017-08-08T07:41:03,315  INFO [0381325d-2c8c-46fb-ab51-423defaddd84 main] session.SparkSession: Spark cluster current has executors: 1, total cores: 2, memory per executor: 512M, memoryFraction: 0.4
> See: http://104.198.109.242/logs/PreCommit-HIVE-Build-6299/succeeded/171-TestMiniSparkOnYarnCliDriver-insert_overwrite_directory2.q-scriptfile1.q-vector_outer_join0.q-and-17-more/logs/hive.log
> When running the tests against a real cluster, I found that the first time I run an explain query I see 1 executor, but the second time I see 2 executors.
> Also, setting some Spark configuration on the cluster resets this behavior: the first time I will see 1 executor, and the second time I will see 2 executors again.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
