ambari-dev mailing list archives

From "Jeff Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (AMBARI-14767) SparkThriftServer may fail to start when the default memory requirement exceed yarn.scheduler.maximum-allocation-mb
Date Fri, 22 Jan 2016 00:12:40 GMT

     [ https://issues.apache.org/jira/browse/AMBARI-14767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeff Zhang updated AMBARI-14767:
--------------------------------
    Summary: SparkThriftServer may fail to start when the default memory requirement exceed yarn.scheduler.maximum-allocation-mb  (was: SparkThriftServer may fails to start when the default memory requirement exceed yarn.scheduler.maximum-allocation-mb)

> SparkThriftServer may fail to start when the default memory requirement exceed yarn.scheduler.maximum-allocation-mb

> --------------------------------------------------------------------------------------------------------------------
>
>                 Key: AMBARI-14767
>                 URL: https://issues.apache.org/jira/browse/AMBARI-14767
>             Project: Ambari
>          Issue Type: Improvement
>            Reporter: Jeff Zhang
>            Assignee: Jeff Zhang
>
> {noformat}
> 16/01/22 00:07:33 ERROR SparkContext: Error initializing SparkContext.
> java.lang.IllegalArgumentException: Required executor memory (1024+384 MB) is above the max threshold (1024 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
>         at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:283)
>         at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:139)
>         at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
>         at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
>         at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:56)
>         at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:76)
>         at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> {noformat}
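
The exception above comes from Spark's cluster-resource check (verifyClusterResources in Client.scala, visible in the stack trace): the requested executor memory plus its overhead must fit inside yarn.scheduler.maximum-allocation-mb. The arithmetic behind the "(1024+384 MB)" in the log can be sketched as follows; this is an illustrative Python sketch, assuming the Spark 1.x default overhead of max(384 MB, 10% of executor memory), with all function names invented here:

```python
# Sketch of the memory check that produces the error above (illustrative,
# not Spark's actual code). Assumes the Spark 1.x overhead default of
# max(384 MB, 0.10 * executor memory).
MIN_OVERHEAD_MB = 384
OVERHEAD_FRACTION = 0.10

def required_executor_memory_mb(executor_memory_mb):
    """Executor heap plus the YARN container overhead Spark adds on top."""
    overhead = max(MIN_OVERHEAD_MB, int(OVERHEAD_FRACTION * executor_memory_mb))
    return executor_memory_mb + overhead

def fits_in_yarn(executor_memory_mb, yarn_max_allocation_mb):
    """True if one executor container fits under yarn.scheduler.maximum-allocation-mb."""
    return required_executor_memory_mb(executor_memory_mb) <= yarn_max_allocation_mb

# The cluster in the log: 1024 MB executors against a 1024 MB YARN max allocation.
print(required_executor_memory_mb(1024))   # 1408 MB requested (1024 + 384)
print(fits_in_yarn(1024, 1024))            # False -> SparkContext fails to start
```

So with the stock 1024 MB executor request, SparkThriftServer cannot start unless yarn.scheduler.maximum-allocation-mb is at least 1408 MB or the executor memory is lowered accordingly.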



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
