hive-issues mailing list archives

From "JoneZhang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HIVE-12649) Hive on Spark will resubmit application when there are not enough resources to launch yarn application master
Date Fri, 11 Dec 2015 04:04:10 GMT

     [ https://issues.apache.org/jira/browse/HIVE-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

JoneZhang updated HIVE-12649:
-----------------------------
    Description: 
Hive on Spark will estimate the reducer count when a query does not set the number of reducers, which causes an application to be submitted. The application will remain pending if the YARN queue's resources are insufficient.
So there can be more than one pending application, probably because there is more than one estimate call. The failure is soft, so it doesn't prevent subsequent processing. We can make that a hard failure.

The relevant code is at:
at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:112)
at org.apache.hadoop.hive.ql.optimizer.spark.SetSparkReducerParallelism.process(SetSparkReducerParallelism.java:115)
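The soft-versus-hard failure distinction described above can be sketched as follows. This is a minimal illustration of the pattern, not Hive's actual code: the SessionPool class and both method names are hypothetical, and a plain Object stands in for the Spark session.

```java
// Hypothetical sketch (not Hive's real classes): contrasts a "soft"
// session-acquisition failure, which each caller silently retries
// (potentially submitting another pending YARN application per
// reducer-parallelism estimate), with a "hard" failure that aborts
// the query on the first acquisition error.
public class SessionPool {
    private Object session; // stands in for a Spark session

    // Soft failure: the error is swallowed and null is returned, so
    // every estimate call retries and may queue another application.
    public Object getSessionSoft() {
        if (session == null) {
            try {
                session = acquire();
            } catch (RuntimeException e) {
                return null; // error hidden from the caller
            }
        }
        return session;
    }

    // Hard failure: the error propagates, so the query fails once
    // instead of leaving multiple applications pending in the queue.
    public Object getSessionHard() {
        if (session == null) {
            session = acquire(); // any exception aborts the caller
        }
        return session;
    }

    // Simulates a submission that cannot start because the YARN
    // queue has insufficient resources.
    private Object acquire() {
        throw new RuntimeException("insufficient resources in YARN queue");
    }
}
```

With the soft variant, repeated estimate calls each return null and retry; with the hard variant, the first failed acquisition surfaces immediately, which is what the proposal above asks for.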


  was:
Hive on Spark will estimate the reducer count when a query does not set the number of reducers, which causes an application to be submitted. The application will remain pending if the YARN queue's resources are insufficient.
So there can be more than one pending application, probably because there is more than one estimate call. The failure is soft, so it doesn't prevent subsequent processing. We can make that a hard failure.

The relevant code is at:
at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:112)
at org.apache.hadoop.hive.ql.optimizer.spark.SetSparkReducerParallelism.process(SetSparkReducerParallelism.java:115)



> Hive on Spark will resubmit application when there are not enough resources to launch yarn application master
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-12649
>                 URL: https://issues.apache.org/jira/browse/HIVE-12649
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 1.1.1, 1.2.1
>            Reporter: JoneZhang
>            Assignee: Xuefu Zhang
>
> Hive on Spark will estimate the reducer count when a query does not set the number of reducers, which causes an application to be submitted. The application will remain pending if the YARN queue's resources are insufficient.
> So there can be more than one pending application, probably because there is more than one estimate call. The failure is soft, so it doesn't prevent subsequent processing. We can make that a hard failure.
> The relevant code is at:
> at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:112)
> at org.apache.hadoop.hive.ql.optimizer.spark.SetSparkReducerParallelism.process(SetSparkReducerParallelism.java:115)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
