ambari-dev mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (AMBARI-10963) Change default for hive conditional task size to 52428800
Date Wed, 06 May 2015 19:49:00 GMT

    [ https://issues.apache.org/jira/browse/AMBARI-10963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531279#comment-14531279 ]

Hudson commented on AMBARI-10963:
---------------------------------

SUCCESS: Integrated in Ambari-trunk-Commit #2531 (See [https://builds.apache.org/job/Ambari-trunk-Commit/2531/])
AMBARI-10963. Change default for hive conditional task size to 52428800 (smohanty: http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=6182ef3c86ba30618f2653116a8cfc9e82b8954c)
* ambari-server/src/main/resources/stacks/HDP/2.2/services/HIVE/configuration/hive-site.xml.orig
* ambari-server/src/main/resources/stacks/HDP/2.2/services/HIVE/configuration/hive-site.xml


> Change default for hive conditional task size to 52428800
> ---------------------------------------------------------
>
>                 Key: AMBARI-10963
>                 URL: https://issues.apache.org/jira/browse/AMBARI-10963
>             Project: Ambari
>          Issue Type: Bug
>          Components: stacks
>    Affects Versions: 2.1.0
>            Reporter: Sumit Mohanty
>            Assignee: Sumit Mohanty
>             Fix For: 2.1.0
>
>         Attachments: AMBARI-10963.patch
>
>
> Hive queries fail with an OOM error in MR mode.
> Noticed the following error while running several join queries in MR mode:
> {noformat}
>  FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: GC overhead limit exceeded
> 	at org.apache.hadoop.hive.serde2.typeinfo.HiveDecimalUtils.enforcePrecisionScale(HiveDecimalUtils.java:59)
> 	at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.enforcePrecisionScale(WritableHiveDecimalObjectInspector.java:105)
> 	at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveWritableObject(WritableHiveDecimalObjectInspector.java:41)
> 	at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveWritableObject(WritableHiveDecimalObjectInspector.java:26)
> 	at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:305)
> 	at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:340)
> 	at org.apache.hadoop.hive.ql.exec.persistence.MapJoinEagerRowContainer.read(MapJoinEagerRowContainer.java:129)
> 	at org.apache.hadoop.hive.ql.exec.persistence.MapJoinEagerRowContainer.read(MapJoinEagerRowContainer.java:122)
> 	at org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:79)
> 	at org.apache.hadoop.hive.ql.exec.mr.HashTableLoader.load(HashTableLoader.java:98)
> 	at org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:190)
> 	at org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:216)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
> 	at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
> 	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:176)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> 	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> 	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> {noformat}
> This is definitely a configuration issue: hive.auto.convert.join.noconditionaltask.size is ~1 GB while the mapper container size is only 768 MB, so the map-join hash table can never fit in the container (see the sizing sketch after the configuration excerpts below).
> From mapred-site.xml
> {code}
>     <property>
>       <name>mapreduce.map.memory.mb</name>
>       <value>768</value>
>     </property>
>     <property>
>       <name>mapreduce.reduce.memory.mb</name>
>       <value>1536</value>
>     </property>
> {code}    
> From hive-site.xml
> {code}
>     <property>
>       <name>hive.auto.convert.join.noconditionaltask.size</name>
>       <value>1000000000</value>
>     </property>
> {code}    
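> A back-of-the-envelope check makes the mismatch concrete (a sketch added here for illustration; the two values come straight from the excerpts above):
> {code}
> # Illustrative sizing check: a converted map join must materialize its
> # hash table inside the map task's container, so the no-conditional-task
> # threshold has to fit within mapreduce.map.memory.mb with headroom.
> MB = 1024 * 1024
>
> map_container_mb = 768       # mapreduce.map.memory.mb
> old_threshold = 1000000000   # hive.auto.convert.join.noconditionaltask.size (bytes)
> new_threshold = 52428800     # proposed default (bytes)
>
> print(old_threshold / MB)    # ~953.7 MB -- exceeds the entire 768 MB container
> print(new_threshold / MB)    # 50.0 MB -- leaves ample headroom in the container
> {code}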
> Please modify hive-site.xml with 
> {code}
>     <property>
>       <name>hive.auto.convert.join.noconditionaltask.size</name>
>       <value>52428800</value>
>     </property>
> {code}    
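> As an interim, per-session workaround until the new stack default ships, the same value can be set with Hive's standard SET command before running the affected joins, e.g. {{SET hive.auto.convert.join.noconditionaltask.size=52428800;}}.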



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
