hive-dev mailing list archives

From "Tony Murphy (JIRA)" <>
Subject [jira] [Created] (HIVE-4745) java.lang.RuntimeException: Hive Runtime Error while closing operators
Date Mon, 17 Jun 2013 16:52:20 GMT
Tony Murphy created HIVE-4745:

             Summary: java.lang.RuntimeException: Hive Runtime Error while closing operators
                 Key: HIVE-4745
             Project: Hive
          Issue Type: Sub-task
    Affects Versions: vectorization-branch
            Reporter: Tony Murphy
             Fix For: vectorization-branch

       (SUM(L_QUANTITY) + -1.30000000000000000000E+000),
       (-2.20000000000000020000E+000 % (SUM(L_QUANTITY) + -1.30000000000000000000E+000)),
FROM   lineitem_orc

Executed over the TPC-H lineitem table at a 1 GB scale factor.
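The two expressions above are excerpted from a larger SELECT list; a hypothetical minimal form of the failing query (the SELECT keyword and statement terminator are restored for illustration, constants kept verbatim) might look like:

```sql
-- Hypothetical minimal reproduction; the original report shows only
-- two expressions of a larger SELECT list over lineitem_orc.
SELECT SUM(L_QUANTITY) + -1.30000000000000000000E+000,
       -2.20000000000000020000E+000 % (SUM(L_QUANTITY) + -1.30000000000000000000E+000)
FROM   lineitem_orc;
```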

13/06/15 11:19:17 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local
no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you
are connecting to a remote metastore.
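If a remote metastore is actually in use, the warning asks for hive.metastore.uris to be set; a sketch (host and port are placeholders, and this is normally configured in hive-site.xml rather than per-session) would be:

```sql
-- Placeholder host/port; usually configured in hive-site.xml.
set hive.metastore.uris=thrift://metastore-host:9083;
```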

Logging initialized using configuration in file:/C:/Hadoop/hive-0.9.0/conf/
Hive history file=c:\hadoop\hive-0.9.0\logs\history/hive_job_log_jenkinsuser_5292@SLAVE23-WIN_201306151119_1652846565.txt
Total MapReduce jobs = 1

Launching Job 1 out of 1

Number of reduce tasks determined at compile time: 1

In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
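Following the hints printed above, the reducer count for a session could be pinned or tuned like this (values are illustrative, not from the report):

```sql
-- Illustrative values for the knobs named in the log above.
set mapred.reduce.tasks=1;                            -- fix a constant reducer count
set hive.exec.reducers.bytes.per.reducer=1000000000;  -- or tune average load per reducer
```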

Starting Job = job_201306142329_0098, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0098
Kill Command = c:\Hadoop\hadoop-1.1.0-SNAPSHOT\bin\hadoop.cmd job  -kill job_201306142329_0098
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-06-15 11:19:47,490 Stage-1 map = 0%,  reduce = 0%
2013-06-15 11:20:29,801 Stage-1 map = 76%,  reduce = 0%
2013-06-15 11:20:32,849 Stage-1 map = 0%,  reduce = 0%
2013-06-15 11:20:35,880 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201306142329_0098 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0098
Examining task ID: task_201306142329_0098_m_000002 (and more) from job job_201306142329_0098

Task with the most failures(4): 
Task ID:

Diagnostic Messages for this Task:
java.lang.RuntimeException: Hive Runtime Error while closing operators
	at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(
	at org.apache.hadoop.mapred.MapTask.runOldMapper(
	at org.apache.hadoop.mapred.Child$
	at Method)
	at org.apache.hadoop.mapred.Child.main(
Caused by: java.lang.ClassCastException: cannot be cast
	at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableDoubleObjectInspector.get(
	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(
	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serializeStruct(
	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(
	at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(
	at org.apache.hadoop.hive.ql.exec.Operator.process(
	at org.apache.hadoop.hive.ql.exec.Operator.forward(
	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.flush(
	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.closeOp(
	at org.apache.hadoop.hive.ql.exec.Operator.close(
	at org.apache.hadoop.hive.ql.exec.Operator.close(
	at org.apache.hadoop.hive.ql.exec.Operator.close(
	at org.apache.hadoop.hive.ql.exec.Operator.close(
	at org.apache.hadoop.hive.ql.exec.Operator.close(
	at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(
	... 8 more

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched: 
Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec


This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators