hive-user mailing list archives

From Stephen Watt <>
Subject "Select count(1) from Table" Failing with class cast exception
Date Fri, 16 Jul 2010 16:17:11 GMT
Hi Folks

This issue occurs on Hive 0.4 and 0.5. I wanted to wait on opening a JIRA 
ticket until I ran it by the community first.

I'm testing Hive 0.5 running on Apache Hadoop 0.20.2, using IBM Java 6
(32-bit x86, Java SR8), which can be obtained here -

To recreate this, I'm using the pokes table loaded with data from the
examples directory, per the tutorial, and I run the following in the Hive
CLI (bin/hive): select count(1) from pokes;
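
For anyone who wants the full repro, this is essentially the sequence from
the Getting Started guide (the file path is from memory, so adjust it to
wherever your Hive checkout lives):

    CREATE TABLE pokes (foo INT, bar STRING);
    LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;
    SELECT count(1) FROM pokes;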

This works just fine on Sun/Oracle Java 6, but when I change hadoop-env to
point to IBM Java 6, it fails in the map task with the following exception:

Caused by: java.lang.ClassCastException: 
incompatible with 
        ... 14 more

Note: the line number in GenericUDAFCount here is off by 4 because of a
couple of calls I added for debugging purposes. The net of it is that it is
failing when it attempts to do the cast in the merge() method.
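
For context, here is roughly what the evaluator's merge() looks like in the
0.5 source as I read it (paraphrased from memory, so the exact field and
class names may differ slightly):

    // GenericUDAFCount.GenericUDAFCountEvaluator.merge() - paraphrased,
    // not the exact 0.5 source
    @Override
    public void merge(AggregationBuffer agg, Object partial) throws HiveException {
      if (partial != null) {
        // inputOI is the ObjectInspector saved in init() from parameters[0].
        // The cast assumes the partial result is always a bigint, which is
        // where the ClassCastException surfaces when merge() runs in the map.
        long p = ((LongObjectInspector) inputOI).get(partial);
        ((CountAgg) agg).value += p;
      }
    }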

This is where it gets weird. In Sun Java, this method gets called in the
reducer. In IBM Java, it gets called in the mapper. If I use EXPLAIN in the
Hive CLI, the execution plans are identical regardless of which JRE Hadoop
is using. In Sun Java, the type for inputOI is a bigint, derived from a
single-column schema called _col0 in the reducer (likely the output tuple
of the count result), and it casts to a Long with no problem. In IBM Java,
this call happens in the mapper and inputOI is derived from what appears to
be the first column of the pokes table schema, which is an int, so it fails
when cast to a Long. It appears the cast is merely symptomatic of a
difference in how the plan is actually executed.
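
To see where inputOI is coming from on each JRE, a debug line like the
following inside merge() (hypothetical - not the exact calls I added, but
equivalent) prints which ObjectInspector merge() is actually handed:

    // Hypothetical debug output showing the concrete ObjectInspector class
    // and its Hive type name each time merge() is invoked.
    System.err.println("merge() inputOI = " + inputOI.getClass().getName()
        + " / " + inputOI.getTypeName());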

Debugging from this point really requires someone who understands Hive
execution plans better than I do. Is there anyone who can help with this
issue? It is really easy to replicate: download the IBM JDK, point your
hadoop-env at the extracted IBM JDK directory, and do a select count(1)
from any table.
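
For example, in conf/hadoop-env.sh (the path below is just an example - use
wherever you extracted the IBM JDK):

    # Point Hadoop at the IBM JDK instead of the Sun JDK
    export JAVA_HOME=/opt/ibm/java-i386-60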

Steve Watt