hive-dev mailing list archives

From "Zhichun Wu (JIRA)" <>
Subject [jira] [Commented] (HIVE-6784) parquet-hive should allow column type change
Date Tue, 12 Aug 2014 14:09:12 GMT


Zhichun Wu commented on HIVE-6784:

@ [~tongjie] ,

bq. The exception raised from changing a type actually only happens to non-partitioned tables.
For partitioned tables, if there is a type change at the table level, there will be an ObjectInspectorConverter
(in parquet's case, a StructConverter) to convert types between the partition and the table.

I've tested changing a column type on a partitioned parquet table in hive 0.13 and found that
this issue also exists for partitioned tables.  After changing the type, querying old partition
data throws the following exception:
 Cannot inspect java.util.ArrayList
        at org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector.getStructFieldData(
        at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(
        at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(
        at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(
        at org.apache.hadoop.hive.ql.exec.MapOperator.process(
        at org.apache.hadoop.mapred.MapTask.runOldMapper(
        at org.apache.hadoop.mapred.YarnChild$
        at Method)
        at org.apache.hadoop.mapred.YarnChild.main(
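The ObjectInspectorConverter mechanism mentioned above can be illustrated with a minimal, self-contained sketch. Note this is a hypothetical stand-in, not the real Hive API: the class and method names below are modeled loosely on Hive's ObjectInspectorConverters, and plain Java types stand in for Hive's writables. The int-to-long case mirrors the int-to-bigint column change discussed in this issue.

```java
import java.util.function.Function;

// Hypothetical sketch of the converter idea: obtain a converter from the
// partition-level type to the table-level type, applying it per value.
// NOT the real Hive ObjectInspectorConverters API.
public class ConverterSketch {
    public static Function<Object, Object> getConverter(Class<?> from, Class<?> to) {
        if (from == Integer.class && to == Long.class) {
            // int column now declared bigint at the table level: widen per value
            return v -> v == null ? null : (Object) ((Integer) v).longValue();
        }
        if (from == to) {
            return v -> v; // schemas already match: identity, no allocation
        }
        // incompatible types yield null, matching Hive's cast behavior
        return v -> null;
    }

    public static void main(String[] args) {
        Object converted = getConverter(Integer.class, Long.class).apply(42);
        System.out.println(converted);                  // the widened value
        System.out.println(converted instanceof Long);  // true: table-level type
    }
}
```

The point of the sketch is that the conversion is resolved once per (partition type, table type) pair, then applied cheaply per record, which is why partitioned tables were expected to tolerate the type change.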

> parquet-hive should allow column type change
> --------------------------------------------
>                 Key: HIVE-6784
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>          Components: File Formats, Serializers/Deserializers
>    Affects Versions: 0.13.0
>            Reporter: Tongjie Chen
>             Fix For: 0.14.0
>         Attachments: HIVE-6784.1.patch.txt, HIVE-6784.2.patch.txt
> see also in the following parquet issue:
> Currently, if we change a parquet format hive table using "alter table parquet_table change
c1 c1 bigint" (assuming the original type of c1 is int), it will result in an exception thrown
from the SerDe at query runtime: "cannot be cast to".
> This is different from Hive's behavior with other file formats, where it will try to
perform a cast (producing a null value in case of an incompatible type).
> Parquet Hive's RecordReader returns an ArrayWritable (based on the schema stored in the footers
of the parquet files); ParquetHiveSerDe also creates a corresponding ArrayWritableObjectInspector
(but using column type info from the metastore). Whenever there is a column type change, the object
inspector will throw an exception, since a WritableLongObjectInspector cannot inspect an IntWritable.
> Conversion has to happen somewhere if we want to allow type changes. SerDe's deserialize
method seems a natural place for it.
> Currently, the serialize method calls createStruct (and then createPrimitive) for every record,
but it creates a new object regardless, which seems expensive. I think that could be optimized
a bit by simply returning the passed object if it is already of the right type. deserialize also
reuses this method; if there is a type change, a new object has to be created, which I think
is inevitable. 
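The reuse-if-already-the-right-type optimization suggested in the description can be sketched as follows. This is a self-contained illustration, not the actual ParquetHiveSerDe code: LongHolder is a hypothetical stand-in for Hive's LongWritable, and createPrimitive here only mimics the shape of the real method.

```java
// Hypothetical sketch of the proposed optimization: only allocate a new
// object when the stored type differs from the table-level type; otherwise
// return the object that was passed in. Names are stand-ins, not Hive code.
public class ReuseSketch {
    public static final class LongHolder {
        public final long value;
        public LongHolder(long v) { value = v; }
    }

    public static Object createPrimitive(Object stored) {
        if (stored instanceof LongHolder) {
            return stored;  // already the right type: reuse, no allocation
        }
        if (stored instanceof Integer) {
            // int column read under a bigint table schema: convert (new object
            // is unavoidable here, as the description notes)
            return new LongHolder((Integer) stored);
        }
        return null;        // incompatible type: behave like a cast to null
    }

    public static void main(String[] args) {
        LongHolder h = new LongHolder(7L);
        System.out.println(createPrimitive(h) == h);  // true: same object reused
    }
}
```

Under this sketch, serialize's per-record calls become cheap identity checks when no type change is in effect, and allocation only happens on the actual conversion path.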

This message was sent by Atlassian JIRA
