hive-dev mailing list archives

From "Zhichun Wu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-6784) parquet-hive should allow column type change
Date Tue, 12 Aug 2014 14:09:12 GMT

    [ https://issues.apache.org/jira/browse/HIVE-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094075#comment-14094075 ]

Zhichun Wu commented on HIVE-6784:
----------------------------------

[~tongjie],

bq. The exception raised from changing type actually only happens to non-partitioned tables.
For partitioned tables, if there is type change in table level, there will be an ObjectInspectorConverter
(in parquet's case — StructConverter) to convert type between partition and table. 

I've tested changing a column type on a partitioned Parquet table in Hive 0.13 and found
that this issue also exists for partitioned tables. After the type change, querying old
partition data throws the following exception:
{code}
 Cannot inspect java.util.ArrayList
        at org.apache.hadoop.hive.ql.io.parquet.serde.ArrayWritableObjectInspector.getStructFieldData(ArrayWritableObjectInspector.java:139)
        at org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector.getStructFieldData(UnionStructObjectInspector.java:135)
        at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:349)
        at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:193)
        at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:179)
        at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:545)
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
{code}
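
For context, the "Cannot inspect java.util.ArrayList" above appears to come from a type check along the following lines. This is a minimal paraphrase, not the exact Hive source: the Parquet object inspector only knows how to unpack ArrayWritable rows, while the partition-to-table converter (the StructConverter mentioned above) presumably hands it a plain java.util.ArrayList:
{code}
import java.util.ArrayList;

import org.apache.hadoop.io.ArrayWritable;

// Minimal paraphrase of the check that fails in
// ArrayWritableObjectInspector.getStructFieldData (not the exact source).
public class CannotInspectDemo {
  static Object getField(Object row, int index) {
    if (row instanceof ArrayWritable) {
      return ((ArrayWritable) row).get()[index];
    }
    // This is the path the stack trace above hits.
    throw new UnsupportedOperationException(
        "Cannot inspect " + row.getClass().getCanonicalName());
  }

  public static void main(String[] args) {
    Object convertedRow = new ArrayList<Object>(); // what a struct converter emits
    getField(convertedRow, 0); // -> "Cannot inspect java.util.ArrayList"
  }
}
{code}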



> parquet-hive should allow column type change
> --------------------------------------------
>
>                 Key: HIVE-6784
>                 URL: https://issues.apache.org/jira/browse/HIVE-6784
>             Project: Hive
>          Issue Type: Bug
>          Components: File Formats, Serializers/Deserializers
>    Affects Versions: 0.13.0
>            Reporter: Tongjie Chen
>             Fix For: 0.14.0
>
>         Attachments: HIVE-6784.1.patch.txt, HIVE-6784.2.patch.txt
>
>
> See also the following parquet-mr issue:
> https://github.com/Parquet/parquet-mr/issues/323
> Currently, if we change the column type of a Parquet-format Hive table using
> "alter table parquet_table change c1 c1 bigint" (assuming the original type of c1 is int),
> queries fail at runtime with an exception thrown from the SerDe:
> "org.apache.hadoop.io.IntWritable cannot be cast to org.apache.hadoop.io.LongWritable".
> This is different from Hive's behavior with other file formats, where it tries to
> perform a cast instead (producing a null value in case of an incompatible type).
> Parquet Hive's RecordReader returns an ArrayWritable (based on the schema stored in the
> footers of the Parquet files); ParquetHiveSerDe also creates a corresponding
> ArrayWritableObjectInspector (but using column type info from the metastore). Whenever
> there is a column type change, the object inspector throws an exception, since
> WritableLongObjectInspector cannot inspect an IntWritable, etc. (see the first sketch
> after this description).
> Conversion has to happen somewhere if we want to allow type change, and the SerDe's
> deserialize method seems a natural place for it (a converter sketch follows below).
> Currently, the serialize method calls createStruct (and then createPrimitive) for every
> record, but it creates a new object regardless, which seems expensive. That could be
> optimized a bit by simply returning the passed-in object when it is already of the right
> type (see the last sketch below). deserialize also reuses this method; if there is a type
> change, a new object has to be created, which I think is inevitable.
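
To make the cast failure described above concrete, here is a minimal, self-contained demo (class name and setup are mine, not from the patch): the metastore now says bigint, so Hive builds a long inspector, but the old Parquet file still yields IntWritable cells:
{code}
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableLongObjectInspector;
import org.apache.hadoop.io.IntWritable;

public class InspectorMismatchDemo {
  public static void main(String[] args) {
    // The table schema now says bigint, so Hive uses a long inspector...
    WritableLongObjectInspector longOI =
        PrimitiveObjectInspectorFactory.writableLongObjectInspector;
    // ...but the Parquet footer still says int, so the reader produces IntWritable.
    Object cell = new IntWritable(42);
    // Throws "IntWritable cannot be cast to LongWritable" at query runtime.
    System.out.println(longOI.get(cell));
  }
}
{code}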
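
If conversion were done in deserialize, Hive's existing converter framework could handle the primitive cases. A sketch of the int-to-bigint case, assuming the standard ObjectInspectorConverters API (this is not the attached patch):
{code}
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.io.IntWritable;

public class ConvertOnReadSketch {
  public static void main(String[] args) {
    // What the old file actually contains (int) vs. what the table now declares (bigint).
    ObjectInspector fileOI = PrimitiveObjectInspectorFactory.writableIntObjectInspector;
    ObjectInspector tableOI = PrimitiveObjectInspectorFactory.writableLongObjectInspector;

    ObjectInspectorConverters.Converter conv =
        ObjectInspectorConverters.getConverter(fileOI, tableOI);

    // IntWritable(42) from the old data becomes a LongWritable(42) for the new schema.
    System.out.println(conv.convert(new IntWritable(42)));
  }
}
{code}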
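
And the createPrimitive optimization suggested above might look roughly like this hypothetical helper (name and shape are illustrative only): reuse the input when it already has the target type, and allocate only on an actual type change:
{code}
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Writable;

// Hypothetical helper in the spirit of ParquetHiveSerDe.createPrimitive,
// specialized here for a bigint target column.
public class CreatePrimitiveSketch {
  static Writable toLongColumn(Object obj) {
    if (obj == null) {
      return null;
    }
    if (obj instanceof LongWritable) {
      return (LongWritable) obj; // already the right type: no new object
    }
    if (obj instanceof IntWritable) {
      return new LongWritable(((IntWritable) obj).get()); // int -> bigint: widen
    }
    return null; // incompatible type: null, like other file formats
  }
}
{code}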



--
This message was sent by Atlassian JIRA
(v6.2#6252)
