hive-dev mailing list archives

From Sergio Pena <sergio.p...@cloudera.com>
Subject Re: Schema evolution for parquet table
Date Wed, 07 Oct 2015 15:58:14 GMT
Hi Mohammad,

Currently, Hive + Parquet does not support auto casting to wider types.
That would be a very good idea to implement in Hive.
I'll investigate the Hive + Parquet code and see if it is something we can
add in a future release.
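In the meantime, one workaround is to rewrite the old files so that every
Parquet file on disk stores the column with the wider type. A minimal
sketch in HiveQL, assuming a table named events whose id column is still
declared as int (both names are hypothetical):

    -- events still declares id as int, so its int-written files read fine.
    -- Copy into a bigint table: the engine widens the value during the
    -- cast, and the writer produces INT64 Parquet files.
    CREATE TABLE events_bigint (id BIGINT) STORED AS PARQUET;
    INSERT OVERWRITE TABLE events_bigint
    SELECT CAST(id AS BIGINT) FROM events;

Once the copy finishes, events_bigint can be queried (or renamed into
place) without hitting the IntWritable/LongWritable mismatch.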

- Sergio


On Tue, Oct 6, 2015 at 7:23 PM, Mohammad Islam <mislam77@yahoo.com.invalid>
wrote:

>
> Could any Hive + Parquet user/dev address this?
>
>
> Regards,
> Mohammad
>
> On Monday, October 5, 2015 3:41 PM, Mohammad Islam <mislam77@yahoo.com>
> wrote:
>
>
>
> Hi,
> Does a Parquet table support auto casting to wider data types? For
> example, suppose I have a Parquet table where some of the Parquet data
> files have "int" as the data type and other files have "long" for the
> same field, while the table schema declares that field as "bigint".
> Can Hive read the files that were written with type "int"?
>
> I got this exception "Failed with exception
> java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException:
> java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be
> cast to org.apache.hadoop.io.LongWritable".
>
> Regards,
> Mohammad
>
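The scenario quoted above can be reproduced with a short sequence like the
following (a sketch with hypothetical table and column names; INSERT ...
VALUES requires Hive 0.14 or later):

    -- Write Parquet files while the column is declared int (Parquet INT32).
    CREATE TABLE events (id INT) STORED AS PARQUET;
    INSERT INTO TABLE events VALUES (1), (2);

    -- Widen the declared type; this changes only the metastore schema,
    -- the existing files still store INT32.
    ALTER TABLE events CHANGE id id BIGINT;

    -- Reading now fails, because the reader hands back an IntWritable
    -- where the bigint schema expects a LongWritable:
    SELECT id FROM events;
    -- java.lang.ClassCastException: org.apache.hadoop.io.IntWritable
    -- cannot be cast to org.apache.hadoop.io.LongWritable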
