hive-dev mailing list archives

From Mohammad Islam <misla...@yahoo.com.INVALID>
Subject Re: Schema evolution for parquet table
Date Thu, 08 Oct 2015 02:37:32 GMT


Hi Sergio,
Thanks for your reply.

I found one such effort : 
https://issues.apache.org/jira/browse/HIVE-6784

I'm considering trying a different approach.
A similar effort exists for ORC:
https://issues.apache.org/jira/browse/HIVE-10591


Regards,
Mohammad


On Wednesday, October 7, 2015 8:58 AM, Sergio Pena <sergio.pena@cloudera.com> wrote:
Hi Mohammad,

Currently, Hive + Parquet does not support auto casting for wider types.
That would be a very good idea to implement in Hive.
I'll investigate the hive + parquet code, and see if it is something we can
add in a future release.

- Sergio
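For readers following along, the auto-casting behavior Sergio describes as missing can be sketched in a few lines of Python. This is only an illustration of the widening idea, not Hive's actual reader code; the function name and the widening table are assumptions made for the example:

```python
# Illustrative sketch (NOT Hive's implementation): widen a primitive value
# read from a Parquet file to the wider type declared in the table schema,
# instead of handing back the narrower container unconverted.

# Lossless widenings a reader could safely apply (assumed mapping).
WIDENINGS = {
    ("int", "bigint"): int,      # int32 -> int64
    ("float", "double"): float,  # float32 -> float64
}

def read_value(file_type, table_type, raw):
    """Convert a raw value written as file_type into table_type."""
    if file_type == table_type:
        return raw
    widen = WIDENINGS.get((file_type, table_type))
    if widen is None:
        # Roughly where Hive raised its ClassCastException: the reader
        # produced an IntWritable where a LongWritable was expected,
        # with no conversion step in between.
        raise TypeError(f"cannot read {file_type} column as {table_type}")
    return widen(raw)
```

With such a step in place, an int32 value from an old file would be returned as a 64-bit value when the table column is bigint, while an unsupported narrowing would still fail explicitly.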



On Tue, Oct 6, 2015 at 7:23 PM, Mohammad Islam <mislam77@yahoo.com.invalid>
wrote:

>
> Any hive+parquet user/dev to address this?
>
>
> Regards,
> Mohammad
>
> On Monday, October 5, 2015 3:41 PM, Mohammad Islam <mislam77@yahoo.com>
> wrote:
>
>
>
> Hi,
> Does the parquet table support auto casting to wider data types? For
> example, if I have a parquet table where some parquet data files have
> "int" as the data type and other files have "long" for the same
> field.
>
> The table schema has type "bigint" for that field.
> Can Hive read the files that were written with type "int"?
>
> I got this exception "Failed with exception
> java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException:
> java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be
> cast to org.apache.hadoop.io.LongWritable".
>
> Regards,
> Mohammad
>
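The failure described above can be sketched as a minimal reproduction in HiveQL (table and column names are illustrative, not from the thread):

```sql
-- Hypothetical reproduction (names are illustrative):
-- write data while the column is INT, then widen it to BIGINT.
CREATE TABLE events (id INT) STORED AS PARQUET;
INSERT INTO events VALUES (1);

-- Existing Parquet files still store id as int32; only the
-- metastore schema changes.
ALTER TABLE events CHANGE id id BIGINT;

-- Reading the old int32 files through the bigint table schema
-- triggers the reported error:
--   ClassCastException: org.apache.hadoop.io.IntWritable cannot be
--   cast to org.apache.hadoop.io.LongWritable
SELECT id FROM events;
```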
