hive-dev mailing list archives

From "David Chen (JIRA)" <>
Subject [jira] [Commented] (HIVE-4329) HCatalog should use getHiveRecordWriter rather than getRecordWriter
Date Thu, 31 Jul 2014 01:06:53 GMT


David Chen commented on HIVE-4329:

Some notes about this patch:

 * {{\*OutputFormatContainer}} classes now wrap a {{HiveOutputFormat}} rather than a mapred
{{OutputFormat}}.
 * {{\*RecordWriterContainer}} classes now wrap a {{FileSinkOperator.RecordWriter}} rather
than a mapred {{RecordWriter}} (a sketch of this wrapping follows the list).
 * {{InternalUtil.initializeOutputSerDe}} and {{InternalUtil.initializeDeserializer}} now
take the properties from the {{TableDesc}} created from the table contained in {{HCatTableInfo}}
rather than creating the properties manually. As a result, {{InternalUtil.setSerDeProperties}}
has been removed.
 * Fixed a {{NullPointerException}} in {{AvroSerDe.initialize}} that occurs if {{columnCommentProperty}}
is null (a sketch of the guard also follows the list).
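
A minimal sketch of the wrapping mentioned in the second bullet, assuming the container simply delegates to the Hive writer (class and method names here are illustrative, not the exact patch):

{code}
// Sketch: a record-writer container holding a FileSinkOperator.RecordWriter
// obtained via HiveOutputFormat.getHiveRecordWriter. The Hive writer's
// write(Writable) takes no key, so no format-specific key type is involved.
import java.io.IOException;
import org.apache.hadoop.hive.ql.exec.FileSinkOperator;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Writable;

public class SketchRecordWriterContainer {
  private final FileSinkOperator.RecordWriter hiveWriter;

  public SketchRecordWriterContainer(FileSinkOperator.RecordWriter hiveWriter) {
    this.hiveWriter = hiveWriter;
  }

  public void write(NullWritable key, Writable value) throws IOException {
    hiveWriter.write(value); // the key is dropped; the wrapped writer is key-free
  }

  public void close(boolean abort) throws IOException {
    hiveWriter.close(abort);
  }
}
{code}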

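The {{NullPointerException}} fix is essentially a null guard before the comment property is split; a minimal sketch under that assumption (the property key and the "," delimiter are assumptions for illustration):

{code}
// Sketch of the guard in AvroSerDe.initialize: split the comments property
// only when it is actually present.
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Properties;

public class CommentGuardSketch {
  static List<String> columnComments(Properties properties) {
    String columnCommentProperty = properties.getProperty("columns.comments");
    if (columnCommentProperty == null || columnCommentProperty.isEmpty()) {
      return Collections.emptyList(); // previously this path threw an NPE
    }
    return Arrays.asList(columnCommentProperty.split(","));
  }
}
{code}
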
Test coverage:

 * Removed the disabled SerDe list from {{HCatMapReduceTest}} so that all {{HCatMapReduceTest}}
suites also run against {{AvroSerDe}} and {{ParquetHiveSerDe}}.

To do:

 * Fix the case where static partitioning is used.
 * Clean up code if necessary.
 * Remove diagnostic print statements.

> HCatalog should use getHiveRecordWriter rather than getRecordWriter
> -------------------------------------------------------------------
>                 Key: HIVE-4329
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>          Components: HCatalog, Serializers/Deserializers
>    Affects Versions: 0.14.0
>         Environment: discovered in Pig, but it looks like the root cause impacts all
non-Hive users
>            Reporter: Sean Busbey
>            Assignee: David Chen
>         Attachments: HIVE-4329.0.patch
> Attempting to write to an HCatalog-defined table backed by the AvroSerde fails with the
following stack trace:
> {code}
> java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.io.LongWritable
> 	at org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write(AvroContainerOutputFormat.java)
> 	at org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java)
> 	at org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java)
> 	at org.apache.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java)
> 	at org.apache.hcatalog.pig.HCatStorer.putNext(HCatStorer.java)
> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java)
> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java)
> 	at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java)
> 	at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java)
> {code}
> The proximal cause of this failure is that AvroContainerOutputFormat's signature mandates
a LongWritable key while HCat's FileRecordWriterContainer forces a NullWritable. I'm not
sure of a general fix, other than redefining HiveOutputFormat to mandate a WritableComparable.
> It looks like accepting WritableComparable is what's done in the other Hive OutputFormats,
and there's no reason AvroContainerOutputFormat couldn't also be changed, since it ignores
the key. That way, fixing FileRecordWriterContainer so it can always use NullWritable could
be spun off into a separate issue?
> The underlying cause of the failure to write to AvroSerde-backed tables is that AvroContainerOutputFormat
doesn't meaningfully implement getRecordWriter, so fixing the above would just push the failure
into the placeholder RecordWriter.
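
A condensed, self-contained illustration of the key-type mismatch described in the quoted report (a stand-in interface rather than Hive's actual classes, and it assumes hadoop-common on the classpath; the failure mechanism is the same):

{code}
// An anonymous writer typed to a LongWritable key (like the "$1" class in the
// trace) receives a NullWritable through a type-erased call site: the
// compiler-generated bridge method casts the key to LongWritable and throws.
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Writable;

public class KeyMismatchDemo {
  interface KeyedWriter<K extends Writable, V extends Writable> {
    void write(K key, V value);
  }

  public static void main(String[] args) {
    KeyedWriter<LongWritable, Writable> avroLike =
        new KeyedWriter<LongWritable, Writable>() {
          @Override
          public void write(LongWritable key, Writable value) {
            System.out.println("wrote " + value); // the key is never used
          }
        };
    @SuppressWarnings({"rawtypes", "unchecked"})
    KeyedWriter raw = avroLike; // the container holds the writer untyped
    raw.write(NullWritable.get(), new LongWritable(42)); // ClassCastException here
  }
}
{code}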

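For contrast, a hedged sketch of the direction the patch takes: obtaining the writer through {{HiveOutputFormat.getHiveRecordWriter}}, whose returned {{FileSinkOperator.RecordWriter}} takes no key at all (the JobConf, output path, and properties below are illustrative placeholders):

{code}
// Sketch: the key-free write path through getHiveRecordWriter. No key type
// appears anywhere on this path, so the ClassCastException above cannot arise.
import java.io.IOException;
import java.util.Properties;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.FileSinkOperator;
import org.apache.hadoop.hive.ql.io.HiveOutputFormat;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.JobConf;

public class HiveWriterSketch {
  static void writeOne(HiveOutputFormat<?, ?> format, Writable value) throws IOException {
    FileSinkOperator.RecordWriter writer = format.getHiveRecordWriter(
        new JobConf(),               // task configuration (placeholder)
        new Path("/tmp/sketch-out"), // target output path (placeholder)
        Text.class,                  // value class
        false,                       // isCompressed
        new Properties(),            // table/SerDe properties (placeholder)
        null);                       // no Progressable in this sketch
    writer.write(value);             // value only; there is no key
    writer.close(false);
  }
}
{code}
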
This message was sent by Atlassian JIRA
