hadoop-common-dev mailing list archives

From "Pete Wyckoff (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4065) support for reading binary data from flat files
Date Fri, 19 Sep 2008 17:36:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12632752#action_12632752 ]

Pete Wyckoff commented on HADOOP-4065:

bq. I'm still not convinced about the utility of this class outside of Hive. What is the advantage
of storing the data this way?

1. You don't need a loader.
2. Tools outside of Hadoop can use the data - python, perl, c++, ...
3. There are other file formats that are splittable and either self-describing or not. Hadoop
is generally pretty pluggable, but not at the file level. It would be nice to have generic file
interfaces one could implement to get *First Class* Hadoop treatment for any file format.

To be clear, Hive currently reads and writes binary data via sequence files only; we load all
binary data into sequence files.

bq. i really don't care - we can put this into Hive. 


This is a general FlatFileRecordReader; HADOOP-3566 seems to be a non-general version of
it (with the added limitation of being typed <String, Void>).

And note my intention is to put this in contrib/serialization
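
The "generic file interfaces" idea above might look roughly like the following sketch. The interface and method names here are illustrative, not the actual contrib/serialization API from the patch; the demo uses plain java.io streams so it runs standalone, whereas the real reader would sit behind Hadoop's RecordReader:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class FlatFileSketch {
    // Hypothetical pluggable deserializer interface: any record format can
    // plug in by saying how to decode one record from a DataInput.
    interface RecordDeserializer<T> {
        T deserialize(DataInput in) throws IOException;
    }

    // Generic reader loop: knows nothing about the record format itself.
    static <T> List<T> readRecords(InputStream raw, RecordDeserializer<T> d, int count)
            throws IOException {
        DataInputStream in = new DataInputStream(raw);
        List<T> out = new ArrayList<T>();
        for (int i = 0; i < count; i++) {
            out.add(d.deserialize(in));
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        // Write one binary record (a long), then read it back generically.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new DataOutputStream(buf).writeLong(42L);
        List<Long> records = readRecords(new ByteArrayInputStream(buf.toByteArray()),
                                         DataInput::readLong, 1);
        System.out.println(records); // prints [42]
    }
}
```

A Hive-style loader, a python consumer, or a different serialization framework would each supply their own RecordDeserializer without touching the reader loop.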

> support for reading binary data from flat files
> -----------------------------------------------
>                 Key: HADOOP-4065
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4065
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Joydeep Sen Sarma
>         Attachments: FlatFileReader.java, HADOOP-4065.0.txt, HADOOP-4065.1.txt, HADOOP-4065.1.txt,
> Like TextInputFormat - looking for a concrete implementation to read binary records from
> a flat file (which may be compressed).
> It's assumed that Hadoop can't split such a file, so the InputFormat can set splittable
> to false.
> Tricky aspects are:
> - how to know what class the file contains (it has to be in a configuration somewhere).
> - how to determine EOF. It would be nice if Hadoop could detect EOF itself rather than have
> the deserializer throw an exception (which is hard to distinguish from an exception due to
> corruption). This is easy for non-compressed streams; for compressed streams,
> DecompressorStream has a useful-looking getAvailable() call, but the class is marked
> package private.
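
One way to handle the EOF concern without relying on deserializer exceptions is to probe the stream one byte ahead before each record. This is a minimal standalone sketch using java.util stdlib streams (not Hadoop's compression codecs, where the same trick would need the decompressed stream):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.PushbackInputStream;
import java.util.ArrayList;
import java.util.List;

public class FlatFileEofDemo {
    // Read fixed-format records (here: ints) until EOF, detecting end of
    // stream by probing one byte ahead instead of catching an exception
    // from the deserializer.
    static List<Integer> readAll(byte[] data) throws IOException {
        PushbackInputStream pin = new PushbackInputStream(new ByteArrayInputStream(data));
        DataInputStream in = new DataInputStream(pin);
        List<Integer> records = new ArrayList<Integer>();
        while (true) {
            int probe = pin.read();      // probe for EOF
            if (probe == -1) {
                break;                   // clean end of stream, no exception
            }
            pin.unread(probe);           // not EOF: push the byte back
            records.add(in.readInt());   // deserialize the next record
        }
        return records;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        for (int i = 1; i <= 3; i++) {
            out.writeInt(i);
        }
        System.out.println(readAll(buf.toByteArray())); // prints [1, 2, 3]
    }
}
```

With this pattern, a truncated record mid-stream still surfaces as an EOFException from readInt, which is then genuinely a corruption signal rather than an ambiguous end-of-file.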

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
