hadoop-common-dev mailing list archives

From "Tom White (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4065) support for reading binary data from flat files
Date Thu, 18 Sep 2008 14:03:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12632217#action_12632217 ]

Tom White commented on HADOOP-4065:
-----------------------------------

A few comments:

Could the types be called FlatFileInputFormat and FlatFileRecordReader?

Is a SerializationContext class needed? The Serialization can be obtained from the SerializationFactory;
it just needs to know the base class (Writable, TBase, etc.). A second configuration parameter
is needed to specify the concrete class, but I don't see why the FlatFileDeserializerRecordReader
can't just get these two classes from the Configuration itself.
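The lookup being proposed can be sketched in plain Java. This is a Hadoop-free stand-in, not the patch itself: `Properties` plays the role of `Configuration`, the interfaces below are simplified stand-ins for `org.apache.hadoop.io.serializer`, and the property name `flatfile.record.class` is hypothetical.

```java
import java.util.Properties;

// Hadoop-free sketch: the record reader pulls the concrete record class
// straight from the configuration and asks a factory for a matching
// deserializer -- no separate SerializationContext object needed.
public class DeserializerLookupSketch {

  // Stand-in for org.apache.hadoop.io.serializer.Deserializer<T>.
  interface Deserializer<T> {
    T deserialize();
  }

  // Stand-in for SerializationFactory.getDeserializer(): here it simply
  // instantiates the concrete class reflectively.
  static <T> Deserializer<T> getDeserializer(Class<T> baseClass,
                                             Class<? extends T> concreteClass) {
    return () -> {
      try {
        return concreteClass.getDeclaredConstructor().newInstance();
      } catch (ReflectiveOperationException e) {
        throw new RuntimeException(e);
      }
    };
  }

  // Reads both pieces of type information from the configuration,
  // as the comment proposes. "flatfile.record.class" is a made-up key.
  static <T> Object readOneRecord(Properties conf, Class<T> baseClass)
      throws ClassNotFoundException {
    Class<?> concrete =
        Class.forName(conf.getProperty("flatfile.record.class"));
    return getDeserializer(baseClass, concrete.asSubclass(baseClass))
        .deserialize();
  }

  public static void main(String[] args) throws Exception {
    Properties conf = new Properties(); // stand-in for Configuration
    conf.setProperty("flatfile.record.class", "java.lang.StringBuilder");
    System.out.println(readOneRecord(conf, Object.class).getClass().getName());
    // prints java.lang.StringBuilder
  }
}
```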

Can the classes go in the org.apache.hadoop.contrib.serialization.mapred package to echo the
main mapred package? When HADOOP-1230 is done an equivalent could then go in the mapreduce
package.

I agree it would be good to have tests for Writable, Java Serialization and Thrift to exercise
the abstraction.

Shouldn't keys be file offsets, similar to TextInputFormat? The row numbers you have are actually
row numbers within the split, which might be confusing (and they're not unique per file).
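The keying scheme suggested above can be illustrated with a Hadoop-free sketch: the key for each record is its byte offset from the start of the file (as in TextInputFormat), which is unique per file, unlike a per-split row count. The fixed-length 4-byte record layout here is purely for illustration.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch: emit (offset, record) pairs where the key is the absolute byte
// position at which each record starts, like TextInputFormat's keys.
public class OffsetKeySketch {

  static final int RECORD_SIZE = 4; // hypothetical fixed-length records

  // Returns {offset, value} pairs; offsets are absolute file positions.
  static List<long[]> read(byte[] file) throws IOException {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(file));
    List<long[]> pairs = new ArrayList<>();
    long pos = 0;
    while (in.available() >= RECORD_SIZE) {
      int value = in.readInt();           // the "deserialized" record
      pairs.add(new long[] { pos, value });
      pos += RECORD_SIZE;                 // advance by the bytes consumed
    }
    return pairs;
  }

  public static void main(String[] args) throws IOException {
    byte[] file = { 0, 0, 0, 7, 0, 0, 0, 9 }; // two 4-byte records
    for (long[] p : read(file)) {
      System.out.println("key=" + p[0] + " value=" + p[1]);
    }
    // prints key=0 value=7, then key=4 value=9
  }
}
```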

> support for reading binary data from flat files
> -----------------------------------------------
>
>                 Key: HADOOP-4065
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4065
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Joydeep Sen Sarma
>         Attachments: FlatFileReader.java, HADOOP-4065.0.txt, HADOOP-4065.1.txt, HADOOP-4065.1.txt, ThriftFlatFile.java
>
>
> Like TextInputFormat - looking for a concrete implementation to read binary records from
a flat file (which may be compressed).
> It's assumed that Hadoop can't split such a file, so the InputFormat can set splittable
to false.
> Tricky aspects are:
> - how to know what class the file contains (it has to be in a configuration somewhere).
> - how to determine EOF (it would be nice if Hadoop could determine EOF rather than have the
deserializer throw an exception, which is hard to distinguish from an exception due to corruption).
This is easy for non-compressed streams; for compressed streams, DecompressorStream has
a useful-looking getAvailable() call, except the class is marked package-private.
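One way around the EOF question in the description can be sketched without Hadoop at all: peek one byte ahead with `java.io.PushbackInputStream` before handing the stream to the deserializer. This works on both plain and decompressed streams and sidesteps the package-private `DecompressorStream.getAvailable()` issue. It is a hypothetical workaround, not what the attached patch does.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.PushbackInputStream;

// Sketch: detect EOF by peeking, so the deserializer is only invoked when
// at least one more byte exists -- no EOFException to catch and no need to
// distinguish clean EOF from corruption.
public class EofPeekSketch {

  // Returns true if the stream has at least one more byte.
  static boolean hasMore(PushbackInputStream in) throws IOException {
    int b = in.read();
    if (b == -1) {
      return false;      // clean EOF: no exception needed
    }
    in.unread(b);        // put the byte back for the deserializer
    return true;
  }

  public static void main(String[] args) throws IOException {
    byte[] file = { 0, 0, 0, 42 };   // one 4-byte record
    PushbackInputStream in =
        new PushbackInputStream(new ByteArrayInputStream(file));
    DataInputStream data = new DataInputStream(in);
    while (hasMore(in)) {
      System.out.println(data.readInt()); // deserialize only when safe
    }
    // prints 42
  }
}
```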

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

