hadoop-common-dev mailing list archives

From "Subramaniam Krishnan (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3387) Custom Splitter for handling many small files
Date Thu, 15 May 2008 06:29:55 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12597021#action_12597021 ]

Subramaniam Krishnan commented on HADOOP-3387:

The MultiFileInputFormat has the following deficiencies:

1) Assume you have 5 small (100KB) files & the number of Maps specified is 10: 5 Maps will
still be allocated when 1 would have been sufficient. This is the case whenever you have small
files (size less than the DFS Block Size) & the number of Maps specified is more than the
number of files.
2) The MultiFileInputFormat doesn't handle large files efficiently either. Assume the opposite
scenario - you have 5 large (~1GB) files & the number of Maps specified is 3: only 3 Maps will
be allocated, which is insufficient. This is the case whenever you have large files (size more
than the DFS Block Size) & the number of Maps specified is less than the number of files.
3) As follows from the above, the MultiFileInputFormat also doesn't handle a mixed bag of
large & small files efficiently.

The Custom Splitter is a Balanced Multi File Splitter in the sense that it tries to align
the splits as closely as possible to the DFS Block Size, whether the files are small, large or a mixed bag.
We have an implementation of text/sequence record readers for our Balanced Multi File Splitter.

We also have additional functionality that allows splitting based on a user-specified split
size (defaulting, of course, to the DFS Block Size) rather than on the number of splits.
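The balancing behavior described above can be sketched roughly as follows. This is not the actual patch attached to the JIRA, just a minimal illustration of the packing idea, assuming a simplified model where each file is a byte range: large files are cut at the target split size, and small files (or the leftover tails of large ones) are combined until a split is filled. The `Chunk` record and `computeSplits` method are hypothetical names, not Hadoop API.

```java
import java.util.ArrayList;
import java.util.List;

public class BalancedSplitter {
    // Hypothetical helper type: a byte range of one file assigned to a split.
    record Chunk(int fileIndex, long offset, long length) {}

    // Packs file ranges into splits of roughly splitSize bytes each:
    // files larger than splitSize are cut at splitSize boundaries, and
    // small files or leftover tails are combined until the target is hit.
    static List<List<Chunk>> computeSplits(long[] fileLengths, long splitSize) {
        List<List<Chunk>> splits = new ArrayList<>();
        List<Chunk> current = new ArrayList<>();
        long currentBytes = 0;
        for (int i = 0; i < fileLengths.length; i++) {
            long offset = 0;
            long remaining = fileLengths[i];
            while (remaining > 0) {
                // Take as much of this file as still fits in the current split.
                long take = Math.min(remaining, splitSize - currentBytes);
                current.add(new Chunk(i, offset, take));
                offset += take;
                remaining -= take;
                currentBytes += take;
                if (currentBytes == splitSize) {
                    splits.add(current);
                    current = new ArrayList<>();
                    currentBytes = 0;
                }
            }
        }
        if (!current.isEmpty()) splits.add(current);
        return splits;
    }
}
```

With a 64MB split size, the two scenarios from the comment come out as expected: five 100KB files collapse into a single split (one Map instead of five), and five ~1GB files yield one split per 64MB block rather than one Map per file.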

> Custom Splitter for handling many small files
> ---------------------------------------------
>                 Key: HADOOP-3387
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3387
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Subramaniam Krishnan
>            Assignee: Subramaniam Krishnan
>             Fix For: 0.18.0
> Hadoop by default allocates a Map to a file irrespective of its size. This is not optimal
> if you have a large number of small files, e.g. if you have 2000 100KB files, 2000 Maps
> will be allocated for the job.
> The Custom Multi File Splitter collapses all the small files into a single split until the
> DFS Block Size is hit.
> It also takes care of big files by splitting them on the Block Size and packing the
> remainders (if any) into further splits of Block Size.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
