hadoop-mapreduce-issues mailing list archives

From "Joydeep Sen Sarma (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-2046) A CombineFileInputSplit cannot be less than a dfs block
Date Thu, 02 Sep 2010 18:55:56 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12905637#action_12905637 ]

Joydeep Sen Sarma commented on MAPREDUCE-2046:
----------------------------------------------

one concern is whether we will end up with a lot of small 'runts' broken out, which would lead
to inefficient IO (when those runts are combined into other splits). suggestion:

when carving up the block: if the remainder (R) is between max and 2*max, then instead of
creating splits of size <max, R-max>, create splits of size <R/2, R/2>.
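
a rough sketch of the carving rule (hypothetical helper, not the actual CombineFileInputFormat
code; max here stands for the configured maximum split size):

    import java.util.ArrayList;
    import java.util.List;

    class CarveSketch {
        // Carve a block of blockLen bytes into split lengths. Peel off
        // max-sized pieces while more than 2*max bytes remain; when the
        // remainder R falls in (max, 2*max], emit <R/2, R/2> instead of
        // <max, R-max> so no small runt is left over to be combined into
        // some other split.
        static List<Long> carve(long blockLen, long max) {
            List<Long> splits = new ArrayList<>();
            long remaining = blockLen;
            while (remaining > 2 * max) {
                splits.add(max);
                remaining -= max;
            }
            if (remaining > max) {
                splits.add(remaining - remaining / 2);  // ceil(R/2)
                splits.add(remaining / 2);              // floor(R/2)
            } else if (remaining > 0) {
                splits.add(remaining);
            }
            return splits;
        }
    }

e.g. with max = 256MB, a 600MB block would come out as <256MB, 172MB, 172MB> rather than
<256MB, 256MB, 88MB>.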

also, when packing blocks within a node/rack, it would be better to sort them by size first
(ascending). i think it will lead to better packing - what do you think?
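
a toy sketch of that packing idea (blocks represented only by their lengths, maxSize being the
target combined-split size; this is just an illustration, not the real packing loop in
CombineFileInputFormat):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    class PackSketch {
        // Greedily pack block lengths into groups of at most maxSize bytes,
        // after sorting ascending as suggested above.
        static List<List<Long>> pack(List<Long> blockLengths, long maxSize) {
            List<Long> sorted = new ArrayList<>(blockLengths);
            Collections.sort(sorted);                   // ascending by size
            List<List<Long>> groups = new ArrayList<>();
            List<Long> current = new ArrayList<>();
            long currentSize = 0;
            for (long len : sorted) {
                if (!current.isEmpty() && currentSize + len > maxSize) {
                    groups.add(current);                // close the group
                    current = new ArrayList<>();
                    currentSize = 0;
                }
                current.add(len);
                currentSize += len;
            }
            if (!current.isEmpty()) {
                groups.add(current);
            }
            return groups;
        }
    }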

> A CombineFileInputSplit cannot be less than a dfs block 
> --------------------------------------------------------
>
>                 Key: MAPREDUCE-2046
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2046
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>            Reporter: Namit Jain
>            Assignee: dhruba borthakur
>         Attachments: combineFileInputFormatMaxSize.txt
>
>
> I ran into this while testing some hive features.
> Whether we use hiveinputformat or combinehiveinputformat, a split cannot be less than a dfs block size.
> This is a problem if we want to increase the block size for older data to reduce memory consumption for the name node.
> It would be useful if the input split was independent of the dfs block size.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

