hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2705) io.file.buffer.size should default to a value larger than 4k
Date Thu, 24 Jan 2008 21:34:37 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12562229#action_12562229 ]

Doug Cutting commented on HADOOP-2705:
--------------------------------------

This could have a significant impact on heap size for applications that keep many files
open, such as Nutch and HBase.  So, if we do commit this, we should list it as an incompatible
change and suggest that folks who are short on RAM use the prior value.
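As a sketch of the suggested workaround, memory-constrained deployments could override the property in their site configuration (hadoop-site.xml in this release line; the snippet below is an illustration, not part of the patch):

```xml
<!-- hadoop-site.xml: keep the prior 4 KB buffer on memory-constrained nodes -->
<property>
  <name>io.file.buffer.size</name>
  <value>4096</value>
</property>
```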

> io.file.buffer.size should default to a value larger than 4k
> ------------------------------------------------------------
>
>                 Key: HADOOP-2705
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2705
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: conf
>            Reporter: Chris Douglas
>            Priority: Minor
>             Fix For: 0.16.0
>
>         Attachments: 2705-0.patch
>
>
> Tests using HADOOP-2406 suggest that increasing this to 32k from 4k improves read times
> for block, lzo compressed SequenceFiles by over 40%; 32k is a relatively conservative bump.
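To make the heap concern concrete, a rough back-of-the-envelope sketch: each open stream holds a buffer of io.file.buffer.size bytes, so total buffer heap grows linearly with open-file count. The stream count below is a hypothetical illustration, not a measured figure.

```java
public class BufferHeapEstimate {
    // Each open stream allocates one byte[] of io.file.buffer.size,
    // so aggregate buffer heap = open streams * buffer size.
    static long bufferHeapBytes(int openStreams, int bufferSize) {
        return (long) openStreams * bufferSize;
    }

    public static void main(String[] args) {
        int streams = 10_000; // hypothetical count for a busy Nutch/HBase node
        System.out.println("4k  default: " + bufferHeapBytes(streams, 4 * 1024) + " bytes");
        System.out.println("32k default: " + bufferHeapBytes(streams, 32 * 1024) + " bytes");
    }
}
```

At 10,000 open streams, the change moves buffer heap from roughly 40 MB to roughly 320 MB, which is why flagging it as incompatible seems prudent.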

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

