hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1268) Extract blockInvalidateLimit as a separate configuration
Date Mon, 02 Aug 2010 03:40:17 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12894458#action_12894458 ]

Todd Lipcon commented on HDFS-1268:
-----------------------------------

Jinglong: can you try applying HDFS-611 and HADOOP-5124 to this cluster? Those two JIRAs solved
most of the block-deletion-related issues under load with HBase in our testing.

> Extract blockInvalidateLimit as a separate configuration
> --------------------------------------------------------
>
>                 Key: HDFS-1268
>                 URL: https://issues.apache.org/jira/browse/HDFS-1268
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.22.0
>            Reporter: jinglong.liujl
>         Attachments: patch.diff
>
>
>       If many files pile up in recentInvalidateSets, only Math.max(blockInvalidateLimit,
> 20*(int)(heartbeatInterval/1000)) invalid blocks can be carried in a heartbeat (by
> default, this is 100). Under high write stress, the removal of invalidated blocks
> cannot keep up with the rate of writing.
>       We extract blockInvalidateLimit into a separate configuration parameter so that
> users can set the right value for their cluster.
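
A minimal sketch of the proposal, for readers following along. The config key
name, default, and method below are illustrative assumptions, not necessarily
what patch.diff does:

    import org.apache.hadoop.conf.Configuration;

    class InvalidateLimitSketch {
      // Illustrative key and default; the actual patch may choose differently.
      static final String DFS_BLOCK_INVALIDATE_LIMIT_KEY = "dfs.block.invalidate.limit";
      static final int DEFAULT_BLOCK_INVALIDATE_LIMIT = 100;

      int blockInvalidateLimit;

      void initInvalidateLimit(Configuration conf, long heartbeatIntervalMs) {
        // Existing behavior: derive the per-heartbeat limit from the heartbeat
        // interval, floored at the hard-coded default. With the default 3-second
        // heartbeat this is max(100, 20*3) = 100.
        int derived = Math.max(DEFAULT_BLOCK_INVALIDATE_LIMIT,
                               20 * (int) (heartbeatIntervalMs / 1000));
        // Proposed behavior: an explicit setting, when present, overrides the
        // derived limit so operators can raise it under heavy write load.
        blockInvalidateLimit = conf.getInt(DFS_BLOCK_INVALIDATE_LIMIT_KEY, derived);
      }
    }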

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

