hadoop-hdfs-issues mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1268) Extract blockInvalidateLimit as a separate configuration
Date Fri, 25 Jun 2010 20:51:50 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12882710#action_12882710 ]

Konstantin Shvachko commented on HDFS-1268:
-------------------------------------------

I did not quite understand the motivation for this. Do you want file deletes
to go faster or slower? Why do deletes need to catch up with writes?
HDFS already has far too many config parameters; it is important to understand
why yet another one is needed.

> Extract blockInvalidateLimit as a separate configuration
> --------------------------------------------------------
>
>                 Key: HDFS-1268
>                 URL: https://issues.apache.org/jira/browse/HDFS-1268
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.22.0
>            Reporter: jinglong.liujl
>         Attachments: patch.diff
>
>
>       If there are many files piled up in recentInvalidateSets, only
> Math.max(blockInvalidateLimit, 20*(int)(heartbeatInterval/1000)) invalid
> blocks can be carried in a heartbeat (by default, this is 100). Under high
> write stress, the removal of invalidated blocks cannot keep up with the
> speed of writing.
>       We extract blockInvalidateLimit into a separate config parameter so
> that users can choose the right setting for their cluster.
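
[Editor's note: a minimal, self-contained Java sketch of the per-heartbeat
cap described above. Only the Math.max expression, the name
blockInvalidateLimit, heartbeatInterval, and the default of 100 come from the
issue text; the class name and the assumed values are illustrative.]

    // Illustrative sketch, not the actual NameNode code.
    public class InvalidateLimitSketch {
        public static void main(String[] args) {
            int blockInvalidateLimit = 100;   // assumed default cap; issue text says 100
            long heartbeatIntervalMs = 3000L; // assumed 3-second heartbeat, in milliseconds

            // Per the description: at most this many invalid blocks
            // can be carried in one heartbeat reply.
            int perHeartbeat = Math.max(blockInvalidateLimit,
                                        20 * (int) (heartbeatIntervalMs / 1000));
            System.out.println("invalid blocks per heartbeat = " + perHeartbeat); // 100
        }
    }

With these assumed defaults, 20 * 3 = 60 and the cap stays at the 100 quoted
in the description; making blockInvalidateLimit its own config parameter is
what would let an operator raise this ceiling. (For reference, later Hadoop
releases expose such a cap as dfs.block.invalidate.limit, though that key
name is not part of this issue text.)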

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

