From: "jinglong.liujl (JIRA)"
To: hdfs-issues@hadoop.apache.org
Date: Mon, 28 Jun 2010 11:08:51 -0400 (EDT)
Subject: [jira] Commented: (HDFS-1268) Extract blockInvalidateLimit as a seperated configuration

[ https://issues.apache.org/jira/browse/HDFS-1268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12883
176#action_12883176 ]

jinglong.liujl commented on HDFS-1268:
--------------------------------------

In my case, deleting 600 blocks takes 6 heartbeat periods. During that time a disk may reach its capacity, and the slow pace of block removal then causes write failures.

In the general case the default value (100) works well, but in this extreme case it is not enough. Currently this parameter is derived from heartbeatInterval, but in the scenario above, "slower heartbeats + more blocks carried per heartbeat" does not actually remove more blocks in the same period. Why not make this parameter configurable?

> Extract blockInvalidateLimit as a seperated configuration
> ---------------------------------------------------------
>
>                 Key: HDFS-1268
>                 URL: https://issues.apache.org/jira/browse/HDFS-1268
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.22.0
>            Reporter: jinglong.liujl
>         Attachments: patch.diff
>
>
> If many files pile up in recentInvalidateSets, only Math.max(blockInvalidateLimit, 20*(int)(heartbeatInterval/1000)) invalid blocks can be carried in a heartbeat (by default, 100). Under high write stress, the invalid-block removal process cannot keep up with the rate of writing.
> We extract blockInvalidateLimit into a separate config parameter so that users can set the right value for their cluster.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
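The arithmetic behind the complaint can be sketched as follows. This is a minimal illustration, not NameNode code: it only reproduces the Math.max expression quoted in the issue and the reporter's "600 blocks / 6 heartbeats" estimate; the method and constant names are illustrative, not actual Hadoop identifiers.

```java
// Sketch of the per-heartbeat invalidation limit discussed in HDFS-1268.
// The formula mirrors the expression quoted in the issue description;
// names here are illustrative, not real Hadoop identifiers.
public class InvalidateLimitSketch {
    // The default cap referenced in the report ("By default, It's 100").
    static final int BLOCK_INVALIDATE_LIMIT_DEFAULT = 100;

    // Effective per-heartbeat limit:
    // Math.max(blockInvalidateLimit, 20 * (heartbeatInterval in seconds))
    static int effectiveLimit(int blockInvalidateLimit, long heartbeatIntervalMs) {
        return Math.max(blockInvalidateLimit, 20 * (int) (heartbeatIntervalMs / 1000));
    }

    // Heartbeats needed to drain a backlog of pending invalid blocks
    // (ceiling division: a partial batch still costs a full heartbeat).
    static int heartbeatsToDrain(int pendingBlocks, int limitPerHeartbeat) {
        return (pendingBlocks + limitPerHeartbeat - 1) / limitPerHeartbeat;
    }

    public static void main(String[] args) {
        // With a 3-second heartbeat: max(100, 20 * 3) = 100 blocks per heartbeat.
        int limit = effectiveLimit(BLOCK_INVALIDATE_LIMIT_DEFAULT, 3000);
        System.out.println(limit);                          // 100
        // 600 pending blocks at 100 per heartbeat -> 6 heartbeats,
        // matching the delay described in the comment above.
        System.out.println(heartbeatsToDrain(600, limit));  // 6
    }
}
```

Note that lengthening the heartbeat interval raises the per-heartbeat cap but also spaces the heartbeats further apart, so the drain rate in blocks per second does not improve; that is why the reporter asks for the limit to be independently configurable.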