Date: Wed, 30 Mar 2011 16:45:06 +0000 (UTC)
From: "Matt Foley (JIRA)"
To: hdfs-issues@hadoop.apache.org
Message-ID: <1554034226.21650.1301503506321.JavaMail.tomcat@hel.zones.apache.org>
Subject: [jira] [Commented] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly

    [ https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13013538#comment-13013538 ]

Matt Foley commented on HDFS-1172:
----------------------------------

Fixing this issue will not only remove a performance problem, it will also help with memory management. Every over-replicated block gets its "triplets" array re-allocated. This array is a set of (3 x replication) object references used to link the block into each datanode's blockList. If the replica count grows beyond the replication factor, the array is re-allocated, and it is never shrunk when the replica count later decreases. If this is happening with essentially every new block, then an awful lot of memory is being wasted on unused triplets: in a 200M-block namenode, one excess triplet per block is 4.8GB! (A back-of-envelope sketch of this arithmetic appears below the quoted issue.)

> Blocks in newly completed files are considered under-replicated too quickly
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-1172
>                 URL: https://issues.apache.org/jira/browse/HDFS-1172
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.21.0
>            Reporter: Todd Lipcon
>            Assignee: Hairong Kuang
>         Attachments: HDFS-1172.patch, replicateBlocksFUC.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't find an existing JIRA. It often happens that we see the NN schedule replication on the last block of files very quickly after they're completed, before the other DNs in the pipeline have a chance to report the new block.
> This results in a lot of extra replication work on the cluster, as we replicate the block and then end up with multiple excess replicas which are very quickly deleted.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
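
To make the 4.8GB figure concrete, here is a back-of-envelope sketch in Java. It is illustrative only, not the actual HDFS BlockInfo code, and it assumes 8-byte object references (a 64-bit JVM without compressed oops), 200M blocks, and one never-shrunk excess triplet of 3 references per block, as described in the comment.

// Illustrative estimate of the heap wasted by one leftover triplet per block.
// Not HDFS source; reference size and block count are assumptions noted above.
public class TripletOverheadEstimate {

    // A "triplet" is 3 object references linking a block into one datanode's
    // blockList; the array holds (3 x replication) references and, per the
    // comment above, is never shrunk after the block becomes over-replicated.
    private static final int REFS_PER_TRIPLET = 3;
    private static final int BYTES_PER_REF = 8;   // assumed reference size

    public static void main(String[] args) {
        long blocks = 200_000_000L;          // 200M blocks in the namenode
        long excessTripletsPerBlock = 1;     // one leftover triplet per block

        long wastedBytes = blocks * excessTripletsPerBlock
                * REFS_PER_TRIPLET * BYTES_PER_REF;

        // 200M x 3 refs x 8 bytes = 4.8e9 bytes, i.e. the 4.8GB cited above.
        System.out.printf("Wasted heap: %.1f GB%n", wastedBytes / 1e9);
    }
}

With compressed oops (4-byte references) the same arithmetic gives roughly half that figure, so the exact waste depends on the JVM configuration, but the order of magnitude is the point of the comment.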