Date: Mon, 4 Dec 2006 10:39:22 -0800 (PST)
From: "Raghu Angadi (JIRA)"
To: hadoop-dev@lucene.apache.org
Reply-To: hadoop-dev@lucene.apache.org
Subject: [jira] Commented: (HADOOP-774) Datanodes fails to heartbeat when a directory with a large number of blocks is deleted
Message-ID: <21595267.1165257562374.JavaMail.jira@brutus>
In-Reply-To: <10916275.1165019421088.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

    [ http://issues.apache.org/jira/browse/HADOOP-774?page=comments#action_12455384 ]

Raghu Angadi commented on HADOOP-774:
-------------------------------------

I think (1) is simpler from the namenode's point of view. One modification to (1): the Datanode should first remove the block mapping inline and queue the physical file deletion on a separate thread. This is required so that the Datanode does not report the already-invalidated blocks in its next heartbeat.

Of course, the Datanode should not create one thread per RPC call. It could delete the files inline if the number is small (say < 20), and otherwise queue them up to be deleted by a background thread, creating that thread if one does not already exist. The thread exits when there are no more files left to delete. A sketch of this scheme appears after the quoted description below.

> Datanodes fails to heartbeat when a directory with a large number of blocks is deleted
> --------------------------------------------------------------------------------------
>
>          Key: HADOOP-774
>          URL: http://issues.apache.org/jira/browse/HADOOP-774
>      Project: Hadoop
>   Issue Type: Bug
>   Components: dfs
>     Reporter: dhruba borthakur
>  Assigned To: dhruba borthakur
>
> If a user removes a few huge files, the namenode sends a BlockInvalidate command to the relevant Datanodes. The Datanode processes the blockInvalidate command as part of its heartbeat thread. If the number of blocks to be invalidated is large, the Datanode takes a long time to process the command, and so it stops sending new heartbeats to the namenode. The namenode then declares the Datanode dead!
> 1. One option is to process the blockInvalidate in a separate thread from the heartbeat thread in the Datanode.
> 2. Another option is to constrain the namenode to send at most a fixed number of blocks (e.g. 500) per blockInvalidate message.
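To make the proposal concrete, here is a minimal Java sketch of the lazy-deletion scheme described in the comment above. It is not actual Hadoop DataNode code: the class and member names (BlockDeleter, invalidate, blockMap, pendingDeletes, INLINE_DELETE_LIMIT) are hypothetical, and the in-memory map and queue stand in for the Datanode's real data structures. It only illustrates the three steps the comment proposes: drop the mapping inline, delete small batches inline, and hand large batches to a lazily created thread that exits when its queue drains.

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class BlockDeleter {
    // "say < 20" from the comment: below this, delete inline with no thread.
    private static final int INLINE_DELETE_LIMIT = 20;

    private final Map<Long, File> blockMap = new ConcurrentHashMap<>();   // blockId -> file
    private final Queue<File> pendingDeletes = new ConcurrentLinkedQueue<>();
    private Thread deleterThread;  // created lazily; guarded by 'this'

    public void invalidate(List<Long> blockIds) {
        // 1. Drop the blockId -> file mappings inline, so the next heartbeat
        //    no longer reports blocks the namenode has already invalidated.
        List<File> files = new ArrayList<>();
        for (long id : blockIds) {
            File f = blockMap.remove(id);
            if (f != null) files.add(f);
        }
        // 2. Small batches: delete the physical files inline.
        if (files.size() < INLINE_DELETE_LIMIT) {
            for (File f : files) f.delete();
            return;
        }
        // 3. Large batches: queue the files and start the deleter thread
        //    lazily, rather than creating one thread per RPC call.
        pendingDeletes.addAll(files);
        synchronized (this) {
            if (deleterThread == null) {
                deleterThread = new Thread(this::drainQueue, "async-block-deleter");
                deleterThread.setDaemon(true);
                deleterThread.start();
            }
        }
    }

    // Runs on the background thread; exits once the queue is drained.
    private void drainQueue() {
        while (true) {
            File f = pendingDeletes.poll();
            if (f != null) {
                f.delete();
                continue;
            }
            synchronized (this) {
                // Re-check under the same lock used by invalidate(): either a
                // concurrent caller has enqueued more work (keep going), or we
                // record our exit so the next caller starts a fresh thread.
                if (pendingDeletes.isEmpty()) {
                    deleterThread = null;
                    return;
                }
            }
        }
    }
}

The inline threshold avoids thread-creation churn for routine small deletes, while the heartbeat-critical step (removing the mapping) always happens synchronously, which is the point of the modification.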
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira