Date: Mon, 21 May 2012 05:16:41 +0000 (UTC)
From: "Hadoop QA (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-3157) Error in deleting block is keep on coming from DN even after the block report and directory scanning has happened

    [ https://issues.apache.org/jira/browse/HDFS-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13279964#comment-13279964 ]

Hadoop QA commented on HDFS-3157:
---------------------------------

-1 overall.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12528379/HDFS-3157-1.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified test files.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    -1 core tests.  The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:
                  org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2492//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2492//console

This message is automatically generated.
> Error in deleting block is keep on coming from DN even after the block report and directory scanning has happened
> -----------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3157
>                 URL: https://issues.apache.org/jira/browse/HDFS-3157
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.23.0, 0.24.0
>            Reporter: J.Andreina
>            Assignee: Ashish Singhi
>             Fix For: 2.0.0, 3.0.0
>
>         Attachments: HDFS-3157-1.patch, HDFS-3157-1.patch, HDFS-3157.patch, HDFS-3157.patch, HDFS-3157.patch
>
>
> Cluster setup:
> 1 NN, three DNs (DN1, DN2, DN3), replication factor 2, "dfs.blockreport.intervalMsec" = 300, "dfs.datanode.directoryscan.interval" = 1
> Step 1: Write one file "a.txt" with sync (not closed).
> Step 2: Delete the blocks (from rbw) on one of the datanodes to which replication happened, say DN1.
> Step 3: Close the file.
> Since the replication factor is 2, the blocks are replicated to the other datanode.
> The NN then issues the following command to the DN from which the block was deleted:
> {noformat}
> 2012-03-19 13:41:36,905 INFO org.apache.hadoop.hdfs.StateChange: BLOCK NameSystem.addToCorruptReplicasMap: duplicate requested for blk_2903555284838653156 to add as corrupt on XX.XX.XX.XX by /XX.XX.XX.XX because reported RBW replica with genstamp 1002 does not match COMPLETE block's genstamp in block map 1003
> 2012-03-19 13:41:39,588 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* Removing block blk_2903555284838653156_1003 from neededReplications as it has enough replicas.
> {noformat}
> On the datanode from which the block was deleted, the following exception occurred:
> {noformat}
> 2012-02-29 13:54:13,126 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to delete block blk_2903555284838653156_1003. BlockInfo not found in volumeMap.
> 2012-02-29 13:54:13,126 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing datanode Command
> java.io.IOException: Error in deleting blocks.
> 	at org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:2061)
> 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:581)
> 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:545)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:690)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:522)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:662)
> 	at java.lang.Thread.run(Thread.java:619)
> {noformat}
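
As a rough illustration only, the reproduction steps above can be sketched against a MiniDFSCluster. This is not the test added by the attached patch: it assumes MiniDFSCluster's default on-disk layout (getDataDirectory() with data1/data2 holding the first DataNode's storage directories), and the deleteRbwFiles helper below is a crude stand-in for however one actually removes the RBW replica files from DN1.

{noformat}
import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class RbwDeletionRepro {

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Aggressive intervals mirroring the reported setup so block reports
    // and directory scans happen quickly during the run.
    conf.setLong("dfs.blockreport.intervalMsec", 300);
    conf.setInt("dfs.datanode.directoryscan.interval", 1);
    conf.setInt("dfs.replication", 2);

    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
    try {
      cluster.waitActive();
      FileSystem fs = cluster.getFileSystem();

      // Step 1: write "a.txt" and hflush (sync) it, but do not close yet,
      // so the replicas remain in the RBW state on the DataNodes.
      Path file = new Path("/a.txt");
      FSDataOutputStream out = fs.create(file, (short) 2);
      out.writeBytes("some data");
      out.hflush();

      // Step 2: delete the RBW replica files out from under the first
      // DataNode (data1/data2 are assumed to be its storage directories).
      deleteRbwFiles(new File(cluster.getDataDirectory(), "data1"));
      deleteRbwFiles(new File(cluster.getDataDirectory(), "data2"));

      // Step 3: close the file. The NameNode should re-replicate from the
      // surviving replica; the issue is whether the DataNode whose replica
      // was deleted then keeps failing invalidation commands with
      // "Error in deleting blocks".
      out.close();

      // Leave time for block reports and directory scans, then inspect the
      // DataNode log for the repeated warnings quoted above.
      Thread.sleep(10000);
    } finally {
      cluster.shutdown();
    }
  }

  // Crude helper: recursively delete every file that sits directly under a
  // directory named "rbw".
  private static void deleteRbwFiles(File dir) {
    File[] children = dir.listFiles();
    if (children == null) {
      return;
    }
    for (File child : children) {
      if (child.isDirectory()) {
        deleteRbwFiles(child);
      } else if ("rbw".equals(child.getParentFile().getName())) {
        child.delete();
      }
    }
  }
}
{noformat}

With intervals this aggressive, the deleted replica should be picked up by DN1's next directory scan and block report almost immediately, which is what makes the repeated "Error in deleting blocks" / "BlockInfo not found in volumeMap" warnings quoted above easy to observe.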