Date: Wed, 9 Jun 2010 02:20:13 -0400 (EDT)
From: "Hadoop QA (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] Commented: (HDFS-811) Add metrics, failure reporting and additional tests for HDFS-457

    [ https://issues.apache.org/jira/browse/HDFS-811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876969#action_12876969 ]

Hadoop QA commented on HDFS-811:
--------------------------------

-1 overall.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12445732/hdfs-811-4.patch
  against trunk revision 952861.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 19 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    -1 core tests.  The patch failed core unit tests.

    -1 contrib tests.  The patch failed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/403/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/403/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/403/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/403/console

This message is automatically generated.
> Add metrics, failure reporting and additional tests for HDFS-457
> -----------------------------------------------------------------
>
>                 Key: HDFS-811
>                 URL: https://issues.apache.org/jira/browse/HDFS-811
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: test
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Ravi Phulari
>            Assignee: Eli Collins
>            Priority: Minor
>             Fix For: 0.21.0, 0.22.0
>
>         Attachments: hdfs-811-1.patch, hdfs-811-2.patch, hdfs-811-3.patch, hdfs-811-4.patch
>
>
> HDFS-457 introduced an improvement that allows a datanode to keep serving data when one of its replica-storage volumes fails (see the configuration sketch after this message). Previously the datanode shut down if any volume failed.
> Description of HDFS-457:
> {quote}
> Current implementation shuts DataNode down completely when one of the configured volumes of the storage fails.
> This is rather wasteful behavior because it decreases utilization (good storage becomes unavailable) and imposes extra load on the system (replication of the blocks from the good volumes). These problems will become even more prominent when we move to mixed (heterogeneous) clusters with many more volumes per Data Node.
> {quote}
> I suggest the following additional tests for this improvement; a hedged sketch of one such test follows this message.
> #1 Test successive volume failures (minimum 4 volumes).
> #2 Test that each volume failure reports a reduction in available DFS space and remaining space.
> #3 Test that failure of all volumes on a datanode leads to failure of the datanode.
> #4 Test that repairing a failed storage disk restores it to service and increases the available DFS space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
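For context on how the HDFS-457 behavior is typically wired up, here is a minimal hdfs-site.xml sketch. It is not taken from the HDFS-811 patches: the tolerated-failures property name (dfs.datanode.failed.volumes.tolerated) comes from follow-on work in this area rather than from HDFS-457 itself, so check it against the branch in use.

{code:xml}
<!-- Minimal sketch, not from the HDFS-811 patches. Assumes a datanode
     with four storage directories and a tolerated-failures knob; the
     second property name below comes from follow-on work and may not
     exist at this trunk revision. -->
<property>
  <name>dfs.data.dir</name>
  <value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn,/data/4/dfs/dn</value>
</property>
<property>
  <!-- Number of volumes allowed to fail before the datanode shuts
       itself down; 0 keeps the pre-HDFS-457 fail-fast behavior. -->
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>1</value>
</property>
{code}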
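The four test scenarios above map naturally onto a MiniDFSCluster-based unit test. The sketch below is a hypothetical illustration of scenario #2, not code from hdfs-811-4.patch: the class name TestVolumeFailureSketch, the two-volume default layout (data1/, data2/), the heartbeat sleep, and the FileSystem.getStatus() capacity API are all assumptions to verify against the target branch.

{code:java}
// Hypothetical sketch only -- not the code from the attached patches.
import java.io.File;

import junit.framework.TestCase;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class TestVolumeFailureSketch extends TestCase {

  public void testVolumeFailureReducesReportedSpace() throws Exception {
    Configuration conf = new Configuration();
    // One datanode; MiniDFSCluster gives each datanode two storage
    // directories (data1/ and data2/) under the test data directory.
    MiniDFSCluster cluster = new MiniDFSCluster(conf, 1, true, null);
    try {
      cluster.waitActive();
      FileSystem fs = cluster.getFileSystem();
      long capacityBefore = fs.getStatus().getCapacity();

      // Simulate a disk failure by making the first volume unreadable.
      File volume = new File(cluster.getDataDirectory(), "data1");
      FileUtil.chmod(volume.getPath(), "000");

      // Write something so the datanode touches its volumes and notices
      // the failure, then wait out a heartbeat interval so the reduced
      // capacity reaches the namenode.
      DFSTestUtil.createFile(fs, new Path("/probe"), 1024L, (short) 1, 0L);
      Thread.sleep(3 * 1000);

      long capacityAfter = fs.getStatus().getCapacity();
      assertTrue("a failed volume should shrink reported DFS capacity",
                 capacityAfter < capacityBefore);

      // Restore permissions so the test directory can be cleaned up.
      FileUtil.chmod(volume.getPath(), "755");
    } finally {
      cluster.shutdown();
    }
  }
}
{code}

Scenario #3 follows the same shape: chmod every dataN/ directory and assert that the datanode drops out of the cluster (for example, that the namenode's count of live datanodes falls to zero) rather than merely reporting less capacity.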