Date: Wed, 22 Jun 2011 10:34:47 +0000 (UTC)
From: "Vitalii Tymchyshyn (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Updated] (HDFS-2095) org.apache.hadoop.hdfs.server.datanode.DataNode#checkDiskError produces check storm making data node unavailable

     [ https://issues.apache.org/jira/browse/HDFS-2095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vitalii Tymchyshyn updated HDFS-2095:
-------------------------------------

    Attachment: patch.diff

A trunk patch. Note: looking more closely at the code, it seems there is no need to rethrow InterruptedIOException.

> org.apache.hadoop.hdfs.server.datanode.DataNode#checkDiskError produces check storm making data node unavailable
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-2095
>                 URL: https://issues.apache.org/jira/browse/HDFS-2095
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.21.0
>            Reporter: Vitalii Tymchyshyn
>         Attachments: patch.diff, patch.diff, patch2.diff
>
>
> I can see that if a data node receives some IO error, this can cause a checkDir storm.
> What I mean:
> 1) Any IO error produces a DataNode.checkDiskError call.
> 2) This call locks the volume set while it walks every directory tree:
> java.lang.Thread.State: RUNNABLE
>         at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
>         at java.io.UnixFileSystem.getBooleanAttributes(UnixFileSystem.java:228)
>         at java.io.File.exists(File.java:733)
>         at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsCheck(DiskChecker.java:65)
>         at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:86)
>         at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.checkDirTree(FSDataset.java:228)
>         at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.checkDirTree(FSDataset.java:232)
>         at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.checkDirTree(FSDataset.java:232)
>         at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.checkDirTree(FSDataset.java:232)
>         at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.checkDirs(FSDataset.java:414)
>         at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolumeSet.checkDirs(FSDataset.java:617)
>         - locked <0x000000080a8faec0> (a org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolumeSet)
>         at org.apache.hadoop.hdfs.server.datanode.FSDataset.checkDataDir(FSDataset.java:1681)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError(DataNode.java:745)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError(DataNode.java:735)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.close(BlockReceiver.java:202)
>         at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:151)
>         at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:167)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:646)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:352)
>         at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:390)
>         at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:111)
>         at java.lang.Thread.run(Thread.java:619)
> 3) The long-held lock produces timeouts on other calls, e.g.:
> 2011-06-17 17:35:03,922 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: checkDiskError: exception:
> java.io.InterruptedIOException
>         at java.io.FileOutputStream.writeBytes(Native Method)
>         at java.io.FileOutputStream.write(FileOutputStream.java:260)
>         at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>         at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>         at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.close(BlockReceiver.java:183)
>         at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:151)
>         at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:167)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:646)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:352)
>         at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:390)
>         at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:111)
>         at java.lang.Thread.run(Thread.java:619)
> 4) Each of those failed calls, in turn, triggers more "dir check" calls.
> 5) The whole cluster works very slowly because of the half-working node.
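
[Editor's note] For illustration only: the following is a minimal sketch of one way to break the feedback loop described above, assuming a time-based throttle plus the InterruptedIOException filtering mentioned in the comment. It is NOT the attached patch.diff (whose contents are not shown in this message), and the names ThrottledDiskChecker and MIN_CHECK_INTERVAL_MS are hypothetical.

import java.io.InterruptedIOException;
import java.util.concurrent.atomic.AtomicBoolean;

public class ThrottledDiskChecker {

    // Minimum interval between full directory scans, so that a burst of
    // IO errors triggers at most one expensive scan pass.
    private static final long MIN_CHECK_INTERVAL_MS = 60000L;

    private final AtomicBoolean checking = new AtomicBoolean(false);
    private volatile long lastCheckMs = 0L;

    // Called from error-handling paths such as BlockReceiver.close().
    public void checkDiskError(Exception cause) {
        // An InterruptedIOException means a writer was interrupted or
        // timed out, not that a disk failed, so it triggers no scan.
        if (cause instanceof InterruptedIOException) {
            return;
        }
        if (System.currentTimeMillis() - lastCheckMs < MIN_CHECK_INTERVAL_MS) {
            return; // a scan ran recently; skip to avoid a check storm
        }
        // Let only one thread scan at a time; concurrent callers return
        // immediately instead of queueing on the volume-set lock.
        if (!checking.compareAndSet(false, true)) {
            return;
        }
        try {
            scanDataDirs();
            lastCheckMs = System.currentTimeMillis();
        } finally {
            checking.set(false);
        }
    }

    // Stand-in for FSDataset.checkDataDir(): the slow, lock-holding walk
    // over every volume's directory tree shown in the trace above.
    private void scanDataDirs() {
        // ... DiskChecker.checkDir(...) over each configured volume ...
    }
}

With a guard like this, a burst of failing writers triggers at most one directory scan per interval, and interrupted writers trigger none at all, so the cleanup paths in BlockReceiver.close() can no longer pile up on the FSVolumeSet lock.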
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira