Message-ID: <528580402.1215550892759.JavaMail.jira@brutus>
Date: Tue, 8 Jul 2008 14:01:32 -0700 (PDT)
From: "Raghu Angadi (JIRA)"
To: core-dev@hadoop.apache.org
Reply-To: core-dev@hadoop.apache.org
Subject: [jira] Assigned: (HADOOP-3707) Frequent DiskOutOfSpaceException on almost-full datanodes
In-Reply-To: <407995319.1215451351564.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

    [ https://issues.apache.org/jira/browse/HADOOP-3707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi reassigned HADOOP-3707:
------------------------------------

    Assignee: Raghu Angadi

> Frequent DiskOutOfSpaceException on almost-full datanodes
> ---------------------------------------------------------
>
>                 Key: HADOOP-3707
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3707
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: Koji Noguchi
>            Assignee: Raghu Angadi
>
> On a datanode that is completely full (leaving only reserved space), we frequently see the target node reporting:
> {noformat}
> 2008-07-07 16:54:44,707 INFO org.apache.hadoop.dfs.DataNode: Receiving block blk_3328886742742952100 src: /11.1.11.111:22222 dest: /11.1.11.111:22222
> 2008-07-07 16:54:44,708 INFO org.apache.hadoop.dfs.DataNode: writeBlock blk_3328886742742952100 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Insufficient space for an additional block
> 2008-07-07 16:54:44,708 ERROR org.apache.hadoop.dfs.DataNode: 33.3.33.33:22222:DataXceiver: org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Insufficient space for an additional block
> 	at org.apache.hadoop.dfs.FSDataset$FSVolumeSet.getNextVolume(FSDataset.java:444)
> 	at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:716)
> 	at org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:2187)
> 	at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1113)
> 	at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:976)
> 	at java.lang.Thread.run(Thread.java:619)
> {noformat}
> and the sender reporting:
> {noformat}
> 2008-07-07 16:54:44,712 INFO org.apache.hadoop.dfs.DataNode: 11.1.11.111:22222:Exception writing block blk_3328886742742952100 to mirror 33.3.33.33:22222
> java.io.IOException: Broken pipe
> 	at sun.nio.ch.FileDispatcher.write0(Native Method)
> 	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
> 	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
> 	at sun.nio.ch.IOUtil.write(IOUtil.java:75)
> 	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
> 	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:53)
> 	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:140)
> 	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:144)
> 	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:105)
> 	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
> 	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
> 	at java.io.DataOutputStream.write(DataOutputStream.java:90)
> 	at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveChunk(DataNode.java:2292)
> 	at org.apache.hadoop.dfs.DataNode$BlockReceiver.receivePacket(DataNode.java:2411)
> 	at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2476)
> 	at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1204)
> 	at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:976)
> 	at java.lang.Thread.run(Thread.java:619)
> {noformat}
> Since this is not happening constantly, my guess is that whenever the datanode gains a small amount of free space, the namenode over-assigns blocks to it, which can then fail the block pipeline.
> (Note: before 0.17, the namenode was much slower in assigning blocks.)

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
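The target node's trace fails inside FSDataset$FSVolumeSet.getNextVolume, i.e. at the moment the datanode picks a disk for the incoming block. The following is a minimal Java sketch of the kind of round-robin volume selection that frame suggests, showing why an almost-full node can intermittently accept one small write and then reject the next block; the names here (Volume, VolumeSet, available) are illustrative assumptions, not the actual Hadoop 0.17 source.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of round-robin volume selection, loosely modeled on the
// FSDataset$FSVolumeSet.getNextVolume frame in the trace above. All class and
// field names are illustrative, not the real Hadoop 0.17 implementation.
public class VolumeSetSketch {

    static class DiskOutOfSpaceException extends IOException {
        DiskOutOfSpaceException(String msg) { super(msg); }
    }

    static class Volume {
        private final long available;   // usable bytes after reserved space
        Volume(long available) { this.available = available; }
        long getAvailable() { return available; }
    }

    static class VolumeSet {
        private final List<Volume> volumes;
        private int curVolume = 0;      // round-robin cursor

        VolumeSet(List<Volume> volumes) { this.volumes = volumes; }

        // Walk the volumes at most once, starting at the cursor; the first
        // volume with room for blockSize wins. If the walk wraps around with
        // no winner, fail the write with the same message the datanode logs.
        synchronized Volume getNextVolume(long blockSize) throws IOException {
            int start = curVolume;
            while (true) {
                Volume v = volumes.get(curVolume);
                curVolume = (curVolume + 1) % volumes.size();
                if (v.getAvailable() >= blockSize) {
                    return v;
                }
                if (curVolume == start) {
                    throw new DiskOutOfSpaceException(
                        "Insufficient space for an additional block");
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        VolumeSet set = new VolumeSet(
            Arrays.asList(new Volume(10L), new Volume(100L)));
        // A small write fits on the second volume, so it succeeds ...
        System.out.println(set.getNextVolume(50L).getAvailable()); // prints 100
        // ... but a full-size block fits nowhere, and the write is rejected
        // mid-pipeline, which the upstream node then sees as a broken pipe.
        try {
            set.getNextVolume(1000L);
        } catch (DiskOutOfSpaceException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under the reporter's hypothesis, the intermittence falls out of this shape: as soon as any volume briefly has room, selection succeeds and the node looks writable to the namenode, which then over-assigns blocks that fail here moments later.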