From: "Koji Noguchi (JIRA)"
To: core-dev@hadoop.apache.org
Reply-To: core-dev@hadoop.apache.org
Date: Mon, 7 Jul 2008 10:22:31 -0700 (PDT)
Message-ID: <407995319.1215451351564.JavaMail.jira@brutus>
Subject: [jira] Created: (HADOOP-3707) Frequent DiskOutOfSpaceException on almost-full datanodes

Frequent DiskOutOfSpaceException on almost-full datanodes
---------------------------------------------------------

                 Key: HADOOP-3707
                 URL: https://issues.apache.org/jira/browse/HADOOP-3707
             Project: Hadoop Core
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.17.0
            Reporter: Koji Noguchi

On a datanode that is completely full (leaving only the reserved space), we frequently see the target node reporting:

{noformat}
2008-07-07 16:54:44,707 INFO org.apache.hadoop.dfs.DataNode: Receiving block blk_3328886742742952100 src: /11.1.11.111:22222 dest: /11.1.11.111:22222
2008-07-07 16:54:44,708 INFO org.apache.hadoop.dfs.DataNode: writeBlock blk_3328886742742952100 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Insufficient space for an additional block
2008-07-07 16:54:44,708 ERROR org.apache.hadoop.dfs.DataNode: 33.3.33.33:22222:DataXceiver: org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Insufficient space for an additional block
        at org.apache.hadoop.dfs.FSDataset$FSVolumeSet.getNextVolume(FSDataset.java:444)
        at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:716)
        at org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:2187)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1113)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:976)
        at java.lang.Thread.run(Thread.java:619)
{noformat}

The sender reports:

{noformat}
2008-07-07 16:54:44,712 INFO org.apache.hadoop.dfs.DataNode: 11.1.11.111:22222:Exception writing block blk_3328886742742952100 to mirror 33.3.33.33:22222
java.io.IOException: Broken pipe
        at sun.nio.ch.FileDispatcher.write0(Native Method)
        at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
        at sun.nio.ch.IOUtil.write(IOUtil.java:75)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
        at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:53)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:140)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:144)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:105)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
        at java.io.DataOutputStream.write(DataOutputStream.java:90)
        at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveChunk(DataNode.java:2292)
        at org.apache.hadoop.dfs.DataNode$BlockReceiver.receivePacket(DataNode.java:2411)
        at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2476)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1204)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:976)
        at java.lang.Thread.run(Thread.java:619)
{noformat}

Since this is not happening constantly, my guess is that whenever the datanode gets a small amount of space available, the namenode over-assigns blocks to it, which can then fail the block pipeline. (Note: before 0.17, the namenode was much slower at assigning blocks.)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
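[Editor's note] For readers unfamiliar with the code path in the first stack trace: the exception is raised during volume selection on the receiving datanode. The sketch below is not the actual FSDataset code from 0.17; it is a simplified, hypothetical reconstruction (the Volume interface, getAvailable(), and the class name are made up for illustration) of a round-robin getNextVolume that rejects a write unless some volume reports room for one more full block. It shows why a nearly full node that frees only a little space can still throw "Insufficient space for an additional block" for most of the blocks the namenode assigns to it.

{noformat}
import java.util.List;

/** Simplified, hypothetical sketch of round-robin volume selection on a datanode. */
class VolumeSetSketch {

    /** Stand-in for org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException. */
    static class DiskOutOfSpaceException extends RuntimeException {
        DiskOutOfSpaceException(String msg) { super(msg); }
    }

    /** Stand-in for a single storage directory (an FSVolume in the real code). */
    interface Volume {
        long getAvailable();   // free bytes after subtracting the configured reserve
    }

    private final List<Volume> volumes;
    private int curVolume = 0;  // round-robin cursor

    VolumeSetSketch(List<Volume> volumes) { this.volumes = volumes; }

    /**
     * Pick the next volume that can hold one more block.
     * If no volume has more than blockSize bytes available, the write is
     * rejected with the message seen in the datanode log above.
     */
    synchronized Volume getNextVolume(long blockSize) {
        int startVolume = curVolume;
        while (true) {
            Volume volume = volumes.get(curVolume);
            curVolume = (curVolume + 1) % volumes.size();
            if (volume.getAvailable() > blockSize) {
                return volume;
            }
            if (curVolume == startVolume) {
                throw new DiskOutOfSpaceException(
                        "Insufficient space for an additional block");
            }
        }
    }
}
{noformat}

If the hypothesis above is right, the namenode sees the small amount of freed space, schedules several blocks to this node, and all but the first few fail this check; the receiving datanode drops the connection, which is why the upstream sender sees the "Broken pipe" in the second trace.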