From: "Hairong Kuang (JIRA)"
Reply-To: core-dev@hadoop.apache.org
To: core-dev@hadoop.apache.org
Date: Wed, 9 Jul 2008 14:47:31 -0700 (PDT)
Subject: [jira] Commented: (HADOOP-3707) Frequent DiskOutOfSpaceException on almost-full datanodes
Message-ID: <29954850.1215640051699.JavaMail.jira@brutus>
In-Reply-To: <407995319.1215451351564.JavaMail.jira@brutus>

    [ https://issues.apache.org/jira/browse/HADOOP-3707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12612296#action_12612296 ]

Hairong Kuang commented on HADOOP-3707:
---------------------------------------

Raghu, could you please upload a patch for the trunk?
> Frequent DiskOutOfSpaceException on almost-full datanodes
> ---------------------------------------------------------
>
>                 Key: HADOOP-3707
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3707
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: Koji Noguchi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.17.2, 0.18.0, 0.19.0
>
>         Attachments: HADOOP-3707-branch-017.patch, HADOOP-3707-branch-017.patch
>
>
> On a datanode that is completely full (apart from the reserved space), we frequently see the target node reporting:
> {noformat}
> 2008-07-07 16:54:44,707 INFO org.apache.hadoop.dfs.DataNode: Receiving block blk_3328886742742952100 src: /11.1.11.111:22222 dest: /11.1.11.111:22222
> 2008-07-07 16:54:44,708 INFO org.apache.hadoop.dfs.DataNode: writeBlock blk_3328886742742952100 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Insufficient space for an additional block
> 2008-07-07 16:54:44,708 ERROR org.apache.hadoop.dfs.DataNode: 33.3.33.33:22222:DataXceiver: org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Insufficient space for an additional block
>         at org.apache.hadoop.dfs.FSDataset$FSVolumeSet.getNextVolume(FSDataset.java:444)
>         at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:716)
>         at org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:2187)
>         at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1113)
>         at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:976)
>         at java.lang.Thread.run(Thread.java:619)
> {noformat}
> and the sender reporting:
> {noformat}
> 2008-07-07 16:54:44,712 INFO org.apache.hadoop.dfs.DataNode: 11.1.11.111:22222:Exception writing block blk_3328886742742952100 to mirror 33.3.33.33:22222
> java.io.IOException: Broken pipe
>         at sun.nio.ch.FileDispatcher.write0(Native Method)
>         at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
>         at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
>         at sun.nio.ch.IOUtil.write(IOUtil.java:75)
>         at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
>         at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:53)
>         at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:140)
>         at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:144)
>         at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:105)
>         at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>         at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>         at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveChunk(DataNode.java:2292)
>         at org.apache.hadoop.dfs.DataNode$BlockReceiver.receivePacket(DataNode.java:2411)
>         at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2476)
>         at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1204)
>         at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:976)
>         at java.lang.Thread.run(Thread.java:619)
> {noformat}
> Since this does not happen constantly, my guess is that whenever the datanode frees up a little space, the namenode over-assigns blocks to it, which can then fail the block pipeline. A sketch of the failure mode follows below.
> (Note: before 0.17, the namenode was much slower at assigning blocks.)

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
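
For readers tracing the first stack above, here is a minimal sketch of the kind of round-robin space check performed by FSDataset$FSVolumeSet.getNextVolume. All names (VolumeSketch, Volume, VolumeSet, RESERVED_BYTES) are simplified assumptions for illustration, not the actual Hadoop code: once no volume's free space, minus the reserve (in the spirit of dfs.datanode.du.reserved), can hold another block, the scan fails with the "Insufficient space for an additional block" error seen in the log.

{code:java}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative sketch only; all names are hypothetical, not Hadoop APIs. */
class DiskOutOfSpaceException extends IOException {
    DiskOutOfSpaceException(String msg) { super(msg); }
}

class Volume {
    // Space kept off-limits to DFS, in the spirit of dfs.datanode.du.reserved.
    static final long RESERVED_BYTES = 10L * 1024 * 1024;
    private final long capacity;     // total bytes on this partition
    private final AtomicLong used;   // bytes already consumed

    Volume(long capacity, long used) {
        this.capacity = capacity;
        this.used = new AtomicLong(used);
    }

    /** Free space left for new blocks after honoring the reserve. */
    long available() {
        return capacity - used.get() - RESERVED_BYTES;
    }
}

class VolumeSet {
    private final List<Volume> volumes;
    private int curVolume = 0;

    VolumeSet(List<Volume> volumes) { this.volumes = volumes; }

    /**
     * Round-robin scan: return the first volume with room for blockSize,
     * or fail once every volume has been tried, mirroring the
     * "Insufficient space for an additional block" error in the log.
     */
    synchronized Volume getNextVolume(long blockSize) throws IOException {
        int start = curVolume;
        while (true) {
            Volume v = volumes.get(curVolume);
            curVolume = (curVolume + 1) % volumes.size();
            if (v.available() > blockSize) {
                return v;
            }
            if (curVolume == start) {
                throw new DiskOutOfSpaceException(
                        "Insufficient space for an additional block");
            }
        }
    }
}

public class VolumeSketch {
    public static void main(String[] args) throws IOException {
        // Two almost-full 1 GB volumes: 30 MB and 15 MB of raw free space.
        VolumeSet set = new VolumeSet(Arrays.asList(
                new Volume(1L << 30, (1L << 30) - 30L * 1024 * 1024),
                new Volume(1L << 30, (1L << 30) - 15L * 1024 * 1024)));
        long blockSize = 64L * 1024 * 1024; // default DFS block size in 0.17
        set.getNextVolume(blockSize);       // throws DiskOutOfSpaceException
    }
}
{code}

Under this model, a datanode that momentarily frees a few megabytes still fails the check for a 64 MB block, and every block in a burst the namenode assigns against that stale free-space report fails admission the same way, which is consistent with the over-assignment guess in the description.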