Message-ID: <952303901.1232316359535.JavaMail.jira@brutus>
Date: Sun, 18 Jan 2009 14:05:59 -0800 (PST)
From: "stack (JIRA)"
Reply-To: hbase-dev@hadoop.apache.org
To: hbase-dev@hadoop.apache.org
Subject: [jira] Resolved: (HBASE-1132) Can't append to HLog, can't roll log, infinite cycle (another spin on HBASE-930)
In-Reply-To: <388260579.1232238539516.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

     [ https://issues.apache.org/jira/browse/HBASE-1132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack resolved HBASE-1132.
--------------------------
       Resolution: Fixed
    Fix Version/s: 0.19.0

> Can't append to HLog, can't roll log, infinite cycle (another spin on HBASE-930)
> --------------------------------------------------------------------------------
>
>                 Key: HBASE-1132
>                 URL: https://issues.apache.org/jira/browse/HBASE-1132
>             Project: Hadoop HBase
>          Issue Type: Bug
>         Environment: Ryan Rawson cluster (TRUNK)
>            Reporter: stack
>             Fix For: 0.19.0
>
>
> Saw below loop in Ryan Rawson logs:
> {code}
> ....
> 2009-01-16 15:32:43,001 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-2067415907098101353_164148
> 2009-01-16 15:32:45,561 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
> 2009-01-16 15:32:45,561 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_4699358014912484437_164148
> 2009-01-16 15:32:49,004 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 10.10.20.19:50010
> 2009-01-16 15:32:49,004 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-8649135750875451286_164148
> 2009-01-16 15:32:51,562 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2723)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
> 2009-01-16 15:32:51,562 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_4699358014912484437_164148 bad datanode[0] nodes == null
> 2009-01-16 15:32:51,562 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Aborting...
> 2009-01-16 15:32:51,562 FATAL org.apache.hadoop.hbase.regionserver.HLog: Could not append. Requesting close of log
> java.io.IOException: Could not read from stream
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:119)
> 	at java.io.DataInputStream.readByte(DataInputStream.java:265)
> 	at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:325)
> 	at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:346)
> 	at org.apache.hadoop.io.Text.readString(Text.java:400)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2779)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2704)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
> 2009-01-16 15:32:51,563 ERROR org.apache.hadoop.hbase.regionserver.LogRoller: Log rolling failed with ioe:
> java.io.IOException: Could not read from stream
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:119)
> 	at java.io.DataInputStream.readByte(DataInputStream.java:265)
> 	at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:325)
> 	at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:346)
> 	at org.apache.hadoop.io.Text.readString(Text.java:400)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2779)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2704)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
> 2009-01-16 15:32:51,564 FATAL org.apache.hadoop.hbase.regionserver.HLog: Could not append. Requesting close of log
> java.io.IOException: Could not read from stream
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:119)
> 	at java.io.DataInputStream.readByte(DataInputStream.java:265)
> 	at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:325)
> 	at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:346)
> 	at org.apache.hadoop.io.Text.readString(Text.java:400)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2779)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2704)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
> 2009-01-16 15:32:51,563 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: java.io.IOException: Could not read from stream
> 2009-01-16 15:32:51,564 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: java.io.IOException: Could not read from stream
> 2009-01-16 15:32:51,564 FATAL org.apache.hadoop.hbase.regionserver.HLog: Could not append. Requesting close of log
> java.io.IOException: Could not read from stream
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:119)
> 	at java.io.DataInputStream.readByte(DataInputStream.java:265)
> 	at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:325)
> 	at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:346)
> 	at org.apache.hadoop.io.Text.readString(Text.java:400)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2779)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2704)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
> ...
> {code}
> For HBASE-930 we triggered an abort on a different exception type; we should do the same here. If we get an IOE with "Can't read from stream", shut down. The filesystem check seems to be coming back fine and dandy.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
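The fix described in the comment above — treat an IOException whose message says the stream is unreadable as fatal and abort, instead of endlessly requesting a log close that can never succeed — can be sketched roughly as follows. This is a minimal illustration, not the actual HBase patch; the class and method names (`LogRollerSketch`, `isUnrecoverable`, `handleRollFailure`) and the ABORT/RETRY return values are hypothetical.

```java
import java.io.IOException;

// Hypothetical sketch of the HBASE-1132 behavior: on an unrecoverable
// IOException from HDFS during append/log-roll, request a region server
// abort (as HBASE-930 did for its exception type) rather than retrying
// the roll forever. Names here are illustrative, not real HBase code.
public class LogRollerSketch {

    // Heuristic from the issue: an IOE saying the stream cannot be read
    // will not go away by closing and re-opening the log.
    static boolean isUnrecoverable(IOException e) {
        String msg = e.getMessage();
        return msg != null && msg.contains("Could not read from stream");
    }

    // Decide what the roller should do with a failed roll attempt.
    static String handleRollFailure(IOException e) {
        if (isUnrecoverable(e)) {
            return "ABORT";  // shut the region server down
        }
        return "RETRY";      // transient error: close and re-open the log
    }

    public static void main(String[] args) {
        System.out.println(handleRollFailure(
            new IOException("Could not read from stream")));  // ABORT
        System.out.println(handleRollFailure(
            new IOException("Bad connect ack")));             // RETRY
    }
}
```

The point of the string match is only to break the infinite cycle seen in the log above: the filesystem health check kept passing, so only the exception itself distinguishes a retryable roll failure from a dead write pipeline.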