hbase-dev mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] Assigned: (HBASE-1132) Can't append to HLog, can't roll log, infinite cycle (another spin on HBASE-930)
Date Sun, 18 Jan 2009 22:05:59 GMT

     [ https://issues.apache.org/jira/browse/HBASE-1132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack reassigned HBASE-1132:
----------------------------

    Assignee: stack

> Can't append to HLog, can't roll log, infinite cycle (another spin on HBASE-930)
> --------------------------------------------------------------------------------
>
>                 Key: HBASE-1132
>                 URL: https://issues.apache.org/jira/browse/HBASE-1132
>             Project: Hadoop HBase
>          Issue Type: Bug
>         Environment: Ryan Rawson cluster (TRUNK)
>            Reporter: stack
>            Assignee: stack
>             Fix For: 0.19.0
>
>
> Saw below loop in Ryan Rawson logs:
> {code}
> ....
> 2009-01-16 15:32:43,001 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-2067415907098101353_164148
> 2009-01-16 15:32:45,561 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
> 2009-01-16 15:32:45,561 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_4699358014912484437_164148
> 2009-01-16 15:32:49,004 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 10.10.20.19:50010
> 2009-01-16 15:32:49,004 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-8649135750875451286_164148
> 2009-01-16 15:32:51,562 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2723)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
> 2009-01-16 15:32:51,562 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_4699358014912484437_164148 bad datanode[0] nodes == null
> 2009-01-16 15:32:51,562 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Aborting...
> 2009-01-16 15:32:51,562 FATAL org.apache.hadoop.hbase.regionserver.HLog: Could not append. Requesting close of log
> java.io.IOException: Could not read from stream
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:119)
>     at java.io.DataInputStream.readByte(DataInputStream.java:265)
>     at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:325)
>     at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:346)
>     at org.apache.hadoop.io.Text.readString(Text.java:400)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2779)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2704)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
> 2009-01-16 15:32:51,563 ERROR org.apache.hadoop.hbase.regionserver.LogRoller: Log rolling failed with ioe:
> java.io.IOException: Could not read from stream
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:119)
>     at java.io.DataInputStream.readByte(DataInputStream.java:265)
>     at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:325)
>     at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:346)
>     at org.apache.hadoop.io.Text.readString(Text.java:400)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2779)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2704)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
> 2009-01-16 15:32:51,564 FATAL org.apache.hadoop.hbase.regionserver.HLog: Could not append. Requesting close of log
> java.io.IOException: Could not read from stream
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:119)
>     at java.io.DataInputStream.readByte(DataInputStream.java:265)
>     at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:325)
>     at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:346)
>     at org.apache.hadoop.io.Text.readString(Text.java:400)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2779)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2704)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
> 2009-01-16 15:32:51,563 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: java.io.IOException: Could not read from stream
> 2009-01-16 15:32:51,564 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: java.io.IOException: Could not read from stream
> 2009-01-16 15:32:51,564 FATAL org.apache.hadoop.hbase.regionserver.HLog: Could not append. Requesting close of log
> java.io.IOException: Could not read from stream
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:119)
>     at java.io.DataInputStream.readByte(DataInputStream.java:265)
>     at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:325)
>     at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:346)
>     at org.apache.hadoop.io.Text.readString(Text.java:400)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2779)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2704)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
> ...
> {code}
> For HBASE-930, with a different exception type, we triggered an abort. We should do the same here: on an IOException with "Could not read from stream", shut down. The filesystem check seems to be coming back fine and dandy.
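A minimal sketch of the behavior proposed above, assuming we key off the exception message seen in the log. This is hypothetical illustration code, not the actual HBASE-1132 patch; the class and method names are invented, and real HBase would call its own server-abort path rather than print a string.

```java
import java.io.IOException;

// Hypothetical sketch: when an append or log roll fails with an
// IOException indicating the HDFS write pipeline is dead, request a
// shutdown instead of retrying the append/roll cycle forever.
public class LogRollAbortSketch {

    // Returns true when retrying the append/roll cannot succeed.
    // The message check mirrors the "Could not read from stream"
    // errors in the log above.
    static boolean isUnrecoverable(IOException e) {
        String msg = e.getMessage();
        return msg != null && msg.contains("Could not read from stream");
    }

    public static void main(String[] args) {
        IOException ioe = new IOException("Could not read from stream");
        if (isUnrecoverable(ioe)) {
            // In a real region server this would trigger an abort,
            // as was done for HBASE-930.
            System.out.println("abort");
        } else {
            System.out.println("retry");
        }
    }
}
```

Matching on an exception message string is brittle, but at this point in the log the DFSClient has already abandoned every replica ("nodes == null"), so no retry of the append can succeed.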

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

