hbase-issues mailing list archives

From "Ted Yu (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HBASE-15252) Data loss when replaying wal if HDFS timeout
Date Thu, 11 Feb 2016 15:22:18 GMT

    [ https://issues.apache.org/jira/browse/HBASE-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142856#comment-15142856 ]

Ted Yu edited comment on HBASE-15252 at 2/11/16 3:22 PM:
---------------------------------------------------------

Edit:
The following test failure can be reproduced locally with the patch:
{code}
Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 45.211 sec <<< FAILURE! - in org.apache.hadoop.hbase.regionserver.wal.TestSecureWALReplay
testDatalossWhenInputError(org.apache.hadoop.hbase.regionserver.wal.TestSecureWALReplay)  Time elapsed: 0.352 sec  <<< ERROR!
java.io.IOException: Got unknown writer class: SecureProtobufLogWriter
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initInternal(ProtobufLogReader.java:205)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initReader(ProtobufLogReader.java:154)
	at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:66)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:149)
	at org.apache.hadoop.hbase.regionserver.wal.TestWALReplay.testDatalossWhenInputError(TestWALReplay.java:978)
{code}


was (Author: yuzhihong@gmail.com):
+1

> Data loss when replaying wal if HDFS timeout
> --------------------------------------------
>
>                 Key: HBASE-15252
>                 URL: https://issues.apache.org/jira/browse/HBASE-15252
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.17
>            Reporter: Duo Zhang
>            Assignee: Duo Zhang
>             Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.0.4, 0.98.18
>
>         Attachments: HBASE-15252-testcase.patch, HBASE-15252.patch
>
>
> This is a problem introduced by HBASE-13825, where we changed the exception type in the catch block of the {{readNext}} method of {{ProtobufLogReader}}.
> {code:title=ProtobufLogReader.java}
>       try {
>           ......
>           ProtobufUtil.mergeFrom(builder, new LimitInputStream(this.inputStream, size),
>             (int)size);
>         } catch (IOException ipbe) { // <------ used to be InvalidProtocolBufferException
>           throw (EOFException) new EOFException("Invalid PB, EOF? Ignoring; originalPosition=" +
>             originalPosition + ", currentPosition=" + this.inputStream.getPos() +
>             ", messageSize=" + size + ", currentAvailable=" + available).initCause(ipbe);
>         }
> {code}
> Here, if the {{inputStream}} throws an {{IOException}} due to a timeout or something similar, we just convert it to an {{EOFException}}, and at the bottom of this method we ignore the {{EOFException}} and return false. This causes the upper layer to think we have reached the end of the file, so during replay we treat the HDFS timeout error as a normal end of file, which causes data loss.
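
For illustration only, here is a minimal sketch of one way to avoid swallowing the error, mirroring the snippet above (the fields {{originalPosition}}, {{size}} and {{available}} are taken from it); this is not the actual HBASE-15252 patch. The idea is to downgrade only a genuine protobuf parse failure to {{EOFException}} and let every other {{IOException}}, such as an HDFS read timeout, propagate to the caller:

{code:title=Sketch}
try {
  ProtobufUtil.mergeFrom(builder, new LimitInputStream(this.inputStream, size), (int) size);
} catch (IOException ipbe) {
  // Hypothetical guard: a timeout or other stream error should surface to the caller
  // instead of being mistaken for a clean end of file.
  if (!(ipbe instanceof InvalidProtocolBufferException)) {
    throw ipbe;
  }
  // A truncated or garbled trailing record plausibly is the real end of the file, so keep
  // the existing behaviour of downgrading it to an EOFException that the caller ignores.
  throw (EOFException) new EOFException("Invalid PB, EOF? Ignoring; originalPosition=" +
      originalPosition + ", currentPosition=" + this.inputStream.getPos() +
      ", messageSize=" + size + ", currentAvailable=" + available).initCause(ipbe);
}
{code}

With a shape like this, {{readNext}} still returns false for a partially written trailing record, while a timeout is reported as an error so the replay can retry instead of silently dropping edits.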



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
