hbase-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-19289) CommonFSUtils$StreamLacksCapabilityException: hflush when running test against hadoop3 beta1
Date Sun, 19 Nov 2017 11:08:00 GMT

    [ https://issues.apache.org/jira/browse/HBASE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258446#comment-16258446 ]

Steve Loughran commented on HBASE-19289:
----------------------------------------

Closed HADOOP-15051 as wontfix. LocalFS output streams don't declare support for hflush/hsync
for the following reason, as covered in HADOOP-13327 (outstanding, reviews welcome):

h3. Output streams which do not implement the flush/persistence semantics of hflush/hsync MUST NOT declare that their streams have that capability.

LocalFileSystem is a subclass of ChecksumFileSystem; ChecksumFileSystem output streams don't
implement hflush/hsync, so not declaring the capability is the correct behaviour in the Hadoop code.
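
To make that concrete, here is a rough sketch (illustrative only, not Hadoop source; the class and field names are invented) of a stream reporting its capabilities honestly through the StreamCapabilities and Syncable interfaces (Hadoop 2.9+/3.x): it only answers true for "hflush"/"hsync" when a flush really persists data, which is exactly what the ChecksumFileSystem streams cannot promise.

{code:java}
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.fs.StreamCapabilities;
import org.apache.hadoop.fs.Syncable;

// Illustrative stream: advertises hflush/hsync only when it can honour them.
class DurabilityAwareStream extends OutputStream implements Syncable, StreamCapabilities {
  private final boolean durableFlushes; // true only if flushes reach stable storage

  DurabilityAwareStream(boolean durableFlushes) {
    this.durableFlushes = durableFlushes;
  }

  @Override
  public boolean hasCapability(String capability) {
    // The rule above: never claim hflush/hsync unless the semantics are implemented.
    if ("hflush".equals(capability) || "hsync".equals(capability)) {
      return durableFlushes;
    }
    return false;
  }

  @Override
  public void hflush() throws IOException {
    if (!durableFlushes) {
      throw new UnsupportedOperationException("hflush not supported");
    }
    // ... push buffered bytes to durable storage here ...
  }

  @Override
  public void hsync() throws IOException {
    hflush();
  }

  /** Legacy alias in case the Syncable version in use still declares sync(). */
  public void sync() throws IOException {
    hflush();
  }

  @Override
  public void write(int b) throws IOException {
    // ... buffer the byte; real persistence only on hflush()/hsync() ...
  }
}
{code}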

If HBase requires the methods for the correct persistence of its data, then it cannot safely
use the local FS as the destination of its output. Its check is therefore also the correct behaviour.
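
As a rough illustration of that check (the names here are invented, not the actual CommonFSUtils code), the fail-fast pattern amounts to probing the FSDataOutputStream for the "hflush" capability before trusting it with WAL data, and raising an exception like the one in the stack trace below when the probe fails:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StreamCapabilities;

public final class WalStreamCheck {

  /** Illustrative stand-in for CommonFSUtils$StreamLacksCapabilityException. */
  public static class StreamLacksCapabilityException extends Exception {
    StreamLacksCapabilityException(String capability) {
      super(capability);
    }
  }

  /** Fail fast if the stream does not advertise hflush support. */
  public static void requireHflush(FSDataOutputStream out)
      throws StreamLacksCapabilityException {
    if (!(out instanceof StreamCapabilities)
        || !((StreamCapabilities) out).hasCapability("hflush")) {
      throw new StreamLacksCapabilityException("hflush");
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // LocalFileSystem wraps ChecksumFileSystem streams, so the probe is expected to fail here.
    FileSystem fs = FileSystem.getLocal(conf);
    try (FSDataOutputStream out = fs.create(new Path("target/wal-probe"))) {
      requireHflush(out);
    }
  }

  private WalStreamCheck() {
  }
}
{code}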

In which case, "expressly tell folks not to run HBase on top of LocalFileSystem" is the correct
action on your part. People must not use the local FS as a direct destination for HBase output.

> CommonFSUtils$StreamLacksCapabilityException: hflush when running test against hadoop3 beta1
> --------------------------------------------------------------------------------------------
>
>                 Key: HBASE-19289
>                 URL: https://issues.apache.org/jira/browse/HBASE-19289
>             Project: HBase
>          Issue Type: Test
>            Reporter: Ted Yu
>         Attachments: 19289.v1.txt
>
>
> As of commit d8fb10c8329b19223c91d3cda6ef149382ad4ea0, I encountered the following exception when running a unit test against hadoop3 beta1:
> {code}
> testRefreshStoreFiles(org.apache.hadoop.hbase.regionserver.TestHStore)  Time elapsed: 0.061 sec  <<< ERROR!
> java.io.IOException: cannot get log writer
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> Caused by: org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: hflush
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
