hbase-issues mailing list archives

From "Sean Busbey (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-19289) CommonFSUtils$StreamLacksCapabilityException: hflush when running test against hadoop3 beta1
Date Wed, 06 Dec 2017 03:58:00 GMT

    [ https://issues.apache.org/jira/browse/HBASE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279621#comment-16279621 ]

Sean Busbey commented on HBASE-19289:
-------------------------------------

{quote}
bq. unless we want to expressly tell folks not to run HBase on top of LocalFileSystem, in which case why are we running tests against it in the first place?
bq. ps: Are people using HBase against file:// today? If so, they've not been getting the persistence/durability HBase needs. Tell them to stop it.

Is this solvable by a flag that says "yes I acknowledge that I may lose data"? I think we're
well aware that we may experience "data loss" with XSumFileSystem and this is OK because we
just don't care (because it's a short lived test). We don't want to wait the extra 5+secs
for a full MiniDFSCluster.
{quote}

That doesn't work for our reliance on LocalFileSystem in standalone mode ([ref the quickstart
guide|http://hbase.apache.org/book.html#_get_started_with_hbase]). On LocalFileSystem our
sync / flush calls don't actually do anything against the local OS, so the WAL is essentially useless.
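
For anyone following along, here's a rough sketch (the class name and path are made up) of the Hadoop 3 {{StreamCapabilities}} probe that underlies this check; {{FSDataOutputStream#hasCapability}} and {{StreamCapabilities.HFLUSH}} are the relevant Hadoop 3 API:

{code}
// Minimal sketch (not HBase's actual check): probe a Hadoop 3 output stream
// for the hflush capability before trusting it for WAL durability.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StreamCapabilities;

public class HflushCapabilityProbe {           // name is made up for the sketch
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);  // file:// -- the case at issue
    try (FSDataOutputStream out = fs.create(new Path("/tmp/wal-probe"))) {
      // Hadoop 3 streams advertise their capabilities; a LocalFileSystem
      // stream is expected to report false for hflush, which is what
      // CommonFSUtils surfaces as StreamLacksCapabilityException.
      boolean canHflush = out.hasCapability(StreamCapabilities.HFLUSH);
      System.out.println("hflush supported: " + canHflush);
    }
  }
}
{code}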

> CommonFSUtils$StreamLacksCapabilityException: hflush when running test against hadoop3 beta1
> ---------------------------------------------------------------------------------------------
>
>                 Key: HBASE-19289
>                 URL: https://issues.apache.org/jira/browse/HBASE-19289
>             Project: HBase
>          Issue Type: Test
>            Reporter: Ted Yu
>            Assignee: Mike Drob
>         Attachments: 19289.v1.txt, 19289.v2.txt, HBASE-19289.patch
>
>
> As of commit d8fb10c8329b19223c91d3cda6ef149382ad4ea0, I encountered the following exception when running a unit test against hadoop3 beta1:
> {code}
> testRefreshStoreFiles(org.apache.hadoop.hbase.regionserver.TestHStore)  Time elapsed: 0.061 sec  <<< ERROR!
> java.io.IOException: cannot get log writer
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> Caused by: org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: hflush
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
> 	at org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
