Date: Tue, 6 Feb 2018 01:22:00 +0000 (UTC)
From: "stack (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Reopened] (HBASE-19841) Tests against hadoop3 fail with StreamLacksCapabilityException

     [ https://issues.apache.org/jira/browse/HBASE-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack reopened HBASE-19841:
---------------------------

Reopening. This breaks launching of MR jobs on a cluster. Here is what a good launch looks like:

{code}
...
18/02/05 17:11:33 INFO impl.YarnClientImpl: Submitted application application_1517369646236_0009
18/02/05 17:11:33 INFO mapreduce.Job: The url to track the job: http://ve0524.halxg.cloudera.com:10134/proxy/application_1517369646236_0009/
18/02/05 17:11:33 INFO mapreduce.Job: Running job: job_1517369646236_0009
18/02/05 17:11:40 INFO mapreduce.Job: Job job_1517369646236_0009 running in uber mode : false
18/02/05 17:11:40 INFO mapreduce.Job:  map 0% reduce 0%
18/02/05 17:11:57 INFO mapreduce.Job:  map 14% reduce 0%
...
{code}

... but now it does this:
{code}
18/02/05 17:17:54 INFO mapreduce.Job: The url to track the job: http://ve0524.halxg.cloudera.com:10134/proxy/application_1517369646236_0011/
18/02/05 17:17:54 INFO mapreduce.Job: Running job: job_1517369646236_0011
18/02/05 17:17:56 INFO mapreduce.Job: Job job_1517369646236_0011 running in uber mode : false
18/02/05 17:17:56 INFO mapreduce.Job:  map 0% reduce 0%
18/02/05 17:17:56 INFO mapreduce.Job: Job job_1517369646236_0011 failed with state FAILED due to: Application application_1517369646236_0011 failed 2 times due to AM Container for appattempt_1517369646236_0011_000002 exited with  exitCode: -1000
Failing this attempt.Diagnostics: File file:/tmp/stack/.staging/job_1517369646236_0011/job.splitmetainfo does not exist
java.io.FileNotFoundException: File file:/tmp/stack/.staging/job_1517369646236_0011/job.splitmetainfo does not exist
	at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:635)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:861)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:625)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
	at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
	at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
	at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
	at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
For more detailed output, check the application tracking page: http://ve0524.halxg.cloudera.com:8188/applicationhistory/app/application_1517369646236_0011 Then click on links to logs of each attempt.
. Failing the application.
18/02/05 17:17:56 INFO mapreduce.Job: Counters: 0
{code}

If I revert this patch, the submit runs again. I'd made the staging dir /tmp/stack and seemed to get further... The job staging dir was created in the local filesystem, but it seems we then go looking for it up in HDFS. My guess is that stamping the fs as local until the mini HDFS cluster starts works for the unit-test case, but it breaks the filesystem inference that lets the above submission work. I'd like to revert this if that's ok.

> Tests against hadoop3 fail with StreamLacksCapabilityException
> --------------------------------------------------------------
>
>                 Key: HBASE-19841
>                 URL: https://issues.apache.org/jira/browse/HBASE-19841
>             Project: HBase
>          Issue Type: Test
>            Reporter: Ted Yu
>            Assignee: Mike Drob
>            Priority: Major
>             Fix For: 2.0.0-beta-2
>
>         Attachments: 19841.007.patch, 19841.06.patch, 19841.v0.txt, 19841.v1.txt, HBASE-19841.v10.patch, HBASE-19841.v11.patch, HBASE-19841.v11.patch, HBASE-19841.v2.patch, HBASE-19841.v3.patch, HBASE-19841.v4.patch, HBASE-19841.v5.patch, HBASE-19841.v7.patch, HBASE-19841.v8.patch, HBASE-19841.v8.patch, HBASE-19841.v8.patch, HBASE-19841.v9.patch
>
>
> The following can be observed running against hadoop3:
> {code}
> java.io.IOException: cannot get log writer
> 	at org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
> 	at org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> Caused by:
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: hflush and hsync
> 	at org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
> 	at org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> {code}
> This was due to hbase-server/src/test/resources/hbase-site.xml not being picked up by the Configuration object. Among the configs in that file, the value for "hbase.unsafe.stream.capability.enforce" relaxes the check for the presence of hflush and hsync. Without this config entry, StreamLacksCapabilityException is thrown.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
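The local-vs-HDFS mixup in the reopening comment comes down to how the client resolves scheme-less paths. In Hadoop, fs.defaultFS determines which FileSystem a path like /tmp/stack/.staging resolves against. As a hedged sketch (fs.defaultFS is a standard Hadoop property, but the values below are illustrative, not taken from the patch under discussion), a client whose effective configuration has been stamped with the local filesystem would stage the job under file:/tmp/..., which the YARN localizer then fails to find when it looks for the same path on the cluster filesystem:

{code}
<!-- Illustrative only: if a test-only override like this leaks into job
     submission, the MR staging dir resolves against the local filesystem
     instead of HDFS, producing a job.splitmetainfo FileNotFoundException
     like the one above. -->
<property>
  <name>fs.defaultFS</name>
  <value>file:///</value>
  <!-- on a real cluster this would be an hdfs:// URI -->
</property>
{code}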
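For reference, the relaxation described in the issue text lives in hbase-server/src/test/resources/hbase-site.xml. A minimal sketch of the relevant entry (the property name comes from the issue text above; the surrounding layout and the value false, i.e. "do not enforce", are assumptions about that test resource):

{code}
<configuration>
  <property>
    <!-- Test-only: when false, HBase does not require the WAL output
         stream to advertise hflush/hsync capabilities, so
         StreamLacksCapabilityException is not thrown on filesystems
         that lack them. -->
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
</configuration>
{code}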