From: "Steve Loughran (JIRA)"
To: common-issues@hadoop.apache.org
Date: Mon, 8 May 2017 09:22:04 +0000 (UTC)
Subject: [jira] [Commented] (HADOOP-13372) MR jobs can not access Swift filesystem if Kerberos is enabled

    [ https://issues.apache.org/jira/browse/HADOOP-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16000491#comment-16000491 ]

Steve Loughran commented on HADOOP-13372:
-----------------------------------------

I like the move, but you should go back to assert. It's a bit confusing:

Assert: any failed assertion (or other exception) fails the test.
Assume: throws an AssumptionViolatedException, which tells JUnit to report the test as skipped.

It's how we turn off tests which don't run on Windows, when the object store endpoint isn't set, etc. Using it here will stop real failures being picked up.

I see the patch isn't applying.
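The Assert/Assume distinction above can be sketched in a few lines. Note these are stand-in classes written only to show the semantics, not the real org.junit.Assert / org.junit.Assume APIs: an assertion failure marks the test failed, while an assumption failure makes the runner report it as skipped.

```java
// Illustrative stand-ins for the JUnit semantics described above.
public class AssertVsAssume {

    // Stand-in for JUnit's AssumptionViolatedException.
    static class AssumptionViolated extends RuntimeException {
        AssumptionViolated(String msg) { super(msg); }
    }

    static void assertTrue(String msg, boolean cond) {
        if (!cond) throw new AssertionError(msg);     // test FAILS
    }

    static void assumeTrue(String msg, boolean cond) {
        if (!cond) throw new AssumptionViolated(msg); // test is SKIPPED
    }

    // Mimics how a runner classifies the two exception types.
    static String runTest(Runnable test) {
        try {
            test.run();
            return "PASSED";
        } catch (AssumptionViolated e) {
            return "SKIPPED: " + e.getMessage();
        } catch (AssertionError e) {
            return "FAILED: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        boolean endpointConfigured = false; // e.g. the Swift endpoint is unset

        // assume: reported as skipped, so the run stays green
        System.out.println(runTest(() ->
            assumeTrue("swift endpoint not set", endpointConfigured)));

        // assert: the same condition now fails the test
        System.out.println(runTest(() ->
            assertTrue("swift endpoint not set", endpointConfigured)));
    }
}
```

This is why assume is the wrong tool for a condition that indicates a genuine regression: the skip silently hides it.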
Try creating a patch against branch-2 and name the patch file "HADOOP-13372-branch-2-003.patch" to tell Yetus which branch to apply it against.

thanks,

> MR jobs can not access Swift filesystem if Kerberos is enabled
> --------------------------------------------------------------
>
>                 Key: HADOOP-13372
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13372
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs, fs/swift, security
>    Affects Versions: 2.7.2
>            Reporter: ramtin
>            Assignee: ramtin
>         Attachments: HADOOP-13372.001.patch, HADOOP-13372.002.patch
>
> {code}
> java.lang.IllegalArgumentException: java.net.UnknownHostException:
>     at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
>     at org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:262)
>     at org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:303)
>     at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:524)
>     at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:508)
>     at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
>     at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
>     at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>     at org.apache.hadoop.tools.mapred.CopyOutputFormat.checkOutputSpecs(CopyOutputFormat.java:121)
>     at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
>     at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
>     at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:183)
>     at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>     at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>     at org.apache.hadoop.tools.DistCp.main(DistCp.java:430)
> Caused by: java.net.UnknownHostException:
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
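The stack trace above shows job submission asking each filesystem for a canonical service name and then trying to resolve it as a host:port token service, which is where the UnknownHostException surfaces for Swift URIs. A standalone sketch of that mechanism follows; the classes here (MockFileSystem, HdfsLike, SwiftLike) are hypothetical stand-ins, not Hadoop code, and returning null is sketched as one common pattern for a filesystem that issues no delegation tokens, not necessarily the exact fix in this patch.

```java
// Standalone sketch of the token-collection path in the stack trace:
// FileSystem.collectDelegationTokens() only builds a token service name
// for filesystems that report a canonical service name. A filesystem
// that returns null is skipped, so no hostname resolution is attempted.
import java.util.ArrayList;
import java.util.List;

public class ServiceNameSketch {

    // Minimal stand-in for org.apache.hadoop.fs.FileSystem.
    abstract static class MockFileSystem {
        // null means "I issue no delegation tokens".
        abstract String getCanonicalServiceName();
    }

    static class HdfsLike extends MockFileSystem {
        String getCanonicalServiceName() { return "namenode.example.com:8020"; }
    }

    static class SwiftLike extends MockFileSystem {
        // Sketch of the fix pattern: no delegation tokens, so no
        // service name is offered and no host lookup is triggered.
        String getCanonicalServiceName() { return null; }
    }

    // Mirrors the shape of TokenCache.obtainTokensForNamenodes():
    // collect one token service per filesystem that reports a name.
    static List<String> collectTokenServices(MockFileSystem... filesystems) {
        List<String> services = new ArrayList<>();
        for (MockFileSystem fs : filesystems) {
            String service = fs.getCanonicalServiceName();
            if (service != null) {   // token-less filesystems are skipped
                services.add(service);
            }
        }
        return services;
    }

    public static void main(String[] args) {
        System.out.println(collectTokenServices(new HdfsLike(), new SwiftLike()));
    }
}
```

In the failing case, the Swift filesystem behaves like HdfsLike, handing back a service name that cannot be resolved; the sketch's SwiftLike shows the skip path that avoids the lookup entirely.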