hadoop-common-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-10326) M/R jobs can not access S3 if Kerberos is enabled
Date Tue, 11 Feb 2014 11:15:31 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-10326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897742#comment-13897742 ]

Hudson commented on HADOOP-10326:
---------------------------------

SUCCESS: Integrated in Hadoop-Yarn-trunk #478 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/478/])
HADOOP-10326. M/R jobs can not access S3 if Kerberos is enabled. Contributed by bc Wong. (atm: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1566965)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
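The changed files are S3FileSystem and NativeS3FileSystem plus their contract tests. Judging by the attached patch title ("s3/s3n does not support tokens"), the fix presumably opts the S3 filesystems out of delegation-token collection by making getCanonicalServiceName() return null. A minimal sketch of that approach, using RawLocalFileSystem as a stand-in concrete base class rather than the actual committed change:

{code}
import org.apache.hadoop.fs.RawLocalFileSystem;

// Sketch only, not the committed patch: a FileSystem whose canonical service
// name is null advertises no delegation tokens, so token collection during
// Kerberized job submission skips it instead of trying to resolve its URI
// authority (for s3n:// that authority is the bucket name) as a service host.
public class NoTokenFileSystemSketch extends RawLocalFileSystem {
  @Override
  public String getCanonicalServiceName() {
    return null; // signal: no delegation token support
  }
}
{code}

With something along those lines in place, job submission should never reach SecurityUtil.buildDTServiceName() for the S3 paths, which is the call that fails in the stack trace quoted below.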


> M/R jobs can not access S3 if Kerberos is enabled
> -------------------------------------------------
>
>                 Key: HADOOP-10326
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10326
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 2.2.0
>         Environment: hadoop-1.0.0;MIT kerberos;java 1.6.0_26
> CDH4.3.0(hadoop 2.0.0-alpha);MIT kerberos;java 1.6.0_26
>            Reporter: Manuel DE FERRAN
>              Labels: s3
>             Fix For: 2.4.0
>
>         Attachments: 0001-HADOOP-10326.-s3-s3n-does-not-support-tokens.patch
>
>
> With Kerberos enabled, any job that takes S3 files as input or output fails.
> It can be easily reproduced with the wordcount example shipped in hadoop-examples.jar and a public S3 file:
> {code}
> /opt/hadoop/bin/hadoop --config /opt/hadoop/conf/ jar /opt/hadoop/hadoop-examples-1.0.0.jar wordcount s3n://ubikodpublic/test out01
> {code}
> returns:
> {code}
> 12/08/10 12:40:19 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 192 for hadoop on 10.85.151.233:9000
> 12/08/10 12:40:19 INFO security.TokenCache: Got dt for hdfs://aws04.machine.com:9000/mapred/staging/hadoop/.staging/job_201208101229_0004;uri=10.85.151.233:9000;t.service=10.85.151.233:9000
> 12/08/10 12:40:19 INFO mapred.JobClient: Cleaning up the staging area hdfs://aws04.machine.com:9000/mapred/staging/hadoop/.staging/job_201208101229_0004
> java.lang.IllegalArgumentException: java.net.UnknownHostException: ubikodpublic
>         at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:293)
>         at org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:317)
>         at org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:189)
>         at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:92)
>         at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:79)
>         at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:197)
>         at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:252)
> <SNIP>
> {code}
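> The failing call is SecurityUtil.buildTokenService(), reached because TokenCache asks every input/output filesystem for a canonical delegation-token service name; for an s3n:// URI the authority is the bucket name ("ubikodpublic"), which is not a resolvable host. A simplified, hypothetical illustration of that lookup failing (not the actual Hadoop code path), assuming the bucket name has no DNS entry:
> {code}
> import java.net.InetAddress;
> import java.net.URI;
> import java.net.UnknownHostException;
>
> // Hypothetical illustration only: resolving the s3n URI authority as a
> // host fails because it is a bucket name, not a hostname.
> public class BucketLookupSketch {
>   public static void main(String[] args) {
>     URI input = URI.create("s3n://ubikodpublic/test");
>     try {
>       InetAddress.getByName(input.getHost());
>       System.out.println("resolved " + input.getHost());
>     } catch (UnknownHostException e) {
>       // Mirrors the "UnknownHostException: ubikodpublic" seen above.
>       System.out.println("UnknownHostException: " + e.getMessage());
>     }
>   }
> }
> {code}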



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
