hadoop-mapreduce-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-5240) inside of FileOutputCommitter the initialized Credentials cache appears to be empty
Date Tue, 14 May 2013 01:20:13 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-5240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13656618#comment-13656618 ]

Hadoop QA commented on MAPREDUCE-5240:
--------------------------------------

{color:green}+1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12583015/MAPREDUCE-5240-20130513.txt
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 1 new or modified test file.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app.

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3631//testReport/
Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3631//console

This message is automatically generated.
                
> inside of FileOutputCommitter the initialized Credentials cache appears to be empty
> -----------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5240
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5240
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv2
>    Affects Versions: 2.0.4-alpha
>            Reporter: Roman Shaposhnik
>            Assignee: Vinod Kumar Vavilapalli
>            Priority: Blocker
>             Fix For: 2.0.5-beta
>
>         Attachments: LostCreds.java, MAPREDUCE-5240-20130512.txt, MAPREDUCE-5240-20130513.txt
>
>
> I am attaching a modified wordcount job that clearly demonstrates the problem we've encountered in running Sqoop2 on YARN (BIGTOP-949).
> Here's what running it produces:
> {noformat}
> $ hadoop fs -mkdir in
> $ hadoop fs -put /etc/passwd in
> $ hadoop jar ./bug.jar org.myorg.LostCreds
> 13/05/12 03:13:46 WARN mapred.JobConf: The variable mapred.child.ulimit is no longer used.
> numberOfSecretKeys: 1
> numberOfTokens: 0
> ..............
> ..............
> ..............
> 13/05/12 03:05:35 INFO mapreduce.Job: Job job_1368318686284_0013 failed with state FAILED due to: Job commit failed: java.io.IOException:
> numberOfSecretKeys: 0
> numberOfTokens: 0
> 	at org.myorg.LostCreds$DestroyerFileOutputCommitter.commitJob(LostCreds.java:43)
> 	at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:249)
> 	at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:212)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> 	at java.lang.Thread.run(Thread.java:619)
> {noformat}
> As you can see, even though we've clearly initialized the creds via:
> {noformat}
> job.getCredentials().addSecretKey(new Text("mykey"), "mysecret".getBytes());
> {noformat}
> The key doesn't seem to appear later in the job.
> This is a pretty critical issue for Sqoop 2 since it appears to be DOA for YARN in Hadoop 2.0.4-alpha.
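
The symptom above can be surfaced with a committer that inspects the credentials visible at commit time. The sketch below is illustrative only, not the attached LostCreds.java: the class name and the exact failure message are hypothetical, but the API calls (JobContext.getCredentials(), Credentials.numberOfSecretKeys()/numberOfTokens()) are the ones the reproducer's stack trace and output point at. On an affected cluster, the secret key added at submission would be expected here but the cache comes up empty:

```java
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
import org.apache.hadoop.security.Credentials;

// Hypothetical probe committer (names illustrative; not the attached LostCreds.java).
public class CredsProbeCommitter extends FileOutputCommitter {

  public CredsProbeCommitter(Path outputPath, TaskAttemptContext context)
      throws IOException {
    super(outputPath, context);
  }

  @Override
  public void commitJob(JobContext context) throws IOException {
    // The job added a secret key at submission via
    // job.getCredentials().addSecretKey(new Text("mykey"), "mysecret".getBytes());
    // so numberOfSecretKeys() should be at least 1 here.
    Credentials creds = context.getCredentials();
    if (creds.numberOfSecretKeys() == 0) {
      // Fail the commit loudly, mirroring the counts seen in the report above.
      throw new IOException("numberOfSecretKeys: " + creds.numberOfSecretKeys()
          + "\nnumberOfTokens: " + creds.numberOfTokens());
    }
    super.commitJob(context);
  }
}
```

Wiring this in with job.setOutputFormatClass() on a custom FileOutputFormat that returns this committer would make the job fail at commit, as in the log above, whenever the credentials cache arrives empty in the AM's CommitterEventHandler.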

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
