hadoop-yarn-issues mailing list archives

From "Robert Kanter (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (YARN-1795) After YARN-713, using FairScheduler can cause an InvalidToken Exception for NMTokens
Date Thu, 13 Mar 2014 17:31:43 GMT

     [ https://issues.apache.org/jira/browse/YARN-1795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Robert Kanter updated YARN-1795:
--------------------------------

    Description: 
Running the Oozie unit tests against a Hadoop build with YARN-713 causes many of the tests
to be flaky.  Doing some digging, I found that they were failing because some of the MR jobs
were failing; I found this in the syslog of the failed jobs:
{noformat}
2014-03-05 16:18:23,452 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
Diagnostics report from attempt_1394064846476_0013_m_000000_0: Container launch failed for
container_1394064846476_0013_01_000003 : org.apache.hadoop.security.token.SecretManager$InvalidToken:
No NMToken sent for 192.168.1.77:50759
       at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.newProxy(ContainerManagementProtocolProxy.java:206)
       at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.<init>(ContainerManagementProtocolProxy.java:196)
       at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.getProxy(ContainerManagementProtocolProxy.java:117)
       at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl.getCMProxy(ContainerLauncherImpl.java:403)
       at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:138)
       at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
       at java.lang.Thread.run(Thread.java:744)
{noformat}

I did some debugging and found that the NMTokenCache has a different port number than what's
being looked up.  For example, the NMTokenCache had one token with address 192.168.1.77:58217
but ContainerManagementProtocolProxy.java:119 is looking for 192.168.1.77:58213. The 58213
address comes from ContainerLauncherImpl's constructor. So when the Container is being launched
it somehow has a different port than when the token was created.

Any ideas why the port numbers wouldn't match?
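To make the failure mode concrete, here is a simplified sketch (not Hadoop's actual NMTokenCache or ContainerManagementProtocolProxy classes, just an illustration) of why a token cache keyed by an exact "host:port" string misses when the launcher looks up the same host under a different port:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a token cache keyed by the node's "host:port"
// address string, mimicking the lookup that fails in the stack trace above.
public class NMTokenLookupSketch {
    private final Map<String, String> cache = new HashMap<>();

    // Register a token under the node address the token was issued for.
    public void setToken(String nodeAddr, String token) {
        cache.put(nodeAddr, token);
    }

    // An exact-string lookup: any port difference is a cache miss, which
    // surfaces as the "No NMToken sent for ..." InvalidToken error.
    public String getToken(String nodeAddr) {
        String token = cache.get(nodeAddr);
        if (token == null) {
            throw new IllegalStateException("No NMToken sent for " + nodeAddr);
        }
        return token;
    }

    public static void main(String[] args) {
        NMTokenLookupSketch sketch = new NMTokenLookupSketch();
        // Token cached under the port the NM reported when the token was created...
        sketch.setToken("192.168.1.77:58217", "token-A");
        // ...but the container launch looks up the node under a different port.
        try {
            sketch.getToken("192.168.1.77:58213");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The host resolves identically in both cases; only the port differs, so nothing short of matching the exact address the token was cached under will find it.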

Update: This also happens in an actual cluster, not just Oozie's unit tests.

  was:
Running the Oozie unit tests against a Hadoop build with YARN-713 causes many of the tests
to be flaky.  Doing some digging, I found that they were failing because some of the MR jobs
were failing; I found this in the syslog of the failed jobs:
{noformat}
2014-03-05 16:18:23,452 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
Diagnostics report from attempt_1394064846476_0013_m_000000_0: Container launch failed for
container_1394064846476_0013_01_000003 : org.apache.hadoop.security.token.SecretManager$InvalidToken:
No NMToken sent for 192.168.1.77:50759
       at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.newProxy(ContainerManagementProtocolProxy.java:206)
       at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.<init>(ContainerManagementProtocolProxy.java:196)
       at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.getProxy(ContainerManagementProtocolProxy.java:117)
       at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl.getCMProxy(ContainerLauncherImpl.java:403)
       at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:138)
       at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
       at java.lang.Thread.run(Thread.java:744)
{noformat}

I did some debugging and found that the NMTokenCache has a different port number than what's
being looked up.  For example, the NMTokenCache had one token with address 192.168.1.77:58217
but ContainerManagementProtocolProxy.java:119 is looking for 192.168.1.77:58213. The 58213
address comes from ContainerLauncherImpl's constructor. So when the Container is being launched
it somehow has a different port than when the token was created.

Any ideas why the port numbers wouldn't match?

        Summary: After YARN-713, using FairScheduler can cause an InvalidToken Exception for
NMTokens  (was: Oozie tests are flakey after YARN-713)

We've now seen this problem in an actual cluster, not just Oozie's unit tests; so this is
definitely a problem and not something funny we're doing in the tests.  

I've also determined that this only happens with the FairScheduler; the CapacityScheduler
seems to work fine.
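Since the CapacityScheduler is unaffected, one possible interim workaround while this is investigated would be to switch the RM to it in yarn-site.xml (standard YARN configuration, not a fix for the underlying bug):

```xml
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```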

I've updated the name of the JIRA accordingly.  

> After YARN-713, using FairScheduler can cause an InvalidToken Exception for NMTokens
> ------------------------------------------------------------------------------------
>
>                 Key: YARN-1795
>                 URL: https://issues.apache.org/jira/browse/YARN-1795
>             Project: Hadoop YARN
>          Issue Type: Bug
>    Affects Versions: 2.4.0
>            Reporter: Robert Kanter
>            Priority: Critical
>         Attachments: org.apache.oozie.action.hadoop.TestMapReduceActionExecutor-output.txt,
syslog
>
>
> Running the Oozie unit tests against a Hadoop build with YARN-713 causes many of the
tests to be flaky.  Doing some digging, I found that they were failing because some of the
MR jobs were failing; I found this in the syslog of the failed jobs:
> {noformat}
> 2014-03-05 16:18:23,452 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
Diagnostics report from attempt_1394064846476_0013_m_000000_0: Container launch failed for
container_1394064846476_0013_01_000003 : org.apache.hadoop.security.token.SecretManager$InvalidToken:
No NMToken sent for 192.168.1.77:50759
>        at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.newProxy(ContainerManagementProtocolProxy.java:206)
>        at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.<init>(ContainerManagementProtocolProxy.java:196)
>        at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.getProxy(ContainerManagementProtocolProxy.java:117)
>        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl.getCMProxy(ContainerLauncherImpl.java:403)
>        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:138)
>        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
>        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>        at java.lang.Thread.run(Thread.java:744)
> {noformat}
> I did some debugging and found that the NMTokenCache has a different port number than
what's being looked up.  For example, the NMTokenCache had one token with address 192.168.1.77:58217
but ContainerManagementProtocolProxy.java:119 is looking for 192.168.1.77:58213. The 58213
address comes from ContainerLauncherImpl's constructor. So when the Container is being launched
it somehow has a different port than when the token was created.
> Any ideas why the port numbers wouldn't match?
> Update: This also happens in an actual cluster, not just Oozie's unit tests



--
This message was sent by Atlassian JIRA
(v6.2#6252)
