hadoop-yarn-issues mailing list archives

From "Robert Metzger (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-3086) Make NodeManager memory configurable in MiniYARNCluster
Date Fri, 23 Jan 2015 17:23:35 GMT

    [ https://issues.apache.org/jira/browse/YARN-3086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14289546#comment-14289546 ]

Robert Metzger commented on YARN-3086:
--------------------------------------

It seems that even on "trunk", tests are failing in the "hadoop-yarn-server-resourcemanager"
package. Looks like it's pretty hard to verify whether my change is breaking anything.
I'm uploading an updated patch in a few hours...
{code}
Failed tests: 
  TestAMRestart.testRMAppAttemptFailuresValidityInterval:630 AppAttempt state is not correct
(timedout) expected:<ALLOCATED> but was:<SCHEDULED>
  TestAMRestart.testShouldNotCountFailureToMaxAttemptRetry:405 AppAttempt state is not correct
(timedout) expected:<ALLOCATED> but was:<SCHEDULED>
  TestClientRMTokens.testShortCircuitRenewCancelDifferentHostSamePort:316->checkShortCircuitRenewCancel:363
expected:<getProxy> but was:<null>
  TestClientRMTokens.testShortCircuitRenewCancelDifferentHostDifferentPort:327->checkShortCircuitRenewCancel:363
expected:<getProxy> but was:<null>
  TestClientRMTokens.testShortCircuitRenewCancelSameHostDifferentPort:305->checkShortCircuitRenewCancel:363
expected:<getProxy> but was:<null>
  TestRMRestart.testQueueMetricsOnRMRestart:1812->assertQueueMetrics:1837 expected:<2>
but was:<1>
  TestRMRestart.testRMRestartGetApplicationList:965 
Wanted but not invoked:
rMAppManager.logApplicationSummary(
    isA(org.apache.hadoop.yarn.api.records.ApplicationId)
);
-> at org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartGetApplicationList(TestRMRestart.java:965)

However, there were other interactions with this mock:
-> at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1188)
-> at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1188)
-> at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1188)
-> at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1188)

  TestContainerResourceUsage.testUsageAfterAMRestartWithMultipleContainers:252->amRestartTests:393
Unexcpected MemorySeconds value expected:<-1456158548889> but was:<3265>

Tests in error: 
  TestClientRMTokens.testShortCircuitRenewCancel:285->checkShortCircuitRenewCancel:353
» NullPointer
  TestClientRMTokens.testShortCircuitRenewCancelWildcardAddress:294->checkShortCircuitRenewCancel:353
» NullPointer
  TestAMAuthorization.testUnauthorizedAccess:273 » UnknownHost Invalid host name...
  TestAMAuthorization.testUnauthorizedAccess:273 » UnknownHost Invalid host name...
{code}

> Make NodeManager memory configurable in MiniYARNCluster
> -------------------------------------------------------
>
>                 Key: YARN-3086
>                 URL: https://issues.apache.org/jira/browse/YARN-3086
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: test
>            Reporter: Robert Metzger
>            Priority: Minor
>         Attachments: YARN-3086.patch
>
>
> Apache Flink has a built-in YARN client to deploy it to YARN clusters.
> Recently, we added more tests for the client, using the MiniYARNCluster.
> One of the tests requests more containers than are available. This test works well on
machines with enough memory, but on travis-ci (our test environment) the available main memory
is limited to 3 GB. 
> Therefore, I want to set a custom amount of memory for each NodeManager.
> Right now, the NodeManager memory is hardcoded to 4 GB.
> As discussed on the yarn-dev list, I'm going to create a patch for this issue.
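
For illustration, a minimal sketch of how a test might size the NodeManagers once MiniYARNCluster stops hardcoding 4096 MB. The property used here, {{yarn.nodemanager.resource.memory-mb}} ({{YarnConfiguration.NM_PMEM_MB}}), is the existing NodeManager memory key; whether the attached patch wires it through exactly this way is an assumption, not confirmed by the issue:

{code:java}
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;

public class SmallMemoryMiniClusterSketch {
  public static void main(String[] args) {
    // Sketch only: assumes MiniYARNCluster honors the standard
    // NodeManager memory property instead of a hardcoded 4096 MB.
    YarnConfiguration conf = new YarnConfiguration();
    // 768 MB per NodeManager keeps two NMs well under the
    // 3 GB available on a travis-ci worker.
    conf.setInt(YarnConfiguration.NM_PMEM_MB, 768);

    // testName, numNodeManagers, numLocalDirs, numLogDirs
    MiniYARNCluster cluster =
        new MiniYARNCluster("flink-yarn-test", 2, 1, 1);
    cluster.init(conf);
    cluster.start();
    // ... run the client test that over-requests containers ...
    cluster.stop();
  }
}
{code}

With a configurable value, the "request more containers than available" test can be made deterministic on memory-constrained CI machines instead of depending on the host's physical RAM.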



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
