hadoop-common-dev mailing list archives

From "Hemanth Yamijala (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5964) Fix the 'cluster drain' problem in the Capacity Scheduler wrt High RAM Jobs
Date Thu, 18 Jun 2009 05:35:07 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12721074#action_12721074 ]

Hemanth Yamijala commented on HADOOP-5964:

Arun, I've started looking at this patch. It did not apply cleanly on trunk; TestCapacityScheduler
failed to merge. I tried to fix it - the conflict seemed to be only in an import statement.
But when I ran the test case to check whether the merge was fine, I got the following failures:

junit.framework.AssertionFailedError: null
    at org.apache.hadoop.mapred.TestCapacityScheduler.testUserLimitsForHighMemoryJobs(TestCapacityScheduler.java:1373)
junit.framework.AssertionFailedError: null
    at org.apache.hadoop.mapred.TestCapacityScheduler.testClusterBlockingForLackOfMemory(TestCapacityScheduler.java:1846)
junit.framework.AssertionFailedError: null
    at org.apache.hadoop.mapred.TestCapacityScheduler.testMemoryMatchingWithRetiredJobs(TestCapacityScheduler.java:1944)
junit.framework.ComparisonFailure: null expected:<Used capacity: [2 (33.3]% of Capacity)>
but was:<Used capacity: [4 (66.7]% of Capacity)>
    at org.apache.hadoop.mapred.TestCapacityScheduler.checkOccupiedSlots(TestCapacityScheduler.java:2814)
    at org.apache.hadoop.mapred.TestCapacityScheduler.testHighRamJobWithSpeculativeExecution(TestCapacityScheduler.java:2383)

Particularly from the last test, I am hoping that it's only the test case that needs fixing,
because it actually looks like the patch has increased the number of used slots.

I will continue to look at the changes under this assumption, and get to the test cases in
a bit.
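As a sanity check on the last ComparisonFailure, the two percentages are internally consistent if the queue has 6 slots - a hypothetical figure inferred only from 2 -> 33.3% and 4 -> 66.7%, not stated in the test output:

```java
import java.util.Locale;

public class UsedCapacityCheck {
    // Reproduces the "Used capacity" string from checkOccupiedSlots.
    // The queue capacity of 6 is an assumption inferred from the failure message.
    static String format(int usedSlots, int queueCapacity) {
        return String.format(Locale.ROOT, "Used capacity: %d (%.1f%% of Capacity)",
                usedSlots, 100.0f * usedSlots / queueCapacity);
    }

    public static void main(String[] args) {
        System.out.println(format(2, 6)); // expected by the test
        System.out.println(format(4, 6)); // what the patched scheduler produced
    }
}
```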

> Fix the 'cluster drain' problem in the Capacity Scheduler wrt High RAM Jobs
> ---------------------------------------------------------------------------
>                 Key: HADOOP-5964
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5964
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/capacity-sched
>    Affects Versions: 0.20.0
>            Reporter: Arun C Murthy
>            Assignee: Arun C Murthy
>             Fix For: 0.21.0
>         Attachments: HADOOP-5964_0_20090602.patch, HADOOP-5964_1_20090608.patch, HADOOP-5964_2_20090609.patch,
> HADOOP-5964_4_20090615.patch, HADOOP-5964_6_20090617.patch
> When a HighRAMJob turns up at the head of the queue, the current implementation of support
> for HighRAMJobs in the Capacity Scheduler has a problem: the scheduler stops assigning
> tasks to all TaskTrackers in the cluster until the HighRAMJob finds suitable TaskTrackers
> for all its tasks.
> This causes a severe utilization problem, since effectively no new tasks are allowed to
> run until the HighRAMJob (at the head of the queue) gets its slots.
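The drain behavior described above can be sketched in a few lines - this is a hypothetical simplification, not the actual CapacityScheduler code; the class and method names are invented for illustration:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of the 'cluster drain' behavior: strict head-of-queue
// scheduling means a high-RAM head job that fits nowhere blocks everyone.
public class DrainSketch {
    static class Job {
        final int slotsPerTask; // high-RAM jobs need multiple slots per task
        Job(int slotsPerTask) { this.slotsPerTask = slotsPerTask; }
    }

    // Returns the job to assign a task from, or null to assign nothing.
    // Because only the head job is considered, a tracker with too few free
    // slots for the high-RAM head is left idle even if smaller jobs wait
    // behind it - so the whole cluster drains.
    static Job assignTask(Queue<Job> queue, int freeSlotsOnTracker) {
        Job head = queue.peek();
        if (head == null || head.slotsPerTask > freeSlotsOnTracker) {
            return null;
        }
        return head;
    }
}
```

Under this model, every tracker with fewer free slots than the head job's per-task requirement gets no work at all, which is the utilization problem the patch addresses.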

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
