hadoop-common-dev mailing list archives

From "Arun C Murthy (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5884) Capacity scheduler should account high memory jobs as using more capacity of the queue
Date Wed, 03 Jun 2009 08:04:07 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12715857#action_12715857 ]

Arun C Murthy commented on HADOOP-5884:
---------------------------------------

I'm just proposing we add #slots (along with the already available #running_tasks) to both the
per-queue info and the per-job info (jobdetails.jsp), so that it's clear to users that the queue
isn't being under-served (since #running_tasks might be less than #slots_taken).
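As an illustration with made-up numbers: if the default slot memory is 2GB and a job's tasks each
request 6GB, a queue running 10 such tasks would show #running_tasks = 10 but #slots_taken = 30,
so displaying both values makes it obvious the queue really is full.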

> Capacity scheduler should account high memory jobs as using more capacity of the queue
> --------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5884
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5884
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/capacity-sched
>            Reporter: Hemanth Yamijala
>            Assignee: Vinod K V
>         Attachments: HADOOP-5884-20090529.1.txt, HADOOP-5884-20090602.1.txt
>
>
> Currently, when a high memory job is scheduled by the capacity scheduler, each task scheduled
> counts only once in the capacity of the queue, though it may actually be preventing other
> jobs from using spare slots on that node because of its higher memory requirements. In order
> to be fair, the capacity scheduler should proportionally (with respect to default memory)
> account high memory jobs as using a larger capacity of the queue.
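
Below is a minimal sketch of the proportional accounting described in the issue, assuming a
per-task memory request and a default per-slot memory; the class and method names are
illustrative only, not the actual contrib/capacity-sched API:

    // Illustrative sketch, not the real scheduler code: a task counts as
    // ceil(taskMemoryMB / defaultSlotMemoryMB) slots against the queue.
    public final class SlotAccountingSketch {

      // Number of slots one task of a job should be accounted as occupying.
      static int slotsPerTask(long taskMemoryMB, long defaultSlotMemoryMB) {
        if (taskMemoryMB <= defaultSlotMemoryMB) {
          return 1;  // a normal job takes one slot per task
        }
        // ceiling division for high memory tasks
        return (int) ((taskMemoryMB + defaultSlotMemoryMB - 1) / defaultSlotMemoryMB);
      }

      // Queue capacity used by a job: #slots_taken = #running_tasks * slots per task.
      static int slotsTaken(int runningTasks, long taskMemoryMB, long defaultSlotMemoryMB) {
        return runningTasks * slotsPerTask(taskMemoryMB, defaultSlotMemoryMB);
      }

      public static void main(String[] args) {
        // Example: 6144 MB tasks with 2048 MB default slots -> 3 slots per task,
        // so 10 running tasks are accounted as 30 slots of queue capacity.
        System.out.println(slotsTaken(10, 6144, 2048));  // prints 30
      }
    }

With this accounting, the same job that reports 10 running tasks is charged 30 slots against the
queue, which is exactly the gap between #running_tasks and #slots_taken discussed above.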

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

