hadoop-common-dev mailing list archives

From "Matei Zaharia (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4803) large pending jobs hog resources
Date Fri, 06 Feb 2009 21:39:02 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12671337#action_12671337 ]

Matei Zaharia commented on HADOOP-4803:
---------------------------------------

In fact, upon further discussion with Joydeep and Dhruba, we may drop deficits altogether once
we add preemption, and instead use a similar concept for guaranteed shares to make sure pools
get their min share in order of how long they've been waiting for it. This will simplify the
code and make the scheduler's behavior easier to understand.
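To make the idea above concrete, here is a minimal sketch of ordering pools that are below their guaranteed (min) share by how long they have been waiting below it, so the longest-waiting pool is offered slots first. The `Pool` class, its fields, and the method names are illustrative assumptions, not the actual fair-scheduler code:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical pool state for the sketch; field names are invented.
class Pool {
    final String name;
    int runningTasks;
    int minShare;            // guaranteed share for this pool
    long belowMinShareSince; // time at which the pool dropped below its min share

    Pool(String name, int runningTasks, int minShare, long belowMinShareSince) {
        this.name = name;
        this.runningTasks = runningTasks;
        this.minShare = minShare;
        this.belowMinShareSince = belowMinShareSince;
    }

    boolean belowMinShare() {
        return runningTasks < minShare;
    }
}

class MinShareOrdering {
    // Pools below their min share come first, ordered by how long they
    // have been below it (earliest timestamp = longest wait = first).
    static List<Pool> scheduleOrder(List<Pool> pools) {
        List<Pool> ordered = new ArrayList<>(pools);
        ordered.sort(Comparator
            .comparing((Pool p) -> !p.belowMinShare())       // below-min pools first
            .thenComparingLong(p -> p.belowMinShareSince));  // then longest wait first
        return ordered;
    }
}
```

Compared with per-job deficits, this only needs one timestamp per pool, which is part of why it simplifies the code.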

> large pending jobs hog resources
> --------------------------------
>
>                 Key: HADOOP-4803
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4803
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/fair-share
>            Reporter: Joydeep Sen Sarma
>            Assignee: Matei Zaharia
>
> Observing the cluster over the last day, one thing I noticed is that small jobs (single-digit
> task counts) are not doing a good job competing against large jobs. What seems to happen is
> that:
> - a large job comes along and needs to wait for a while behind other large jobs.
> - slots are slowly transferred from one large job to another.
> - small tasks keep waiting forever.
> Is this an artifact of deficit-based scheduling? It seems that long-pending large jobs
> are out-scheduling small jobs.
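The behavior described in the report can be illustrated with a toy model of deficit accumulation: a job's deficit (fair share minus slots actually received, summed over time) keeps growing while it waits, so a large job that has been pending for a long time builds up a deficit a newly submitted small job cannot match, and a max-deficit scheduler keeps picking the large job. The `Job` class and numbers below are invented for the sketch, not taken from the Hadoop code:

```java
// Toy model of deficit-based scheduling; names and units are illustrative.
class Job {
    final String name;
    final double fairShare; // slots this job deserves per scheduling interval
    double deficit;         // accumulated (fairShare - slotsReceived) over time

    Job(String name, double fairShare) {
        this.name = name;
        this.fairShare = fairShare;
    }

    // Called each interval: the deficit grows by this interval's shortfall.
    void update(double slotsReceived, double intervalLength) {
        deficit += (fairShare - slotsReceived) * intervalLength;
    }
}
```

If a large job has waited 100 intervals with no slots, its deficit is 100 times its fair share, while a small job submitted just now starts near zero, so the small job loses every comparison until the large job's deficit is worked off.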

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

