hadoop-common-dev mailing list archives

From "Hemanth Yamijala (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5186) Improve limit handling in fairshare scheduler
Date Mon, 09 Feb 2009 03:40:59 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12671717#action_12671717 ]

Hemanth Yamijala commented on HADOOP-5186:

bq. If all tasks from all initialized jobs in the pool are running, then initialize further

Matei, waiting until all tasks from all initialized jobs are running before we initialize
further jobs may still leave the cluster underutilized, since initialization is only triggered
at that point. Job initialization is expected to take some time, as it involves DFS access
for localizing the job as well as running the 'Setup task' (which the JobTracker handles
transparently). Hence, it may be better to pre-initialize a few additional jobs (the number
could be very small) and keep that number constant. This way we are still bounded, but also
have a backlog of initialized jobs to schedule tasks from immediately, should all other jobs
become running in the meantime. Does this make sense?
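
To make the idea concrete, here is a minimal sketch of the bounded pre-initialization logic
(the class, method, and constant names are hypothetical, not the actual scheduler code):

{code:java}
// Minimal, self-contained sketch of keeping a constant backlog of
// initialized jobs per pool; all names here are illustrative.
public class PreInitSketch {

  // Constant number of extra jobs to keep initialized beyond those running.
  static final int PRE_INIT_BACKLOG = 2;

  /**
   * How many additional jobs to start initializing for a pool, aiming to
   * keep (running jobs + a small constant backlog) initialized so the
   * scheduler has tasks to hand out while newer jobs localize.
   */
  static int jobsToInitialize(int runningJobs, int initializedJobs,
                              int totalJobsInPool) {
    int target = Math.min(runningJobs + PRE_INIT_BACKLOG, totalJobsInPool);
    return Math.max(0, target - initializedJobs);
  }

  public static void main(String[] args) {
    // 4 jobs running, 4 initialized, 10 in the pool: kick off 2 more so a
    // backlog is ready by the time the running jobs' tasks fill the cluster.
    System.out.println(jobsToInitialize(4, 4, 10)); // prints 2
  }
}
{code}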

> Improve limit handling in fairshare scheduler
> ---------------------------------------------
>                 Key: HADOOP-5186
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5186
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: contrib/fair-share
>            Reporter: Hemanth Yamijala
>            Priority: Minor
> The fairshare scheduler can limit the number of jobs running in a pool through the
> maxRunningJobs parameter in its allocations definition. This limit is treated as a hard
> limit and takes effect even when the cluster is free to run more jobs, resulting in
> underutilization. The same likely applies to the per-user maxRunningJobs parameter and to
> userMaxJobsDefault. It may help to treat these as soft limits and run additional jobs to
> keep the cluster fully utilized.
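
For reference, the limits described above are set in the fair scheduler's allocations file.
A minimal example (the pool name, user name, and values are illustrative):

{code:xml}
<?xml version="1.0"?>
<allocations>
  <!-- Hard cap on concurrently running jobs in this pool; the proposal
       here is to treat it as a soft limit when the cluster has spare
       capacity. -->
  <pool name="pool_a">
    <maxRunningJobs>3</maxRunningJobs>
  </pool>

  <!-- Per-user cap, plus the default for users with no explicit entry. -->
  <user name="alice">
    <maxRunningJobs>2</maxRunningJobs>
  </user>
  <userMaxJobsDefault>5</userMaxJobsDefault>
</allocations>
{code}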

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
