hadoop-mapreduce-issues mailing list archives

From "rahul k singh (JIRA)" <j...@apache.org>
Subject [jira] Updated: (MAPREDUCE-1105) CapacityScheduler: It should be possible to set queue hard-limit beyond its actual capacity
Date Thu, 22 Oct 2009 04:44:59 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

rahul k singh updated MAPREDUCE-1105:
-------------------------------------

    Release Note: 
Replaced the existing max task limit variables "mapred.capacity-scheduler.queue.<queue-name>.max.map.slots"
and "mapred.capacity-scheduler.queue.<queue-name>.max.reduce.slots" with "mapred.capacity-scheduler.queue.<queue-name>.maximum-capacity".

The max task limit variables were used to throttle a queue: they were hard limits that did not
allow the queue to grow any further. The maximum-capacity variable defines a limit beyond which
a queue cannot use the capacity of the cluster, providing a means to limit how much excess
capacity a queue can use.

The behavior of maximum-capacity differs from the max task limit variables in that maximum-capacity
is a percentage, so the absolute limit grows and shrinks with total cluster capacity. Also, the
same maximum-capacity percentage applies to both maps and reduces.
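
For illustration, here is a minimal sketch of an entry in conf/capacity-scheduler.xml using the
new property; the queue name "research" and the value 40 are hypothetical:

    <!-- Hypothetical example: cap the "research" queue at 40% of cluster capacity, -->
    <!-- applied to both map and reduce slots.                                      -->
    <property>
      <name>mapred.capacity-scheduler.queue.research.maximum-capacity</name>
      <value>40</value>
    </property>

With this setting, a cluster with 100 map slots would let the queue use at most 40 map slots; if
the cluster grows to 200 map slots, the cap grows to 80. The same 40% applies to reduce slots.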

> CapacityScheduler: It should be possible to set queue hard-limit beyond its actual capacity
> --------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-1105
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1105
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: contrib/capacity-sched
>    Affects Versions: 0.21.0
>            Reporter: Arun C Murthy
>            Assignee: rahul k singh
>            Priority: Blocker
>             Fix For: 0.21.0
>
>         Attachments: MAPRED-1105-21-1.patch, MAPRED-1105-21-2.patch, MAPRED-1105-21-3.patch,
> MAPRED-1105-21-3.patch, MAPREDUCE-1105-version20-2.patch, MAPREDUCE-1105-version20.patch.txt,
> MAPREDUCE-1105-yahoo-version20-3.patch, MAPREDUCE-1105-yahoo-version20-4.patch, MAPREDUCE-1105-yahoo-version20-5.patch
>
>
> Currently the CS caps a queue's capacity to its actual capacity if a hard-limit is specified
> to be greater than its actual capacity. We should allow the queue to go up to the hard-limit
> if specified.
> Also, I propose we change the hard-limit unit to be a percentage rather than #slots.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

