hadoop-mapreduce-issues mailing list archives

From "Dick King (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1229) [Mumak] Allow customization of job submission policy
Date Tue, 24 Nov 2009 18:52:39 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12782087#action_12782087 ]

Dick King commented on MAPREDUCE-1229:
--------------------------------------

1: Should {{TestSimulator*JobSubmission}} check whether the total "runtime" was reasonable
for the policy?

2: minor nit: Should {{SimulatorJobSubmissionPolicy/getPolicy(Configuration)}} use {{valueOf(policy.toUpperCase())}}
instead of looping through the types?
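
For concreteness, something like the following; the config key and default are placeholders
for whatever the patch actually uses:

{code:java}
// Sketch: map the configured string straight onto the enum instead of iterating.
public static SimulatorJobSubmissionPolicy getPolicy(Configuration conf) {
  String policy = conf.get("mumak.job-submission.policy", "replay");  // placeholder key/default
  return valueOf(policy.toUpperCase());  // throws IllegalArgumentException on an unknown policy
}
{code}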

3: medium-sized nit: in {{SimulatorJobClient.isOverloaded()}} there are two literals, 0.9
and 2.0F, that ought to be {{private static final}} named constants.
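
Something along these lines; the names are just a sketch, not a claim about what the
thresholds actually mean:

{code:java}
// Sketch: named constants for the magic numbers in isOverloaded();
// the names here are illustrative only.
private static final float OVERLOAD_MAPTASK_MAPSLOT_RATIO = 2.0f;
private static final double OVERLOAD_THRESHOLD = 0.9;
{code}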

4: Here is my biggest point.  The existing code cannot submit a job more often than once every
five seconds when the jobs are spaced further apart than that and the policy is {{STRESS}}.

Please consider calling the {{processLoadProbingEvent}} core code when we {{processJobCompleteEvent}}
or {{processJobSubmitEvent}} as well; that potentially includes adding a new {{LoadProbingEvent}}
(see the sketch below).  Because each {{LoadProbingEvent}} replaces itself, this can lead to
an accumulation, so we should track the ones that are in flight in a {{PriorityQueue}} and
only add a new {{LoadProbingEvent}} when the new event has a time stamp strictly earlier than
the earliest one already in flight.  With the current {{adjustLoadProbingInterval}}, this
limits us to two events in flight.
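
Roughly what I have in mind, assuming a {{LoadProbingEvent}} exposes its scheduled time stamp
and that we can hand new events back to the simulation engine; the field and helper names
below are made up for illustration:

{code:java}
// Sketch only (uses java.util.PriorityQueue / java.util.Comparator):
// inFlightProbes, getTimeStamp() and submitLoadProbingEvent() are
// illustrative names, not the actual Mumak API.
private final PriorityQueue<LoadProbingEvent> inFlightProbes =
    new PriorityQueue<LoadProbingEvent>(2, new Comparator<LoadProbingEvent>() {
      public int compare(LoadProbingEvent a, LoadProbingEvent b) {
        return Long.signum(a.getTimeStamp() - b.getTimeStamp());
      }
    });

/** Schedule a probe at the given time unless an earlier one is already in flight. */
private void maybeAddLoadProbingEvent(long timeStamp) {
  LoadProbingEvent earliest = inFlightProbes.peek();
  if (earliest == null || timeStamp < earliest.getTimeStamp()) {
    LoadProbingEvent probe = new LoadProbingEvent(timeStamp);
    inFlightProbes.add(probe);
    submitLoadProbingEvent(probe);   // hand the event back to the simulation engine
  }
}

/** Call from processLoadProbingEvent() when a probe fires. */
private void retireLoadProbingEvent(LoadProbingEvent probe) {
  inFlightProbes.remove(probe);
}
{code}

Then {{processJobCompleteEvent}} and {{processJobSubmitEvent}} would call
{{maybeAddLoadProbingEvent}} with their own next probing time in addition to the existing
bookkeeping.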

If you don't do that, then when a real dreadnought of a job gets dropped into the system and
the probing interval gets long, it could take us a while to notice that we're okay to submit
jobs again, particularly when that job has many tasks finishing at about the same time, and we
could end up submitting tiny jobs one at a time every five seconds even though the cluster is
clear enough to accommodate lots of them.  When the cluster can handle N jobs in less than 5N
seconds for some N, the existing code won't overload it.





> [Mumak] Allow customization of job submission policy
> ----------------------------------------------------
>
>                 Key: MAPREDUCE-1229
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1229
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: contrib/mumak
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Hong Tang
>            Assignee: Hong Tang
>             Fix For: 0.21.0, 0.22.0
>
>         Attachments: mapreduce-1229-20091121.patch, mapreduce-1229-20091123.patch
>
>
> Currently, mumak replays job submission faithfully. To make mumak useful for evaluation
> purposes, it would be great if we could support other job submission policies, such as
> sequential job submission or stress job submission.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

