hadoop-common-dev mailing list archives

From Nigel Daley <nda...@yahoo-inc.com>
Subject Re: [jira] Commented: (HADOOP-639) task cleanup messages can get lost, causing task trackers to keep tasks forever
Date Wed, 22 Nov 2006 17:00:23 GMT
Arun, the proposal looks good.  If the JT always gets a stale seqNo
from the TT (because of some unrecoverable problem in the TT), will
it send the saved response forever?  Or should there be a maximum
number of resends?

Also, when the JT is resending a JTResponse, can it add to or change
the list of actions, or do they need to be identical?

Is it possible for a TT to get the same JTResponse more than once?
If so, does the TT need to recognize this?
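For instance, would it be enough for the TT to just remember the last
seqNo it applied?  A sketch (lastSeqNo and applyActions are made-up
names, not from the proposal):

  // Sketch of duplicate detection on the TT; lastSeqNo and
  // applyActions are illustrative names only.
  if (response.seqNo <= lastSeqNo) {
    return;                          // actions already applied, drop it
  }
  lastSeqNo = response.seqNo;
  applyActions(response.actions);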


On Nov 22, 2006, at 8:25 AM, Arun C Murthy (JIRA) wrote:

>     [ http://issues.apache.org/jira/browse/HADOOP-639?page=comments#action_12451982 ]
>
> Arun C Murthy commented on HADOOP-639:
> --------------------------------------
>
> Ok, while we continue to track the *metrics* part of
> TaskTrackerStatus via HADOOP-657, I propose we move forward on the
> 'lost messages' part over here...
>
> Here are some thoughts (with due credits to Owen):
>
> Define new classes:
>
> abstract class TaskTrackerAction implements Writable {
>   protected final byte actionId;   // serialized action discriminator
>   protected TaskTrackerAction(byte actionId) {
>     this.actionId = actionId;
>   }
>   // ...
> }
>
> class KillJobAction extends TaskTrackerAction {
>   KillJobAction() { super((byte) 1); }
>   // ...
> }
> class KillTaskAction extends TaskTrackerAction {
>   KillTaskAction() { super((byte) 2); }
>   // ...
> }
> class StartTaskAction extends TaskTrackerAction {
>   StartTaskAction() { super((byte) 3); }
>   // ...
> }
>
> The distinction between KillTaskAction & KillJobAction is made to
> fix HADOOP-737 ...
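>
> As a rough illustration, the TT-side dispatch over these actions
> might look like this (purgeJob/killTask/startNewTask are
> hypothetical TT helpers, not part of this proposal):
>
>   for (TaskTrackerAction action : response.actions) {
>     if (action instanceof KillJobAction) {
>       // drop every task of the job plus its local files (HADOOP-737)
>       purgeJob((KillJobAction) action);
>     } else if (action instanceof KillTaskAction) {
>       killTask((KillTaskAction) action);      // kill one task attempt
>     } else if (action instanceof StartTaskAction) {
>       startNewTask((StartTaskAction) action); // launch a new task
>     }
>   }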
>
> Another class:
> class JTResponse {
>   long seqNo;                  // explained below
>   List<TaskTrackerAction> actions;
> }
>
> The new api replacing
>
>   int emitHeartbeat(TaskTrackerStatus status, boolean initialContact) throws IOException;
>   Task pollForNewTask(String trackerName) throws IOException;
>   String[] pollForTaskWithClosedJob(String trackerName) throws IOException;
>
> is:
>
>   * JTResponse updateStatus(TaskTrackerStatus status, long ackNo) throws IOException; *
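>
> On the TT side the main loop then collapses to something like this
> (a sketch; jobClient is the TT's RPC proxy to the JT, buildStatus
> and dispatch are illustrative helpers):
>
>   long ack = -1;                     // -1 == initial contact
>   while (running) {
>     TaskTrackerStatus status = buildStatus();
>     JTResponse response = jobClient.updateStatus(status, ack);
>     ack = response.seqNo;            // echoed back as the next 'ack'
>     dispatch(response.actions);      // apply Kill*/StartTask actions
>     // ... wait one heartbeat interval before the next update ...
>   }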
>
> Details about the seqNo/ackNo:
> ------------------------------------
>
> The idea is that there is a feedback (seq/ack) mechanism between
> the JT & TT which works as follows...
>
> The TT starts off by sending an ack of '-1' (indicating initial
> contact; this replaces the existing 'initialContact' boolean). At
> every step the JT increments the seq and sends a new JTResponse
> object carrying the incremented value as its 'seqNo' along with the
> 'actions'. The JT also stores the last seq and the JTResponse object
> sent to each of the task-trackers. OTOH the TT stores the last 'seq'
> it received from the JT, which is what it sends out in the
> subsequent heartbeat as 'ack'.
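>
> A sketch of the per-tracker state this implies on the JT (names
> illustrative):
>
>   class TrackerInfo {
>     long lastSeqNo = -1;       // seq of the last JTResponse sent
>     JTResponse lastResponse;   // saved in case it must be resent
>   }
>   Map<String, TrackerInfo> trackers =
>     new HashMap<String, TrackerInfo>();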
>
> How does this help? If a TT misses the heartbeat response from the
> JT, it sends a stale 'ack' which disagrees with the newer 'seq' on
> the JT; this prompts the JT to resend the 'saved' JTResponse object
> back to the TT... thus solving the 'lost messages' issue. If the JT
> never hears from a TT for a long time, the existing
> ExpireTrackers.run removes the TT from its queue and also discards
> the saved JTResponse object for that TT.
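>
> Putting it together, updateStatus on the JT reduces to a
> compare-and-resend (again just a sketch; getTrackerInfo and
> computeActions are illustrative helpers):
>
>   public synchronized JTResponse updateStatus(TaskTrackerStatus status,
>                                               long ackNo)
>       throws IOException {
>     TrackerInfo info = getTrackerInfo(status.getTrackerName());
>     if (ackNo != info.lastSeqNo) {
>       // stale ack: the TT missed the last response, resend it as-is
>       return info.lastResponse;
>     }
>     // ack matches (initial contact, ack == -1, also lands here since
>     // lastSeqNo starts at -1): build the next batch of actions
>     JTResponse response = new JTResponse();
>     response.seqNo = ++info.lastSeqNo;
>     response.actions = computeActions(status);
>     info.lastResponse = response;    // save for a possible resend
>     return response;
>   }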
>
> -*-*-
>
> Thoughts?
>
>
>> task cleanup messages can get lost, causing task trackers to keep  
>> tasks forever
>> -------------------------------------------------------------------------------
>>
>>                 Key: HADOOP-639
>>                 URL: http://issues.apache.org/jira/browse/HADOOP-639
>>             Project: Hadoop
>>          Issue Type: Bug
>>          Components: mapred
>>    Affects Versions: 0.7.2
>>            Reporter: Owen O'Malley
>>         Assigned To: Owen O'Malley
>>             Fix For: 0.9.0
>>
>>
>> If the pollForTaskWithClosedJob call from a job tracker to a task
>> tracker times out when a job completes, the tasks are never
>> cleaned up. This can cause the mini m/r cluster to hang on
>> shutdown, but it is also a resource leak.
>

