hadoop-mapreduce-dev mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: InterruptedException
Date Wed, 26 Aug 2009 15:15:00 GMT
Chris K Wensel wrote:
>> I draw your attention to this bit of startup code in JobTracker
>>
>>    try {
>>      Thread.sleep(FS_ACCESS_RETRY_PERIOD);
>>    } catch (InterruptedException e) {
>>      throw new IOException("Interrupted during system directory cleanup ", e);
>>    }
>> A few lines later, an InterruptedException is thrown directly, so the 
>> code isn't being consistent.
>>
>> -should everything at startup/shutdown time throw 
>> InterruptedExceptions if interrupted? It would make sense, though you 
>> have to deal with issues like Jetty, in its startup sleeps, has code 
>> that wraps up its exceptions too:
>>
>>    } catch (Exception e) {
>>      throw new IOException("Problem starting http server", e);
>>    }
>>
>> -we'd need to catch the jetty exception and look for a nested 
>> interrupt, throw it. Ouch.
> 
> 
> 
> Thanks Steve for taking the time to explain this.
> 
> I guess my original email was in reference to the 
> TaskInputOutputContext, RecordWriter, and RecordReader classes.
> 
> for example,
> 
> The only implementation I see of RecordWriter#write() that declares and 
> actually throws an InterruptedException is the ChainRecordWriter because 
> the outputQueue is a BlockingQueue.

I haven't seen those. As Owen says, swallowing interruptions is naughty; 
I don't know what the Hadoop-wide policy is on handling them. 
Something to look at.
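Something like this would avoid swallowing the interrupt when you're forced to throw IOException. Just a sketch, not Hadoop code: the `sleepChecked` helper is hypothetical, but the pattern (restore the interrupt flag, rethrow as InterruptedIOException with the original as cause) is the standard one:

```java
import java.io.IOException;
import java.io.InterruptedIOException;

public class InterruptDemo {

  // Hypothetical helper: convert an InterruptedException into an
  // InterruptedIOException while restoring the thread's interrupt
  // status, so callers that only see IOException can still detect
  // that an interrupt happened.
  static void sleepChecked(long millis) throws IOException {
    try {
      Thread.sleep(millis);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();   // don't swallow the flag
      throw (IOException) new InterruptedIOException(
          "Interrupted during sleep").initCause(e);
    }
  }

  public static void main(String[] args) {
    Thread.currentThread().interrupt();     // simulate a pending interrupt
    try {
      sleepChecked(1000);
    } catch (IOException e) {
      System.out.println("caught: " + e.getMessage());
      // Thread.interrupted() reports (and clears) the restored flag
      System.out.println("flag restored: " + Thread.interrupted());
    }
  }
}
```

Callers that care can then test `Thread.interrupted()` or look for InterruptedIOException in the cause chain, which is also what you'd do to spot an interrupt buried under Jetty's wrapped startup exception.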

Ideally, anything that calls wait() or sleep() should check for 
interruptions and use time-limited waits. It's the only way to be robust.
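By time-limited waits I mean something like the sketch below (again, not Hadoop code; `waitFor` and its parameters are made up): sleep in bounded slices and re-check the condition each pass, so the loop both times out cleanly and surfaces an interrupt within one slice rather than blocking forever:

```java
import java.util.function.BooleanSupplier;

public class BoundedWait {

  // Hypothetical helper: wait for a condition in bounded slices.
  // Returns true if the condition came up before the timeout,
  // false on timeout; an interrupt propagates promptly because
  // each sleep is at most sliceMillis long.
  static boolean waitFor(BooleanSupplier done,
                         long timeoutMillis,
                         long sliceMillis) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (!done.getAsBoolean()) {
      long remaining = deadline - System.currentTimeMillis();
      if (remaining <= 0) {
        return false;                       // timed out
      }
      // Thread.sleep throws InterruptedException if interrupted,
      // so the caller sees the interrupt within one slice.
      Thread.sleep(Math.min(sliceMillis, remaining));
    }
    return true;
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // Condition becomes true after ~50ms; well inside the 1s timeout.
    boolean ok = waitFor(() -> System.currentTimeMillis() - start > 50,
                         1000, 10);
    System.out.println("condition met: " + ok);
  }
}
```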
