hadoop-hdfs-issues mailing list archives

From Hairong Kuang <kuang.hair...@gmail.com>
Subject Re: [jira] Commented: (HDFS-198) org.apache.hadoop.dfs.LeaseExpiredException during dfs write
Date Mon, 27 Sep 2010 17:54:50 GMT
If a TT becomes a zombie, of course all the files it created but never closed
will have their leases expire later on. Isn't this the expected behavior?
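For reference, a minimal sketch of where this surfaces on the client side. This
is a generic HDFS client, not the TT code itself; the path and the stall are
hypothetical, and the exception arrives wrapped in a RemoteException:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LeaseExpiryDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path out = new Path("/tmp/lease-demo.txt");  // hypothetical path

    // Creating the file grants this client a write lease on it.
    FSDataOutputStream stream = fs.create(out);
    stream.writeBytes("first record\n");

    // The client keeps the lease only while it renews it. If the process
    // stalls past the hard lease limit (e.g. a zombie TT that stops making
    // progress), the NameNode expires the lease, and the next write or
    // close from this client fails.
    try {
      stream.writeBytes("written after a long stall\n");
      stream.close();
    } catch (IOException e) {
      // On the wire this is a RemoteException wrapping LeaseExpiredException.
      System.err.println("Write failed, lease likely expired: " + e);
    }
  }
}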


On 9/27/10 10:49 AM, "Eli Collins (JIRA)" <jira@apache.org> wrote:

> 
>     [ https://issues.apache.org/jira/browse/HDFS-198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12915390#action_12915390 ]
> 
> Eli Collins commented on HDFS-198:
> ----------------------------------
> 
> A zombie TT can't result in lease expiration?
> 
>> org.apache.hadoop.dfs.LeaseExpiredException during dfs write
>> ------------------------------------------------------------
>> 
>>                 Key: HDFS-198
>>                 URL: https://issues.apache.org/jira/browse/HDFS-198
>>             Project: Hadoop HDFS
>>          Issue Type: Bug
>>          Components: hdfs client, name-node
>>            Reporter: Runping Qi
>> 
>> Many long running cpu intensive map tasks failed due to
>> org.apache.hadoop.dfs.LeaseExpiredException.
>> See [a comment below|https://issues.apache.org/jira/browse/HDFS-198?focusedCommentId=12910298&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12910298] for the exceptions from the log:


