hadoop-hdfs-dev mailing list archives

From "Harsh J (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HDFS-198) org.apache.hadoop.dfs.LeaseExpiredException during dfs write
Date Tue, 21 Jan 2014 02:29:23 GMT

     [ https://issues.apache.org/jira/browse/HDFS-198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Harsh J resolved HDFS-198.
--------------------------

    Resolution: Not A Problem

This one has gone very stale and we have not seen any credible recent reports of lease renewals
going amiss during long-waiting tasks. Marking as 'Not a Problem' (anymore). If there's
a genuine new report of this behaviour, please file a new JIRA with the newer data.

[~bugcy013] - Your problem is quite different from what the OP appears to have reported in an
older version. Yours arises from MR tasks not using an attempt-ID-based output directory
(which Hive appears to do at times), in which case two concurrently running attempts (from
speculative execution or otherwise) can cause one of them to hit this error when the other
overwrites the file. Best to investigate further on a mailing list rather than here.
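
The workaround Harsh describes amounts to keying any side output on the task attempt ID. Below is a minimal sketch of that layout, assuming the new org.apache.hadoop.mapreduce API; the class name and the /user/demo/side-output base path are made up for illustration:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper illustrating a per-attempt side-file path.
public class SideFileMapper extends Mapper<LongWritable, Text, Text, Text> {

  private FSDataOutputStream sideFile;

  @Override
  protected void setup(Context context) throws IOException, InterruptedException {
    Configuration conf = context.getConfiguration();
    FileSystem fs = FileSystem.get(conf);
    // Keying the file on the task attempt ID means a speculative attempt
    // writes its own file instead of overwriting the other attempt's file
    // (the overwrite is what invalidates the first writer's lease).
    // "/user/demo/side-output" is a made-up base directory for illustration.
    Path perAttempt = new Path("/user/demo/side-output",
        context.getTaskAttemptID().toString());
    sideFile = fs.create(perAttempt);
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    sideFile.write(value.copyBytes()); // whatever side data the task produces
    context.write(new Text("seen"), value);
  }

  @Override
  protected void cleanup(Context context) throws IOException, InterruptedException {
    sideFile.close();
  }
}
{code}

Regular task output does not need this because FileOutputCommitter already writes each attempt's output under its own per-attempt _temporary directory and only promotes it on commit.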

> org.apache.hadoop.dfs.LeaseExpiredException during dfs write
> ------------------------------------------------------------
>
>                 Key: HDFS-198
>                 URL: https://issues.apache.org/jira/browse/HDFS-198
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client, namenode
>            Reporter: Runping Qi
>
> Many long running cpu intensive map tasks failed due to org.apache.hadoop.dfs.LeaseExpiredException.
> See [a comment below|https://issues.apache.org/jira/browse/HDFS-198?focusedCommentId=12910298&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12910298] for the exceptions from the log:



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
