hadoop-hdfs-issues mailing list archives

From "Dhanasekaran Anbalagan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-198) org.apache.hadoop.dfs.LeaseExpiredException during dfs write
Date Thu, 09 Jan 2014 20:29:51 GMT

    [ https://issues.apache.org/jira/browse/HDFS-198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13867013#comment-13867013 ]

Dhanasekaran Anbalagan commented on HDFS-198:
---------------------------------------------

Hi All,

I am getting the same error with a Hive external table. I am using hive-common-0.10.0-cdh4.4.0.

In my case we are using Sqoop to import data into the table, and the table stores its data in RCFile format.
I am only facing this issue with the external table.
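
For reference, the partition keys and warehouse path in the log below (trade_date/client/install under /dv_data_warehouse/dv_eod_performance_report) suggest a dynamically partitioned external table roughly like the sketch that follows; the data columns are my assumption, not the real table definition:

-- Hypothetical DDL for the table involved; the partition keys, storage format,
-- and location are taken from the paths in this report, the data column is a placeholder.
CREATE EXTERNAL TABLE dv_eod_performance_report (
  report_value STRING    -- placeholder column; the real schema is not shown here
)
PARTITIONED BY (
  trade_date STRING,
  client     STRING,
  install    STRING
)
STORED AS RCFILE
LOCATION '/dv_data_warehouse/dv_eod_performance_report';

The __HIVE_DEFAULT_PARTITION__ values in the failing paths are what Hive substitutes when a dynamic partition key is NULL, which matches a layout like the one above.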

14/01/08 12:21:40 INFO mapred.JobClient: Task Id : attempt_201312121801_0049_m_000000_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
No lease on /dv_data_warehouse/dv_eod_performance_report/_DYN0.337789259996055/trade_date=__HIVE_DEFAULT_PARTITION__/client=__HIVE_DEFAULT_PARTITION__/install=__HIVE_DEFAULT_PARTITION__/_temporary/_attempt_201312121801_0049_m_000000_0/part-m-00000:
File is not open for writing. Holder DFSClient_NONMAPREDUCE_-794488327_1 does not have any open files.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2452)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2262)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2175)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
	at org.apache.hadoop.hdfs.protocol.pro
attempt_201312121801_0049_m_000000_0: SLF4J: Class path contains multiple SLF4J bindings.
attempt_201312121801_0049_m_000000_0: SLF4J: Found binding in [jar:file:/usr/lib/hadoop-0.20-mapreduce/lib/slf4j-simple-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201312121801_0049_m_000000_0: SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201312121801_0049_m_000000_0: SLF4J: Found binding in [jar:file:/disk1/mapred/local/taskTracker/tech/distcache/-6782344428220505463_-433811577_1927241260/nameservice1/user/tech/.staging/job_201312121801_0049/libjars/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201312121801_0049_m_000000_0: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
14/01/08 12:21:55 INFO mapred.JobClient: Task Id : attempt_201312121801_0049_m_000000_1, Status : FAILED
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
No lease on /dv_data_warehouse/dv_eod_performance_report/_DYN0.337789259996055/trade_date=__HIVE_DEFAULT_PARTITION__/client=__HIVE_DEFAULT_PARTITION__/install=__HIVE_DEFAULT_PARTITION__/_temporary/_attempt_201312121801_0049_m_000000_1/part-m-00000:
File is not open for writing. Holder DFSClient_NONMAPREDUCE_-390991563_1 does not have any open files.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2452)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2262)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2175)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
	at org.apache.hadoop.hdfs.protocol.pro
attempt_201312121801_0049_m_000000_1: SLF4J: Class path contains multiple SLF4J bindings.
attempt_201312121801_0049_m_000000_1: SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201312121801_0049_m_000000_1: SLF4J: Found binding in [jar:file:/disk1/mapred/local/taskTracker/tech/distcache/7281954290425601736_-433811577_1927241260/nameservice1/user/tech/.staging/job_201312121801_0049/libjars/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201312121801_0049_m_000000_1: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
14/01/08 12:22:12 INFO mapred.JobClient: Task Id : attempt_201312121801_0049_m_000000_2, Status : FAILED
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
No lease on /dv_data_warehouse/dv_eod_performance_report/_DYN0.337789259996055/trade_date=__HIVE_DEFAULT_PARTITION__/client=__HIVE_DEFAULT_PARTITION__/install=__HIVE_DEFAULT_PARTITION__/_temporary/_attempt_201312121801_0049_m_000000_2/part-m-00000:
File is not open for writing. Holder DFSClient_NONMAPREDUCE_1338126902_1 does not have any open files.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2452)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2262)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2175)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
	at org.apache.hadoop.hdfs.protocol.pro
attempt_201312121801_0049_m_000000_2: SLF4J: Class path contains multiple SLF4J bindings.

> org.apache.hadoop.dfs.LeaseExpiredException during dfs write
> ------------------------------------------------------------
>
>                 Key: HDFS-198
>                 URL: https://issues.apache.org/jira/browse/HDFS-198
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client, namenode
>            Reporter: Runping Qi
>
> Many long running cpu intensive map tasks failed due to org.apache.hadoop.dfs.LeaseExpiredException.
> See [a comment below|https://issues.apache.org/jira/browse/HDFS-198?focusedCommentId=12910298&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12910298] for the exceptions from the log:



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
