tez-issues mailing list archives

From "Siddharth Seth (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (TEZ-201) Temporary file lease failures seen when running outer join example job
Date Mon, 26 Aug 2013 18:59:51 GMT

    [ https://issues.apache.org/jira/browse/TEZ-201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13750409#comment-13750409 ]

Siddharth Seth commented on TEZ-201:
------------------------------------

The trace is pretty much the same. TEZ-379 fixes this for the case where multiple vertices do not configure
an OutputCommitter. I'm assuming that was the case here.
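
For reference, a minimal standalone sketch of the failure mode as I understand it (this is my assumed sequence, not the actual Tez or TEZ-379 code path): when two vertices write to the same output directory and one of them has no OutputCommitter to coordinate with, an early cleanup of the shared _temporary tree can delete a part file that another task attempt still has open for write; that writer's next addBlock call then fails on the NameNode with the LeaseExpiredException quoted in the description. Paths, sizes, and class names below are illustrative only.

{code:java}
// Hypothetical sketch, not Tez code: reproduces the HDFS-level symptom of
// deleting a _temporary tree while another client still holds a write lease.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LeaseLossSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // One task attempt starts writing its part file under the
    // FileOutputCommitter-style _temporary layout seen in the trace.
    Path attemptDir = new Path("/tmp/JoinOut/_temporary/1/_temporary/attempt_A");
    FSDataOutputStream out = fs.create(new Path(attemptDir, "part-r-00012"));
    out.write(new byte[4 * 1024 * 1024]);
    out.hflush();

    // A second vertex writing to the same output path, with no committer to
    // coordinate with, cleans up the whole _temporary tree early.
    fs.delete(new Path("/tmp/JoinOut/_temporary"), true);

    // The first writer keeps going; once it needs a new block, the NameNode
    // rejects addBlock with "No lease on ... File does not exist."
    byte[] chunk = new byte[1 << 20];
    for (int i = 0; i < 256; i++) {
      out.write(chunk);
    }
    out.close();
  }
}
{code}

Run against a test cluster this should produce the same "No lease on ... File does not exist" message; whether the outer join job hit exactly this interleaving isn't confirmed here, but the path layout matches the trace.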
                
> Temporary file lease failures seen when running outer join example job
> ----------------------------------------------------------------------
>
>                 Key: TEZ-201
>                 URL: https://issues.apache.org/jira/browse/TEZ-201
>             Project: Apache Tez
>          Issue Type: Bug
>            Reporter: Hitesh Shah
>              Labels: TEZ-0.2.0
>
> 2013-06-10 06:55:47,069 FATAL [IPC Server handler 7 on 36864] org.apache.tez.dag.app.TaskAttemptListenerImpTezDag: Task: attempt_1370823798674_33_1_000001_000012_0 - exited : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/hrt_qa/Tez/JoinOut/_temporary/1/_temporary/attempt_1370823798674_0033_r_000012_0/part-r-00012: File does not exist. Holder DFSClient_attempt_1370823798674_0033_r_000012_0_-177600377_1 does not have any open files.
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2528)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2338)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2249)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:514)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:386)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48007)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1029)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1839)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1835)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1833)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1301)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1253)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
>         at $Proxy10.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>         at $Proxy10.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1219)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1072)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:508)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
