hadoop-hdfs-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8642) Make TestFileTruncate more reliable
Date Thu, 09 Jul 2015 15:33:09 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620682#comment-14620682 ]

Hudson commented on HDFS-8642:
------------------------------

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #249 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/249/])
HDFS-8642. Make TestFileTruncate more reliable. (Contributed by Rakesh R) (arp: rev 4119ad3112dcfb7286ca68288489bbcb6235cf53)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Make TestFileTruncate more reliable
> -----------------------------------
>
>                 Key: HDFS-8642
>                 URL: https://issues.apache.org/jira/browse/HDFS-8642
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Rakesh R
>            Assignee: Rakesh R
>            Priority: Minor
>             Fix For: 2.8.0
>
>         Attachments: HDFS-8642-00.patch, HDFS-8642-01.patch, HDFS-8642-02.patch
>
>
> I've observed that the {{TestFileTruncate#setup()}} function should be improved to make the tests more independent. Presently, a failure in any of the snapshot-related tests will affect all subsequent unit test cases. One such error was observed in [Hadoop-Hdfs-trunk-2163|https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart]
> {code}
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart/
> org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3046)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:939)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2172)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2168)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2166)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> 	at com.sun.proxy.$Proxy22.delete(Unknown Source)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
> 	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
> 	at com.sun.proxy.$Proxy23.delete(Unknown Source)
> 	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1711)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
> 	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
> {code}
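>
> A minimal sketch of the kind of defensive cleanup {{setup()}} could perform before reusing the shared directory (assuming the existing {{fs}} and {{parent}} fields of {{TestFileTruncate}}; the committed patch may structure this differently):
> {code}
> import java.io.IOException;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.protocol.HdfsConstants;
> import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
> import org.junit.Before;
>
> @Before
> public void setup() throws IOException {
>   // Drop any snapshots left behind by an earlier failed test, so the
>   // recursive delete below cannot fail with "is snapshottable and
>   // already has snapshots".
>   SnapshottableDirectoryStatus[] snapshottableDirs =
>       fs.getSnapshottableDirListing();
>   if (snapshottableDirs != null) {
>     for (SnapshottableDirectoryStatus dir : snapshottableDirs) {
>       Path dirPath = dir.getFullPath();
>       // Snapshot names are exposed as children of the ".snapshot" dir.
>       for (FileStatus snapshot :
>           fs.listStatus(new Path(dirPath, HdfsConstants.DOT_SNAPSHOT_DIR))) {
>         fs.deleteSnapshot(dirPath, snapshot.getPath().getName());
>       }
>     }
>   }
>   fs.delete(parent, true);
>   fs.mkdirs(parent);
> }
> {code}
> With this cleanup in place, a snapshot left over from one failed test no longer cascades into a setup failure for every test that runs after it.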



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
