hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HADOOP-14630) Contract Tests to verify create, mkdirs and rename under a file is forbidden
Date Fri, 07 Jul 2017 19:01:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-14630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078531#comment-16078531 ]

Steve Loughran edited comment on HADOOP-14630 at 7/7/17 7:00 PM:
-----------------------------------------------------------------

Patch 001. Adds tests to the create and rename contract tests to verify that you can't create,
mkdir or rename under a file, or under a subdirectory of a file.

Tested: local, hdfs, azure, s3a, s3a+s3guard, rawlocal, swift. Not tested: adl, oss, s3n.

One really interesting surprise is: what does {{rename("file1", "file2/dest")}} do? That is,
does it return false, or does it throw an exception?

Generally, even for a missing source file, rename() returns false for "it didn't work", which
isn't actually that useful. In these tests, though, the local FS and HDFS both throw exceptions.

h3. s3a, azure

Returns false; no rename is performed.

h3. local FS, raw local FS

Throws {{FileAlreadyExistsException}} in both of the new tests.

{code}
2017-07-07 17:31:24,798 INFO  contract.AbstractFSContractTestBase (AbstractFSContractTestBase.java:describe(264))
- rename directly under file

org.apache.hadoop.fs.FileAlreadyExistsException: Destination exists and is not a directory:
/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/target/test/data/yOqWFaro0u/testRenameFileUnderFile/file

	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:559)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:534)
	at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:303)
	at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:292)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1054)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:943)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:393)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:366)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:316)
	at org.apache.hadoop.fs.RawLocalFileSystem.rename(RawLocalFileSystem.java:372)
	at org.apache.hadoop.fs.ChecksumFileSystem.rename(ChecksumFileSystem.java:617)
	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.rename(AbstractFSContractTestBase.java:372)
	at org.apache.hadoop.fs.contract.AbstractContractRenameTest.expectRenameUnderFileFails(AbstractContractRenameTest.java:263)
	at org.apache.hadoop.fs.contract.AbstractContractRenameTest.testRenameFileUnderFile(AbstractContractRenameTest.java:237)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

{code}

h3. HDFS

Rejects with a {{RemoteException}} wrapping a {{ParentNotDirectoryException}} (i.e.
{{ParentNotDirectoryException}} isn't on the list of exceptions the client is prepared to unwrap).

{code}
cmd=delete	src=/test	dst=null	perm=null	proto=rpc

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.ParentNotDirectoryException): /test/testRenameFileUnderFileSubdir/file
(is not a directory)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:596)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:587)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:562)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1730)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1748)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:606)
	at org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:62)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2822)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:988)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:628)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)


	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1484)
	at org.apache.hadoop.ipc.Client.call(Client.java:1430)
	at org.apache.hadoop.ipc.Client.call(Client.java:1340)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:115)
	at com.sun.proxy.$Proxy28.rename(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rename(ClientNamenodeProtocolTranslatorPB.java:554)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:411)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:348)
	at com.sun.proxy.$Proxy32.rename(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.rename(DFSClient.java:1520)
	at org.apache.hadoop.hdfs.DistributedFileSystem.rename(DistributedFileSystem.java:787)
	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.rename(AbstractFSContractTestBase.java:372)
	at org.apache.hadoop.fs.contract.AbstractContractRenameTest.expectRenameUnderFileFails(AbstractContractRenameTest.java:263)
	at org.apache.hadoop.fs.contract.AbstractContractRenameTest.testRenameFileUnderFileSubdir(AbstractContractRenameTest.java:250)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}  
  
This patch modifies DFSClient to unwrap the relevant exception, so you now get:

{code}
org.apache.hadoop.fs.ParentNotDirectoryException: /test/testRenameFileUnderFileSubdir/file
(is not a directory)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:596)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:587)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:562)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1730)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1748)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:606)
	at org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:62)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2822)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:988)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:628)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)


	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
	at org.apache.hadoop.hdfs.DFSClient.rename(DFSClient.java:1522)
	at org.apache.hadoop.hdfs.DistributedFileSystem.rename(DistributedFileSystem.java:787)
	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.rename(AbstractFSContractTestBase.java:372)
	at org.apache.hadoop.fs.contract.AbstractContractRenameTest.expectRenameUnderFileFails(AbstractContractRenameTest.java:265)
	at org.apache.hadoop.fs.contract.AbstractContractRenameTest.testRenameFileUnderFileSubdir(AbstractContractRenameTest.java:250)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.ParentNotDirectoryException):
/test/testRenameFileUnderFileSubdir/file (is not a directory)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:596)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSimpleTraverse(FSPermissionChecker.java:587)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:562)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1730)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1748)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:606)
	at org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:62)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2822)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:988)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:628)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)

	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1484)
	at org.apache.hadoop.ipc.Client.call(Client.java:1430)
	at org.apache.hadoop.ipc.Client.call(Client.java:1340)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:115)
	at com.sun.proxy.$Proxy28.rename(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rename(ClientNamenodeProtocolTranslatorPB.java:554)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:411)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:348)
	at com.sun.proxy.$Proxy32.rename(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.rename(DFSClient.java:1520)
	... 16 more
{code}
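
For reference, a hedged sketch of the kind of change involved: the fix amounts to adding {{ParentNotDirectoryException}} to the classes passed to {{RemoteException.unwrapRemoteException()}} on the rename path. The helper below is illustrative only, not the actual edit inside {{DFSClient}}:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.ParentNotDirectoryException;
import org.apache.hadoop.ipc.RemoteException;

final class RenameExceptionUnwrapSketch {
  private RenameExceptionUnwrapSketch() {}

  /**
   * unwrapRemoteException() returns the wrapped exception when its class is in
   * the supplied list, and the original RemoteException otherwise.
   */
  static IOException toClientException(RemoteException re) {
    return re.unwrapRemoteException(ParentNotDirectoryException.class);
  }
}
{code}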
  
  
h3. Swift

Performs the rename when the destination is directly under a file, and returns false for a rename under a subdir.

{code}
java.lang.AssertionError: after rename directly under file: rename (swift://swift1/test/testRenameFileUnderFile/testRenameSrc,
swift://swift1/test/testRenameFileUnderFile/file/testRenameTarget)= true: unexpectedly found
swift://swift1/test/testRenameFileUnderFile/file/testRenameTarget as  SwiftFileStatus{ path=swift://swift1/test/testRenameFileUnderFile/file/testRenameTarget;
isDirectory=false; length=256; blocksize=33554432; modification_time=1499444735000}

	at org.junit.Assert.fail(Assert.java:88)
	at org.apache.hadoop.fs.contract.ContractTestUtils.assertPathDoesNotExist(ContractTestUtils.java:781)
	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathDoesNotExist(AbstractFSContractTestBase.java:314)
	at org.apache.hadoop.fs.contract.AbstractContractRenameTest.expectRenameUnderFileFails(AbstractContractRenameTest.java:266)
	at org.apache.hadoop.fs.contract.AbstractContractRenameTest.testRenameFileUnderFile(AbstractContractRenameTest.java:237)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

{code}

This patch fixes Swift so it doesn't rename a file under another file; it just returns false.

Test-wise, I think we need to look at what real POSIX does and allow that as an outcome, while
leaving HDFS and local alone. And we really need a public rename() call which consistently
throws exceptions rather than just returning false.
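
As a hedged illustration of why that would help, this is the kind of wrapper callers have to write today around the boolean-returning {{FileSystem.rename()}}; the helper name is hypothetical:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathIOException;

public final class RenameOrThrow {
  private RenameOrThrow() {}

  /** Rename src to dst, raising an exception instead of returning false. */
  public static void renameOrThrow(FileSystem fs, Path src, Path dst)
      throws IOException {
    // Some filesystems already throw a meaningful exception here; others just
    // return false, leaving the caller to invent a diagnostic after the fact.
    if (!fs.rename(src, dst)) {
      throw new PathIOException(src.toString(), "rename to " + dst + " failed");
    }
  }
}
{code}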






> Contract Tests to verify create, mkdirs and rename under a file is forbidden
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-14630
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14630
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs, fs/azure, fs/s3
>    Affects Versions: 2.9.0
>            Reporter: Steve Loughran
>         Attachments: HADOOP-14630-001.patch
>
>
> Object stores can get into trouble in ways in which an FS never would, ways so obvious we've never written tests for them. We know what the problems are: test for file and dir creation directly/indirectly under other files
> * mkdir(file/file)
> * mkdir(file/subdir)
> * dir under file/subdir/subdir
> * dir/dir2/file, verify dir & dir2 exist
> * dir/dir2/dir3, verify dir & dir2 exist 
> * rename(src, file/dest)
> * rename(src, file/dir/dest)



