impala-reviews mailing list archives

From "Sailesh Mukil (Code Review)" <ger...@cloudera.org>
Subject [Impala-ASF-CR] IMPALA-5331: Use new libHDFS API to address "Unknown Error 255"
Date Wed, 17 May 2017 21:22:19 GMT
Sailesh Mukil has posted comments on this change.

Change subject: IMPALA-5331: Use new libHDFS API to address "Unknown Error 255"
......................................................................


Patch Set 4:

(1 comment)

> Sorry, I meant any way to test/validate the error message
> functionality itself, not necessarily the case it returns null.

For the Error 255 case, I spent quite a bit of time trying to come up with a test case that could be
included in our test suite. However, I couldn't find a way to reproduce it reliably enough for that.

I tested manually that it works. I forced a 255 error by doing the following:
I set "dfs.namenode.fs-limits.max-directory-items" in hdfs-site.xml to a low number (arbitrarily
1100; its default is 1048576). The override I used is shown after the error output below. Then I ran
more than 1100 inserts into a table; the 1101st insert gives the following error:

[localhost:21000] > insert into too_many_files values (1);
Query: insert into too_many_files values (1)
Query submitted at: 2017-05-17 14:14:44 (Coordinator: http://localhost:25000)
Query progress can be monitored at: http://localhost:25000/query_plan?query_id=ed445c0f3ee8a496:e4c1e6e600000000
WARNINGS: Error(s) moving partition files. First error (of 1) was: Hdfs op (RENAME hdfs://localhost:20500/test-warehouse/too_many_files/_impala_insert_staging/ed445c0f3ee8a496_e4c1e6e600000000/.ed445c0f3ee8a496-e4c1e6e600000000_762352442_dir/ed445c0f3ee8a496-e4c1e6e600000000_418030871_data.0.
TO hdfs://localhost:20500/test-warehouse/too_many_files/ed445c0f3ee8a496-e4c1e6e600000000_418030871_data.0.)
failed, error was: hdfs://localhost:20500/test-warehouse/too_many_files/_impala_insert_staging/ed445c0f3ee8a496_e4c1e6e600000000/.ed445c0f3ee8a496-e4c1e6e600000000_762352442_dir/ed445c0f3ee8a496-e4c1e6e600000000_418030871_data.0.
Error(255): Unknown error 255
Root cause: RemoteException: The directory item limit of /test-warehouse/too_many_files is
exceeded: limit=1100 items=1100
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyMaxDirItems(FSDirectory.java:2130)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyFsLimitsForRename(FSDirectory.java:2061)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedRenameTo(FSDirectory.java:606)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.renameTo(FSDirectory.java:518)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameToInternal(FSNamesystem.java:3966)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameToInt(FSNamesystem.java:3928)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3893)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:807)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.rename(AuthorizationProviderProxyClientProtocol.java:268)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:579)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)
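
For reference, this is the hdfs-site.xml override I used to lower the limit (the property name and
its 1048576 default come straight from HDFS; 1100 is just the arbitrary low value I picked):

<!-- hdfs-site.xml: lower the per-directory item limit so the rename into the
     table directory fails once ~1100 files have accumulated there. -->
<property>
  <name>dfs.namenode.fs-limits.max-directory-items</name>
  <value>1100</value> <!-- default is 1048576 -->
</property>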

http://gerrit.cloudera.org:8080/#/c/6894/4/be/src/util/hdfs-bulk-ops.cc
File be/src/util/hdfs-bulk-ops.cc:

PS4, Line 129: GetHdfsErrorMsg("", src_)
These ops always deal with libHDFS errors, so we should call GetHdfsErrorMsg() instead of GetStrErrMsg().
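
To make the suggestion concrete, here is a rough sketch (not the actual patch) of the pattern for a
rename op in hdfs-bulk-ops.cc. Only hdfsRename() from libHDFS and the two helpers named above are
taken from the sources; the surrounding names (HdfsOp::ExecuteRename, fs, dst_, AddError) are
placeholders for the real op machinery.

#include <hdfs.h>  // libHDFS C API: hdfsRename()

// Placeholder wrapper around the RENAME op; src_/dst_ are the op's paths and
// AddError() stands in for however the op set records a failure.
void HdfsOp::ExecuteRename(hdfsFS fs) {
  if (hdfsRename(fs, src_.c_str(), dst_.c_str()) != 0) {
    // errno-based message; for HDFS-side failures this is what surfaced as
    // "Error(255): Unknown error 255":
    //   AddError(GetStrErrMsg());
    // libHDFS-aware message; picks up the RemoteException root cause from the
    // NameNode (as in the log above):
    AddError(GetHdfsErrorMsg("", src_));
  }
}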


-- 
To view, visit http://gerrit.cloudera.org:8080/6894
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I181e316ed63b70b94d4f7a7557d398a931bb171d
Gerrit-PatchSet: 4
Gerrit-Project: Impala-ASF
Gerrit-Branch: master
Gerrit-Owner: Sailesh Mukil <sailesh@cloudera.com>
Gerrit-Reviewer: Henry Robinson <henry@cloudera.com>
Gerrit-Reviewer: Matthew Jacobs <mj@cloudera.com>
Gerrit-Reviewer: Sailesh Mukil <sailesh@cloudera.com>
Gerrit-HasComments: Yes
