hadoop-hdfs-issues mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HDFS-13100) Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs.
Date Fri, 02 Feb 2018 05:18:01 GMT

    [ https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349804#comment-16349804 ]

Yongjun Zhang edited comment on HDFS-13100 at 2/2/18 5:17 AM:
--------------------------------------------------------------

Hi [~kihwal],

Thanks again for your help!

I attached a patch that [~atm] and I worked out; we tested it in a real cluster. Would you
please help with a quick review?

Many thanks.



was (Author: yzhangal):
Hi [~kihwal],

Thanks again for your help!

I attached a patch (ATM and I worked out), we tested in real cluster. Would you please help
giving a quick review?

Many thanks.


> Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs.
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-13100
>                 URL: https://issues.apache.org/jira/browse/HDFS-13100
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs, webhdfs
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>            Priority: Major
>         Attachments: HDFS-13100.001.patch
>
>
> HDFS-12386 added a getserverdefaults call to webhdfs and expects clusters that don't support
it to throw UnsupportedOperationException. However, we are seeing:
> {code}
> hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true -m 30 -pb -update -skipcrccheck webhdfs://<NN1>:<webhdfsPort>/fileX hdfs://<NN2>:8020/scale1/fileY
> ...
> 18/01/05 10:57:33 ERROR tools.DistCp: Exception encountered 
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): Invalid value for webhdfs parameter "op": No enum constant org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS
> 	at org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:80)
> 	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:498)
> 	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:126)
> 	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:765)
> 	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:606)
> 	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:637)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> 	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:633)
> 	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getServerDefaults(WebHdfsFileSystem.java:1807)
> 	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProviderUri(WebHdfsFileSystem.java:1825)
> 	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProvider(WebHdfsFileSystem.java:1836)
> 	at org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:72)
> 	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.addDelegationTokens(WebHdfsFileSystem.java:1627)
> 	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:139)
> 	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
> 	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
> 	at org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:199)
> 	at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
> 	at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:89)
> 	at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
> 	at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:368)
> 	at org.apache.hadoop.tools.DistCp.prepareFileListing(DistCp.java:96)
> 	at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:205)
> 	at org.apache.hadoop.tools.DistCp.execute(DistCp.java:182)
> 	at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> 	at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> {code}
> We either need to make the server throw UnsupportedOperationException, or make the client
handle IllegalArgumentException. For backward compatibility and easier operation in the
field, the latter is preferred (a minimal client-side sketch follows this quoted description).
> But we'd better understand why IllegalArgumentException is thrown instead of UnsupportedOperationException.
> The correct way to do this is: check whether the operation is supported and throw
UnsupportedOperationException if not; then check whether the parameter is legal and throw
IllegalArgumentException if it is not. We can do that fix as a follow-up to this jira.
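
A minimal, illustrative sketch of the client-side handling preferred above (this is not the
attached HDFS-13100.001.patch): treat an IllegalArgumentException reported back by an old
server as "GETSERVERDEFAULTS not supported" and fall back, instead of failing the whole
distcp job. The WebHdfsLikeClient interface and fetchServerDefaults() method are hypothetical
stand-ins for the real webhdfs client call; RemoteException and FsServerDefaults are real
Hadoop classes.

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.ipc.RemoteException;

public class ServerDefaultsFallback {

  /** Hypothetical stand-in for the webhdfs client call (op=GETSERVERDEFAULTS). */
  interface WebHdfsLikeClient {
    FsServerDefaults fetchServerDefaults() throws IOException;
  }

  /**
   * Ask the remote server for its defaults, returning null when the server does not
   * implement GETSERVERDEFAULTS. An old server rejects the unknown "op" value with an
   * IllegalArgumentException (wrapped in a RemoteException); a server that knows the op
   * but has the feature disabled is expected to throw UnsupportedOperationException.
   * Either way the caller should treat it as "no server defaults available".
   */
  static FsServerDefaults getServerDefaultsOrNull(WebHdfsLikeClient client)
      throws IOException {
    try {
      return client.fetchServerDefaults();
    } catch (RemoteException re) {
      String remoteClass = re.getClassName();
      if (IllegalArgumentException.class.getName().equals(remoteClass)
          || UnsupportedOperationException.class.getName().equals(remoteClass)) {
        return null; // caller skips the key-provider/KMS token fetch
      }
      throw re; // anything else is a real error
    }
  }
}
{code}

With this shape, a getKeyProviderUri-style caller sees null and simply skips adding KMS
delegation tokens, which keeps distcp working against old clusters; the server-side ordering
fix (check operation support before parameter parsing) can still be done as the follow-up
mentioned above.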




