hadoop-hdfs-issues mailing list archives

From "Brandon Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6451) NFS should not return NFS3ERR_IO for AccessControlException
Date Tue, 27 May 2014 23:41:03 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14010503#comment-14010503

Brandon Li commented on HDFS-6451:

From [~zhongyi-altiscale]:
{quote}Hi Jing Zhao, it's definitely good to have a single exception handler instead of replicating
the same code everywhere. But since each server procedure (ACCESS, GETATTR, FSSTAT, etc.) might
have private data that needs to be written out, the child NFS3Response classes still need
to override writeHeaderAndResponse anyway.
For AccessControlException, do you mean we need to catch it together with AuthorizationException
in RpcProgramNfs3.java? Or do you mean we need to examine the whole codebase for every function
that could potentially throw AccessControlException, and make sure the error code is set
correctly in the catch clause?{quote}
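The single-handler idea discussed above could look roughly like the following. This is a minimal sketch, not the actual HDFS-6451 patch: the class name Nfs3ErrorMapper and method toNfs3Status are hypothetical, and java.security.AccessControlException stands in for Hadoop's org.apache.hadoop.security.AccessControlException so the snippet is self-contained.

```java
// Hypothetical sketch (not the HDFS-6451 patch): one mapper from exceptions
// thrown by HDFS calls to NFS3 status codes, so each RPC procedure
// (ACCESS, GETATTR, FSSTAT, ...) does not repeat the same catch logic.
// Note: Hadoop's AccessControlException extends IOException, so in real code
// it must be tested before the generic IOException case, as done here.
import java.io.IOException;
import java.security.AccessControlException; // stand-in for Hadoop's class

public class Nfs3ErrorMapper {
  // A few NFS3 status codes from RFC 1813.
  public static final int NFS3_OK = 0;
  public static final int NFS3ERR_PERM = 1; // not owner / permission denied
  public static final int NFS3ERR_IO = 5;   // hard I/O error

  /** Map a caught exception to the NFS3 status returned to the client. */
  public static int toNfs3Status(Exception e) {
    if (e instanceof AccessControlException) {
      return NFS3ERR_PERM; // permission problem, not an I/O failure
    }
    if (e instanceof IOException) {
      return NFS3ERR_IO;   // genuine I/O error
    }
    return NFS3ERR_IO;     // conservative default for unexpected exceptions
  }
}
```

Each procedure's catch clause would then call this one method instead of choosing a status code locally; the per-procedure response classes would still serialize their own payloads via writeHeaderAndResponse, as noted in the quote above.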

> NFS should not return NFS3ERR_IO for AccessControlException 
> ------------------------------------------------------------
>                 Key: HDFS-6451
>                 URL: https://issues.apache.org/jira/browse/HDFS-6451
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: nfs
>            Reporter: Brandon Li
> As [~jingzhao] pointed out in HDFS-6411, we need to catch the AccessControlException
from the HDFS calls, and return NFS3ERR_PERM instead of NFS3ERR_IO for it.
> Another possible improvement is to have a single class/method for the common exception
handling process, instead of repeating the same exception handling process in different NFS
methods.

This message was sent by Atlassian JIRA
