hadoop-hdfs-issues mailing list archives

From "Yuanbo Liu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10276) HDFS throws AccessControlException when checking for the existence of /a/b when /a is a file
Date Wed, 18 May 2016 02:50:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288119#comment-15288119 ]

Yuanbo Liu commented on HDFS-10276:
-----------------------------------

Discussed with [~yzhangal], who committed the patch for HDFS-5802; here are his suggestions:
1. I prefer not to use a class member checkedAncestorIndex; instead, we can pass it as a parameter.

Change
{code}
  private void checkAncestorType(INode[] inodes, int ancestorIndex,
      AccessControlException e) throws AccessControlException {
    for (int i = 0; i <= ancestorIndex; i++) {
{code}
To
{code}
  private void checkAncestorType(INode[] inodes,
      int checkedAncestorIndex, AccessControlException e)
          throws AccessControlException {
    for (int i = 0; i <= checkedAncestorIndex; i++) {
{code}
2. Change
{code}
    try {
      checkTraverse(inodeAttrs, path, ancestorIndex);
    } catch (AccessControlException e) {
      checkAncestorType(inodes, ancestorIndex, e);
    }
{code}
To
{code}
    checkTraverse(inodeAttrs, inodes, path, ancestorIndex);
{code}
3. Change
{code}
  private void checkTraverse(INodeAttributes[] inodes, String path, int last
      ) throws AccessControlException {
    for(int j = 0; j <= last; j++) {
      check(inodes[j], path, FsAction.EXECUTE);
    }
  }
{code}
To
{code}
  private void checkTraverse(INodeAttributes[] inodeAttributes,
      INode[] inodes, String path, int last) throws AccessControlException {
    int j = 0;
    try {
      for(;j <= last; j++) {
        check(inodeAttributes[j], path, FsAction.EXECUTE);
      }
    } catch (AccessControlException e) {
      checkAncestorType(inodes, j, e);
    }
  }
{code}
4. Remove 
{code}
LOG.info("yuanbo print " + e.getMessage());
{code}
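
For reference, the body of {{checkAncestorType}} used in change 3 is not shown above. A rough sketch of what such a helper could look like, assuming its job is to report which checked ancestor is not a directory (the body below is my own illustration, not the committed patch):
{code}
  // Sketch only: walk the ancestors that were checked and, if one of them is
  // not a directory (e.g. a regular file), surface that in the error instead
  // of rethrowing a bare permission failure.
  private void checkAncestorType(INode[] inodes, int checkedAncestorIndex,
      AccessControlException e) throws AccessControlException {
    for (int i = 0; i <= checkedAncestorIndex; i++) {
      if (inodes[i] == null) {
        break;
      }
      if (!inodes[i].isDirectory()) {
        throw new AccessControlException(
            e.getMessage() + " (Ancestor " + inodes[i].getFullPathName()
            + " is not a directory).");
      }
    }
    throw e;
  }
{code}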
Thanks a lot for Yongjun's suggestions. I have uploaded a new patch for this issue.

> HDFS throws AccessControlException when checking for the existence of /a/b when /a is a file
> --------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10276
>                 URL: https://issues.apache.org/jira/browse/HDFS-10276
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kevin Cox
>            Assignee: Yuanbo Liu
>         Attachments: HDFS-10276.001.patch, HDFS-10276.002.patch, HDFS-10276.003.patch, HDFS-10276.004.patch, HDFS-10276.005.patch
>
>
> Given you have a file {{/file}}, an existence check for the path {{/file/whatever}} will give different responses for different implementations of FileSystem.
> LocalFileSystem will return false while DistributedFileSystem will throw {{org.apache.hadoop.security.AccessControlException: Permission denied: ..., access=EXECUTE, ...}}
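
A minimal client-side sketch of the reported inconsistency (the class name and the path {{/file}} are illustrative; it assumes {{/file}} already exists as a regular file on the target file system):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ExistsUnderFileRepro {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Against LocalFileSystem this prints "false"; against HDFS the exists()
    // call can instead throw AccessControlException (access=EXECUTE on /file),
    // which is the inconsistency described in this issue.
    System.out.println(fs.exists(new Path("/file/whatever")));
  }
}
{code}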



