hadoop-hdfs-issues mailing list archives

From "Jitendra Nath Pandey (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions
Date Sat, 16 Aug 2014 18:04:18 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14099727#comment-14099727

Jitendra Nath Pandey commented on HDFS-6826:

  Thanks for quickly prototyping v2 and v3; that helps a lot in comparing the two approaches.

  It seems we can do v3 easily without even exposing FSDirectory. Consider this API:
public interface INodeAuthorizationInfoProvider {

   static class InodePermissionInfo {
        String path;
        String owner;
        String group;
        FsPermission perm;
        boolean isDirectory;
        List<AclEntry> acls;
   }

   // inodePermInfos contains info about all the inodes in the path,
   // from the root down to the inode being accessed
   void checkPermission(List<InodePermissionInfo> inodePermInfos,
       FsAction requestedAccess) throws AccessControlException;
}
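
To make the shape of the plugin concrete, here is a rough sketch of what an implementation
backed by an external policy system could look like. ExternalAuthorizationProvider and
ExternalPolicyStore (and its methods) are hypothetical placeholders, not an existing API, and
the sketch ignores the caller's identity, which the proposed checkPermission signature does
not carry.

    import java.util.List;

    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.security.AccessControlException;

    // Rough sketch only. ExternalPolicyStore is a made-up stand-in for whatever
    // system (e.g. a table/collection policy service) the plugin consults.
    public class ExternalAuthorizationProvider implements INodeAuthorizationInfoProvider {

      public interface ExternalPolicyStore {
        boolean managesPath(String path);
        boolean isAllowed(String path, FsAction access);
      }

      private final ExternalPolicyStore store;

      public ExternalAuthorizationProvider(ExternalPolicyStore store) {
        this.store = store;
      }

      @Override
      public void checkPermission(List<InodePermissionInfo> inodePermInfos,
          FsAction requestedAccess) throws AccessControlException {
        // The last element describes the inode the request targets; the
        // preceding elements are its ancestors.
        InodePermissionInfo target = inodePermInfos.get(inodePermInfos.size() - 1);
        if (store.managesPath(target.path)) {
          // The path maps to a data entity (table, column family, collection, ...),
          // so the external system decides.
          if (!store.isAllowed(target.path, requestedAccess)) {
            throw new AccessControlException("External policy denies "
                + requestedAccess + " on " + target.path);
          }
        }
        // Otherwise a real plugin would fall back to the standard HDFS
        // bits/ACLs check using the information in InodePermissionInfo.
      }
    }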

 I vote for v3 because there the use of the plugin for checks is confined to FsPermissionChecker,
which can centrally extract all the information needed for InodePermissionInfo from FSDirectory
and pass it to the plugin. Also, v3 exposes a single interface to implement, which seems simpler
and more coherent.

The following snippet of code should suffice in FsPermissionChecker#checkPermission to delegate
the permission check to the plugin. Note that we already have code to get all the inodes of a
path as an array from FSDirectory.
    List<InodePermissionInfo> inodePermInfos = new ArrayList<InodePermissionInfo>();

    INode[] inodeArray = ...   // obtain from FSDirectory
    int snapshotId = ...       // obtain from FSDirectory
    for (INode i : inodeArray) {
      inodePermInfos.add(new InodePermissionInfo(i.getFullPathName(),
          i.getUserName(snapshotId), i.getGroupName(snapshotId),
          i.getFsPermission(snapshotId), i.isDirectory(),
          i.getAclFeature().getEntries()));
    }

    plugin.checkPermission(inodePermInfos, access);

If the above makes sense, I can help provide a default implementation of checkPermission.
I will try to prototype it and run a few tests.
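
For what it's worth, here is a minimal sketch of what that default implementation could look
like: EXECUTE on every ancestor directory and the requested access on the final inode, using
only the owner/group/mode bits (ACL evaluation omitted for brevity). DefaultAuthorizationProvider
is just an illustrative name. Since the proposed signature does not carry the caller's identity,
the sketch pulls it from UserGroupInformation as a placeholder; in practice the NameNode would
have to hand the caller's user and groups to the plugin.

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.fs.permission.FsPermission;
    import org.apache.hadoop.security.AccessControlException;
    import org.apache.hadoop.security.UserGroupInformation;

    // Minimal sketch of a default provider: standard owner/group/other check,
    // ACL entries ignored for brevity.
    public class DefaultAuthorizationProvider implements INodeAuthorizationInfoProvider {

      @Override
      public void checkPermission(List<InodePermissionInfo> inodePermInfos,
          FsAction requestedAccess) throws AccessControlException {
        String user;
        List<String> groups;
        try {
          // Placeholder: the real caller identity would have to come from the
          // NameNode (FsPermissionChecker already holds the caller's UGI).
          UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
          user = ugi.getShortUserName();
          groups = Arrays.asList(ugi.getGroupNames());
        } catch (IOException e) {
          throw new AccessControlException("Cannot determine caller: " + e);
        }

        int last = inodePermInfos.size() - 1;
        for (int i = 0; i < last; i++) {
          // Traversal: EXECUTE is required on every ancestor directory.
          check(inodePermInfos.get(i), user, groups, FsAction.EXECUTE);
        }
        // The requested access applies to the final inode in the path.
        check(inodePermInfos.get(last), user, groups, requestedAccess);
      }

      private void check(InodePermissionInfo info, String user, List<String> groups,
          FsAction access) throws AccessControlException {
        FsPermission perm = info.perm;
        FsAction granted;
        if (user.equals(info.owner)) {
          granted = perm.getUserAction();
        } else if (groups.contains(info.group)) {
          granted = perm.getGroupAction();
        } else {
          granted = perm.getOtherAction();
        }
        if (!granted.implies(access)) {
          throw new AccessControlException("Permission denied: user=" + user
              + ", access=" + access + ", path=" + info.path);
        }
      }
    }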

> Plugin interface to enable delegation of HDFS authorization assertions
> ----------------------------------------------------------------------
>                 Key: HDFS-6826
>                 URL: https://issues.apache.org/jira/browse/HDFS-6826
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: security
>    Affects Versions: 2.4.1
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>         Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, HDFS-6826v3.patch,
> HDFSPluggableAuthorizationProposal-v2.pdf, HDFSPluggableAuthorizationProposal.pdf
> When HBase data, HiveMetaStore data, or Search data is accessed via services (HBase region
> servers, HiveServer2, Impala, Solr), the services can enforce permissions on the corresponding
> entities (databases, tables, views, columns, search collections, documents). It is desirable,
> when the data is accessed directly by users reading the underlying data files (i.e. from
> a MapReduce job), that the permissions of the data files map to the permissions of the
> corresponding data entity (i.e. table, column family or search collection).
> To enable this we need to have the necessary hooks in place in the NameNode to delegate
> authorization to an external system that can map HDFS files/directories to data entities and
> resolve their permissions based on the data entities' permissions.
> I’ll be posting a design proposal in the next few days.
