hive-dev mailing list archives

From "Mithun Radhakrishnan (JIRA)" <>
Subject [jira] [Updated] (HIVE-8626) Extend HDFS super-user checks to dropPartitions
Date Thu, 30 Oct 2014 22:33:34 GMT


Mithun Radhakrishnan updated HIVE-8626:
    Attachment: HIVE-8626.1.patch

Attaching a patch for trunk/.

While working on this JIRA, I noticed that {{HadoopShims.checkFileAccess()}} returns {{void}}
and indicates access-failures using exceptions. Please correct me if I'm wrong, but wouldn't
returning a boolean be less clumsy? As it stands, there is no way to distinguish between a
legitimate "false" result and an actual exception condition.
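To illustrate the point, here is a minimal sketch of a boolean-returning wrapper around an exception-based check. The class and method names ({{AccessCheckSketch}}, {{isAccessible}}, the stubbed {{checkFileAccess}} and its exception type) are hypothetical stand-ins, not Hive's or Hadoop's actual API:

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class AccessCheckSketch {

    // Stand-in for Hadoop's AccessControlException (which extends IOException).
    static class AccessDeniedException extends IOException {
        AccessDeniedException(String msg) { super(msg); }
    }

    // Stand-in for HadoopShims.checkFileAccess(): returns void and
    // signals an access-failure via an exception, as described above.
    static void checkFileAccess(String path, String user) throws IOException {
        if (path == null) {
            throw new FileNotFoundException("null path"); // a genuine error
        }
        if (!"kal_el".equals(user)) {
            throw new AccessDeniedException("Permission denied: " + user);
        }
    }

    // Boolean-returning wrapper: an access denial maps to "false", while
    // any other IOException still propagates as a real failure.
    static boolean isAccessible(String path, String user) throws IOException {
        try {
            checkFileAccess(path, user);
            return true;
        } catch (AccessDeniedException denied) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(isAccessible("/warehouse/t1", "kal_el"));   // true
        System.out.println(isAccessible("/warehouse/t1", "mithunr"));  // false
    }
}
```

With this shape, a caller can branch on a legitimate "false" while genuine I/O errors remain exceptional.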

> Extend HDFS super-user checks to dropPartitions
> -----------------------------------------------
>                 Key: HIVE-8626
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 0.12.0, 0.13.1
>            Reporter: Mithun Radhakrishnan
>            Assignee: Mithun Radhakrishnan
>         Attachments: HIVE-8626.1.patch
> HIVE-6392 takes care of allowing HDFS super-user accounts to register partitions in tables
whose HDFS paths don't explicitly grant write-permissions to the super-user.
> However, the dropPartitions()/dropTable()/dropDatabase() use-cases don't handle this
at all; i.e., an HDFS super-user ({{kal_el@DEV.GRID.MYTH.NET}}) can't drop the very partitions
that were added to a table-directory owned by another user ({{mithunr}}). The following error
results:
> {quote}
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Table
metadata not deleted since hdfs://
is not writable by kal_el@DEV.GRID.MYTH.NET)
> {quote}
> This is the result of redundant checks in {{HiveMetaStore::dropPartitionsAndGetLocations()}}:
> {code:borderStyle=solid}
> if (!wh.isWritable(partPath.getParent())) {
>   throw new MetaException("Table metadata not deleted since the partition "
>             + Warehouse.makePartName(partitionKeys, part.getValues()) 
>             +  " has parent location " + partPath.getParent() 
>             + " which is not writable " 
>             + "by " + hiveConf.getUser());
> }
> {code}
> This check is already made in StorageBasedAuthorizationProvider. If the argument is that
the SBAP isn't guaranteed to be in play, then this shouldn't be checked in HMS either. If
HDFS permissions need to be checked in addition to say, ACLs, then perhaps a recursively-composed
auth-provider ought to be used.
> For the moment, I'll get {{Warehouse.isWritable()}} to handle HDFS super-users. But I
think {{isWritable()}} checks oughtn't to be in HiveMetaStore. (Perhaps fix this in another
JIRA.)
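The "recursively-composed auth-provider" idea above can be sketched as a composite that grants access only when every delegate agrees, letting HDFS-permission and ACL checks be layered outside HiveMetaStore. The names here ({{AuthProvider}}, {{CompositeAuthProvider}}) are illustrative, not Hive's actual {{HiveAuthorizationProvider}} API:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical minimal authorization interface.
interface AuthProvider {
    boolean isWritable(String path, String user);
}

// Composite provider: grants write access only if every
// composed delegate (e.g. HDFS permissions, ACLs) grants it.
class CompositeAuthProvider implements AuthProvider {
    private final List<AuthProvider> delegates;

    CompositeAuthProvider(AuthProvider... delegates) {
        this.delegates = Arrays.asList(delegates);
    }

    @Override
    public boolean isWritable(String path, String user) {
        return delegates.stream().allMatch(d -> d.isWritable(path, user));
    }
}

public class CompositeAuthDemo {
    public static void main(String[] args) {
        AuthProvider hdfsPerms = (path, user) -> user.equals("mithunr");
        AuthProvider acls = (path, user) -> true; // permissive ACL stub
        AuthProvider combined = new CompositeAuthProvider(hdfsPerms, acls);
        System.out.println(combined.isWritable("/warehouse/t1", "mithunr")); // true
        System.out.println(combined.isWritable("/warehouse/t1", "kal_el"));  // false
    }
}
```

With such a composition, the redundant check in {{dropPartitionsAndGetLocations()}} would be unnecessary: each concern lives in exactly one provider.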

This message was sent by Atlassian JIRA
