Date: Tue, 28 Oct 2014 00:04:33 +0000 (UTC)
From: "Mithun Radhakrishnan (JIRA)"
To: hive-dev@hadoop.apache.org
Subject: [jira] [Updated] (HIVE-8626) Extend HDFS super-user checks to dropPartitions

     [ https://issues.apache.org/jira/browse/HIVE-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mithun Radhakrishnan updated HIVE-8626:
---------------------------------------
    Description:

HIVE-6392 takes care of allowing HDFS super-user accounts to register partitions in tables whose HDFS paths don't explicitly grant write-permissions to the super-user.

However, the dropPartitions()/dropTable()/dropDatabase() use-cases don't handle this at all: an HDFS super-user ({{kal_el@DEV.GRID.MYTH.NET}}) can't drop the very partitions that were added to a table-directory owned by the user ({{mithunr}}). The operation fails with the following error:

{quote}
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Table metadata not deleted since hdfs://mythcluster-nn1.grid.myth.net:8020/user/mithunr/myth.db/myth_table is not writable by kal_el@DEV.GRID.MYTH.NET)
{quote}

This is the result of a redundant check in {{HiveMetaStore::dropPartitionsAndGetLocations()}}:

{code:title=HiveMetaStore.java|borderStyle=solid}
if (!wh.isWritable(partPath.getParent())) {
  throw new MetaException("Table metadata not deleted since the partition "
      + Warehouse.makePartName(partitionKeys, part.getValues())
      + " has parent location " + partPath.getParent()
      + " which is not writable "
      + "by " + hiveConf.getUser());
}
{code}

This check is already made in StorageBasedAuthorizationProvider (SBAP). If the argument is that the SBAP isn't guaranteed to be in play, then this shouldn't be checked in HMS either. If HDFS permissions need to be checked in addition to, say, ACLs, then perhaps a recursively-composed auth-provider ought to be used.

For the moment, I'll get {{Warehouse.isWritable()}} to handle HDFS super-users. But I think {{isWritable()}} checks oughtn't to be in HiveMetaStore. (Perhaps fix this in another JIRA?)
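
For reference, a minimal sketch of what the {{Warehouse.isWritable()}} change could look like. This is illustrative only, not the actual patch: the helper {{isHdfsSuperUser()}} is hypothetical, and it assumes the super-user can be recognized (roughly as HDFS itself defines it) as the owner of the filesystem root or a member of the root's group.

{code:title=Warehouse.java (illustrative sketch only)|borderStyle=solid}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical helper, not actual Hive code: treat the caller as an HDFS
// super-user if they own the filesystem root or belong to its group. This
// approximates HDFS's rule (the NameNode user, plus the configured supergroup).
private static boolean isHdfsSuperUser(FileSystem fs, UserGroupInformation ugi)
    throws IOException {
  FileStatus root = fs.getFileStatus(new Path(Path.SEPARATOR));
  if (ugi.getShortUserName().equals(root.getOwner())) {
    return true;
  }
  for (String group : ugi.getGroupNames()) {
    if (group.equals(root.getGroup())) {
      return true;
    }
  }
  return false;
}

public boolean isWritable(Path path) {
  try {
    FileSystem fs = path.getFileSystem(conf);  // 'conf' as held by Warehouse
    // Short-circuit: a super-user is writable everywhere, so skip the
    // owner/group/other mode-bit test that currently rejects them.
    if (isHdfsSuperUser(fs, UserGroupInformation.getCurrentUser())) {
      return true;
    }
    // Fall through to the existing owner/group/other write-bit logic;
    // the 'other' check below is a simplified stand-in for it.
    FsPermission perm = fs.getFileStatus(path).getPermission();
    return perm.getOtherAction().implies(FsAction.WRITE);
  } catch (IOException e) {
    return false;
  }
}
{code}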
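
And for the "recursively-composed auth-provider" idea above, a hedged sketch of the shape such a thing might take. The interface here is deliberately simplified for illustration; it is not Hive's actual HiveAuthorizationProvider API.

{code:title=ComposedAuthorizer.java (illustrative sketch only)|borderStyle=solid}
import java.util.Arrays;
import java.util.List;

// Simplified stand-in for an authorization-provider interface.
interface Authorizer {
  boolean isWritable(String path, String user);
}

// A composite authorizer that requires every delegate (e.g. an
// HDFS-permission check and an ACL check) to approve the action.
class ComposedAuthorizer implements Authorizer {
  private final List<Authorizer> delegates;

  ComposedAuthorizer(Authorizer... delegates) {
    this.delegates = Arrays.asList(delegates);
  }

  @Override
  public boolean isWritable(String path, String user) {
    // All delegates must agree; composing them here keeps individual
    // providers (and HiveMetaStore itself) free of redundant inline checks.
    for (Authorizer delegate : delegates) {
      if (!delegate.isWritable(path, user)) {
        return false;
      }
    }
    return true;
  }
}
{code}

The point being that HMS would consult one composed provider, rather than re-implementing the HDFS check inline in {{dropPartitionsAndGetLocations()}}.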

> Extend HDFS super-user checks to dropPartitions
> -----------------------------------------------
>
>                 Key: HIVE-8626
>                 URL: https://issues.apache.org/jira/browse/HIVE-8626
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 0.12.0, 0.13.1
>            Reporter: Mithun Radhakrishnan
>            Assignee: Mithun Radhakrishnan
>

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)