incubator-hcatalog-user mailing list archives

From Ranjit Mathew <>
Subject Re: Permissions and Ownership for Table Data
Date Mon, 01 Aug 2011 04:16:19 GMT
On 07/29/2011 05:48 AM, Ashutosh Chauhan wrote:
> Sorry for bit late on this. When you add new partitions permissions
> and groups do get inherited from table, if you are adding your data
> through HCatLoader (via pig) or through HCatOutputFormat (via your
> map-reduce program). If you are adding data through other mechanism
> and then doing "add partition" on CLI, in that case those won't get
> inherited.

Thanks for clarifying this, Ashutosh. We *are* loading our data via
HCatOutputFormat.
So there's no metadata that stores the permissions/group for tables.
All we need to do, then, is make sure that the table's warehouse
directory in HDFS has the right permissions and group ownership, and
that we continue to load data via HCatOutputFormat.
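Assuming the paths and names from the example below this ("/wombat/snafu",
group "foo", mode "750" - all illustrative), setting that up from a shell
might look like:

    # Sketch: set group ownership and permissions on the table's
    # warehouse directory, recursively, so data written afterwards
    # via HCatOutputFormat picks them up. Adjust paths/names to
    # match your setup.
    hadoop fs -chgrp -R foo /wombat/snafu
    hadoop fs -chmod -R 750 /wombat/snafu

Note that this only covers data loaded through HCatOutputFormat or
HCatLoader; per Ashutosh's earlier point, partitions added some other
way and registered with "add partition" won't inherit these settings.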


> AFAIK, you can't set groups/perms through Hive CLI.

Sort of - you can use the "dfs" command from within the Hive CLI to do
it (that's how we're doing it right now).
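For reference, a Hive CLI session doing this might look like the
following (same illustrative path and group as above):

    hive> dfs -chgrp -R foo /wombat/snafu;
    hive> dfs -chmod -R 750 /wombat/snafu;

The "dfs" command just passes its arguments through to the Hadoop
filesystem shell, so anything "hadoop fs" accepts should work here.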


> On Tue, Jul 26, 2011 at 04:33, Ranjit Mathew<>  wrote:
>> Hi,
>>   We are trying to make sure that for a table "snafu", data
>> is owned by group "foo" for a user "bar" and has permissions "750".
>> The HCatalog CLI program seems to provide "-p" and "-g" options
>> just for this purpose. However, this only seems to set the
>> permissions and group-ownership for the location in HDFS containing
>> the table-data (say "/wombat/snafu") at the time of table-creation.
>> What we are looking for is that as the data is populated into the
>> table, it continues to be owned by "bar" belonging to "foo" and
>> has the permissions "750". How can we ensure this?
>> FWIW, we're using HCatOutputFormat to directly write data into
>> this table. We're using HCatalog 0.1 with Hadoop 0.20.204.
>> If the table has already been created using Hive CLI (not HCatalog CLI),
>> is there a way to set its group/permissions attributes? (We don't
>> want to drop and recreate the table as it has a lot of data and continues
>> to be populated.)
>> Thanks in advance for your help,
>> Ranjit
