hadoop-common-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: hadoop file permission 1.0.3 (security)
Date Wed, 06 Jun 2012 11:24:16 GMT

On Wed, Jun 6, 2012 at 3:11 AM, Tony Dean <Tony.Dean@sas.com> wrote:
> dfs.umaskmode = umask (I believe this should be used in lieu of dfs.umask) – it appears
> to set the permissions for files created in hadoop fs (minus execute permission).
> Why was dfs.umask deprecated? What's the difference between the two?

Yes, dfs.umaskmode must be used. dfs.umask was deprecated via
HADOOP-6234 so that the umask can be given in octal or symbolic form
rather than only decimal (which is unorthodox for a umask).
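As a sketch (assuming Hadoop 1.0.3, where this property is read from hdfs-site.xml; property names have changed in later releases), the octal form looks like:

```xml
<!-- hdfs-site.xml: umask in octal; 022 yields 644 files and 755 directories -->
<property>
  <name>dfs.umaskmode</name>
  <value>022</value>
</property>
<!-- after HADOOP-6234 an equivalent symbolic value such as
     u=rwx,g=r-x,o=r-x should also be accepted -->
```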

> dfs.datanode.data.dir.perm = perm (not sure this is working at all?) I thought it was
> supposed to set permission on blks at the os level.

This merely sets the permissions on the dfs.data.dir directories
themselves. On Linux, that ensures users other than the DataNode's
owner cannot read blocks directly off the local filesystem.

> Are there any other file permission configuration properties?
> What I would really like to do is set data blk file permissions at the os level so that
> the blocks can be locked down from all users except super and supergroup, but allow them
> to be accessed via the Hadoop API as specified by HDFS permissions. Is this possible?

Yes, just set dfs.datanode.data.dir.perm to "700".
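A minimal sketch for hdfs-site.xml (assuming the DataNode runs as its own dedicated OS user; the value is interpreted as octal):

```xml
<!-- hdfs-site.xml: only the DataNode's own user can traverse the block directories -->
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
```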

In Hadoop 2.0 and some other distributions out there, the patch at
https://issues.apache.org/jira/browse/HDFS-1560 already makes 700 the
default value, for security.

Harsh J
