hbase-dev mailing list archives

From lars hofhansl <lhofha...@yahoo.com>
Subject Re: startMiniDFSCluster and file permissions
Date Thu, 27 Oct 2011 22:20:19 GMT
I know how to change the default permissions (as mentioned in my email below)...


The point is that I like my umask to be 0002. All users here are in their own group,
so 0775 as the default permission makes sense.
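
To make the effect concrete, here is a minimal sketch (assuming a POSIX filesystem
and Java 7's NIO API): a directory created by the test JVM asks for mode 0777, the
kernel masks that with the process umask, and with umask 0002 the result is rwxrwxr-x.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.PosixFilePermissions;

    public class UmaskDemo {
      public static void main(String[] args) throws Exception {
        // mkdir requests 0777; the kernel applies the process umask:
        // umask 0002 -> rwxrwxr-x, umask 0022 -> rwxr-xr-x.
        Path dir = Paths.get("umask-demo-dir");
        Files.createDirectory(dir);
        System.out.println(PosixFilePermissions.toString(
            Files.getPosixFilePermissions(dir)));
        Files.delete(dir);
      }
    }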

Now I have to remember to set my umask to 0022 every time I want to run the tests,
and since 0002 is the default on most Linux distributions, I can't be the only one seeing this.
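
An alternative to flipping the umask for every run would be to relax the permission
the DataNode expects, by overriding the data-dir permission property in the test
configuration. A minimal sketch, assuming the 0.20.205 property name is
dfs.datanode.data.dir.perm (i.e. what DATA_DIR_PERMISSION_KEY in the code quoted
below resolves to):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class RelaxedDataDirPermTest {
      public static void main(String[] args) throws Exception {
        // Sketch only: tell the DataNode to expect 775 on its data dirs,
        // which matches what a 0002 umask produces. The property name
        // "dfs.datanode.data.dir.perm" is an assumption based on 0.20.205.
        Configuration conf = HBaseConfiguration.create();
        conf.set("dfs.datanode.data.dir.perm", "775");
        HBaseTestingUtility util = new HBaseTestingUtility(conf);
        util.startMiniDFSCluster(1);
        // ... run whatever needs the mini HDFS here ...
        util.shutdownMiniDFSCluster();
      }
    }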


On Jenkins there must be a different default umask, or something else must differ.


-- Lars


----- Original Message -----
From: Ted Yu <yuzhihong@gmail.com>
To: dev@hbase.apache.org; lars hofhansl <lhofhansl@yahoo.com>
Cc: 
Sent: Thursday, October 27, 2011 3:03 PM
Subject: Re: startMiniDFSCluster and file permissions

I think Apache Jenkins doesn't have this problem - otherwise we would have
seen it by now.

FYI:
http://www.avajava.com/tutorials/lessons/how-do-i-set-the-default-file-and-directory-permissions.html

On Thu, Oct 27, 2011 at 2:53 PM, lars hofhansl <lhofhansl@yahoo.com> wrote:

> I just noticed today that I could not run any test that starts a
> MiniDFSCluster.
>
> The exception I got was this:
> java.lang.NullPointerException
>         at
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:422)
>         at
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:280)
>         at
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:350)
>         at
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:519)
>         at
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:475)
>         at
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:462)
>
> In the logs I had:
> 2011-10-27 14:17:48,238 WARN  [main] datanode.DataNode(1540): Invalid
> directory in dfs.data.dir: Incorrect permission for
> /home/lars/dev/hbase-trunk/target/test-data/8f8d2437-1d9a-42fa-b7c3-c154d8e559f3/dfscluster_557b48bc-9c8e-4a47-b74e-4c0167710237/dfs/data/data1,
> expected: rwxr-xr-x, while actual: rwxrwxr-x
> 2011-10-27 14:17:48,260 WARN  [main] datanode.DataNode(1540): Invalid
> directory in dfs.data.dir: Incorrect permission for
> /home/lars/dev/hbase-trunk/target/test-data/8f8d2437-1d9a-42fa-b7c3-c154d8e559f3/dfscluster_557b48bc-9c8e-4a47-b74e-4c0167710237/dfs/data/data2,
> expected: rwxr-xr-x, while actual: rwxrwxr-x
> 2011-10-27 14:17:48,261 ERROR [main] datanode.DataNode(1546): All
> directories in dfs.data.dir are invalid.
>
>
> And indeed I see this in
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(...):
>
>     FsPermission dataDirPermission =
>       new FsPermission(conf.get(DATA_DIR_PERMISSION_KEY,
>                                 DEFAULT_DATA_DIR_PERMISSION));
>     for (String dir : dataDirs) {
>       try {
>         DiskChecker.checkDir(localFS, new Path(dir), dataDirPermission);
>         dirs.add(new File(dir));
>       } catch(IOException e) {
>         LOG.warn("Invalid directory in " + DATA_DIR_KEY +  ": " +
>                  e.getMessage());
>       }
>     }
>
>
> (where DEFAULT_DATA_DIR_PERMISSION is 755)
>
>
> The default umask on my machine is 0002, so that would seem to explain the
> discrepancy.
>
> Changing my umask to 0022 fixed the problem!
> I cannot be the only one seeing this. This is just a heads up for anyone who
> runs into it, as I wasted over an hour tracking it down.
>
> I assume this is due to the switch to Hadoop 0.20.205.
>
> As I am fairly ignorant about Maven... Is there a way to set the default
> umask automatically for the test processes?
>
> -- Lars
>
>

