hbase-issues mailing list archives

From "Vladimir Rodionov (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-16521) Restore operation would fail if the hbase.tmp.dir directory is absent or doesn't give proper permission
Date Mon, 29 Aug 2016 23:01:21 GMT

    [ https://issues.apache.org/jira/browse/HBASE-16521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15447333#comment-15447333 ]

Vladimir Rodionov commented on HBASE-16521:
-------------------------------------------

Please use the following code to initialize the tmp dir:
{code}
    String hbaseTmpFsDir =
        conf.get(HConstants.TEMPORARY_FS_DIRECTORY_KEY,
          HConstants.DEFAULT_TEMPORARY_HDFS_DIRECTORY);
{code}

This looks safer.
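As a sketch of why the keyed lookup with an explicit default is safer than a raw `conf.get("hbase.tmp.dir")`, the following self-contained example mimics the Hadoop `Configuration#get(key, default)` pattern. The `Conf` and `TmpDirLookup` names are hypothetical stand-ins for illustration; only the property name `hbase.fs.tmp.dir` and the per-user `hbase-staging` default are intended to mirror HBase's `HConstants`:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for Hadoop's Configuration (hypothetical, illustration only).
class Conf {
    private final Map<String, String> props = new HashMap<>();
    void set(String key, String value) { props.put(key, value); }
    // Returns the configured value, or the supplied default when the key is unset.
    String get(String key, String defaultValue) {
        return props.getOrDefault(key, defaultValue);
    }
}

public class TmpDirLookup {
    // Mirrors HConstants.TEMPORARY_FS_DIRECTORY_KEY / DEFAULT_TEMPORARY_HDFS_DIRECTORY.
    static final String TEMPORARY_FS_DIRECTORY_KEY = "hbase.fs.tmp.dir";
    static final String DEFAULT_TEMPORARY_HDFS_DIRECTORY =
        "/user/" + System.getProperty("user.name") + "/hbase-staging";

    public static void main(String[] args) {
        Conf conf = new Conf();
        // Unset key: falls back to the per-user staging default instead of null.
        System.out.println(conf.get(TEMPORARY_FS_DIRECTORY_KEY,
            DEFAULT_TEMPORARY_HDFS_DIRECTORY));
        // Explicitly configured value wins over the default.
        conf.set(TEMPORARY_FS_DIRECTORY_KEY, "/staging/hbase-tmp");
        System.out.println(conf.get(TEMPORARY_FS_DIRECTORY_KEY,
            DEFAULT_TEMPORARY_HDFS_DIRECTORY));
    }
}
```

The point of the pattern is that the lookup can never yield null or an unintended local-filesystem path: an unset key resolves to a well-defined staging directory on the cluster filesystem.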

> Restore operation would fail if the hbase.tmp.dir directory is absent or doesn't give proper permission
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-16521
>                 URL: https://issues.apache.org/jira/browse/HBASE-16521
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Ted Yu
>            Assignee: Ted Yu
>              Labels: backup
>         Attachments: 16521.v1.txt
>
>
> I ran backup IT test and bumped into the following:
> {code}
> 2016-08-29 20:38:31,390 INFO  [main] mapreduce.Job: Job job_1472498400634_0004 failed with state FAILED due to: Job setup failed : org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/tmp/hbase-hbase/bulk_output-default-IntegrationTestBackupRestore.table1-1472503079471/_temporary/1":hdfs:hdfs:drwxr-xr-x
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
>   at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:307)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
>   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
> {code}
> Here is related code in MapReduceRestoreService :
> {code}
>   private Path getBulkOutputDir(String tableName) throws IOException
>   {
>     Configuration conf = getConf();
>     FileSystem fs = FileSystem.get(conf);
>     String tmp = conf.get("hbase.tmp.dir");
>     Path path =  new Path(tmp + Path.SEPARATOR + "bulk_output-"+tableName + "-"
>         + EnvironmentEdgeManager.currentTime());
> {code}
> conf.get("hbase.tmp.dir") returned /tmp/hbase-hbase, which was not created on HDFS.
> We should use hbase.fs.tmp.dir as the base dir to avoid the above permission error.
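For reference, `hbase.fs.tmp.dir` can also be pinned explicitly in hbase-site.xml rather than relying on the default. A hedged sketch; the path shown is illustrative, not a recommended value:

```xml
<property>
  <name>hbase.fs.tmp.dir</name>
  <!-- Staging directory on the cluster filesystem, writable by the hbase user.
       The default is a per-user /user/${user.name}/hbase-staging path;
       the value below is purely illustrative. -->
  <value>/user/hbase/hbase-staging</value>
</property>
```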



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
