accumulo-notifications mailing list archives

From "Josh Elser (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (ACCUMULO-3647) Improve bulk-import sanity check with HDFS permissions enabled
Date Sun, 05 Apr 2015 05:48:33 GMT

     [ https://issues.apache.org/jira/browse/ACCUMULO-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Elser updated ACCUMULO-3647:
---------------------------------
    Fix Version/s:     (was: 1.7.0)
                   1.8.0

> Improve bulk-import sanity check with HDFS permissions enabled
> --------------------------------------------------------------
>
>                 Key: ACCUMULO-3647
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-3647
>             Project: Accumulo
>          Issue Type: Improvement
>          Components: tserver
>            Reporter: Josh Elser
>             Fix For: 1.8.0
>
>
> If HDFS permissions are enabled, the ownership of the directories used to bulk import files into Accumulo is important. For example:
> {code:java,title=BulkImport.java}
>   @Override
>   public Repo<Master> call(final long tid, final Master master) throws Exception {
>     ExecutorService executor = getThreadPool(master);
>     final AccumuloConfiguration conf = master.getConfiguration();
>     VolumeManager fs = master.getFileSystem();
>     List<FileStatus> files = new ArrayList<FileStatus>();
>     for (FileStatus entry : fs.listStatus(new Path(bulk))) {
>       files.add(entry);
>     }
>     log.debug("tid " + tid + " importing " + files.size() + " files");
>     Path writable = new Path(this.errorDir, ".iswritable");
>     if (!fs.createNewFile(writable)) {
>       // Maybe this is a re-try... clear the flag and try again
>       fs.delete(writable);
>       if (!fs.createNewFile(writable))
>         throw new ThriftTableOperationException(tableId, null, TableOperation.BULK_IMPORT, TableOperationExceptionType.BULK_BAD_ERROR_DIRECTORY,
>             "Unable to write to " + this.errorDir);
>     }
> {code}
> {{fs.createNewFile(writable)}} will fail if the Accumulo user cannot write to that directory:
> {noformat}
> 	org.apache.hadoop.security.AccessControlException: Permission denied: user=accumulo, access=WRITE, inode="/tmp/org.apache.accumulo.test.BulkImportVolumeIT/err":admin:hdfs:drwxr-xr-x
> {noformat}
> We should also check the permissions of the error directory up front so that a clear error message can be returned instead of what a user presently sees:
> {noformat}
> org.apache.accumulo.core.client.AccumuloException: Internal error processing waitForFateOperation
> 	at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
> 	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
> 	at org.apache.accumulo.core.master.thrift.FateService$Client.recv_waitForFateOperation(FateService.java:174)
> 	at org.apache.accumulo.core.master.thrift.FateService$Client.waitForFateOperation(FateService.java:159)
> 	at org.apache.accumulo.core.client.impl.TableOperationsImpl.waitForFateOperation(TableOperationsImpl.java:280)
> 	at org.apache.accumulo.core.client.impl.TableOperationsImpl.doFateOperation(TableOperationsImpl.java:322)
> 	at org.apache.accumulo.core.client.impl.TableOperationsImpl.doFateOperation(TableOperationsImpl.java:308)
> 	at org.apache.accumulo.core.client.impl.TableOperationsImpl.doTableFateOperation(TableOperationsImpl.java:1630)
> 	at org.apache.accumulo.core.client.impl.TableOperationsImpl.importDirectory(TableOperationsImpl.java:1218)
> 	at org.apache.accumulo.test.BulkImportVolumeIT.testBulkImportFailure(BulkImportVolumeIT.java:90)
> {noformat}
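The improvement suggested above amounts to a pre-flight writability check that fails fast with a message naming the offending directory. A minimal sketch of the idea, using {{java.nio.file}} as a self-contained stand-in for Hadoop's {{FileSystem}}/{{VolumeManager}} API (in the tserver the real check would inspect {{FileStatus.getPermission()}} or call {{FileSystem.access(path, FsAction.WRITE)}}; the class and method names here are hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class WritableCheck {

  // Sketch of a pre-flight check: verify the error directory is writable
  // before starting the bulk import, so the user gets a message naming the
  // directory instead of a generic "Internal error processing
  // waitForFateOperation" surfaced from deep inside the fate operation.
  static void ensureWritable(Path errorDir) throws IOException {
    if (!Files.isDirectory(errorDir)) {
      throw new IOException(errorDir + " is not a directory");
    }
    if (!Files.isWritable(errorDir)) {
      throw new IOException("Unable to write to " + errorDir
          + " (check HDFS ownership and permissions on the error directory)");
    }
  }

  public static void main(String[] args) throws IOException {
    // A freshly created temp directory is writable by its creator,
    // so this check passes silently.
    Path tmp = Files.createTempDirectory("bulk-err");
    ensureWritable(tmp);
    System.out.println("OK: " + tmp + " is writable");
  }
}
```

The point of checking early, rather than relying on {{createNewFile}} to fail, is that the {{AccessControlException}} is otherwise wrapped by the fate machinery before it reaches the client, losing the directory name and the permission detail.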



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
