accumulo-notifications mailing list archives

From "John Vines (JIRA)" <>
Subject [jira] [Commented] (ACCUMULO-826) MapReduce over accumlo fails if process that started job is killed
Date Mon, 22 Oct 2012 19:18:11 GMT


John Vines commented on ACCUMULO-826:

The file gets stored in the private distributed cache, which was added in Hadoop 0.20.20-something.
The method for accessing it may not be accurate. Mike Drob is correct that this was implemented
for ACCUMULO-489, which is a critical issue. The other implementation idea was storing it
temporarily in ZooKeeper. Having users have to mess with the file system themselves is worse, IMO.
It will lead to users having passwords lying around world-readable in the filesystem, because
some do not know or do not care about securing their identity; they just want to run their
MR job.

I think the only other secure implementation would be a token system, but in the interest
of timeliness, using the private distributed cache is a safe method of implementing this.
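The permissions side of this can be sketched with the plain JDK (file name and class are illustrative, not Accumulo's actual code): the password file is created with owner-only permissions (rw-------), and it is exactly that non-world-readable mode that makes Hadoop localize the file into the *private* distributed cache rather than the shared public one.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class OwnerOnlyFile {
  public static void main(String[] args) throws Exception {
    // Create the (hypothetical) password file, then restrict it to the owner.
    // On HDFS the Accumulo code does the analogous thing with
    // FsPermission(FsAction.ALL, FsAction.NONE, FsAction.NONE).
    Path pw = Files.createTempFile("job-password", ".pw");
    Set<PosixFilePermission> ownerOnly =
        PosixFilePermissions.fromString("rw-------");
    Files.setPosixFilePermissions(pw, ownerOnly);
    System.out.println(PosixFilePermissions.toString(
        Files.getPosixFilePermissions(pw)));
    Files.delete(pw);
  }
}
```

Because the mode is owner-only, no other user on the cluster can read the secret out of the cache directory, which is the property the ZooKeeper alternative would otherwise have to provide.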
> MapReduce over accumlo fails if process that started job is killed
> ------------------------------------------------------------------
>                 Key: ACCUMULO-826
>                 URL:
>             Project: Accumulo
>          Issue Type: Bug
>    Affects Versions: 1.4.1
>            Reporter: Keith Turner
>            Assignee: Keith Turner
>            Priority: Critical
>             Fix For: 1.4.2
> While testing the 1.4.2rc2 I started a continuous verify and killed the process that
> started the job.  Normally you would expect the job to keep running when you do this.  However
> tasks started to fail.  I was seeing errors like the following.
> {noformat}
> File does not exist: /user/hadoop/
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(
> 	at
> 	at
> 	at
> 	at org.apache.accumulo.core.client.mapreduce.InputFormatBase.getPassword(
> 	at org.apache.accumulo.core.client.mapreduce.InputFormatBase$RecordReaderBase.initialize(
> 	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(
> 	at org.apache.hadoop.mapred.MapTask.runNewMapper(
> 	at
> 	at org.apache.hadoop.mapred.Child$
> 	at Method)
> 	at
> 	at
> 	at org.apache.hadoop.mapred.Child.main(
> {noformat}
> I think this is caused by the following code in InputFormatBase
> {code:java}
>   public static void setInputInfo(Configuration conf, String user, byte[] passwd, String table, Authorizations auths) {
>     if (conf.getBoolean(INPUT_INFO_HAS_BEEN_SET, false))
>       throw new IllegalStateException("Input info can only be set once per job");
>     conf.setBoolean(INPUT_INFO_HAS_BEEN_SET, true);
>     ArgumentChecker.notNull(user, passwd, table);
>     conf.set(USERNAME, user);
>     conf.set(TABLE_NAME, table);
>     if (auths != null && !auths.isEmpty())
>       conf.set(AUTHORIZATIONS, auths.serialize());
>     try {
>       FileSystem fs = FileSystem.get(conf);
>       Path file = new Path(fs.getWorkingDirectory(), conf.get("") + System.currentTimeMillis() + ".pw");
>       conf.set(PASSWORD_PATH, file.toString());
>       FSDataOutputStream fos = fs.create(file, false);
>       fs.setPermission(file, new FsPermission(FsAction.ALL, FsAction.NONE, FsAction.NONE));
>       fs.deleteOnExit(file);  // <--- NOT 100% sure, but I think this is the culprit
> {code}
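If `fs.deleteOnExit(file)` is indeed the culprit, the failure mode makes sense: the delete is tied to the lifetime of the submitting client's JVM, not to the job. A plain-JDK sketch of the same semantics (`java.io.File.deleteOnExit` behaves analogously here to Hadoop's `FileSystem.deleteOnExit`):

```java
import java.io.File;
import java.io.IOException;

public class DeleteOnExitDemo {
  public static void main(String[] args) throws IOException {
    File pw = File.createTempFile("job-password", ".pw");
    // deleteOnExit registers the removal with THIS JVM's shutdown sequence,
    // not with the MapReduce job's lifecycle.
    pw.deleteOnExit();
    System.out.println(pw.exists());
    // When this JVM exits -- including an orderly kill of the submitting
    // process -- the file is removed, even though map tasks on other nodes
    // may still need to read it, which would surface as "File does not exist".
  }
}
```

So a fix along these lines would defer cleanup until the job actually completes, instead of tying it to the client process.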

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see:
