hadoop-mapreduce-issues mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (MAPREDUCE-2072) Allow multiple instances of the same local job to run simultaneously on the same machine
Date Wed, 30 Jul 2014 23:59:39 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved MAPREDUCE-2072.
-----------------------------------------

    Resolution: Fixed

Stale. Probably.

> Allow multiple instances of the same local job to run simultaneously on the same machine
> ----------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-2072
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2072
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: job submission
>    Affects Versions: 0.20.2
>            Reporter: Ted Yu
>
> On the same (build) machine, there may be multiple instances of the same local job running, e.g. the same unit test from a snapshot build and a release build.
> For each build project on our build machine, an environment variable with a unique value is defined.
> In JobClient.submitJobInternal(), there is the following code:
>     JobID jobId = jobSubmitClient.getNewJobId();
>     Path submitJobDir = new Path(getSystemDir(), jobId.toString());
> The above code does not handle the scenario described above and often leads to the following failure:
> Caused by: org.apache.hadoop.util.Shell$ExitCodeException: chmod: cannot access `/tmp/hadoop-build/mapred/system/job_local_0002': No such file or directory
> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:195)
> 	at org.apache.hadoop.util.Shell.run(Shell.java:134)
> 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:286)
> 	at org.apache.hadoop.util.Shell.execCommand(Shell.java:354)
> 	at org.apache.hadoop.util.Shell.execCommand(Shell.java:337)
> 	at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:492)
> 	at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:484)
> 	at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:286)
> 	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:308)
> 	at org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:614)
> 	at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:802)
> 	at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:771)
> 	at org.apache.hadoop.mapred.HadoopClient.runJob(HadoopClient.java:177)
> One solution would be to incorporate the value of the underlying environment variable into either the new job ID or the system directory so that there is no conflict.
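The proposed workaround above could be sketched as follows. This is not the actual Hadoop patch; it is a minimal illustration assuming a per-build environment variable (the name `BUILD_ID` here is hypothetical) whose value is appended to the per-job submit directory so that concurrent builds do not collide:

```java
// Sketch only: disambiguate the per-job submit directory by appending a
// build-specific token read from an environment variable. The variable
// name "BUILD_ID" is an assumption, not part of Hadoop's API.
public class UniqueJobDir {

    // Build a submit-directory path like systemDir/jobId or, when a build
    // token is available, systemDir/jobId_token.
    static String uniqueJobDir(String systemDir, String jobId, String buildToken) {
        if (buildToken == null || buildToken.isEmpty()) {
            return systemDir + "/" + jobId;
        }
        return systemDir + "/" + jobId + "_" + buildToken;
    }

    public static void main(String[] args) {
        // In a real build, the token would come from the environment,
        // e.g. System.getenv("BUILD_ID").
        String token = System.getenv("BUILD_ID");
        System.out.println(
            uniqueJobDir("/tmp/hadoop-build/mapred/system", "job_local_0002", token));
    }
}
```

With distinct tokens per build project, two instances of `job_local_0002` resolve to different directories, so one build's cleanup can no longer delete a directory the other build is still chmod-ing.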



--
This message was sent by Atlassian JIRA
(v6.2#6252)
