hadoop-common-issues mailing list archives

From "Venkatesh (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-10969) RawLocalFileSystem.setPermission throws Exception on windows
Date Thu, 14 Aug 2014 20:06:18 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-10969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14097544#comment-14097544 ]

Venkatesh commented on HADOOP-10969:
------------------------------------

Hi Steve, I changed my Maven pom file to replace every dependency that pulled in a CDH artifact with the plain Apache version, like this:

		<!-- setup for CDH 5.x -->
		<!--
		<jdk_version>1.7</jdk_version>
		<avro_version>1.7.5-cdh5.1.0</avro_version>
		<cdh_version>2.3.0-cdh5.1.0</cdh_version>
		<cdh_ZK_version>3.4.5-cdh5.1.0</cdh_ZK_version>
		<cdh_HB_version>0.96.1.1-cdh5.1.0</cdh_HB_version>
		-->

		<!-- setup for non-CDH, plain hadoop 2.3.0 -->
		<jdk_version>1.7</jdk_version>
		<avro_version>1.7.5</avro_version>
		<cdh_version>2.3.0</cdh_version>
		<cdh_ZK_version>3.3.1</cdh_ZK_version>
		<cdh_HB_version>0.96.1.1</cdh_HB_version>
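These properties feed the <version> elements of the dependency declarations through ${...} references. The dependency section itself isn't quoted above; a minimal sketch of the shape (hadoop-client and avro are assumed artifact ids, the actual pom may list different ones):

		<dependency>
			<groupId>org.apache.hadoop</groupId>
			<artifactId>hadoop-client</artifactId>
			<!-- resolves to 2.3.0 with the non-CDH properties above -->
			<version>${cdh_version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.avro</groupId>
			<artifactId>avro</artifactId>
			<version>${avro_version}</version>
		</dependency>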

Even with this change I still get the same null pointer exception. I'm compiling and executing the MR job via Eclipse as a plain Java application (right-click on the driver class --> Run As --> Java Application).


> RawLocalFileSystem.setPermission throws Exception on windows
> ------------------------------------------------------------
>
>                 Key: HADOOP-10969
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10969
>             Project: Hadoop Common
>          Issue Type: Bug
>         Environment: hadoop 2.3.0, Windows Environment, Development using Eclipse, Lenovo laptop
>            Reporter: Venkatesh
>            Priority: Blocker
>
> I'm an application developer. We recently moved from CDH 4.7 to CDH 5.1, which took the Hadoop version from 1.x to 2.x. In order to develop in Eclipse (on Windows), the following class was created:
> import java.io.IOException;
>
> import org.apache.hadoop.fs.LocalFileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.FsPermission;
>
> public class WindowsLocalFileSystem extends LocalFileSystem {
> 	public WindowsLocalFileSystem() {
> 		super();
> 	}
>
> 	@Override
> 	public boolean mkdirs(Path f, FsPermission permission) throws IOException {
> 		final boolean result = super.mkdirs(f);
> 		this.setPermission(f, permission);
> 		return result;
> 	}
>
> 	@Override
> 	public void setPermission(Path p, FsPermission permission) throws IOException {
> 		try {
> 			super.setPermission(p, permission);
> 		} catch (final IOException e) {
> 			// Swallowed deliberately: POSIX permissions can't be set on Windows.
> 			System.err.println("Can't help it, hence ignoring IOException setting permission for path \"" + p + "\": " + e.getMessage());
> 		}
> 	}
> }
> This class was used in the MapReduce job as follows:
> 		if (RUN_LOCAL) {
> 			conf.set("fs.default.name", "file:///");
> 			conf.set("mapred.job.tracker", "local");
> 			conf.set("fs.file.impl", "org.scif.bdp.mrjobs.WindowsLocalFileSystem");
> 			conf.set("io.serializations",
> 					"org.apache.hadoop.io.serializer.JavaSerialization,"
> 					+ "org.apache.hadoop.io.serializer.WritableSerialization");
> 		}
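> With fs.file.impl overridden this way, FileSystem.get returns the custom class for file:// paths. A minimal standalone sketch of that effect (LocalFsCheck and the target path are made up for illustration):
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.FsPermission;
>
> public class LocalFsCheck {
> 	public static void main(String[] args) throws Exception {
> 		Configuration conf = new Configuration();
> 		conf.set("fs.default.name", "file:///");
> 		conf.set("fs.file.impl", "org.scif.bdp.mrjobs.WindowsLocalFileSystem");
> 		// FileSystem.get resolves the "file" scheme through fs.file.impl,
> 		// so this hands back the WindowsLocalFileSystem subclass.
> 		FileSystem fs = FileSystem.get(conf);
> 		// mkdirs runs through the overridden setPermission above.
> 		fs.mkdirs(new Path("build/tmp/fscheck"), FsPermission.getDirDefault());
> 	}
> }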
> It worked fine on CDH 4.7. The same code still compiles on CDH 5.1, but when I try to execute it, it throws the following stack trace:
> Exception in thread "main" java.lang.NullPointerException
> 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:451)
> 	at org.apache.hadoop.util.Shell.run(Shell.java:424)
> 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:656)
> 	at org.apache.hadoop.util.Shell.execCommand(Shell.java:745)
> 	at org.apache.hadoop.util.Shell.execCommand(Shell.java:728)
> 	at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
> 	at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
> 	at com.scif.bdp.common.WindowsLocalFileSystem.setPermission(WindowsLocalFileSystem.java:26)
> 	at com.scif.bdp.common.WindowsLocalFileSystem.mkdirs(WindowsLocalFileSystem.java:17)
> 	at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
> 	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1313)
> 	at com.scif.bdp.mrjobs.DeDup.run(DeDup.java:55)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> 	at com.scif.bdp.mrjobs.DeDup.main(DeDup.java:59)
> (Note: DeDup is my MR class to remove duplicates.)
> Upon investigation, the only change I saw was in the setPermission() method: it now invokes NativeIO.POSIX.chmod instead of NativeIO.chmod.
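> For what it's worth, ProcessBuilder.start is documented to throw a NullPointerException when any element of the command list is null. A minimal sketch of just that mechanism (assuming, not verified against the Hadoop source, that an unresolved Windows shell-utility path is the null element):
>
> public class NpeRepro {
> 	public static void main(String[] args) throws Exception {
> 		// Stand-in for a permission-setting command whose executable
> 		// path could not be resolved on Windows.
> 		String setPermExe = null;
> 		// Throws NullPointerException from start(), matching the top
> 		// of the reported stack trace.
> 		new ProcessBuilder(setPermExe, "chmod", "700", "C:\\tmp\\staging").start();
> 	}
> }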


