hadoop-user mailing list archives

From "Kavali, Devaraj" <devaraj.kav...@intel.com>
Subject RE: Running MRV1 code on YARN
Date Tue, 08 Apr 2014 15:39:19 GMT
If you have set the fs.defaultFS configuration to an hdfs:// URI, it should use HDFS.

Please also make sure that you have updated the Hadoop and dependency jar files on the
client side to the Hadoop 2.2.0 jars.
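For reference, a minimal core-site.xml entry pointing the default file system at HDFS could look like the following; the Namenode host and port here are placeholders, not values from this thread:

```xml
<!-- core-site.xml: point the default file system at HDFS.
     "namenode-host:8020" is a placeholder for your Namenode address. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:8020</value>
  </property>
</configuration>
```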

Devaraj K

From: divye sheth [mailto:divs.sheth@gmail.com]
Sent: Tuesday, April 08, 2014 8:32 PM
To: user@hadoop.apache.org
Subject: Re: Running MRV1 code on YARN

Hi Devaraj,

I went through multiple links, all asking me to check whether mapreduce.framework.name
is set to yarn. It is, and fs.defaultFS points correctly to the Namenode.
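For reference, the mapred-site.xml entry being checked here would look like this (the standard Hadoop 2.x property name and value):

```xml
<!-- mapred-site.xml: tell the MapReduce client to submit jobs to YARN
     instead of the default local runner. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```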

But it still tries to connect to the local file system. I am not sure what to do; please
help me out with some pointers, as I am fairly new to the coding aspect of MapReduce.
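One thing worth trying, sketched below: load the cluster configuration files explicitly in the job client, so the submission cannot silently fall back to the built-in defaults (LocalJobRunner and the local file system). The /etc/hadoop/conf paths and the class/job names are assumptions for illustration; substitute wherever your core-site.xml, mapred-site.xml, and yarn-site.xml actually live.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

public class SubmitToYarn {
    public static void main(String[] args) throws Exception {
        // Explicitly add the cluster config files so the client does not
        // fall back to built-in defaults (local runner + local FS).
        // The /etc/hadoop/conf paths below are assumed; adjust as needed.
        Configuration conf = new Configuration();
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/yarn-site.xml"));

        Job job = Job.getInstance(conf, "example-job");
        // ... set jar, mapper/reducer classes, input/output paths as before ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

If the config files are on the client classpath in the first place, the explicit addResource calls are redundant; if the job then submits to YARN, the original problem was the classpath.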

Divye Sheth

On Tue, Apr 8, 2014 at 7:43 PM, divye sheth <divs.sheth@gmail.com> wrote:

I saw that pretty much after sending the email. I verified the properties file and it has
all the correct properties; even the mapred.framework.name is set to yarn. I am unable to
figure out the cause and why it is connecting to local.

Using the same configuration file I am able to run my WordCount MRV1 example, but not the
code that I have written for a use case.
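For context, the stock WordCount example would typically be launched along these lines; the examples jar path is an assumption for a standard Hadoop 2.2.0 tarball layout, and the input/output paths are placeholders:

```shell
# Run the bundled WordCount example against the cluster.
# The jar location is an assumption for a stock Hadoop 2.2.0 install;
# adjust the path and HDFS directories to your layout.
hadoop jar "$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar" \
  wordcount /user/eureka/input /user/eureka/output
```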

Divye Sheth

On Tue, Apr 8, 2014 at 6:12 PM, Kavali, Devaraj <devaraj.kavali@intel.com> wrote:
As per the given exception stack trace, it is trying to use the local file system. Can you
check whether you have configured the file system settings to use HDFS?
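A quick way to check what the client actually resolves, assuming the hdfs CLI is on the PATH:

```shell
# Print the effective client-side values; if fs.defaultFS comes back as
# file:/// or mapreduce.framework.name as "local", the client is not
# reading the cluster configuration files.
hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey mapreduce.framework.name
```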

Devaraj K

From: divye sheth [mailto:divs.sheth@gmail.com]
Sent: Tuesday, April 08, 2014 5:37 PM
To: user@hadoop.apache.org
Subject: Running MRV1 code on YARN


I have installed Hadoop 2.2.0 along with YARN and am trying to submit an already-written
MRV1 job to YARN.

The job does not even submit; it prints the following stack trace on the console:

2014-04-08 16:56:11 UserGroupInformation [ERROR] PriviledgedActionException as:eureka (auth:SIMPLE)
cause:org.apache.hadoop.util.Shell$ExitCodeException: chmod: cannot access `/user/eureka54695942/.staging/job_local54695942_0001':
No such file or directory

Exception in thread "main" org.apache.hadoop.util.Shell$ExitCodeException: chmod: cannot access
`/user/eureka54695942/.staging/job_local54695942_0001': No such file or directory

        at org.apache.hadoop.util.Shell.runCommand(Shell.java:261)
        at org.apache.hadoop.util.Shell.run(Shell.java:188)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
        at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
        at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:427)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:579)
        at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:786)
        at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:746)
        at org.apache.hadoop.mapred.JobClient.access$400(JobClient.java:177)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:963)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:948)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:948)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)

My question: if you look at the staging location it is trying to clean, I do not have any
such user in the /user directory in HDFS. It somehow appends the job ID to the username and
creates the staging area there. Is there a reason for this? Please let me know what I am
doing wrong, and how I can make sure it goes to the user that I have created, i.e. eureka,
and not eureka$JOBID.

I am using CDH4.

Divye Sheth
