hadoop-user mailing list archives

From divye sheth <divs.sh...@gmail.com>
Subject Re: Running MRV1 code on YARN
Date Wed, 09 Apr 2014 07:38:22 GMT
Thanks. I finally got it working; it was a basic issue that I had overlooked
and that Devaraj pointed out: I was trying to submit a job to YARN using the
MRv1 libraries. Once the libraries were updated, I got a "Cluster not
Initialized" exception, which was misleading, since the real cause was missing
jar files.
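In case it helps the next reader, here is a minimal sketch of the client-side
dependency change, assuming a Maven build (artifact coordinates are from the
Apache Hadoop 2.2.0 release; adjust to your build tool):

```xml
<!-- Replace any MRv1-era dependency (e.g. hadoop-core 1.x / 0.20.x) with the
     Hadoop 2.2.0 client artifact, which brings in the YARN-aware MapReduce
     client jars. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.2.0</version>
</dependency>
```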

I used the link below to resolve the issue:
https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/vwL9Zvmue18
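For the archives, the two configuration properties discussed in this thread
would look roughly like this; the hostname and port below are placeholders,
not values from my cluster:

```xml
<!-- core-site.xml: point the default file system at HDFS -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode-host:8020</value>
</property>

<!-- mapred-site.xml: run MapReduce jobs on YARN instead of the
     local job runner -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```

Note the job ID in the stack trace further down, job_local54695942_0001: the
"local" prefix means the client fell back to the LocalJobRunner, which is what
happens when the yarn framework setting is not visible to the client, e.g.
because stale MRv1 jars or an old configuration are on the classpath.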

Thanks
Divye Sheth


On Tue, Apr 8, 2014 at 9:09 PM, Kavali, Devaraj <devaraj.kavali@intel.com> wrote:

>  If you have set the fs.defaultFS configuration to HDFS, it should use
> HDFS.
>
>
>
> And also, please make sure that you have updated the Hadoop and dependency
> jar files on the client side to the Hadoop 2.2.0 jars.
>
>
>
> Thanks
>
> Devaraj K
>
>
>
> *From:* divye sheth [mailto:divs.sheth@gmail.com]
> *Sent:* Tuesday, April 08, 2014 8:32 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Running MRV1 code on YARN
>
>
>
> Hi Devaraj,
>
>
>
> I went through multiple links, all asking me to check whether
> mapreduce.framework.name is set to yarn. It is, and fs.defaultFS correctly
> points to the NameNode.
>
>
>
> But it still tries to connect to the local file system. I am not sure what
> to do; please help me out with some pointers, as I am fairly new to the
> coding aspect of MapReduce.
>
>
>
> Thanks
>
> Divye Sheth
>
>
>
> On Tue, Apr 8, 2014 at 7:43 PM, divye sheth <divs.sheth@gmail.com> wrote:
>
> Hi,
>
>
>
> I saw that pretty much right after sending the email. I verified the
> properties file and it has all the correct properties; even
> mapred.framework.name is set to yarn. I am unable to figure out the cause
> and why it is connecting to the local FS.
>
>
>
> Using the same configuration file I am able to run my WordCount MRv1
> example, but not the code that I have written for a use case.
>
>
>
> Thanks
>
> Divye Sheth
>
>
>
> On Tue, Apr 8, 2014 at 6:12 PM, Kavali, Devaraj <devaraj.kavali@intel.com>
> wrote:
>
> As per the given exception stack trace, it is trying to use the local file
> system. Can you check whether you have configured the file system settings
> to use HDFS?
>
>
>
> Thanks
>
> Devaraj K
>
>
>
> *From:* divye sheth [mailto:divs.sheth@gmail.com]
> *Sent:* Tuesday, April 08, 2014 5:37 PM
> *To:* user@hadoop.apache.org
> *Subject:* Running MRV1 code on YARN
>
>
>
> Hi,
>
>
>
> I have installed Hadoop 2.2.0 along with YARN and am trying to submit an
> existing MRv1 job to YARN.
>
>
>
> The job does not even submit; it prints the following stack trace on the
> console:
>
>
>
> 2014-04-08 16:56:11 UserGroupInformation [ERROR]
> PriviledgedActionException as:eureka (auth:SIMPLE)
> cause:org.apache.hadoop.util.Shell$ExitCodeException: chmod: cannot access
> `/user/eureka54695942/.staging/job_local54695942_0001': No such file or
> directory
>
>
>
> Exception in thread "main" org.apache.hadoop.util.Shell$ExitCodeException:
> chmod: cannot access
> `/user/eureka54695942/.staging/job_local54695942_0001': No such file or
> directory
>
>
>
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:261)
>
>         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>
>         at
> org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>
>         at
> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
>
>         at
> org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:427)
>
>         at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:579)
>
>         at
> org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:786)
>
>         at
> org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:746)
>
>         at
> org.apache.hadoop.mapred.JobClient.access$400(JobClient.java:177)
>
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:963)
>
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:948)
>
>         at java.security.AccessController.doPrivileged(Native Method)
>
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>
>         at
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:948)
>
>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
>
>
>
> My question: if you look at the staging location it is trying to clean up,
> there is no such user in the /user directory in HDFS. It somehow appends the
> job ID to the username and creates the staging area there. Why does this
> happen? Please let me know what I am doing wrong. How can I make sure it
> goes to the user that I have created, i.e. eureka, and not eureka$JOBID?
>
>
>
> I am using CDH4.
>
>
>
> Thanks
>
> Divye Sheth
>
>
>
>
>
