Subject: Re: Running MRV1 code on YARN
From: divye sheth <divs.sheth@gmail.com>
To: user@hadoop.apache.org
Date: Wed, 9 Apr 2014 13:08:22 +0530

Thanks. Got it working finally. It was a pretty basic issue which I had
overlooked and which Devaraj pointed out: I was trying to submit a job to
YARN using the MRV1 libs. Once the libs were updated I got a "Cluster not
Initialized" exception, which was misleading, since the actual cause was
some missing jar files. I used the link below to resolve the issue:

https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/vwL9Zvmue18

Thanks
Divye Sheth
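P.S. For anyone who finds this thread later: below is a rough sketch of the
kind of client-side setup involved. The host names and ports are
placeholders, not a real cluster, and the explicit set() calls stand in for
having the cluster's core-site.xml/mapred-site.xml on the client classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class SubmitToYarn {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder addresses -- substitute your own NameNode and
            // ResourceManager. Without these (or the equivalent *-site.xml
            // files on the classpath) the client defaults to the local
            // file system and the LocalJobRunner.
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
            conf.set("mapreduce.framework.name", "yarn");
            conf.set("yarn.resourcemanager.address", "rm.example.com:8032");

            Job job = Job.getInstance(conf, "example job");
            // ... set the job jar, mapper, reducer, input/output paths ...
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }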
On Tue, Apr 8, 2014 at 9:09 PM, Kavali, Devaraj wrote:

> If you have set the fs.defaultFS configuration to HDFS, it should use
> HDFS.
>
> Also, please make sure that you have updated the Hadoop and dependency
> jar files on the client side with the Hadoop 2.2.0 jars.
>
> Thanks
> Devaraj K
>
> From: divye sheth [mailto:divs.sheth@gmail.com]
> Sent: Tuesday, April 08, 2014 8:32 PM
> To: user@hadoop.apache.org
> Subject: Re: Running MRV1 code on YARN
>
> Hi Devaraj,
>
> I went through multiple links, all asking me to check whether
> mapreduce.framework.name is set to yarn. It is, along with fs.defaultFS
> pointing properly to the NameNode.
>
> But it still tries to connect to the local file system. I am not sure
> what to do; please help me out with some pointers, as I am fairly new to
> the coding aspect of map-reduce.
>
> Thanks
> Divye Sheth
>
> On Tue, Apr 8, 2014 at 7:43 PM, divye sheth wrote:
>
> Hi,
>
> I saw that pretty much right after sending the email. I verified the
> properties file and it has all the correct properties; even
> mapreduce.framework.name is set to yarn. I am unable to figure out what
> the cause is and why it is connecting to the local FS.
>
> Using the same configuration file I am able to run my WordCount MRV1
> example, but not the code that I have written for a use case.
>
> Thanks
> Divye Sheth
>
> On Tue, Apr 8, 2014 at 6:12 PM, Kavali, Devaraj wrote:
>
> As per the given exception stack trace, it is trying to use the local
> file system. Can you check whether you have configured the file system
> configuration with HDFS?
>
> Thanks
> Devaraj K
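One way to make that check concrete is a short probe like the following
sketch. It assumes nothing beyond the standard Hadoop client Configuration
and FileSystem API, and just prints what the client configuration actually
resolves to:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class CheckClientConfig {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // "file:///" here means the local file system, i.e. the
            // cluster's core-site.xml is not on the client classpath.
            System.out.println("fs.defaultFS resolves to "
                    + FileSystem.get(conf).getUri());
            // "local" (the default) means the LocalJobRunner, not YARN.
            System.out.println("mapreduce.framework.name = "
                    + conf.get("mapreduce.framework.name", "local"));
        }
    }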
> From: divye sheth [mailto:divs.sheth@gmail.com]
> Sent: Tuesday, April 08, 2014 5:37 PM
> To: user@hadoop.apache.org
> Subject: Running MRV1 code on YARN
>
> Hi,
>
> I have installed Hadoop 2.2.0 along with YARN and am trying to submit an
> already-written MRV1 job to YARN.
>
> The job does not even submit; it prints the following stack trace on the
> console:
>
> 2014-04-08 16:56:11 UserGroupInformation [ERROR]
> PriviledgedActionException as:eureka (auth:SIMPLE)
> cause:org.apache.hadoop.util.Shell$ExitCodeException: chmod: cannot access
> `/user/eureka54695942/.staging/job_local54695942_0001': No such file or
> directory
>
> Exception in thread "main" org.apache.hadoop.util.Shell$ExitCodeException:
> chmod: cannot access
> `/user/eureka54695942/.staging/job_local54695942_0001': No such file or
> directory
>
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:261)
>         at org.apache.hadoop.util.Shell.run(Shell.java:188)
>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
>         at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
>         at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
>         at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:427)
>         at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:579)
>         at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:786)
>         at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:746)
>         at org.apache.hadoop.mapred.JobClient.access$400(JobClient.java:177)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:963)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:948)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:948)
>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
>
> My question here: if you notice the staging location which it is trying
> to clean, I do not have any such user in the /user directory in HDFS. It
> somehow appends the job id to the username and creates the staging area
> there. Any reason for this? Please let me know what I am doing wrong.
> How can I make sure it goes to the user that I have created, i.e. eureka
> and not eureka$JOBID?
>
> I am using CDH4.
>
> Thanks
> Divye Sheth
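On the eureka$JOBID question: the job_local54695942_0001 id in the trace
suggests the LocalJobRunner, not YARN, accepted the job, and it appears to
be the LocalJobRunner that appends its random job number to the user name
when it builds the staging path. A hedged sketch of a post-submission check
that a job really went to YARN:

    import org.apache.hadoop.mapreduce.Job;

    public class AssertSubmittedToYarn {
        // Submit and fail fast if the LocalJobRunner took the job: a job
        // accepted by YARN gets an id like job_1397029102420_0001 (cluster
        // start timestamp), never job_local*.
        public static void submitOrFail(Job job) throws Exception {
            job.submit();
            String id = job.getJobID().toString();
            if (id.contains("local")) {
                throw new IllegalStateException("Job " + id
                        + " ran with the LocalJobRunner; check "
                        + "mapreduce.framework.name and the client jars");
            }
        }
    }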