From: Mohammad Tariq <dontariq@gmail.com>
Date: Tue, 30 Apr 2013 23:02:35 +0530
Subject: Re: Can't initialize cluster
To: user@hadoop.apache.org

Set "HADOOP_MAPRED_HOME" in your hadoop-env.sh file and re-run the job. See if it helps.
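
For instance, a minimal sketch of the kind of line involved (the path here is only an assumption; point it at wherever your MapReduce framework libraries actually live):

    # hadoop-env.sh -- tell the Hadoop client where the MapReduce framework lives
    # (example path only; adjust to your installation)
    export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce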
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com

On Tue, Apr 30, 2013 at 10:10 PM, Kevin Burton <rkevinburton@charter.net> wrote:

To be clear, when this code is run with 'java -jar' it runs without exception. The exception occurs when I run it with 'hadoop jar'.

From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 11:36 AM
To: user@hadoop.apache.org
Subject: Can't initialize cluster

I have a simple MapReduce job that I am trying to get to run on my cluster. When I run it I get:

13/04/30 11:27:45 INFO mapreduce.Cluster: Failed to use org.apache.hadoop.mapred.LocalClientProtocolProvider due to error: Invalid "mapreduce.jobtracker.address" configuration value for LocalJobRunner : "devubuntu05:9001"
13/04/30 11:27:45 ERROR security.UserGroupInformation: PriviledgedActionException as:kevin (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.

My core-site.xml looks like:

<property>
  <name>fs.default.name</name>
  <value>hdfs://devubuntu05:9000</value>
  <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation.</description>
</property>
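
(Side note: on Hadoop 2.x the preferred name for this key is fs.defaultFS; fs.default.name is the deprecated alias, though both still resolve. The equivalent entry would be:)

    <!-- core-site.xml: same setting under the non-deprecated key -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://devubuntu05:9000</value>
    </property>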

So I am unclear as to why it is looking at devubuntu05:9001?
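
(The INFO line above hints at where the 9001 address comes from: it is read from "mapreduce.jobtracker.address", which normally lives in mapred-site.xml rather than core-site.xml. A hypothetical entry like the one below would produce exactly that complaint, because when mapreduce.framework.name is unset or "local", the LocalJobRunner accepts only "local" as the jobtracker address:)

    <!-- mapred-site.xml: assumed contents, inferred from the log above -->
    <property>
      <name>mapreduce.jobtracker.address</name>
      <value>devubuntu05:9001</value>
    </property>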

Here is the code:

    public static void WordCount(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setCombinerClass(WordCount.IntSumReducer.class);
        job.setReducerClass(WordCount.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
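
(For reference, the failing 'hadoop jar' invocation discussed above would take roughly this form; the jar name and HDFS paths are placeholders:)

    hadoop jar wordcount.jar WordCount /user/kevin/input /user/kevin/output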

Ideas?

