From: Harsh J
Date: Wed, 1 May 2013 11:32:25 +0530
Subject: Re: Can't initialize cluster
To: user@hadoop.apache.org

When you run with java -jar, as previously stated on another thread, you aren't loading any configs present on the installation (the ones that configure HDFS to be the default filesystem). When you run with "hadoop jar", the configs under /etc/hadoop/conf get applied automatically to your program, making it (1) use HDFS as the default FS and (2) run the job in distributed mode, as opposed to the local mode of your config-less java -jar invocation. Two short sketches illustrating this follow the quoted thread below.

On Tue, Apr 30, 2013 at 11:36 PM, Kevin Burton wrote:
> We/I are/am making progress. Now I get the error:
>
> 13/04/30 12:59:40 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
> 13/04/30 12:59:40 INFO mapred.JobClient: Cleaning up the staging area hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/staging/kevin/.staging/job_201304301251_0003
> 13/04/30 12:59:40 ERROR security.UserGroupInformation: PriviledgedActionException as:kevin (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://devubuntu05:9000/user/kevin/input
> Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://devubuntu05:9000/user/kevin/input
>
> When I run it with java -jar, the input and output are local folders. When running it with hadoop jar, it seems to expect the folders (input and output) to be on the HDFS file system. I am not sure why these two methods of invocation don't make the same file system assumptions.
>
> It is
>
> hadoop jar WordCount.jar input output (which gives the above exception)
>
> versus
>
> java -jar WordCount.jar input output (which outputs the word count statistics to the output folder)
>
> This is run in the local /home/kevin/WordCount folder.
>
> Kevin
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Tuesday, April 30, 2013 12:33 PM
> To: user@hadoop.apache.org
> Subject: Re: Can't initialize cluster
>
> Set "HADOOP_MAPRED_HOME" in your hadoop-env.sh file and re-run the job. See if it helps.
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
> On Tue, Apr 30, 2013 at 10:10 PM, Kevin Burton wrote:
>
> To be clear, when this code is run with 'java -jar' it runs without exception. The exception occurs when I run it with 'hadoop jar'.
>
> From: Kevin Burton [mailto:rkevinburton@charter.net]
> Sent: Tuesday, April 30, 2013 11:36 AM
> To: user@hadoop.apache.org
> Subject: Can't initialize cluster
>
> I have a simple MapReduce job that I am trying to get to run on my cluster. When I run it I get:
>
> 13/04/30 11:27:45 INFO mapreduce.Cluster: Failed to use org.apache.hadoop.mapred.LocalClientProtocolProvider due to error: Invalid "mapreduce.jobtracker.address" configuration value for LocalJobRunner : "devubuntu05:9001"
> 13/04/30 11:27:45 ERROR security.UserGroupInformation: PriviledgedActionException as:kevin (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
> Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>
> My core-site.xml looks like:
>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://devubuntu05:9000</value>
>     <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation.</description>
>   </property>
>
> So I am unclear as to why it is looking at devubuntu05:9001?
>
> Here is the code:
>
> public static void WordCount( String[] args ) throws Exception {
>     Configuration conf = new Configuration();
>     String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
>     if (otherArgs.length != 2) {
>         System.err.println("Usage: wordcount <in> <out>");
>         System.exit(2);
>     }
>     Job job = new Job(conf, "word count");
>     job.setJarByClass(WordCount.class);
>     job.setMapperClass(WordCount.TokenizerMapper.class);
>     job.setCombinerClass(WordCount.IntSumReducer.class);
>     job.setReducerClass(WordCount.IntSumReducer.class);
>     job.setOutputKeyClass(Text.class);
>     job.setOutputValueClass(IntWritable.class);
>     org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
>     org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
>     System.exit(job.waitForCompletion(true) ? 0 : 1);
> }
>
> Ideas?

-- 
Harsh J
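
For reference, here is a minimal sketch of the Tool/ToolRunner pattern that the mapred.JobClient warning above asks for. It is not code from the thread: the WordCountDriver class name is made up, and it assumes Kevin's existing TokenizerMapper and IntSumReducer inner classes are reused unchanged; only the driver changes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {

    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            return 2;
        }
        // getConf() returns the Configuration that ToolRunner prepared: the
        // *-site.xml files found on the classpath (e.g. /etc/hadoop/conf when
        // launched via "hadoop jar") with any -conf/-D/-fs/-jt generic options
        // applied on top.
        Job job = new Job(getConf(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setCombinerClass(WordCount.IntSumReducer.class);
        job.setReducerClass(WordCount.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner parses the generic options and passes the rest to run().
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}

Launched with something like "hadoop jar WordCount.jar WordCountDriver input output", this picks up the installed cluster configs, so the job submits to the cluster and the paths resolve against HDFS; launched config-less with plain "java -jar", the same code falls back to the local filesystem and the local job runner, which is exactly the difference described at the top of this message.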
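
And a second small sketch showing how the loaded Configuration decides where a bare path such as "input" lives, which is the file system assumption Kevin asks about. The class name and the copy step are illustrative only; the local directory comes from Kevin's /home/kevin/WordCount example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WhereIsInput {
    public static void main(String[] args) throws Exception {
        // Loads core-site.xml etc. from the classpath; under "hadoop jar" that is
        // /etc/hadoop/conf, while under a bare "java -jar" there is usually nothing
        // and fs.default.name stays at its built-in default, file:///.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Qualify the relative path the way the job's input format would. With the
        // cluster configs this prints something like
        // hdfs://devubuntu05:9000/user/kevin/input; without them, a local file: path.
        Path input = fs.makeQualified(new Path("input"));
        System.out.println("Relative path 'input' resolves to: " + input);

        // One way to give the distributed run the same data: copy the local input
        // directory into HDFS before submitting the job.
        if (!fs.exists(input)) {
            fs.copyFromLocalFile(new Path("/home/kevin/WordCount/input"), input);
        }
    }
}

The same copy can of course be done from the shell with hadoop fs -put before submitting the job.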