Subject: Re: Java submit job to remote server
From: Hemanth Yamijala <hemanty@thoughtworks.com>
To: user@hadoop.apache.org
Date: Wed, 13 Feb 2013 06:46:46 +0530

Can you please include the complete stack trace, and not just the root exception? Also, have you set fs.default.name to an HDFS location like hdfs://localhost:9000?
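For example, something along these lines (a minimal sketch; the class name and the host/ports are placeholders, so match them to whatever your core-site.xml and mapred-site.xml actually declare):

import org.apache.hadoop.conf.Configuration;

public class RemoteJobConf {
    public static Configuration create() {
        final Configuration conf = new Configuration();
        // NameNode RPC address -- should match fs.default.name in core-site.xml.
        conf.set("fs.default.name", "hdfs://localhost:9000");
        // JobTracker RPC address -- should match mapred.job.tracker in mapred-site.xml.
        conf.set("mapred.job.tracker", "localhost:9001");
        return conf;
    }
}

Note that 50030 is the JobTracker's web UI port, not its RPC port; the submitting client has to talk to the RPC port (9001 above).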
Thanks
Hemanth

On Wednesday, February 13, 2013, Alex Thieme wrote:

> Thanks for the prompt reply, and I'm sorry I forgot to include the
> exception. My bad. I've included it below. There certainly appears to be a
> server running on localhost:9001; at least, I was able to telnet to that
> address. While in development, I'm treating the server on localhost as the
> remote server. Moving to production, there'd obviously be a different
> remote server address configured.
>
> Root Exception stack trace:
> java.io.EOFException
>     at java.io.DataInputStream.readInt(DataInputStream.java:375)
>     at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>     + 3 more (set debug level logging or '-Dmule.verbose.exceptions=true'
> for everything)
>
> ********************************************************************************
>
> On Feb 12, 2013, at 4:22 PM, Nitin Pawar <nitinpawar432@gmail.com> wrote:
>
> conf.set("mapred.job.tracker", "localhost:9001");
>
> This means that your JobTracker is on port 9001 on localhost.
>
> If you change it to the remote host, and that's the port it's running on,
> then it should work as expected.
>
> What's the exception you are getting?
>
>
> On Wed, Feb 13, 2013 at 2:41 AM, Alex Thieme <athieme@athieme.com> wrote:
>
> I apologize for asking what seems to be such a basic question, but I could
> use some help with submitting a job to a remote server.
>
> I have downloaded and installed Hadoop locally in pseudo-distributed mode.
> I have written some Java code to submit a job.
>
> Here's the org.apache.hadoop.util.Tool and org.apache.hadoop.mapreduce.Mapper
> I've written.
>
> If I enable the conf.set("mapred.job.tracker", "localhost:9001") line,
> then I get the exception included below.
>
> If that line is disabled, then the job is completed. However, in reviewing
> the Hadoop server administration page
> (http://localhost:50030/jobtracker.jsp) I don't see the job as processed
> by the server. Instead, I wonder if my Java code is simply running the
> necessary mapper Java code, bypassing the locally installed server.
>
> Thanks in advance.
>
> Alex
>
> public class OfflineDataTool extends Configured implements Tool {
>
>     public int run(final String[] args) throws Exception {
>         final Configuration conf = getConf();
>         //conf.set("mapred.job.tracker", "localhost:9001");
>
>         final Job job = new Job(conf);
>         job.setJarByClass(getClass());
>         job.setJobName(getClass().getName());
>
>         job.setMapperClass(OfflineDataMapper.class);
>
>         job.setInputFormatClass(TextInputFormat.class);
>
>         job.setMapOutputKeyClass(Text.class);
>         job.setMapOutputValueClass(Text.class);
>
>         job.setOutputKeyClass(Text.class);
>         job.setOutputValueClass(Text.class);
>
>         FileInputFormat.addInputPath(job, new
> org.apache.hadoop.fs.Path(args[0]));
>
>         final org.apache.hadoop.fs.Path output = new org.a
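Alex's code above is truncated mid-line, so for completeness, here is roughly how such a run() method usually finishes (the args[1] output path, the return convention, and the ToolRunner main() are assumptions on my part, not Alex's actual code):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.ToolRunner;

        // ...continuing inside run():
        // Assumed: args[1] names the output directory, which must not exist yet.
        final Path output = new Path(args[1]);
        FileOutputFormat.setOutputPath(job, output);

        // Submit the job and block until it finishes; 'true' streams progress
        // to the client, which makes it obvious whether the remote JobTracker
        // (rather than the LocalJobRunner) actually picked the job up.
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(final String[] args) throws Exception {
        // ToolRunner parses generic options (-fs, -jt, -D key=value) into the
        // Configuration before run() sees the remaining arguments.
        System.exit(ToolRunner.run(new OfflineDataTool(), args));
    }
}

With that entry point, the remote cluster can also be selected on the command line instead of being hardcoded, e.g. (jar name is hypothetical):

hadoop jar offline-data.jar OfflineDataTool -fs hdfs://remotehost:9000 -jt remotehost:9001 in out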
