From: Harsh J <harsh@cloudera.com>
Date: Fri, 15 Feb 2013 06:39:38 +0530
Subject: Re: Java submit job to remote server
To: user@hadoop.apache.org

Hi Alex,

There can be two reasons here: (a) your client libraries and the server
libraries have a version mismatch that includes incompatible RPC protocol
changes, making communication impossible, or (b) the port you are
connecting to in your app is not actually the JobTracker port.

For (a), fixing the dependencies in the client runtime/project to match
the version of Hadoop deployed on the server usually resolves it.

For (b), inspecting the server's core-site.xml (for the fs.default.name
port, which is the NameNode's port) and mapred-site.xml (for the
mapred.job.tracker port, which is the JobTracker's port) will show you
what the deployment looks like; then fix the port configuration in your
code so it connects to the right one.

Does either of these help?
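For example, a minimal client-side sketch for a 1.x-style deployment (the
hostname "remotehost" and both ports below are placeholders; substitute
whatever your server's config files actually contain):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class RemoteSubmitSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // NameNode address: must match fs.default.name in the
            // server's core-site.xml.
            conf.set("fs.default.name", "hdfs://remotehost:9000");
            // JobTracker address: must match mapred.job.tracker in the
            // server's mapred-site.xml.
            conf.set("mapred.job.tracker", "remotehost:9001");
            Job job = new Job(conf);
            // ... set mapper, input/output formats and paths, then submit.
        }
    }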
On Fri, Feb 15, 2013 at 6:22 AM, Alex Thieme wrote:
> Any thoughts on why my connection to the hadoop server fails? Any help
> provided would be greatly appreciated.
>
> Alex Thieme
> athieme@athieme.com
> 508-361-2788
>
> On Feb 13, 2013, at 1:41 PM, Alex Thieme wrote:
>
> It appears this is the full extent of the stack trace. Anything prior to
> the org.apache.hadoop calls is from my container, from which hadoop is
> called.
>
> Caused by: java.io.IOException: Call to /127.0.0.1:9001 failed on local
> exception: java.io.EOFException
>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>     at org.apache.hadoop.mapred.$Proxy55.getProtocolVersion(Unknown Source)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>     at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:429)
>     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:423)
>     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:410)
>     at org.apache.hadoop.mapreduce.Job.<init>(Job.java:50)
>     at com.allenabi.sherlock.graph.OfflineDataTool.run(OfflineDataTool.java:25)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>     at com.allenabi.sherlock.graph.OfflineDataComponent.submitJob(OfflineDataComponent.java:67)
>     ... 64 more
> Caused by: java.io.EOFException
>     at java.io.DataInputStream.readInt(DataInputStream.java:375)
>     at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>
> Alex Thieme
> athieme@athieme.com
> 508-361-2788
>
> On Feb 12, 2013, at 8:16 PM, Hemanth Yamijala wrote:
>
> Can you please include the complete stack trace and not just the root?
> Also, have you set fs.default.name to an HDFS location like
> hdfs://localhost:9000?
>
> Thanks
> Hemanth
>
> On Wednesday, February 13, 2013, Alex Thieme wrote:
>>
>> Thanks for the prompt reply, and I'm sorry I forgot to include the
>> exception. My bad. I've included it below. There certainly appears to be
>> a server running on localhost:9001; at least, I was able to telnet to
>> that address. While in development, I'm treating the server on localhost
>> as the remote server. Moving to production, there'd obviously be a
>> different remote server address configured.
>>
>> Root Exception stack trace:
>> java.io.EOFException
>>     at java.io.DataInputStream.readInt(DataInputStream.java:375)
>>     at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>>     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>>     + 3 more (set debug level logging or '-Dmule.verbose.exceptions=true'
>>     for everything)
>>
>> On Feb 12, 2013, at 4:22 PM, Nitin Pawar wrote:
>>
>> conf.set("mapred.job.tracker", "localhost:9001");
>>
>> This means that your jobtracker is on port 9001 on localhost. If you
>> change it to the remote host, and that's the port it's running on, then
>> it should work as expected.
>>
>> What's the exception you are getting?
>>
>> On Wed, Feb 13, 2013 at 2:41 AM, Alex Thieme wrote:
>>
>> I apologize for asking what seems to be such a basic question, but I
>> could use some help with submitting a job to a remote server.
>>
>> I have downloaded and installed hadoop locally in pseudo-distributed
>> mode. I have written some Java code to submit a job.
>>
>> Here's the org.apache.hadoop.util.Tool and
>> org.apache.hadoop.mapreduce.Mapper I've written.
>>
>> If I enable the conf.set("mapred.job.tracker", "localhost:9001") line,
>> then I get the exception included below.
>>
>> If that line is disabled, then the job is completed. However, in
>> reviewing the hadoop server administration page
>> (http://localhost:50030/jobtracker.jsp) I don't see the job as processed
>> by the server. Instead, I wonder if my Java code is simply running the
>> necessary mapper Java code, bypassing the locally installed server.
>>
>> Thanks in advance.
>>
>> Alex
>>
>> public class OfflineDataTool extends Configured implements Tool {
>>
>>     public int run(final String[] args) throws Exception {
>>         final Configuration conf = getConf();
>>         //conf.set("mapred.job.tracker", "localhost:9001");
>>
>>         final Job job = new Job(conf);
>>         job.setJarByClass(getClass());
>>         job.setJobName(getClass().getName());
>>
>>         job.setMapperClass(OfflineDataMapper.class);
>>
>>         job.setInputFormatClass(TextInputFormat.class);
>>
>>         job.setMapOutputKeyClass(Text.class);
>>         job.setMapOutputValueClass(Text.class);
>>
>>         job.setOutputKeyClass(Text.class);
>>         job.setOutputValueClass(Text.class);
>>
>>         FileInputFormat.addInputPath(job, new org.apache.hadoop.fs.Path(args[0]));
>>
>>         final org.apache.hadoop.fs.Path output = new org.a

--
Harsh J
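The quoted code above is cut off in the archive. A minimal sketch of how
such a run() method typically concludes under the standard Tool pattern
(the use of args[1] as the output path is an assumption, not part of the
original message):

    // Hypothetical completion of the truncated run() method; assumes
    // args[1] carries the job's output path.
    final org.apache.hadoop.fs.Path output =
            new org.apache.hadoop.fs.Path(args[1]);
    org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
            .setOutputPath(job, output);
    // Block until the job finishes; nonzero exit signals failure.
    return job.waitForCompletion(true) ? 0 : 1;

On Alex's question about the job not appearing in the JobTracker UI: with
mapred.job.tracker unset (or set to "local"), the client runs the whole
job in-process via LocalJobRunner instead of submitting it to the server,
which matches the behavior he describes.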