Date: Sat, 31 Aug 2013 08:38:51 -0500
Subject: Re: InvalidProtocolBufferException while submitting crunch job to cluster
From: Narlin M <hpnole@gmail.com>
To: user@hadoop.apache.org

The <server_address> that was mentioned in my original post is not pointing
to bdatadev. I should have mentioned this in my original post, sorry I
missed that.

On 8/31/13 8:32 AM, "Narlin M" wrote:

>I would, but bdatadev is not one of my servers; it seems like a random
>host name. I can't figure out how or where this name got generated, and
>that's what's puzzling me.
>
>On 8/31/13 5:43 AM, "Shekhar Sharma" wrote:
>
>>: java.net.UnknownHostException: bdatadev
>>
>>Edit your /etc/hosts file.
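>>For example, add a line mapping the unresolved name to the node's real
>>IP address (the address below is only a placeholder):
>>
>>192.168.1.100    bdatadev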
>>
>>Regards,
>>Som Shekhar Sharma
>>+91-8197243810
>>
>>On Sat, Aug 31, 2013 at 2:05 AM, Narlin M wrote:
>>> Looks like I was pointing to incorrect ports. After correcting the
>>> port numbers,
>>>
>>> conf.set("fs.defaultFS", "hdfs://<server_address>:8020");
>>> conf.set("mapred.job.tracker", "<server_address>:8021");
>>>
>>> I am now getting the following exception:
>>>
>>> 2880 [Thread-15] INFO
>>> org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob -
>>> java.lang.IllegalArgumentException: java.net.UnknownHostException: bdatadev
>>> at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
>>> at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:164)
>>> at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:129)
>>> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:389)
>>> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:356)
>>> at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:124)
>>> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2218)
>>> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:80)
>>> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2252)
>>> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2234)
>>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:300)
>>> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
>>> at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:103)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:902)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:896)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:396)
>>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
>>> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:896)
>>> at org.apache.hadoop.mapreduce.Job.submit(Job.java:531)
>>> at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob.submit(CrunchControlledJob.java:305)
>>> at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.startReadyJobs(CrunchJobControl.java:180)
>>> at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.pollJobStatusAndStartNewOnes(CrunchJobControl.java:209)
>>> at org.apache.crunch.impl.mr.exec.MRExecutor.monitorLoop(MRExecutor.java:100)
>>> at org.apache.crunch.impl.mr.exec.MRExecutor.access$000(MRExecutor.java:51)
>>> at org.apache.crunch.impl.mr.exec.MRExecutor$1.run(MRExecutor.java:75)
>>> at java.lang.Thread.run(Thread.java:680)
>>> Caused by: java.net.UnknownHostException: bdatadev
>>> ... 27 more
>>>
>>> However, nowhere in my code is a host named "bdatadev" mentioned, and I
>>> cannot ping this host.
>>>
>>> Thanks for the help.
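>>>
>>> P.S. A quick standalone way to see what the client JVM actually
>>> resolves (just an illustrative sketch; Hadoop goes through the JVM
>>> resolver, not ping):
>>>
>>> import java.net.InetAddress;
>>>
>>> public class ResolveCheck {
>>>     public static void main(String[] args) throws Exception {
>>>         // Throws java.net.UnknownHostException -- the same error the
>>>         // job submission hits -- if "bdatadev" has no entry in DNS
>>>         // or /etc/hosts; otherwise prints the resolved address.
>>>         System.out.println(InetAddress.getByName("bdatadev"));
>>>     }
>>> }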
>>>
>>>
>>> On Fri, Aug 30, 2013 at 3:04 PM, Narlin M wrote:
>>>>
>>>> I am getting the following exception while trying to submit a crunch
>>>> pipeline job to a remote hadoop cluster:
>>>>
>>>> Exception in thread "main" java.lang.RuntimeException: Cannot create
>>>> job output directory /tmp/crunch-324987940
>>>> at org.apache.crunch.impl.mr.MRPipeline.createTempDirectory(MRPipeline.java:344)
>>>> at org.apache.crunch.impl.mr.MRPipeline.<init>(MRPipeline.java:125)
>>>> at test.CrunchTest.setup(CrunchTest.java:98)
>>>> at test.CrunchTest.main(CrunchTest.java:367)
>>>> Caused by: java.io.IOException: Failed on local exception:
>>>> com.google.protobuf.InvalidProtocolBufferException: Protocol message
>>>> end-group tag did not match expected tag.; Host Details : local host is:
>>>> "NARLIN/127.0.0.1"; destination host is: "<server_address>":50070;
>>>> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:759)
>>>> at org.apache.hadoop.ipc.Client.call(Client.java:1164)
>>>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>>> at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>> at java.lang.reflect.Method.invoke(Method.java:597)
>>>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>>> at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
>>>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:425)
>>>> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1943)
>>>> at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:523)
>>>> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1799)
>>>> at org.apache.crunch.impl.mr.MRPipeline.createTempDirectory(MRPipeline.java:342)
>>>> ... 3 more
>>>> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol
>>>> message end-group tag did not match expected tag.
>>>> at com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:73)
>>>> at com.google.protobuf.CodedInputStream.checkLastTagWas(CodedInputStream.java:124)
>>>> at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:213)
>>>> at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:746)
>>>> at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:238)
>>>> at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:282)
>>>> at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:760)
>>>> at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:288)
>>>> at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:752)
>>>> at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcPayloadHeaderProtos.java:985)
>>>> at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:882)
>>>> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:813)
>>>> 0 [Thread-3] WARN org.apache.hadoop.util.ShutdownHookManager -
>>>> ShutdownHook 'ClientFinalizer' failed, java.lang.NoSuchMethodError:
>>>> com.google.common.collect.LinkedListMultimap.values()Ljava/util/List;
>>>> java.lang.NoSuchMethodError:
>>>> com.google.common.collect.LinkedListMultimap.values()Ljava/util/List;
>>>> at org.apache.hadoop.hdfs.SocketCache.clear(SocketCache.java:135)
>>>> at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:672)
>>>> at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:539)
>>>> at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2308)
>>>> at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2324)
>>>> at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
>>>>
>>>> A Google search on this error yielded solutions that asked me to
>>>> confirm that the /etc/hosts file contains an entry for NARLIN, which
>>>> it does in my case.
>>>>
>>>> Here's the code that I am using to set up the MRPipeline:
>>>>
>>>> Configuration conf = HBaseConfiguration.create();
>>>>
>>>> conf.set("fs.defaultFS", "hdfs://<server_address>:50070");
>>>> conf.set("mapred.job.tracker", "<server_address>:50030");
>>>>
>>>> System.out.println("Hadoop configuration created.");
>>>> System.out.println("Initializing crunch pipeline ...");
>>>>
>>>> conf.set("mapred.jar", "<path_to_jar>");
>>>>
>>>> pipeline = new MRPipeline(getClass(), "crunchjobtest", conf);
>>>>
>>>> Has anyone faced this issue before and knows how to resolve it, or can
>>>> point out if I am missing anything?
>>>>
>>>> Thanks for the help.
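
To recap for anyone who finds this thread later: 50070 and 50030 are the
namenode and jobtracker web UI ports, which is why the client failed with
the protobuf "end-group tag did not match expected tag" error -- it was
speaking Hadoop RPC to an HTTP endpoint. The configuration should point at
the RPC ports instead. A sketch of the corrected setup (<server_address>
is a placeholder, and 8020/8021 are only the usual defaults -- check your
cluster's actual settings):

Configuration conf = HBaseConfiguration.create();
// Namenode RPC port, not the 50070 web UI port.
conf.set("fs.defaultFS", "hdfs://<server_address>:8020");
// Jobtracker RPC port, not the 50030 web UI port.
conf.set("mapred.job.tracker", "<server_address>:8021");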