Subject: Re: Exception while using HBase trunk with hadoop - 2.0.3
From: Ted Yu <yuzhihong@gmail.com>
To: dev@hbase.apache.org
Date: Thu, 21 Feb 2013 07:42:57 -0800

The exception came from the hadoop layer, while waiting to get out of safe
mode. Here is the call:

    } catch (Exception e) {
      if (e instanceof IOException) throw (IOException) e;
      // Check whether dfs is in safe mode.
      inSafeMode = dfs.setSafeMode(
          org.apache.hadoop.hdfs.protocol.FSConstants.SafeModeAction.SAFEMODE_GET);

I wish we had logged the exception prior to that call. It looks like
setSafeMode() with a boolean parameter is not supported.

Cheers

On Thu, Feb 21, 2013 at 7:24 AM, ramkrishna vasudevan <
ramkrishna.s.vasudevan@gmail.com> wrote:

> Hi Devs
>
> I tried to run the current HBase trunk snapshot with Hadoop 2.0.3-alpha.
>
> I got the following exception:
>
> java.io.IOException: Failed on local exception:
> com.google.protobuf.InvalidProtocolBufferException: Message missing
> required fields: callId, status; Host Details : local host is:
> "ram/10.239.47.144"; destination host is: "localhost":9000;
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:760)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1168)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>   at $Proxy10.setSafeMode(Unknown Source)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>   at $Proxy10.setSafeMode(Unknown Source)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:514)
>   at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:1896)
>   at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:660)
>   at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:261)
>   at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:650)
>   at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:389)
>   at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:147)
>   at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:131)
>   at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:654)
>   at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:476)
>   at java.lang.Thread.run(Thread.java:662)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Message
> missing required fields: callId, status
>   at com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:81)
>   at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto$Builder.buildParsed(RpcPayloadHeaderProtos.java:1094)
>   at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto$Builder.access$1300(RpcPayloadHeaderProtos.java:1028)
>   at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcPayloadHeaderProtos.java:986)
>   at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:886)
>   at org.apache.hadoop.ipc.Client$Connection.run(Client.java:817)
>
> 2013-02-20 20:44:01,928 INFO org.apache.hadoop.hbase.master.HMaster:
> Aborting
>
> I checked whether something similar had been raised on the dev list, but
> could not find anything. When I tried with hadoop-1.0.4, however, it
> worked fine.
>
> Did anyone face this problem?
>
> Regards
> Ram
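The fallback pattern Ted quotes (rethrow IOExceptions, otherwise probe safe mode), with the logging he wishes had been there, can be sketched in isolation. This is a minimal, self-contained mock: `SafeModeProbe` and `runWithSafeModeCheck` are hypothetical stand-ins, not the real `FSUtils` or `DistributedFileSystem` API.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class SafeModeCheck {
    /** Hypothetical stand-in for dfs.setSafeMode(SAFEMODE_GET). */
    interface SafeModeProbe {
        boolean inSafeMode() throws IOException;
    }

    static boolean runWithSafeModeCheck(Callable<Boolean> op, SafeModeProbe probe)
            throws IOException {
        try {
            return op.call();
        } catch (Exception e) {
            if (e instanceof IOException) throw (IOException) e;
            // Log the original failure *before* probing, so it is not lost
            // if the probe itself throws (as happened in this thread).
            System.err.println("hdfs call failed, probing safe mode: " + e);
            return probe.inSafeMode();
        }
    }

    public static void main(String[] args) throws IOException {
        boolean inSafe = runWithSafeModeCheck(
            () -> { throw new RuntimeException("simulated RPC failure"); },
            () -> true);
        System.out.println("inSafeMode=" + inSafe);
    }
}
```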
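The "Message missing required fields: callId, status" error is the classic symptom of an RPC wire-format mismatch: a client built against hadoop-1 jars cannot parse the protobuf response header a 2.0.3-alpha NameNode sends. A likely remedy, assuming the build profile convention HBase trunk used around this time (verify against your checkout's pom.xml), is to rebuild against the hadoop-2 artifacts:

```shell
# Rebuild HBase trunk with the hadoop-2 client libraries so the RPC wire
# format matches the 2.0.3-alpha NameNode. The profile name below is an
# assumption based on the trunk build of that era; check pom.xml.
mvn clean install -DskipTests -Dhadoop.profile=2.0
```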