hbase-user mailing list archives

From Jian Lu <...@local.com>
Subject RE: Problem starting HBase Master for the first time, EOFException
Date Wed, 20 Oct 2010 18:45:49 GMT
Hi Jean, sorry, I should have spotted the difference between 0.21.0 and 0.20.0!

If I want to use HBase 0.89.20100924, should I use Cloudera's CDH3B3?  You also mentioned
a problem with that combination that you have since solved. Are there any steps we should
follow to fix the problem you described at the end of your email?


Thanks!
Jack.


-----Original Message-----
From: jdcryans@gmail.com [mailto:jdcryans@gmail.com] On Behalf Of Jean-Daniel Cryans
Sent: Wednesday, October 20, 2010 11:35 AM
To: user@hbase.apache.org
Subject: Re: Problem starting HBase Master for the first time, EOFException

HBase doesn't work on Hadoop 0.21.x; the Getting Started documentation
specifically says:

"This version of HBase will only run on Hadoop 0.20.x.HBase will lose
data unless it is running on an HDFS that has a durable sync
operation. Currently only the branch-0.20-append branch has this
attribute. No official releases have been made from this branch as of
this writing so you will have to build your own Hadoop from the tip of
this branch (or install Cloudera's CDH3 (as of this writing, it is in
beta); it has the 0.20-append patches needed to add a durable sync).
See CHANGES.txt in branch-0.20.-append to see list of patches
involved."

Also, I solved what I think is the other poster's issue on the IRC
channel: basically, he was using the Apache 0.89.20100924 release with
cloudera's CDH3B3, which seems to be incompatible (although from the
looks of it it's just a matter of replacing the hadoop jar that hbase
is running with).
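
(A minimal sketch of that jar swap, assuming the default tarball
layouts; adjust paths and jar names to whatever your install uses:

  # move aside the hadoop jar that ships with hbase
  mv $HBASE_HOME/lib/hadoop-*.jar /tmp/
  # copy in the core jar from the hadoop your cluster actually runs
  cp $HADOOP_HOME/hadoop-*-core.jar $HBASE_HOME/lib/
  # restart so the master and regionservers pick up the new jar
  $HBASE_HOME/bin/stop-hbase.sh && $HBASE_HOME/bin/start-hbase.sh
)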

J-D

On Wed, Oct 20, 2010 at 11:25 AM, Jian Lu <jlu@local.com> wrote:
> I am having the same problem right now.  ZooKeeper started OK but the HMaster failed.
> Please help!
>
> I have installed hadoop-0.21.0 on a three-node cluster and started the Hadoop cluster
> successfully with the NameNode on caiss01a:54310.  I tried to set up hbase-0.89.20100924
> in fully-distributed mode with hbase.rootdir = hdfs://caiss01a:54310/hbase and
> hbase.cluster.distributed = true.  Then I ran ./bin/start-hbase.sh and got the error below:
>
>
> 2010-10-20 11:21:50,591 INFO org.apache.hadoop.ipc.HBaseServer: Starting SocketReader
> 2010-10-20 11:21:50,612 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000
> 2010-10-20 11:21:51,094 ERROR org.apache.hadoop.hbase.master.HMaster: Failed to start master
> java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMasterCall to caiss01a/172.16.2.224:54310 failed on local exception: java.io.EOFException
>        at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1232)
>        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1338)
>        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1389)
> Caused by: java.lang.reflect.InvocationTargetException
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>        at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1230)
>        ... 2 more
> Caused by: java.io.IOException: Call to caiss01a/172.16.2.224:54310 failed on local exception: java.io.EOFException
>
>
>
> Please help!   Thanks!
>
> Jack.
>
>
>
> -----Original Message-----
> From: Andrew Nguyen [mailto:andrew-lists-hbase@ucsfcti.org]
> Sent: Wednesday, October 20, 2010 8:55 AM
> To: user@hbase.apache.org
> Subject: Re: Problem starting HBase Master for the first time, EOFException
>
> I am having a very similar issue, but I have verified that both the NameNode and HBase
> are pointing to the same port (in this case, the default of 8020).  I am getting the same
> EOFException and have attached it here also:
>
> java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMasterCall to master-local/127.0.0.1:8020 failed on local exception: java.io.EOFException
>        at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1232)
>        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1338)
>        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1389)
> Caused by: java.lang.reflect.InvocationTargetException
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>        at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1230)
>        ... 2 more
> Caused by: java.io.IOException: Call to master-local/127.0.0.1:8020 failed on local exception: java.io.EOFException
>        at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>        at org.apache.hadoop.ipc.Client.call(Client.java:743)
>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>        at $Proxy0.getProtocolVersion(Unknown Source)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:112)
>        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:213)
>        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:176)
>        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:217)
>        ... 7 more
> Caused by: java.io.EOFException
>        at java.io.DataInputStream.readInt(DataInputStream.java:375)
>        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>
> I have previously set up several clusters successfully on Ubuntu, but am now trying to set
> up a pseudo-distributed cluster locally on my MacBook Pro; this is my first attempt on a
> Mac.  I have verified that Hadoop is running, as I can create directories in HDFS (I tried
> creating /hbase just to see if I could, then removed it).  I am not sure what other causes
> of EOFExceptions there are that I can look into.
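>
> (Given the jar-mismatch issue mentioned elsewhere in this thread, one more
> thing worth comparing: the hadoop jar bundled under hbase/lib versus the
> version the local HDFS actually runs.  A sketch, with the paths assumed
> rather than taken from this setup:
>
>   ls $HBASE_HOME/lib/hadoop*.jar    # jar hbase ships with
>   $HADOOP_HOME/bin/hadoop version   # what the running HDFS reports
>
> If the two disagree, the RPC handshake can fail with exactly this kind of
> EOFException.)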
>
> Thanks!
>
> --
> Andrew Nguyen
> andrew@ucsfcti.org
>
> On Jul 6, 2010, at 4:19 AM, Jamie Cockrill wrote:
>
>> Dear hbase-users,
>>
>> I was wondering if you might be able to help. I am trying to set up
>> HBase (and as such, Zookeeper) on Ubuntu 10.04 using the Cloudera
>> Karmic CDH3 distribution. Zookeeper has installed fine; however, when
>> it comes to starting the hbase master, it falls over with the
>> following exception:
>>
>> (stack trace summarised to last bit)
>> Caused by: java.io.EOFException
>>    at java.io.DataInputStream.readInt(DataInputStream.java:375)
>>    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:508)
>>    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>>
>> It's clearly connecting to HDFS, but getting a strange response
>> back. The section of the trace above identifies the hostname that
>> I've specified in the hbase.rootdir property in hbase-site.xml and
>> also the IP address, which it must have looked up in DNS. For
>> information, that is set to:
>> hdfs://master:50071/hbase.
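>>
>> (Worth double-checking: hbase.rootdir has to point at the namenode's
>> RPC port, i.e. whatever fs.default.name says in core-site.xml, not
>> one of the 500xx web UI ports.  A quick look, assuming the usual CDH
>> package layout:
>>
>>   grep -A1 fs.default.name /etc/hadoop/conf/core-site.xml
>>
>> If that URI doesn't match hbase.rootdir on host and port, the RPC
>> client can die with just this sort of EOFException.)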
>>
>> Also, as I'm just evaluating it at this stage, I'm installing
>> Zookeeper and hbase-master on the same machine as the namenode
>> (master). The regionservers will go somewhere else when I get to that
>> stage.
>>
>> My hbase-site.xml was blank (between the configuration tags) and the
>> only things I've added so far are:
>>
>> hbase.cluster.distributed="true"
>> hbase.rootdir="hdfs://master:50071/hbase"
>> hbase.zookeeper.quorum="master"
>>
>> obviously in the <name><value> format of the XML file, without the quotes.
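>>
>> (Expanded, a minimal hbase-site.xml along those lines would look
>> roughly like this, with the values quoted above:
>>
>> <configuration>
>>   <property>
>>     <name>hbase.cluster.distributed</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>hbase.rootdir</name>
>>     <value>hdfs://master:50071/hbase</value>
>>   </property>
>>   <property>
>>     <name>hbase.zookeeper.quorum</name>
>>     <value>master</value>
>>   </property>
>> </configuration>)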
>>
>> I'm at a bit of a loss as to what is going on. I've tried a wget to
>> the namenode dfshealth.jsp page and that works fine (obviously that's
>> http:// rather than hdfs://). Any pointers on where to look?
>>
>> Many thanks,
>>
>> Jamie
>>
>
>
