hbase-dev mailing list archives

From Stack <st...@duboce.net>
Subject Re: Running HBase Junit Testcases on local machine
Date Tue, 27 Jul 2010 05:32:44 GMT
This won't work.  Make your server and client versions align: either make
the server and client both be from TRUNK, or make them both use
http://svn.apache.org/repos/asf/hbase/branches/0.20/ (the 0.20.4 and
0.20.5 releases were cut from the 0.20 branch).

St.Ack
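
As a hedged aside (not part of the original mail): one way to see which HBase version the client code is built from is org.apache.hadoop.hbase.util.VersionInfo; the master prints its own version in its logs at startup, so the two can be compared. The class name below is made up for the example.

import org.apache.hadoop.hbase.util.VersionInfo;

public class ClientVersionCheck {
  public static void main(String[] args) {
    // Version and SVN revision that the hbase jar on the classpath was built from.
    System.out.println("HBase client version: " + VersionInfo.getVersion());
    System.out.println("Built from revision:  " + VersionInfo.getRevision());
  }
}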


On Mon, Jul 26, 2010 at 9:54 PM, Gagandeep Singh
<gagandeep.singh@paxcel.net> wrote:
> Yes, I am running my server on 0.20.4 and the client is using the latest code from svn.
>
> Shall I change my server to 0.20.5?
>
> Thanks,
> Gagan
>
>
>
> On Tue, Jul 27, 2010 at 4:14 AM, Stack <stack@duboce.net> wrote:
>
>> Master is listening for sure at 172.16.5.83:60000?
>>
>> An EOFException (EOFE) often means the client and server are running
>> different versions.  Is that the case here?
>>
>> St.Ack
>>
>>
>> On Mon, Jul 26, 2010 at 12:52 PM, Gagandeep Singh
>> <gagandeep.singh@paxcel.net> wrote:
>> > I have modified the test case and commented out the line
>> > TEST_UTIL.startMiniCluster(3); in the setUpBeforeClass() method of the
>> > TestAdmin class. It picked up my cluster setup from the configuration,
>> > and it seems it connected to my ZooKeeper successfully, but I am having
>> > a problem connecting to my master. It throws the following exception.
>> > Any clues to solve this problem?
>> >
>> > 10/07/27 01:06:05 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=172.16.5.85:2181,172.16.5.84:2181,172.16.5.83:2181 sessionTimeout=60000 watcher=org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper@1de3f2d
>> > 10/07/27 01:06:05 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.5.84:2181
>> > 10/07/27 01:06:11 INFO zookeeper.ClientCnxn: Socket connection established to 172.16.5.84/172.16.5.84:2181, initiating session
>> > 10/07/27 01:06:11 INFO zookeeper.ClientCnxn: Session establishment complete on server 172.16.5.84/172.16.5.84:2181, sessionid = 0x22a00bb1ed8000d, negotiated timeout = 40000
>> > *10/07/27 01:06:50 INFO client.HConnectionManager$TableServers: getMaster attempt 0 of 4 failed; retrying after sleep of 5000
>> > java.io.IOException: Call to /172.16.5.83:60000 failed on local exception: java.io.EOFException*
>> >    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:781)
>> >    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:750)
>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:253)
>> >    at $Proxy0.getProtocolVersion(Unknown Source)
>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:408)
>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:384)
>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:431)
>> >    at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:385)
>> >    at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:78)
>> >    at net.smithmicro.hbase.JHBaseAdmin.initConfig(JHBaseAdmin.java:53)
>> >    at net.smithmicro.hbase.JHBaseAdmin.main(JHBaseAdmin.java:40)
>> >
>> >
>> > Thanks in advance.
>> > Gagan
>> >
>> >
>> >
>> > On Sat, Jul 24, 2010 at 11:17 AM, Stack <stack@duboce.net> wrote:
>> >
>> >> On Fri, Jul 23, 2010 at 10:39 PM, Gagandeep Singh
>> >> <gagandeep.singh@paxcel.net> wrote:
>> >> > Yes, I understand that they don't belong to the same family. Maybe I
>> >> > was trying to be over-smart :)
>> >> >
>> >>
>> >> It's not hard to be smarter than the crew that hangs out here, so I'd
>> >> say don't even bother trying (smile).
>> >>
>> >>
>> >> > @Stack - I looked at the org.apache.hadoop.hdfs.MiniDFSCluster code,
>> >> > and it seems they have hard-coded localhost inside it. I think I need
>> >> > to substitute that code.
>> >> >
>> >>
>> >> I'd say don't mess with it.  It's from Hadoop.  It has other ugly
>> >> hardcodings, like the dir it writes data to.
>> >>
>> >> Instead, write a new test that presumes an extant cluster.  Just make
>> >> sure your cluster is up and running before the test starts and that
>> >> the conf that is on the CLASSPATH when the test starts points at your
>> >> remote cluster.
>> >>
>> >> Good luck.
>> >> St.Ack
>> >>
>> >
>>
>
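
As a hedged sketch of the approach discussed above (not code from the thread): a JUnit test that skips TEST_UTIL.startMiniCluster(3) and instead assumes an extant, already-running cluster whose hbase-site.xml is on the CLASSPATH. The class name ExistingClusterAdminTest and its test method are illustrative only, and the client API shown is the 0.20-era/early-TRUNK HBaseAdmin API.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.assertNotNull;

public class ExistingClusterAdminTest {
  private static HBaseAdmin admin;

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    // TEST_UTIL.startMiniCluster(3);  // skipped: use the real, extant cluster
    // The hbase-site.xml on the CLASSPATH must point at the running cluster
    // (ZooKeeper quorum, master address, etc.).
    admin = new HBaseAdmin(new HBaseConfiguration());
  }

  @Test
  public void canTalkToMaster() throws Exception {
    // listTables() goes through the master, so this is where a client/server
    // version mismatch would surface as the EOFException quoted above.
    assertNotNull(admin.listTables());
  }
}

If the client and server versions align, this connects cleanly; if they don't, the getMaster retries ending in an EOFException, as in the log above, are the typical symptom.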
