hbase-user mailing list archives

From "Jonathan Gray" <jl...@streamy.com>
Subject RE: .META error when I try to insert after truncating table
Date Fri, 12 Sep 2008 21:10:37 GMT
While not officially supported, HBase 0.2.x runs fine on Hadoop 0.18.0.

To make it work, you need to recompile HBase with the Hadoop 0.18 jars in
$HBASE_HOME/lib/ and remove all the 0.17 jars.

Then just make sure all your classpaths point to the 0.18 jars and not the
0.17 ones.

I'm running live on 0.18.0 with 0.2.1 RC2 right now and there are no issues.
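The jar swap described above can be sketched roughly like this; the install paths and exact jar file names here are assumptions and will differ on your machines:

```shell
# Rough sketch of the recompile steps above. The HBASE_HOME/HADOOP_HOME
# paths and the jar names are assumptions -- adjust for your install.
HBASE_HOME=${HBASE_HOME:-/opt/hbase-0.2.1}
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop-0.18.0}

# Drop the bundled Hadoop 0.17 jars and copy in the 0.18 one.
rm -f "$HBASE_HOME"/lib/hadoop-0.17*.jar
cp "$HADOOP_HOME"/hadoop-0.18.0-core.jar "$HBASE_HOME"/lib/

# Rebuild HBase against the new jars (needs Ant on the PATH).
cd "$HBASE_HOME" && ant jar
```

Listing $HBASE_HOME/lib afterwards is a quick way to confirm no stray 0.17 jar is still being picked up.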

-----Original Message-----
From: Krzysztof Szlapinski [mailto:krzysztof.szlapinski@starline.hk] 
Sent: Friday, September 12, 2008 2:03 PM
To: hbase-user@hadoop.apache.org
Subject: Re: .META error when I try to insert after truncating table

Preston Price wrote:
> I still can not get hbase 0.2.0 or 0.2.1 to play nicely with Hadoop 
> 0.18.0
> I did notice in the hbase 0.2.0 docs this line under Requirements:
> Hadoop 0.17.x. This version of HBase will only run on this version of 
> Hadoop.
>
> Using Hadoop 0.17.2.1 I was able to get both 0.2.0 and 0.2.1 up and 
> running.
>
> So I am assuming that Hadoop 0.18.0 is unsupported for the time being?
>
As far as I know, Hadoop 0.18.0 is not supported by HBase 0.2.x.
There are plans to change the versioning scheme to keep it consistent
with the Hadoop versions, so I guess the next HBase release will support
Hadoop 0.18.0 and will carry the same version number (0.18).


> Thanks
>
> -Preston
>
> On Sep 12, 2008, at 12:25 PM, Preston Price wrote:
>
>> I am using the hbase-default.xml that came with the hbase-0.2.0 
>> download.
>> The only config files I replaced are the hbase-env.sh, hbase-site.xml 
>> and regionservers files.
>>
>> I will take a stab at getting the RC up.
>>
>> Thanks
>>
>> -Preston
>> On Sep 12, 2008, at 12:13 PM, Jean-Daniel Cryans wrote:
>>
>>> Preston,
>>>
>>> Have you copied the hbase-default from the new distribution? It is 
>>> needed.
>>> You should also jump right to 0.2.1RC2 (see the thread on the 
>>> mailing list
>>> for the link to the release).
>>>
>>> J-D
>>>
>>> On Fri, Sep 12, 2008 at 2:09 PM, Preston Price <price@strands.com> 
>>> wrote:
>>>
>>>> I took Jean-Daniel Cryans' advice and am now trying to get Hadoop 
>>>> 0.18.0
>>>> and HBase 0.2.0 up and running.
>>>> I copied the configuration from the previous versions of HBase and 
>>>> Hadoop
>>>> we had running, and with a slight modification I got hadoop going.
>>>> I still can't get HBase 0.2.0 going.
>>>> Here is the output from the master log:
>>>>
>>>> Fri Sep 12 12:01:23 MDT 2008 Starting master on atlas
>>>> java version "1.5.0_15"
>>>> Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_15-b04)
>>>> Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_15-b04, mixed mode)
>>>> ulimit -n 1024
>>>> 2008-09-12 12:02:24,157 ERROR org.apache.hadoop.hbase.master.HMaster: Can not start master
>>>> java.lang.reflect.InvocationTargetException
>>>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>>      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>>      at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
>>>>      at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:798)
>>>>      at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:832)
>>>> Caused by: java.net.SocketTimeoutException: timed out waiting for rpc response
>>>>      at org.apache.hadoop.ipc.Client.call(Client.java:559)
>>>>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
>>>>      at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
>>>>      at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:313)
>>>>      at org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:102)
>>>>      at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:178)
>>>>      at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:68)
>>>>      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1280)
>>>>      at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
>>>>      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1291)
>>>>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:203)
>>>>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:108)
>>>>      at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:178)
>>>>      at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:148)
>>>>      ... 6 more
>>>>
>>>> It looks like it can't connect to the Hadoop DFS I have running, 
>>>> but I've
>>>> confirmed that Hadoop is running by manipulating files on the DFS.
>>>>
>>>> Here is the hbase-site.xml I am using:
>>>> <configuration>
>>>>
>>>> <property>
>>>>  <name>hbase.master</name>
>>>>  <value>atlas:60000</value>
>>>>  <description>The host and port that the HBase master runs at.
>>>>  </description>
>>>> </property>
>>>>
>>>> <property>
>>>>  <name>hbase.rootdir</name>
>>>>  <value>hdfs://atlas:54310/hbase</value>
>>>>  <description>The directory shared by region servers.
>>>>  </description>
>>>> </property>
>>>>
>>>> </configuration>
>>>>
>>>> Any ideas?
>>>>
>>>> Thanks!
>>>>
>>>> -Preston
>>>>
>>>>
>>>> On Sep 12, 2008, at 11:10 AM, Jean-Daniel Cryans wrote:
>>>>
>>>>> Preston,
>>>>>
>>>>> You should definitively upgrade to HBase 0.2.
>>>>>
>>>>> J-D
>>>>>
>>>>> On Fri, Sep 12, 2008 at 1:06 PM, Preston Price <price@strands.com> wrote:
>>>>>
>>>>>> I see this error every once in a while in our client logs:
>>>>>>
>>>>>> java.io.IOException: HRegionInfo was null or empty in .META.
>>>>>>    at org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:429)
>>>>>>    at org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:350)
>>>>>>    at org.apache.hadoop.hbase.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:318)
>>>>>>    at org.apache.hadoop.hbase.HTable.getRegionLocation(HTable.java:114)
>>>>>>    at org.apache.hadoop.hbase.HTable$ServerCallable.instantiateServer(HTable.java:1009)
>>>>>>    at org.apache.hadoop.hbase.HTable.getRegionServerWithRetries(HTable.java:1024)
>>>>>>    at org.apache.hadoop.hbase.HTable.commit(HTable.java:763)
>>>>>>    at org.apache.hadoop.hbase.HTable.commit(HTable.java:744)
>>>>>>
>>>>>> I usually only see it after truncating our table like this:
>>>>>> disable tableName;
>>>>>> truncate table tableName;
>>>>>> enable tableName;
>>>>>>
>>>>>> In our process that does the inserts we see it hang for a while on the
>>>>>> first insert until it gets this error, and then it starts inserting
>>>>>> records with no problem.
>>>>>>
>>>>>> Is this something I should be concerned with?
>>>>>> I am not familiar enough with what goes on 'under the hood' to 
>>>>>> know what
>>>>>> this error is trying to tell me.
>>>>>>
>>>>>> Hadoop version: 0.16.2
>>>>>> HBase version: 0.1.3
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> -Preston
>>>>>>
>>>>>>
>>>>
>>
>
>

