hbase-user mailing list archives

From "Billy Pearson" <sa...@pearsonwholesale.com>
Subject Re: [ANN] hbase-0.2.0 Release Candidate 1
Date Thu, 24 Jul 2008 01:33:19 GMT
The master is online; I can put one column and scan it from all servers.
A restart did not help the MR job.
I can not find anything in the logs to tell me there is an error, except
the MR error log for the tasks.
The MR jobs fail on all servers, including the master (I run a tasktracker
on the master).
I updated to trunk just to make sure: same problem.

Maybe an API change is tripping me up and Eclipse is not picking up the
change with the new jar for 0.2.0.

I have a map class with the code below (shortened):

 public HBaseConfiguration c = new HBaseConfiguration();

 public static HTable getTable(HBaseConfiguration c) throws IOException {
   // point the client at the master, then open the table
   c.set("hbase.master", CompSpySet.hbasemaster);
   return new HTable(c, "webdata");
 }

map {

the map splits the record into row, column, timestamp, and data,
then builds a BatchUpdate called update,
then calls the code above:

HTable table = getTable(c);
table.commit(update);
}
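One thing that stands out in the snippet above is that getTable() constructs a new HTable on every map() call, which at minimum repeats connection and region-lookup setup for each record. A sketch only, not the original job's code, of opening the table once per task instead (the class name MyMap is hypothetical, and the 0.2-era mapred API is assumed):

```java
// Hypothetical sketch: open the table once per task in configure() and
// reuse it across map() calls. CompSpySet.hbasemaster and "webdata" mirror
// the snippet above; everything else is illustrative.
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;

public class MyMap extends MapReduceBase {
  private HTable table;

  public void configure(JobConf job) {
    try {
      HBaseConfiguration c = new HBaseConfiguration();
      c.set("hbase.master", CompSpySet.hbasemaster);
      table = new HTable(c, "webdata");   // opened once per task
    } catch (IOException e) {
      throw new RuntimeException("could not open table", e);
    }
  }

  // map() would then call table.commit(update) on the shared instance.
}
```

This would not fix a master that genuinely cannot be reached, but it takes per-record connection setup out of the picture.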

Not sure what is going on with this import class; it was working about 2
weeks ago and I have not had any luck in the 4-5 days since.
I updated to the latest trunk and tried RC 1, but Eclipse is not showing
any errors or warnings.
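As a footnote on the timestamps: in the retry log quoted below, attempts are logged roughly 70 s apart (17:30:40, 17:31:50, 17:33:00, ...) even though the sleep is only 10 s. That gap would be consistent with a 60 s RPC timeout (an assumption; the actual timeout setting is not shown in this thread) plus the logged 10 s sleep:

```java
// Sanity-check the spacing of the retry attempts in the quoted log.
// 17:30:40 -> 17:31:50 -> 17:33:00 are each 70 s apart.
public class RetryTiming {
    public static void main(String[] args) {
        long rpcTimeoutMs = 60_000; // assumed 60 s RPC timeout (not shown in thread)
        long sleepMs = 10_000;      // "Retrying after sleep of 10000" from the log
        System.out.println((rpcTimeoutMs + sleepMs) / 1000); // seconds between attempts
    }
}
```

If that arithmetic holds, the client is waiting out a full RPC timeout each attempt against a master that appears to be up, which fits stack's "timeout though master seems to be listening fine" observation above.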

Billy

"stack" <stack@duboce.net> wrote in message 
news:4887BA79.1080407@duboce.net...
> The below happens for every task?  None can see the master?  You verify 
> the master is running via shell or something?  There could be something 
> going on here.  I heard a second report of such a phenomenon where there is 
> a timeout though master seems to be listening fine.  Does a restart of MR 
> and hbase clusters change the story?  Any more clues that you can figure 
> Billy?
>
> I ain't sure how HBASE-770 could have fixed your problem.  It did touch 
> the area that was throwing the exception you reported earlier.
>
> St.Ack
>
>
> Billy Pearson wrote:
>> I got the errors after untarring the RC1 for 0.2.0 and running a MR job on 
>> it.
>>
>> I downloaded trunk now and it seems the errors went away.
>> I think that HBASE-770 fixed my first errors,
>> but now I am getting a different error:
>>
>> 2008-07-23 17:30:40,300 INFO 
>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers: Attempt 0 
>> of 5 failed with <java.net.SocketTimeoutException: timed out waiting for 
>> rpc response>. Retrying after sleep of 10000
>> 2008-07-23 17:31:50,307 INFO 
>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers: Attempt 1 
>> of 5 failed with <java.net.SocketTimeoutException: timed out waiting for 
>> rpc response>. Retrying after sleep of 10000
>> 2008-07-23 17:33:00,312 INFO 
>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers: Attempt 2 
>> of 5 failed with <java.net.SocketTimeoutException: timed out waiting for 
>> rpc response>. Retrying after sleep of 10000
>> 2008-07-23 17:34:10,318 INFO 
>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers: Attempt 3 
>> of 5 failed with <java.net.SocketTimeoutException: timed out waiting for 
>> rpc response>. Retrying after sleep of 10000
>> 2008-07-23 17:35:20,391 WARN org.apache.hadoop.mapred.TaskTracker: Error 
>> running child
>> org.apache.hadoop.hbase.MasterNotRunningException: xx.xx.xx.xx:60000
>> at 
>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:219)
>> at 
>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:431)
>> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
>> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:109)
>> at 
>> com.compspy.mapred.RecordImport$MapClass.getTable(RecordImport.java:50)
>> at com.compspy.mapred.RecordImport$MapClass.map(RecordImport.java:76)
>> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:47)
>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:219)
>> at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2124)
>>
>> I masked the IP above (it was correct), and the port is correct. I 
>> checked, and the master is alive and well when I get these errors while 
>> running a MR job to import records.
>>
>> Billy
>>
>>
>>
>> "Jean-Daniel Cryans" <jdcryans@gmail.com> 
>> wrote in message 
>> news:31a243e70807231413n21cfda44y2728705566d84c31@mail.gmail.com...
>>> Billy,
>>>
>>> I looked at the code where the exceptions were thrown and something is
>>> weird. For the NPE, line 291 in HbaseObjectWritable looks like:
>>>
>>> public static Object readObject(DataInput in,
>>>     HbaseObjectWritable objectWritable, Configuration conf)
>>>   throws IOException {
>>>   Class<?> declaredClass = CODE_TO_CLASS.get(in.readByte());
>>>   Object instance;
>>>   if (declaredClass.isPrimitive()) {            // primitive types  <-- line 291
>>>     if (declaredClass == Boolean.TYPE) {        // boolean
>>>       instance = Boolean.valueOf(in.readBoolean());
>>>
>>> The chance that declaredClass would be null is low. Also, regarding the
>>> other exception, line 821 of HCM reads:
>>>
>>> server.getRegionInfo(HRegionInfo.ROOT_REGIONINFO.getRegionName());
>>>   if (LOG.isDebugEnabled()) {                   // <-- line 821
>>>     LOG.debug("Found ROOT " + HRegionInfo.ROOT_REGIONINFO);
>>>   }
>>>
>>> So the " at $Proxy3.getRegionInfo(Unknown Source)" frame that comes
>>> right after it in your trace does not line up with what is called here.
>>> So I'm wondering: what version of HBase
>>> exactly are you running?
>>>
>>> Thx for looking at this and thx for testing!
>>>
>>> J-D
>>>
>>> On Tue, Jul 22, 2008 at 10:10 PM, Billy Pearson 
>>> <sales@pearsonwholesale.com>
>>> wrote:
>>>
>>>> I get this when trying to run a mr job on the Release Candidate
>>>>
>>>> 2008-07-22 21:05:00,237 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>>>> Initializing JVM Metrics with processName=MAP, sessionId=
>>>> 2008-07-22 21:05:00,254 WARN org.apache.hadoop.fs.FileSystem: "
>>>> 64.69.33.145:9000" is a deprecated filesystem name. Use "hdfs://
>>>> 64.69.33.145:9000/" instead.
>>>> 2008-07-22 21:05:00,378 INFO org.apache.hadoop.mapred.MapTask:
>>>> numReduceTasks: 0
>>>> 2008-07-22 21:05:00,379 WARN org.apache.hadoop.fs.FileSystem: "
>>>> 64.69.33.145:9000" is a deprecated filesystem name. Use "hdfs://
>>>> 64.69.33.145:9000/" instead.
>>>> 2008-07-22 21:05:00,797 INFO org.apache.hadoop.ipc.Client:
>>>> java.lang.NullPointerException
>>>> at
>>>> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:291)
>>>> at
>>>> org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:166)
>>>> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:306)
>>>>
>>>> 2008-07-22 21:06:00,804 WARN org.apache.hadoop.mapred.TaskTracker: 
>>>> Error
>>>> running child
>>>> java.lang.reflect.UndeclaredThrowableException
>>>> at $Proxy3.getRegionInfo(Unknown Source)
>>>> at
>>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:821)
>>>> at
>>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:458)
>>>> at
>>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:440)
>>>> at
>>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:575)
>>>> at
>>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:468)
>>>> at
>>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:432)
>>>> at
>>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:511)
>>>> at
>>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:472)
>>>> at
>>>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:432)
>>>> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:124)
>>>> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:109)
>>>> at 
>>>> com.compspy.mapred.RecordImport$MapClass.getTable(RecordImport.java:50)
>>>> at com.compspy.mapred.RecordImport$MapClass.map(RecordImport.java:76)
>>>> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:47)
>>>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:219)
>>>> at 
>>>> org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2124)
>>>> Caused by: java.net.SocketTimeoutException: timed out waiting for rpc
>>>> response
>>>> at org.apache.hadoop.ipc.Client.call(Client.java:559)
>>>> at 
>>>> org.apache.hadoop.hbase.ipc.HbaseRPC$Invoker.invoke(HbaseRPC.java:213)
>>>> ... 17 more
>>>>
>>>>
>>>> Not sure what the problem is here; it is the same job I have been
>>>> running for over a month, and now today I get this from RC 1 and trunk
>>>> on clean installs of hadoop 0.17.1 and hbase.
>>>>
>>>> Billy
>>>>
>>>> "stack" <stack@duboce.net> wrote in 
>>>> message
>>>> news:48865E15.3010507@duboce.net...
>>>>
>>>>> The first 0.2.0 release candidate is available for download:
>>>>>
>>>>> http://people.apache.org/~stack/hbase-0.2.0-candidate-1/
>>>>>
>>>>> Please take this release candidate for a spin. Check the
>>>>> documentation, verify that unit tests all complete on your platform,
>>>>> etc.
>>>>>
>>>>> Should we release this candidate as hbase 0.2.0?  Vote yes or no 
>>>>> before
>>>>> Friday, July 25th.
>>>>>
>>>>> Release 0.2.0 has over 240 issues resolved [1] since the branch for 0.1
>>>>> hbase was made.  Be warned that hbase 0.2.0 is not backward compatible
>>>>> with the hbase 0.1 API.  See [2] Izaak Rubins' notes on the high-level
>>>>> API differences between 0.1 and 0.2.  For notes on how to migrate your
>>>>> 0.1 era hbase data to 0.2, see Izaak's migration guide [3].
>>>>>
>>>>> Yours,
>>>>> The HBase Team
>>>>>
>>>>> 1.
>>>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12312955&styleName=Html&projectId=12310753&Create=Create
>>>>> 2. http://wiki.apache.org/hadoop/Hbase/Plan-0.2/APIChanges
>>>>> 3. http://wiki.apache.org/hadoop/Hbase/HowToMigrate
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>>
>
> 


