hbase-user mailing list archives

From Christian Schäfer <syrious3...@yahoo.de>
Subject Re: HBase upgrade
Date Fri, 22 Jun 2012 19:01:25 GMT
Sorry, wrong mailing list.

Moved to scm-users.



----- Original Message -----
From: Christian Schäfer <syrious3000@yahoo.de>
To: "user@hbase.apache.org" <user@hbase.apache.org>
CC: 
Sent: Friday, 22 June 2012, 20:56
Subject: Re: HBase upgrade



Hi,


Have you checked the logs of the HDFS namenode?
"Could not obtain the last block locations" sounds like a problem on the namenode
(which manages HDFS metadata such as block locations).
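
If you want to check that from the client side, here is a minimal sketch (an illustration, not something from this thread; the class name is made up, and the namenode URI and the /hbase/.logs layout are assumptions taken from the hdfs:// paths in the error messages below). It simply tries to open every WAL file the same way the split worker does, so any file whose last block the namenode cannot resolve shows up directly:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckWalFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumption: namenode URI taken from the hdfs:// paths in the error messages below.
        conf.set("fs.defaultFS", "hdfs://hadooplog1.****:8020");

        FileSystem fs = FileSystem.get(conf);
        // /hbase/.logs holds one directory per region server, each containing its write-ahead logs.
        for (FileStatus serverDir : fs.listStatus(new Path("/hbase/.logs"))) {
            for (FileStatus wal : fs.listStatus(serverDir.getPath())) {
                try {
                    // Opening the file forces the client to fetch all block locations from the
                    // namenode, the same step that throws "Could not obtain the last block locations".
                    fs.open(wal.getPath()).close();
                    System.out.println("OK      " + wal.getPath());
                } catch (Exception e) {
                    System.out.println("BROKEN  " + wal.getPath() + " : " + e.getMessage());
                }
            }
        }
        fs.close();
    }
}

A WAL that was left open (e.g. by a region server that went down during the upgrade) and whose last block the namenode cannot account for should fail on open() with exactly that message.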

By the way, a green/healthy HDFS status in CM doesn't necessarily mean (as I have experienced) that everything is fine.

Regards
Chris


________________________________
From: Simon <heeg.simon@googlemail.com>
To: scm-users@cloudera.org
Sent: Friday, 22 June 2012, 17:45
Subject: HBase upgrade


Hello,

I just updated my cluster; now everything works except HBase. I have a table called map, which
is listed by the HBase shell's "list" command, but if I try to do anything with it I always get errors
like:

ERROR: org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException:
Region is not online: -ROOT-,,0
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:2862)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1768)
        at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)

Moreover, the web interface doesn't list any tables except "-ROOT-".

CM shows this error in the master log:

WARN    org.apache.hadoop.hbase.master.MasterFileSystem     

Failed splitting of [hadooplog1.****,60020,1340364527665, hadooplog2.****,60020,1340364529311,
hadooplog3.****,60020,1340364526614, hadooplog4.****,60020,1340364529713]
java.io.IOException: error or interrupt while splitting logs in [hdfs://hadooplog1.****:8020/hbase/.logs/hadooplog1.****,60020,1340364527665-splitting,
hdfs://hadooplog1.****:8020/hbase/.logs/hadooplog2.****,60020,1340364529311-splitting, hdfs://hadooplog1.****:8020/hbase/.logs/hadooplog3.****,60020,1340364526614-splitting,
hdfs://hadooplog1.****:8020/hbase/.logs/hadooplog4.****,60020,1340364529713-splitting]
Task = installed = 1 done = 0 error = 1
    at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:269)
    at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:277)
    at org.apache.hadoop.hbase.master.MasterFileSystem.splitLogAfterStartup(MasterFileSystem.java:219)
    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:504)
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:338)
    at java.lang.Thread.run(Thread.java:662)

and


WARN  org.apache.hadoop.conf.Configuration  fs.default.name is deprecated. Instead, use fs.defaultFS

The log of one region server says:



org.apache.hadoop.hbase.regionserver.SplitLogWorker log splitting of hdfs://***:8020/hbase/.logs/****,60020,1340364529713-splitting/****%2C60020%2C1340364529713.1340366765952
failed, returning error
java.io.IOException: Could not obtain the last block locations.
    at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:138)
    at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:112)
    at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:928)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:212)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:75)
    at org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1768)
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.openFile(SequenceFileLogReader.java:66)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1688)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1709)
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:58)
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:166)
    at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:659)
    at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:846)
    at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:759)
    at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:384)
    at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:351)
    at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:113)
    at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:266)
    at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:197)
    at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:165)
    at java.lang.Thread.run(Thread.java:662)

I am not sure whether these errors are related or not. I hope you can help me.

