hbase-user mailing list archives

From Jason Huang <jason.hu...@icare.com>
Subject Re: Can not access HBase Shell.
Date Mon, 17 Sep 2012 21:32:01 GMT
I've done several reinstallations and Hadoop seems to be fine. However, I
still get a similar error when I try to access the HBase shell.

$ jps
274 NameNode
514 JobTracker
1532 HMaster
1588 Jps
604 TaskTracker
450 SecondaryNameNode
362 DataNode

$ ./bin/hbase shell
Trace/BPT trap: 5
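
(The shell dies before it prints anything useful, so the clues have to come
from the daemon logs; assuming a default tarball layout, the master log ends
up under logs/ next to bin/, named after the user and host, e.g.:)

$ tail -n 200 logs/hbase-jasonhuang-master-*.log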

I looked through the daemon logs and found errors in the HMaster log:

2012-09-17 17:06:54,384 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=2, memsize=360.0, into tmp file hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/0212db15465842b38cc63eb9ef8b73d2
2012-09-17 17:06:54,389 WARN org.apache.hadoop.hdfs.DFSClient: Exception while reading from blk_-8714444718437861427_1016 of /hbase/-ROOT-/70236052/.tmp/0212db15465842b38cc63eb9ef8b73d2 from 127.0.0.1:50010: java.io.IOException: BlockReader: error in packet header(chunkOffset : 512, dataLen : 0, seqno : 0 (last: 0))
        at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1577)
        at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
        at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
        at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
        at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
        at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1457)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2172)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2224)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:582)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1364)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1869)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1637)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1286)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1294)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:137)
        at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:533)
        at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:563)
        at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1252)
        at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:516)
        at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:606)
        at org.apache.hadoop.hbase.regionserver.Store.validateStoreFile(Store.java:1590)
        at org.apache.hadoop.hbase.regionserver.Store.commitFile(Store.java:769)
        at org.apache.hadoop.hbase.regionserver.Store.access$500(Store.java:108)
        at org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.commit(Store.java:2204)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1429)
        at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2685)
        at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:535)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3682)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3630)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:636)

2012-09-17 17:06:54,389 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-8714444718437861427_1016 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...

I checked the file system with fsck and it seems healthy:

$ ./bin/hadoop fsck / -files
Warning: $HADOOP_HOME is deprecated.

FSCK started by jasonhuang from /192.168.1.124 for path / at Mon Sep 17 17:24:46 EDT 2012
/ <dir>
/hbase <dir>
/hbase/-ROOT- <dir>
/hbase/-ROOT-/.tableinfo.0000000001 727 bytes, 1 block(s):  OK
/hbase/-ROOT-/.tmp <dir>
/hbase/-ROOT-/70236052 <dir>
/hbase/-ROOT-/70236052/.logs <dir>
/hbase/-ROOT-/70236052/.logs/hlog.1347915355095 309 bytes, 1 block(s):  OK
/hbase/-ROOT-/70236052/.oldlogs <dir>
/hbase/-ROOT-/70236052/.regioninfo 109 bytes, 1 block(s):  OK
/hbase/-ROOT-/70236052/.tmp <dir>
/hbase/-ROOT-/70236052/.tmp/2f094a87dd314072b1eb464761639c0c 859 bytes, 1 block(s):  OK
/hbase/-ROOT-/70236052/info <dir>
/hbase/-ROOT-/70236052/recovered.edits <dir>
/hbase/-ROOT-/70236052/recovered.edits/0000000000000000002 310 bytes, 1 block(s):  OK
/hbase/.META. <dir>
/hbase/.META./1028785192 <dir>
/hbase/.META./1028785192/.logs <dir>
/hbase/.META./1028785192/.logs/hlog.1347915355190 134 bytes, 1 block(s):  OK
/hbase/.META./1028785192/.oldlogs <dir>
/hbase/.META./1028785192/.regioninfo 111 bytes, 1 block(s):  OK
/hbase/.META./1028785192/info <dir>
/hbase/.corrupt <dir>
/hbase/.logs <dir>
/hbase/.oldlogs <dir>
/hbase/.oldlogs/192.168.1.124%2C50887%2C1347915939955.1347915972194 134 bytes, 1 block(s):  OK
/hbase/.oldlogs/192.168.1.124%2C51177%2C1347916254506.1347916283458 134 bytes, 1 block(s):  OK
/hbase/hbase.id 38 bytes, 1 block(s):  OK
/hbase/hbase.version 3 bytes, 1 block(s):  OK
/hbase/splitlog <dir>
/test <dir>
/tmp <dir>
/tmp/hadoop-jasonhuang <dir>
/tmp/hadoop-jasonhuang/mapred <dir>
/tmp/hadoop-jasonhuang/mapred/system <dir>
/tmp/hadoop-jasonhuang/mapred/system/jobtracker.info 4 bytes, 1 block(s):  OK
Status: HEALTHY
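
(fsck can also print per-file block IDs and datanode locations, which might
help track down blk_-8714444718437861427 from the error above; -files,
-blocks and -locations are standard fsck options:)

$ ./bin/hadoop fsck /hbase -files -blocks -locations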


However, the file mentioned in the error log,
hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/0212db15465842b38cc63eb9ef8b73d2,
doesn't appear anywhere in the fsck report. (Not sure if that matters.)
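
(To double-check, the .tmp directory can be listed directly with plain
hadoop fs commands:)

$ ./bin/hadoop fs -ls hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp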

I have no idea where to go next. Any suggestions?

thanks!

Jason


On Fri, Sep 14, 2012 at 4:25 PM, Jason Huang <jason.huang@icare.com> wrote:

> Thanks Marcos.
>
> I applied the change you mentioned but it still gave me an error. I then
> stopped everything, restarted Hadoop, and tried to run a simple MapReduce
> job with the provided example jar (./bin/hadoop jar hadoop-examples-1.0.3.jar
> pi 10 100).
>
> That gave me an error of:
> 12/09/14 15:59:50 INFO mapred.JobClient: Task Id :
> attempt_201209141539_0001_m_000011_0, Status : FAILED
> Error initializing attempt_201209141539_0001_m_000011_0:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>
> I think there is something wrong with my Hadoop setup. I will do more
> research and see if I can find out why.
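>
> (Since the BlockReader error points at the DataNode side, the DataNode log
> seems like the next place to look; the path below assumes a default tarball
> install and my username:)
>
> $ tail -n 200 logs/hadoop-jasonhuang-datanode-*.log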
>
> thanks,
>
> Jason
>
> On Thu, Sep 13, 2012 at 7:56 PM, Marcos Ortiz <mlortiz@uci.cu> wrote:
>
>>
>> Regards, Jason.
>> Answers in line
>>
>>
>> On 09/13/2012 06:42 PM, Jason Huang wrote:
>>
>> Hello,
>>
>> I am trying to set up HBase in pseudo-distributed mode on my MacBook.
>> I was able to install Hadoop and HBase and start the nodes.
>>
>> $ jps
>> 5417 TaskTracker
>> 5083 NameNode
>> 5761 HRegionServer
>> 5658 HMaster
>> 6015 Jps
>> 5613 HQuorumPeer
>> 5171 DataNode
>> 5327 JobTracker
>> 5262 SecondaryNameNode
>>
>> However, when I tried ./hbase shell I got the following error:
>> Trace/BPT trap: 5
>>
>> Looking at the log from the master server I found:
>> 2012-09-13 18:33:46,842 DEBUG
>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>> Looked up root region location,
>> connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
>> serverName=192.168.1.124,60020,1347575067207
>> 2012-09-13 18:34:18,981 DEBUG
>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>> Looked up root region location,
>> connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
>> serverName=192.168.1.124,60020,1347575067207
>> 2012-09-13 18:34:18,982 DEBUG
>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>> locateRegionInMeta parentTable=-ROOT-,
>> metaLocation={region=-ROOT-,,0.70236052, hostname=192.168.1.124,
>> port=60020}, attempt=14 of 100 failed; retrying after sleep of 32044
>> because: HRegionInfo was null or empty in -ROOT-,
>> row=keyvalues={.META.,,1/info:server/1347575458668/Put/vlen=19/ts=0,
>> .META.,,1/info:serverstartcode/1347575458668/Put/vlen=8/ts=0}
>>
>> I don't quite understand what this error is and how to fix it. Any
>> suggestions?  Thanks!
>>
>> Here are my config files:
>>
>> hbase-site.xml
>> <configuration>
>>   <property>
>>     <name>hbase.rootdir</name>
>>     <value>hdfs://localhost:9000/hbase</value>
>>   </property>
>>   <property>
>>     <name>hbase.zookeeper.quorum</name>
>>     <value>localhost</value>
>>   </property>
>>   <property>
>>     <name>hbase.cluster.distributed</name>
>>     <value>true</value>
>>
>>  If you want to use HBase in pseudo-distributed mode, you cannot put
>> this property here: the HMaster thinks that the cluster is in fully
>> distributed mode and tries to find the region servers. This error comes
>> to light because, in pseudo-distributed mode, you don't have to include
>> that property (see the minimal sketch after this config block).
>>
>> So, remove the hbase.cluster.distributed property, and restart all
>> daemons.
>>
>> Another thing: for pseudo-distributed mode you don't need a running
>> ZooKeeper cluster; that is only needed for a fully distributed cluster.
>>
>>   </property>
>>   <property>
>>     <name>dfs.replication</name>
>>     <value>1</value>
>>   </property>
>>   <property>
>>      <name>hbase.master</name>
>>      <value>localhost:60000</value>
>>   </property>
>>   <property>
>>     <name>dfs.support.append</name>
>>     <value>true</value>
>>   </property>
>> </configuration>
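>>
>> (Following up on my note above: a minimal hbase-site.xml for
>> pseudo-distributed mode, as I understand it, would keep little more than
>> the rootdir; this is just a sketch using your own value:)
>>
>> <configuration>
>>   <property>
>>     <name>hbase.rootdir</name>
>>     <value>hdfs://localhost:9000/hbase</value>
>>   </property>
>> </configuration>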
>>
>> hdfs-site.xml
>> <configuration>
>>   <property>
>>      <name>fs.default.name</name>
>>      <value>localhost:9000</value>
>>   </property>
>>   <property>
>>      <name>dfs.replication</name>
>>      <value>1</value>
>>   </property>
>>   <property>
>>      <name>dfs.namenode.name.dir</name>
>>      <value>/Users/jasonhuang/hdfs/name</value>
>>   </property>
>>   <property>
>>      <name>dfs.datanode.data.dir</name>
>>      <value>/Users/jasonhuang/hdfs/data</value>
>>   </property>
>>   <property>
>>      <name>dfs.datanode.max.xcievers</name>
>>      <value>4096</value>
>>   </property>
>> </configuration>
>>
>> mapred-site.xml
>> <configuration>
>>     <property>
>>         <name>mapred.job.tracker</name>
>>         <value>localhost:9001</value>
>>     </property>
>>     <property>
>>         <name>mapred.child.java.opts</name>
>>         <value>-Xmx512m</value>
>>     </property>
>>     <property>
>>         <name>mapred.job.tracker</name>
>>         <value>hdfs://localhost:54311</value>
>>     </property>
>> </configuration>
>>
>>
>>
>> --
>> Marcos Luis Ortíz Valmaseda
>> Data Engineer && Sr. System Administrator at UCI
>> about.me/marcosortiz
>> My Blog <http://marcosluis2186.posterous.com>
>> Tumblr's blog <http://marcosortiz.tumblr.com/>
>> @marcosluis2186 <http://twitter.com/marcosluis2186>
>
