hbase-user mailing list archives

From: Shumin Wu <shumin...@gmail.com>
Subject: Re: Can not access HBase Shell.
Date: Tue, 18 Sep 2012 17:29:05 GMT
Hi Jason,

In a pseudo-distributed environment, you should also start ZooKeeper and
the HBase RegionServer. I don't see them in your process list.

"$ jps
274 NameNode
514 JobTracker
1532 HMaster
1588 Jps
604 TaskTracker
450 SecondaryNameNode
362 DataNode

$ ./bin/hbase shell
Trace/BPT trap: 5
"
Shumin

On Tue, Sep 18, 2012 at 10:21 AM, Jason Huang <jason.huang@icare.com> wrote:

> Hi J-D,
>
> I am using Hadoop 1.0.3. I was using dfs.datanode.data.dir last week,
> but that had already been updated (someone else pointed it out) before
> I ran this test today.
>
> thanks,
>
> Jason
>
> On Tue, Sep 18, 2012 at 1:05 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:
> > Which Hadoop version are you using exactly? I see you are setting
> > dfs.datanode.data.dir, which is a post-1.0 setting (from what I can
> > tell by googling, since I didn't recognize it), but you are using a
> > "hadoop-examples-1.0.3.jar" file that seems to imply you are on
> > 1.0.3, which would probably not pick up dfs.datanode.data.dir.
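> >
> > For what it's worth, on the 1.0.x line the equivalent property is
> > dfs.data.dir (dfs.datanode.data.dir is the newer name), so if you
> > meant to point the DataNode somewhere custom, a 1.0.3 hdfs-site.xml
> > would need something like this instead (the path is only a
> > placeholder):
> >
> >   <property>
> >     <name>dfs.data.dir</name>
> >     <value>/path/to/dfs/data</value>
> >   </property>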
> >
> > J-D
> >
> > On Tue, Sep 18, 2012 at 9:21 AM, Jason Huang <jason.huang@icare.com> wrote:
> >> I've done some more research but still can't start the HMaster node
> >> (with a similar error). Here is what I found in the master server log:
> >>
> >> Tue Sep 18 11:50:22 EDT 2012 Starting master on Jasons-MacBook-Pro.local
> >> core file size          (blocks, -c) 0
> >> data seg size           (kbytes, -d) unlimited
> >> file size               (blocks, -f) unlimited
> >> max locked memory       (kbytes, -l) unlimited
> >> max memory size         (kbytes, -m) unlimited
> >> open files                      (-n) 65536
> >> pipe size            (512 bytes, -p) 1
> >> stack size              (kbytes, -s) 8192
> >> cpu time               (seconds, -t) unlimited
> >> max user processes              (-u) 1064
> >> virtual memory          (kbytes, -v) unlimited
> >>
> >>
> >> 2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
> >> HBase 0.94.0
> >> 2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
> >> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r
> >> 1332822
> >> 2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
> >> Compiled by jenkins on Tue May  1 21:43:54 UTC 2012
> >> 2012-09-18 11:50:23,395 INFO
> >> org.apache.zookeeper.server.ZooKeeperServer: Server
> >> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48
> >> GMT
> >>
> >> ........
> >>
> >> 2012-09-18 11:50:56,671 DEBUG
> >> org.apache.hadoop.hbase.regionserver.HRegion: Updates disabled for
> >> region -ROOT-,,0.70236052
> >> 2012-09-18 11:50:56,671 DEBUG
> >> org.apache.hadoop.hbase.regionserver.HRegion: Started memstore flush
> >> for -ROOT-,,0.70236052, current region memstore size 360.0
> >> 2012-09-18 11:50:56,671 DEBUG
> >> org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting
> >> -ROOT-,,0.70236052, commencing wait for mvcc, flushsize=360
> >> 2012-09-18 11:50:56,671 DEBUG
> >> org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting,
> >> commencing flushing stores
> >> 2012-09-18 11:50:56,684 DEBUG org.apache.hadoop.hbase.util.FSUtils:
> >> Creating file:hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
> >> with permission:rwxrwxrwx
> >> 2012-09-18 11:50:56,692 DEBUG
> >> org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with
> >> CacheConfig:enabled [cacheDataOnRead=false] [cacheDataOnWrite=false]
> >> [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false]
> >> [cacheEvictOnClose=false] [cacheCompressed=false]
> >> 2012-09-18 11:50:56,694 INFO
> >> org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
> >> filter type for
> >> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13:
> >> CompoundBloomFilterWriter
> >> 2012-09-18 11:50:56,703 INFO
> >> org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and
> >> NO DeleteFamily was added to HFile
> >> (hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13)
> >> 2012-09-18 11:50:56,703 INFO
> >> org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=2,
> >> memsize=360.0, into tmp file
> >> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
> >> 2012-09-18 11:50:56,716 WARN org.apache.hadoop.hdfs.DFSClient:
> >> Exception while reading from blk_8430779885801230139_1008 of
> >> /hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13 from
> >> 127.0.0.1:50010: java.io.IOException: BlockReader: error in packet
> >> header(chunkOffset : 512, dataLen : 0, seqno : 0 (last: 0))
> >>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1577)
> >>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
> >>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
> >>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
> >>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
> >>
> >>
> >> Since my colleagues could follow the same setup instructions and
> >> install everything on another (non-Mac) machine, I think this might
> >> be an issue with my MacBook Pro?
> >>
> >> One thing I am not sure about is whether the system settings (max
> >> open files / max user processes) need to be adjusted. I've already
> >> increased the max open files limit to 65536 (as you can see at the
> >> beginning of the log).
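> >>
> >> For reference, I raised the limits roughly like this (the exact
> >> launchctl syntax may vary by OS X version), then restarted the
> >> terminal before starting the daemons:
> >>
> >> $ ulimit -n                                  # current open-files limit
> >> $ sudo launchctl limit maxfiles 65536 65536
> >> $ ulimit -u                                  # current max user processes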
> >>
> >>
> >> The other thing I am not sure about is why/how the file
> >> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
> >> was created. After HMaster failed to start, I checked that file with
> >> dfs -cat and got the same error:
> >>
> >> $ ./bin/hadoop dfs -cat
> >> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
> >> Warning: $HADOOP_HOME is deprecated.
> >> 12/09/18 12:01:59 WARN hdfs.DFSClient: Exception while reading from
> >> blk_8430779885801230139_1008 of
> >> /hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13 from
> >> 127.0.0.1:50010: java.io.IOException: BlockReader: error in packet
> >> header(chunkOffset : 512, dataLen : 0, seqno : 0 (last: 0))
> >>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1577)
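> >>
> >> A check I haven't run yet is fsck on that region directory, to see
> >> whether HDFS itself thinks the block is corrupt, something like:
> >>
> >> $ ./bin/hadoop fsck /hbase/-ROOT- -files -blocks -locations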
> >>
> >>
> >> And this file definitely exists:
> >> $ ./bin/hadoop dfs -ls hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/
> >> Warning: $HADOOP_HOME is deprecated.
> >> Found 1 items
> >> -rw-r--r--   1 jasonhuang supergroup        848 2012-09-18 11:50
> >> /hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
> >>
> >>
> >> Also, when I look at some other dfs files, they seem to be OK:
> >> $ ./bin/hadoop dfs -cat hdfs://localhost:54310/hbase/-ROOT-/70236052/.regioninfo
> >> Warning: $HADOOP_HOME is deprecated.
> >>
> >>         -ROOT-,,0-ROOT-?Y??
> >>
> >> {NAME => '-ROOT-,,0', STARTKEY => '', ENDKEY => '', ENCODED => 70236052,}
> >>
> >>
> >>
> >> $ ./bin/hadoop dfs -cat
> >> hdfs://localhost:54310/hbase/-ROOT-/70236052/.logs/hlog.1347983456546
> >> Warning: $HADOOP_HOME is deprecated.
> >>
> >> SEQ0org.apache.hadoop.hbase.regionserver.wal.HLogKey0org.apache.hadoop.hbase.regionserver.wal.WALEditversion1g둣?????%???bV?"70236052-ROOT-9?9?????M#"  .META.,,1inforegioninfo9?9?     .META.,,1.META.+???$    .META.,,1infov9?9?
> >>
> >>
> >> Sorry for the lengthy email. Any help will be greatly appreciated!
> >>
> >> Jason
> >>
> >> On Thu, Sep 13, 2012 at 6:42 PM, Jason Huang <jason.huang@icare.com> wrote:
> >>> Hello,
> >>>
> >>> I am trying to set up HBase in pseudo-distributed mode on my MacBook.
> >>> I was able to install Hadoop and HBase and start the daemons.
> >>>
> >>> $ jps
> >>> 5417 TaskTracker
> >>> 5083 NameNode
> >>> 5761 HRegionServer
> >>> 5658 HMaster
> >>> 6015 Jps
> >>> 5613 HQuorumPeer
> >>> 5171 DataNode
> >>> 5327 JobTracker
> >>> 5262 SecondaryNameNode
> >>>
> >>> However, when I tried ./hbase shell, I got the following error:
> >>> Trace/BPT trap: 5
> >>>
>
