hbase-user mailing list archives

From 陈加俊 <cjjvict...@gmail.com>
Subject Re: HMaster startup is very slow, and always run into out-of-memory issue
Date Thu, 10 Mar 2011 08:11:13 GMT
copy your hbase-site.xml here
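For the WAL question quoted below: in 0.90-era HBase, the region server rolls WALs and the master cleans them up based on a few hbase-site.xml settings. A hedged sketch (property names are from that era; the values shown are just the shipped defaults, not tuning advice, and hbase.regionserver.hlog.blocksize falls back to the HDFS block size when unset):

```xml
<!-- Illustrative hbase-site.xml fragment; values are 0.90-era defaults. -->
<configuration>
  <!-- Roll the current WAL when it reaches roughly this size, in bytes
       (defaults to the underlying HDFS block size when not set). -->
  <property>
    <name>hbase.regionserver.hlog.blocksize</name>
    <value>67108864</value>
  </property>
  <!-- Force a roll at least this often (ms), even if the log is small. -->
  <property>
    <name>hbase.regionserver.logroll.period</name>
    <value>3600000</value>
  </property>
  <!-- Max WAL files per region server before flushes are forced so old
       logs can be archived. -->
  <property>
    <name>hbase.regionserver.maxlogs</name>
    <value>32</value>
  </property>
  <!-- How long the master keeps archived WALs in .oldlogs before
       deleting them (ms). -->
  <property>
    <name>hbase.master.logcleaner.ttl</name>
    <value>600000</value>
  </property>
</configuration>
```

To see how much space the directories mentioned below actually consume, `hadoop fs -du /hbase/.logs /hbase/.oldlogs` reports per-directory usage.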

On Thu, Mar 10, 2011 at 3:01 PM, 茅旭峰 <m9suns@gmail.com> wrote:

> It seems like there are lots of WAL files in the .logs and .oldlogs
> directories. Is there any parameter to control the size of those WAL
> files, or the frequency at which they are checked?
>
> Thanks a lot!
>
> On Thu, Mar 10, 2011 at 1:00 PM, 茅旭峰 <m9suns@gmail.com> wrote:
>
> > Yes, I increased the heap memory to 16GB, but then the master fell
> > into a loop like
> >
> > =====
> > 2011-03-10 12:56:25,765 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLogSplitter:
> > Creating writer path=hdfs://cloud135:9000/hbase/richard/1992d77ac89a289dfb15e1a626e037c7/recovered.edits/0000000000001113639
> > region=1992d77ac89a289dfb15e1a626e037c7
> > 2011-03-10 12:56:25,792 WARN org.apache.hadoop.hbase.regionserver.wal.HLogSplitter:
> > Found existing old edits file. It could be the result of a previous failed split attempt.
> > Deleting hdfs://cloud135:9000/hbase/richard/260eb775f84574acf15edccc397ae88b/recovered.edits/0000000000001113703, length=3865335
> > 2011-03-10 12:56:25,879 WARN org.apache.hadoop.hbase.regionserver.wal.HLogSplitter:
> > Found existing old edits file. It could be the result of a previous failed split attempt.
> > Deleting hdfs://cloud135:9000/hbase/richard/7fe64e1b343f6a4920c5a15907c7604c/recovered.edits/0000000000001112552, length=3280578
> > 2011-03-10 12:56:25,880 INFO org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter:
> > Using syncFs -- HDFS-200
> > =====
> >
> > and when I tried scan '-ROOT-' in hbase shell, I got
> >
> > =====
> > ERROR: org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to
> > contact region server cloud134:60020 for region -ROOT-,,0, row '', but
> > failed after 7 attempts.
> > Exceptions:
> > org.apache.hadoop.hbase.NotServingRegionException:
> > org.apache.hadoop.hbase.NotServingRegionException: Region is not online: -ROOT-,,0
> >         at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:2321)
> >         at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:1766)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >         at java.lang.reflect.Method.invoke(Method.java:597)
> >         at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:570)
> >         at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1039)
> > =====
> >
> > Looks like I've lost the ROOT region?
> >
> > On Thu, Mar 10, 2011 at 11:29 AM, Ted Yu <yuzhihong@gmail.com> wrote:
> >
> >> >> java.lang.OutOfMemoryError: Java heap space
> >> Have you tried increasing heap memory?
> >>
> >> On Wed, Mar 9, 2011 at 7:23 PM, 茅旭峰 <m9suns@gmail.com> wrote:
> >>
> >> > Thanks Stack for your reply!
> >> >
> >> > Yes, our application is using big cells, ranging from 4 MB to 15 MB
> >> > per entry.
> >> >
> >> > Regarding the RS shutting down because of ZK session loss,
> >> >
> >> > The master logs are below,
> >> >
> >> >
> >>
> >
> >
>
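On the heap suggestion in the quoted thread: the HBase heap is usually set in conf/hbase-env.sh rather than hbase-site.xml. A minimal sketch, assuming the 16 GB figure mentioned above (the value is in megabytes; adjust to the host's RAM, and the -XX flag is just an illustrative extra):

```shell
# conf/hbase-env.sh -- illustrative fragment, not a recommendation.
# Heap for all HBase daemons, in MB; 16000 mirrors the 16GB tried above.
export HBASE_HEAPSIZE=16000

# Per-daemon JVM flags can be appended via the *_OPTS hooks, e.g. to get
# a heap dump when the master hits OutOfMemoryError:
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:+HeapDumpOnOutOfMemoryError"
```

Restart the daemons after changing these; they are read only at startup.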



-- 
Thanks & Best regards
jiajun
