hbase-user mailing list archives

From "Ramkrishna.S.Vasudevan" <ramkrishna.vasude...@huawei.com>
Subject RE: hbase can't drop a table
Date Tue, 16 Oct 2012 08:52:00 GMT
What does the 'list' command show?  Does it say the table exists or not?

What I can infer here is that the HTableDescriptor file got deleted but
.META. still has the entry.  Any chance the HTD file was accidentally deleted
in your cluster?
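
One quick way to confirm (just a sketch, assuming the default /hbase root
dir and the 0.94-style table layout) is to check .META. and the table
directory on HDFS:

  hbase> scan '.META.', {STARTROW => 'ivytest_deu,', LIMIT => 5}
  $ hadoop fs -ls /hbase/ivytest_deu

A healthy table directory should contain a .tableinfo.<seqid> file next to
the region directories; if only region dirs are there, the descriptor really
is gone.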

Running the hbck tool with -fixOrphanTables should at least try to recreate
the HTableDescriptor file, I suppose; something like the commands below.
Then restart the cluster and see what happens.
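
Just a sketch (check the hbck usage output for the exact flag name in your
release; in some versions it appears as -fixTableOrphans):

  $ hbase hbck -details          # report the inconsistencies first
  $ hbase hbck -fixOrphanTables  # try to rebuild the missing .tableinfo file

If the .tableinfo file comes back, the disable/drop should go through from
the shell.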
I will not be able to access the logs even if you add them to pastebin, but
please do it so that someone else who has access can look into them.

Regards
Ram
> -----Original Message-----
> From: 张磊 [mailto:zhanglei@youku.com]
> Sent: Tuesday, October 16, 2012 1:44 PM
> To: 'user@hbase.apache.org'
> Subject: RE: hbase can't drop a table
> 
> Hope this can help you!
> https://issues.apache.org/jira/browse/HBASE-
> 3432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-
> tabpanel&focusedCommentId=13418790#comment-13418790
> 
> Fowler Zhang
> 
> -----Original Message-----
> From: 唐 颖 [mailto:ivytang0812@gmail.com]
> Sent: October 16, 2012 16:08
> To: user@hbase.apache.org
> Subject: Re: hbase can't drop a table
> 
> version 0.94.0, r8547
> 
> And the table is ivytest_deu.
> 
> 
> On 2012-10-16, at 3:58 PM, "Ramkrishna.S.Vasudevan"
> <ramkrishna.vasudevan@huawei.com> wrote:
> 
> > Which version of HBase?
> >
> >
> > The logs you have attached below are about a different table, right:
> > 'deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.'
> > And the one you are trying to drop is 'ivytest_deu'.
> >
> > Regards
> > Ram
> >
> >
> >
> >> -----Original Message-----
> >> From: 唐 颖 [mailto:ivytang0812@gmail.com]
> >> Sent: Tuesday, October 16, 2012 1:23 PM
> >> To: user@hbase.apache.org
> >> Subject: hbase can't drop a table
> >>
> >> I disabled this table ivytest_deu, then tried to drop it. This error occurs:
> >>
> >>
> >> ERROR: java.io.IOException: java.io.IOException: HTableDescriptor missing for ivytest_deu
> >> 	at org.apache.hadoop.hbase.master.handler.TableEventHandler.getTableDescriptor(TableEventHandler.java:174)
> >> 	at org.apache.hadoop.hbase.master.handler.DeleteTableHandler.<init>(DeleteTableHandler.java:44)
> >> 	at org.apache.hadoop.hbase.master.HMaster.deleteTable(HMaster.java:1143)
> >> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> 	at java.lang.reflect.Method.invoke(Method.java:597)
> >> 	at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
> >> 	at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)
> >>
> >> Here is some help for this command:
> >> Drop the named table. Table must first be disabled. If table has more
> >> than one region, run a major compaction on .META.:
> >>
> >>  hbase> major_compact ".META."
> >>
> >> The major_compact ".META." doesn't work.
> >> Then I tried to create the table again, but HBase says:
> >>
> >> ERROR: Table already exists: ivytest_deu!
> >>
> >> After checking the region server log, the region server is always
> >> trying to load this region:
> >>
> >> 2012-10-16 00:00:00,308 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Received request to open region: deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.
> >> 2012-10-16 00:00:00,313 WARN org.apache.hadoop.hbase.util.FSTableDescriptors: The following folder is in HBase's root directory and doesn't contain a table descriptor, do consider deleting it: deu_ivytest
> >> 2012-10-16 00:00:00,358 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Opening region: {NAME => 'deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.', STARTKEY => '', ENDKEY => '', ENCODED => 985d6ca9986d7d8cfaf82daf523fcd45,}
> >> 2012-10-16 00:00:00,358 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: Registered protocol handler: region=deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45. protocol=com.xingcloud.adhocprocessor.hbase.coprocessor.DEUColumnAggregationProtocol
> >> 2012-10-16 00:00:00,358 ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of region=deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45., starting to roll back the global memstore size.
> >> 2012-10-16 00:00:00,358 INFO org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Opening of region {NAME => 'deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.', STARTKEY => '', ENDKEY => '', ENCODED => 985d6ca9986d7d8cfaf82daf523fcd45,} failed, marking as FAILED_OPEN in ZK
> >>
> >> And we have an endpoint coprocessor in HBase. After HBase tried to load
> >> this table ivytest_deu 90,000 times, the endpoint class has also been
> >> loaded 90,000 times.
> >> The JVM memory has been filled.
> >> The gcutil shows
> >>
> >>   S0C     S1C     S0U    S1U      EC       EU        OC        OU       PC      PU     YGC    YGCT    FGC     FGCT      GCT
> >> 34880.0 34880.0 34648.1  0.0   209472.0 209472.0 2792768.0 2792768.0 71072.0 41461.5 129770 3448.191 24598 28469.996 31918.187
> >> 34880.0 34880.0 34648.1  0.0   209472.0 209472.0 2792768.0 2792768.0 71072.0 41461.5 129770 3448.191 24598 28469.996 31918.187
> >> 34880.0 34880.0 34648.1  0.0   209472.0 209472.0 2792768.0 2792768.0 71072.0 41461.5 129770 3448.191 24598 28469.996 31918.187
> >> 34880.0 34880.0 34648.1  0.0   209472.0 209472.0 2792768.0 2792768.0 71072.0 41461.5 129770 3448.191 24598 28469.996 31918.187
> >> 34880.0 34880.0 34880.0  0.0   209472.0 209472.0 2792768.0 2792768.0 71072.0 41461.5 129770 3448.191 24600 28481.974 31930.165
> >> 34880.0 34880.0 34880.0  0.0   209472.0 209472.0 2792768.0 2792768.0 71072.0 41461.5 129770 3448.191 24600 28481.974 31930.165
> >>
> >> The jmap dump file shows
> >>
> >> 3982039 instances of class org.apache.hadoop.hbase.KeyValue
> >> 191050 instances of class org.apache.hadoop.fs.Path
> >> 187364 instances of class org.cliffc.high_scale_lib.ConcurrentAutoTable$CAT
> >> 187301 instances of class org.cliffc.high_scale_lib.Counter
> >> 102272 instances of class net.sf.ehcache.concurrent.ReadWriteLockSync
> >> 93652 instances of class org.apache.hadoop.hbase.HRegionInfo
> >> 93650 instances of class com.google.common.collect.MutableClassToInstanceMap
> >> 93650 instances of class DEUColumnAggregationEndpoint
> >>
> >> DEUColumnAggregationEndpoint is our endpoint class.
> >>
> >> We guess that checking this table and loading the endpoint class 90,000
> >> times is what leads to this memory leak.
> >>
> >> But how to drop this table?
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >
> >


