hbase-issues mailing list archives

From "Jonathan Gray (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-2761) GC overhead limit exceeded in client
Date Mon, 21 Jun 2010 17:05:26 GMT

    [ https://issues.apache.org/jira/browse/HBASE-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880893#action_12880893 ]

Jonathan Gray commented on HBASE-2761:

Right, but GC overhead limit is kind of like an out of memory: it means you spent too much
time trying to reclaim memory.  There could be some CPU starvation, I suppose, but in the past
I've seen similar things trigger either one of those messages.

The stack trace is odd.  It does seem we're rebuilding a new Configuration each time, and each
Configuration is making a new hash table.  Are the current prefetch tests sufficient?  In any
case, I guess there's a bug in that we're not reusing the existing Configuration.
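The allocation pattern can be sketched without HBase at all. This is a minimal, self-contained illustration (the class and method names here are hypothetical, not HBase or Hadoop API): loading a Configuration fills a Hashtable-backed Properties object, so rebuilding one per region lookup produces exactly the kind of churn the trace shows, while reusing a single instance does not.

```java
import java.util.Properties;

public class ConfigChurn {
    // Hypothetical stand-in for Hadoop's Configuration: loading
    // resources fills a Hashtable-backed Properties object, which
    // is where Hashtable.put/rehash shows up in the stack trace.
    static Properties loadResources() {
        Properties props = new Properties();
        for (int i = 0; i < 1000; i++) {
            props.setProperty("key." + i, "value." + i);
        }
        return props;
    }

    public static void main(String[] args) {
        // Suspected buggy pattern: every lookup builds a fresh table,
        // so a long YCSB run allocates and discards thousands of them.
        for (int i = 0; i < 10; i++) {
            Properties fresh = loadResources();
        }

        // Reuse pattern: load once, share the same instance.
        Properties shared = loadResources();
        System.out.println(shared.size());
    }
}
```

Under this sketch, the fix is simply to hand the same Configuration object to each new HTable rather than constructing one per call.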

> GC overhead limit exceeded in client
> ------------------------------------
>                 Key: HBASE-2761
>                 URL: https://issues.apache.org/jira/browse/HBASE-2761
>             Project: HBase
>          Issue Type: Bug
>          Components: client
>    Affects Versions: 0.21.0
>            Reporter: Todd Lipcon
>            Priority: Blocker
>             Fix For: 0.21.0
> Never seen this prior to the new meta prefetch stuff. Saw it tonight on a YCSB run after
> about an hour.
> Exception in thread "Thread-9" java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at java.util.Hashtable.rehash(Hashtable.java:356)
>         at java.util.Hashtable.put(Hashtable.java:412)
>         at java.util.Properties.setProperty(Properties.java:143)
>         at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1337)
>         at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1227)
>         at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1156)
>         at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:1198)
>         at org.apache.hadoop.hbase.HBaseConfiguration.hashCode(HBaseConfiguration.java:112)
>         at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:121)
>         at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:130)
>         at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:99)
>         at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102)
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.prefetchRegionCache(HConnectionManager.java:733)
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:784)
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:678)
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfPuts(HConnectionManager.java:1424)
>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:660)
>         at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:545)
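The trace shows getConnection invoking HBaseConfiguration.hashCode, which iterates every property and reloads resources on each call. One way to sidestep that walk, sketched here with hypothetical class names (this is not the actual HBase fix, just an illustration of the idea), is to key the connection cache by object identity instead of by deep hashCode:

```java
import java.util.IdentityHashMap;
import java.util.Map;

public class ConnectionCache {
    // Hypothetical stand-in for an HConnection.
    static class Connection {}

    // Keying by reference identity avoids the per-lookup walk over
    // every configuration property that deep hashCode() would cost.
    private static final Map<Object, Connection> CACHE = new IdentityHashMap<>();

    static synchronized Connection getConnection(Object conf) {
        return CACHE.computeIfAbsent(conf, c -> new Connection());
    }

    public static void main(String[] args) {
        Object conf = new Object();
        Connection a = getConnection(conf);
        Connection b = getConnection(conf); // same config object -> same connection
        System.out.println(a == b);
    }
}
```

The trade-off is that two distinct Configuration objects with identical contents would no longer share a connection, which is why reusing the existing Configuration, as suggested above, matters either way.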

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
