hbase-dev mailing list archives

From "Age Mooij (JIRA)" <j...@apache.org>
Subject [jira] Created: (HBASE-1951) Stack overflow when calling HTable.checkAndPut() when deleting a lot of values
Date Mon, 02 Nov 2009 15:10:59 GMT
Stack overflow when calling HTable.checkAndPut() when deleting a lot of values
------------------------------------------------------------------------------

                 Key: HBASE-1951
                 URL: https://issues.apache.org/jira/browse/HBASE-1951
             Project: Hadoop HBase
          Issue Type: Bug
          Components: regionserver
    Affects Versions: 0.20.1
         Environment: Running HBase 0.20.1 on Cloudera distribution of Hadoop 0.20.1+152 on
an EC2 test cluster with one master, one embedded zookeeper, and only one region server
            Reporter: Age Mooij


We get a stack overflow when calling HTable.checkAndPut() from a map-reduce job through
the client API after doing a large number of deletes.

Our map-reduce job is a periodic job (extending TableMapper) that merges all the versions
of a value in a column into a single new value/version and then deletes the older versions.
We do this because we use versions to store data, which lets us do append-only insertion.
Our rows can have very large numbers of columns (aka key-values), ranging from 1 to more
than 1M. A minimal sketch of this kind of job is shown below.
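
For context, here is a rough sketch of such a job against the 0.20-era client API. The
table name, column family, and merge logic are hypothetical stand-ins; this is not our
actual code, and it assumes the scan feeding the mapper requests all versions.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Collections;
import java.util.Map;
import java.util.NavigableMap;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;

public class MergeVersionsMapper extends TableMapper<ImmutableBytesWritable, Put> {

  private static final byte[] FAMILY = Bytes.toBytes("data"); // hypothetical family
  private HTable table;

  @Override
  protected void setup(Context context) throws IOException {
    // Hypothetical table name.
    table = new HTable(new HBaseConfiguration(context.getConfiguration()), "mytable");
  }

  @Override
  protected void map(ImmutableBytesWritable row, Result columns, Context context)
      throws IOException {
    // family -> qualifier -> timestamp -> value, for everything the scan returned
    NavigableMap<byte[], NavigableMap<Long, byte[]>> byQualifier =
        columns.getMap().get(FAMILY);
    if (byQualifier == null) return;

    for (Map.Entry<byte[], NavigableMap<Long, byte[]>> col : byQualifier.entrySet()) {
      byte[] qualifier = col.getKey();
      NavigableMap<Long, byte[]> versions = col.getValue();
      if (versions.size() < 2) continue; // nothing to merge

      byte[] newest = versions.get(Collections.max(versions.keySet()));
      Put put = new Put(row.get());
      put.add(FAMILY, qualifier, merge(versions));

      // Only write the merged value if the newest version is still what we read.
      if (table.checkAndPut(row.get(), FAMILY, qualifier, newest, put)) {
        // The merged value was written with a fresh (newer) timestamp, so every
        // version we read is now stale and can be deleted by exact timestamp.
        Delete delete = new Delete(row.get());
        for (Long ts : versions.keySet()) {
          delete.deleteColumn(FAMILY, qualifier, ts);
        }
        table.delete(delete);
      }
    }
  }

  @Override
  protected void cleanup(Context context) throws IOException {
    table.flushCommits();
  }

  // Hypothetical merge; the real merge logic is application-specific.
  private byte[] merge(NavigableMap<Long, byte[]> versions) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (byte[] v : versions.values()) {
      out.write(v, 0, v.length);
    }
    return out.toByteArray();
  }
}

It is the delete step of this loop that piles up the deletes which the region server
later has to track.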

The problem seems to be that the org.apache.hadoop.hbase.regionserver.GetDeleteTracker.isDeleted()
method is implemented with recursion, but since the JVM performs no tail-call optimization,
each tracked delete consumes a stack frame, and the call fails whenever the number of deletes
being tracked exceeds the available stack depth. I'm not sure why recursion is used here,
but it is not safe without tail-call optimization and should be rewritten as a simple loop,
as sketched below.
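
To illustrate both shapes, here is a simplified, hypothetical tracker that mimics the
pattern; it is not the actual GetDeleteTracker source. The recursive form burns one stack
frame per stale delete it skips, while the loop form does the identical traversal in
constant stack space.

import java.util.Iterator;
import java.util.List;

public class DeleteTrackerSketch {

  private final Iterator<String> deletes;
  private String current; // the delete we are currently positioned at

  // Deletes are assumed sorted in the same order the queried columns arrive in.
  public DeleteTrackerSketch(List<String> sortedDeletes) {
    this.deletes = sortedDeletes.iterator();
    this.current = deletes.hasNext() ? deletes.next() : null;
  }

  // Buggy shape: each stale delete skipped costs a stack frame, and the JVM does
  // not eliminate tail calls, so tens of thousands of deletes overflow the stack.
  public boolean isDeletedRecursive(String qualifier) {
    if (current == null) return false;
    int cmp = current.compareTo(qualifier);
    if (cmp < 0) { // stale delete: advance and try again
      current = deletes.hasNext() ? deletes.next() : null;
      return isDeletedRecursive(qualifier); // tail call, but not optimized away
    }
    return cmp == 0; // exact match means the column is deleted
  }

  // Fixed shape: the same traversal as a loop uses constant stack space.
  public boolean isDeletedIterative(String qualifier) {
    while (current != null) {
      int cmp = current.compareTo(qualifier);
      if (cmp < 0) {
        current = deletes.hasNext() ? deletes.next() : null;
        continue;
      }
      return cmp == 0;
    }
    return false;
  }
}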

I'll attach the stack trace.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

