From: stack
Date: Sun, 30 Mar 2008 13:36:10 -0700
To: hbase-user@hadoop.apache.org
Subject: Re: StackOverFlow Error in HBase
Message-ID: <47EFF9BA.6080201@duboce.net>
References: <13050.217.74.68.2.1206621641.squirrel@poczta2.xg.pl> <47EBB1CE.9030505@duboce.net> <1206645120.6724.80.camel@da-laptop-csw> <47EBF75B.30208@duboce.net> <1206906868.15138.5.camel@da-laptop-csw>
In-Reply-To: <1206906868.15138.5.camel@da-laptop-csw>

You're doing nothing wrong. The filters as written recurse until they find a match; if there are long stretches between matching rows, you will get a StackOverflowError. The filters need to be changed. Thanks for pointing this out. Can you do without them for the moment, until we get a chance to fix it? (HBASE-554)

Thanks,
St.Ack

David Alves wrote:
> Hi St.Ack and all
>
> The error always occurs when trying to see if there are more rows to
> process.
> Yes, I'm using a filter (RegExpRowFilter) to select only the rows (any
> row key) that match a specific value in one of the columns.
> Then I obtain the scanner, just test the hasNext method, close the
> scanner and return.
> Am I doing something wrong?
> Still, a StackOverflowError is not supposed to happen, right?
>
> Regards
> David Alves
> On Thu, 2008-03-27 at 12:36 -0700, stack wrote:
>
>> You are using a filter? If so, tell us more about it.
>> St.Ack
>>
>> David Alves wrote:
>>
>>> Hi guys
>>>
>>> I'm using HBase to keep data that is later indexed.
>>> The data is indexed in chunks, so the cycle is: get XXXX records, index
>>> them, check for more records, etc.
>>> When I tried the candidate-2 instead of the old 0.16.0 (which I
>>> switched from due to the regionservers becoming unresponsive), I got the
>>> error at the end of this email, well into an indexing job.
>>> Do you have any idea why? Am I doing something wrong?
>>>
>>> David Alves
>>>
>>> java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException:
>>> java.io.IOException: java.lang.StackOverflowError
>>>         at java.io.DataInputStream.readFully(DataInputStream.java:178)
>>>         at java.io.DataInputStream.readLong(DataInputStream.java:399)
>>>         at org.apache.hadoop.dfs.DFSClient$BlockReader.readChunk(DFSClient.java:735)
>>>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:234)
>>>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>>>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>>>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:157)
>>>         at org.apache.hadoop.dfs.DFSClient$BlockReader.read(DFSClient.java:658)
>>>         at org.apache.hadoop.dfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:1130)
>>>         at org.apache.hadoop.dfs.DFSClient$DFSInputStream.read(DFSClient.java:1166)
>>>         at java.io.DataInputStream.readFully(DataInputStream.java:178)
>>>         at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:56)
>>>         at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:90)
>>>         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1829)
>>>         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1729)
>>>         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1775)
>>>         at org.apache.hadoop.io.MapFile$Reader.next(MapFile.java:461)
>>>         at org.apache.hadoop.hbase.HStore$StoreFileScanner.getNext(HStore.java:2350)
>>>         at org.apache.hadoop.hbase.HAbstractScanner.next(HAbstractScanner.java:256)
>>>         at org.apache.hadoop.hbase.HStore$HStoreScanner.next(HStore.java:2561)
>>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1807)
>>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
>>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
>>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
>>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
>>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
>>>         [the HRegion.java:1843 frame repeats many more times]
>>> ...
>>>
>>
>
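The failure mode described above can be reproduced outside HBase: a next() that recurses once per filtered-out row consumes one stack frame per skipped row, so a long run of non-matching rows blows the stack, while an iterative rewrite runs in constant stack space whatever the gap length. A minimal sketch of the difference, assuming a toy in-memory RowScanner (the class and its methods are hypothetical illustrations, not HBase API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

/** Toy scanner over a list of row keys, with a row filter (hypothetical,
 *  for illustration only -- not the HBase scanner interface). */
class RowScanner {
    private final List<String> rows;
    private final Predicate<String> filter;
    private int pos = 0;

    RowScanner(List<String> rows, Predicate<String> filter) {
        this.rows = rows;
        this.filter = filter;
    }

    /** Recursive skip, the shape of the bug: one stack frame per
     *  non-matching row, so a long gap throws StackOverflowError. */
    String nextRecursive() {
        if (pos >= rows.size()) return null;
        String row = rows.get(pos++);
        if (filter.test(row)) return row;
        return nextRecursive();   // recurses once per skipped row
    }

    /** Iterative skip: constant stack depth regardless of gap length. */
    String nextIterative() {
        while (pos < rows.size()) {
            String row = rows.get(pos++);
            if (filter.test(row)) return row;
        }
        return null;
    }

    public static void main(String[] args) {
        // 200,000 non-matching rows before the first match: a "long
        // stretch between matching rows", as in the report above.
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 200_000; i++) rows.add("row" + i);
        rows.add("match");

        // Iterative version handles the gap fine.
        RowScanner it = new RowScanner(rows, r -> r.equals("match"));
        System.out.println("iterative: " + it.nextIterative());

        // Recursive version overflows the stack on the same data.
        RowScanner rec = new RowScanner(rows, r -> r.equals("match"));
        try {
            rec.nextRecursive();
        } catch (StackOverflowError e) {
            System.out.println("recursive: StackOverflowError");
        }
    }
}
```

The fix for HBASE-554 is the same idea in spirit: skip filtered rows with a loop inside the scanner rather than a self-call, so filter selectivity no longer bounds the usable scan length.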