hbase-user mailing list archives

From Jim Kellerman <...@powerset.com>
Subject RE: StackOverFlow Error in HBase
Date Thu, 03 Apr 2008 16:12:57 GMT
David,

Have you had a chance to try this patch? We are about to release hbase-0.1.1, and until we
receive confirmation in HBASE-554 from another person who has tried the patch and verified
that it works, we cannot include it in this release. If it is not in this release, there will
be a significant wait for it to appear in an hbase release: hbase-0.1.2 will not happen
anytime soon unless critical issues arise that were not fixed in 0.1.1, and hbase-0.2.0 is
also some time in the future, as there are a significant number of issues to address before
that release is ready.

Frankly, I'd like to see this patch in 0.1.1, because it is an issue for people who use filters.

The alternative would be for Clint to supply a test case that fails without the patch but
passes with the patch.
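
Something along these lines would do (a rough sketch only: the cluster-test setup, the
RegExpRowFilter usage, and the obtainScanner/next signatures are written from memory, and
'conf' is assumed to come from the test base class; all of it would need checking against
trunk):

  // Regression test idea for HBASE-554: scan with a row filter over a
  // table in which a long stretch of rows does not match. Without the
  // patch the scanner recurses once per skipped row and should blow the
  // stack; with the patch the scan completes and finds the single match.
  public void testFilterOverLongNonMatchingStretch() throws Exception {
    HTable table = new HTable(conf, new Text("testtable"));
    // ... load many thousands of rows, only the very last of which
    // has a row key matching the pattern below ...
    HScannerInterface scanner = table.obtainScanner(
        new Text[] { new Text("colfamily1:") }, new Text(""),
        new RegExpRowFilter("row-that-matches"));
    try {
      HStoreKey key = new HStoreKey();
      SortedMap<Text, byte[]> results = new TreeMap<Text, byte[]>();
      int matched = 0;
      while (scanner.next(key, results)) { // must not throw StackOverflowError
        matched++;
        results.clear();
      }
      assertEquals(1, matched);
    } finally {
      scanner.close();
    }
  }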

We will hold up the release, but we need a commitment either from David to test the patch or
from Clint to supply a test. We need that commitment by the end of the day today, 2008/04/03,
along with an ETA for when it will be completed.

---
Jim Kellerman, Senior Engineer; Powerset


> -----Original Message-----
> From: David Alves [mailto:dr-alves@criticalsoftware.com]
> Sent: Tuesday, April 01, 2008 2:36 PM
> To: hbase-user@hadoop.apache.org
> Subject: RE: StackOverFlow Error in HBase
>
> Hi
>
>         I just deployed the unpatched version.
>         Tomorrow I'll rebuild the system with the patch and
> try it out.
>         Thanks again.
>
> Regards
> David Alves
>
> > -----Original Message-----
> > From: Jim Kellerman [mailto:jim@powerset.com]
> > Sent: Tuesday, April 01, 2008 10:04 PM
> > To: hbase-user@hadoop.apache.org
> > Subject: RE: StackOverFlow Error in HBase
> >
> > David,
> >
> > Have you tried this patch and does it work for you? If so, we'll
> > include it in hbase-0.1.1.
> >
> > ---
> > Jim Kellerman, Senior Engineer; Powerset
> >
> >
> > > -----Original Message-----
> > > From: David Alves [mailto:dr-alves@criticalsoftware.com]
> > > Sent: Tuesday, April 01, 2008 10:44 AM
> > > To: hbase-user@hadoop.apache.org
> > > Subject: RE: StackOverFlow Error in HBase
> > >
> > > Hi
> > >         Thanks for the prompt patch, Clint, St.Ack and all you guys.
> > >
> > > Regards
> > > David Alves
> > >
> > > > -----Original Message-----
> > > > From: clint.a.m@gmail.com [mailto:clint.a.m@gmail.com] On Behalf Of Clint Morgan
> > > > Sent: Tuesday, April 01, 2008 2:04 AM
> > > > To: hbase-user@hadoop.apache.org
> > > > Subject: Re: StackOverFlow Error in HBase
> > > >
> > > > Try the patch at https://issues.apache.org/jira/browse/HBASE-554.
> > > >
> > > > cheers,
> > > > -clint
> > > >
> > > > On Mon, Mar 31, 2008 at 5:39 AM, David Alves
> > > > <dr-alves@criticalsoftware.com> wrote:
> > > > > Hi ... again
> > > > >
> > > > >         In my previous mail I stated that increasing the stack size
> > > > > solved the problem. Well, I jumped to conclusions a little: in fact
> > > > > it didn't. The StackOverflowError always occurs at the end of the
> > > > > cycle, when no more records match the filter. Anyway, I've rewritten
> > > > > my application to use a normal scanner and do the "filtering"
> > > > > afterwards, which is not optimal but it works.
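> > > > >
> > > > >         Roughly the shape of the workaround (a sketch only; the
> > > > > column name, the match value and the index() hook are made up for
> > > > > illustration):
> > > > >
> > > > >   HScannerInterface scanner = table.obtainScanner(
> > > > >       new Text[] { new Text("content:type") }, new Text(""));
> > > > >   try {
> > > > >     HStoreKey key = new HStoreKey();
> > > > >     SortedMap<Text, byte[]> row = new TreeMap<Text, byte[]>();
> > > > >     while (scanner.next(key, row)) {  // plain scan, no server-side filter
> > > > >       byte[] value = row.get(new Text("content:type"));
> > > > >       // client-side "filtering" in a flat loop, so no deep recursion
> > > > >       if (value != null && "wanted".equals(new String(value))) {
> > > > >         index(key.getRow(), row);     // hypothetical indexing hook
> > > > >       }
> > > > >       row.clear();
> > > > >     }
> > > > >   } finally {
> > > > >     scanner.close();
> > > > >   }
> > > > >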
> > > > >         I'm just mentioning this because it might be a clue: in
> > > > > previous versions (!= 0.1.0), even though a more serious problem
> > > > > happened (regionservers became unresponsive after so many records),
> > > > > this didn't happen. Btw, in the current version I notice no, or a
> > > > > very small, decrease of throughput over time. Great work!
> > > > >
> > > > >  Regards
> > > > >  David Alves
> > > > >
> > > > >  On Mon, 2008-03-31 at 05:18 +0100, David Alves wrote:
> > > > >  > Hi again
> > > > >  >
> > > > >  >       As I was almost at the end (80%) of the indexable docs, for
> > > > >  > the time being I simply increased the stack size, which seemed to
> > > > >  > work.
> > > > >  >       Thanks for your input St.Ack, it really helped me solve the
> > > > >  > problem, at least for the moment.
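> > > > >  > (For the record, that was just a bigger per-thread stack for the
> > > > >  > region server JVM: something like -Xss4m in the JVM options in
> > > > >  > conf/hbase-env.sh; the exact variable name and size here are from
> > > > >  > memory.)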
> > > > >  >       On another note: in the same method I changed the way the
> > > > >  > scanner is obtained when htable.getStartKeys() returns more than
> > > > >  > one key, so that I could limit each read to a single region, with
> > > > >  > the scan starting at the last region. Strangely, the number of
> > > > >  > keys returned by htable.getStartKeys() was always 1, even though
> > > > >  > by the end there were already 21 regions.
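> > > > >  > What I tried looks roughly like this (a sketch; it assumes that
> > > > >  > getStartKeys() returns one start key per region and that 'columns'
> > > > >  > is the column set I am reading):
> > > > >  >
> > > > >  >   Text[] startKeys = table.getStartKeys();
> > > > >  >   // scan only the last region by starting at its first row key
> > > > >  >   HScannerInterface scanner = table.obtainScanner(
> > > > >  >       columns, startKeys[startKeys.length - 1]);
> > > > >  >
> > > > >  > but startKeys.length always comes back as 1.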
> > > > >  >       Any thoughts?
> > > > >  >
> > > > >  > Regards
> > > > >  > David Alves
> > > > >  >
> > > > >  > > -----Original Message-----
> > > > >  > > From: stack [mailto:stack@duboce.net]
> > > > >  > > Sent: Sunday, March 30, 2008 9:36 PM
> > > > >  > > To: hbase-user@hadoop.apache.org
> > > > >  > > Subject: Re: StackOverFlow Error in HBase
> > > > >  > >
> > > > >  > > You're doing nothing wrong.
> > > > >  > >
> > > > >  > > The filters as written recurse until they find a match. If there
> > > > >  > > are long stretches between matching rows, then you will get a
> > > > >  > > StackOverflowError. The filters need to be changed. Thanks for
> > > > >  > > pointing this out. Can you do without them for the moment, until
> > > > >  > > we get a chance to fix it? (HBASE-554)
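> > > > >  > >
> > > > >  > > The gist of the fix will be to make the skip loop iterative
> > > > >  > > instead of recursive. An illustrative sketch only, not the
> > > > >  > > actual patch (getNextRaw and filter.filter are stand-in names):
> > > > >  > >
> > > > >  > >   // Recursive form (simplified): one stack frame per filtered
> > > > >  > >   // row, so a long non-matching stretch overflows the stack.
> > > > >  > >   boolean next(HStoreKey key, SortedMap<Text, byte[]> results)
> > > > >  > >       throws IOException {
> > > > >  > >     if (!getNextRaw(key, results)) {
> > > > >  > >       return false;
> > > > >  > >     }
> > > > >  > >     if (filter.filter(key.getRow())) {  // row is filtered out
> > > > >  > >       results.clear();
> > > > >  > >       return next(key, results);        // recursion grows the stack
> > > > >  > >     }
> > > > >  > >     return true;
> > > > >  > >   }
> > > > >  > >
> > > > >  > >   // Iterative form: constant stack depth no matter how many
> > > > >  > >   // rows are skipped.
> > > > >  > >   boolean next(HStoreKey key, SortedMap<Text, byte[]> results)
> > > > >  > >       throws IOException {
> > > > >  > >     while (getNextRaw(key, results)) {
> > > > >  > >       if (!filter.filter(key.getRow())) {
> > > > >  > >         return true;                    // found a matching row
> > > > >  > >       }
> > > > >  > >       results.clear();                  // skip it, keep scanning
> > > > >  > >     }
> > > > >  > >     return false;
> > > > >  > >   }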
> > > > >  > >
> > > > >  > > Thanks,
> > > > >  > > St.Ack
> > > > >  > >
> > > > >  > > David Alves wrote:
> > > > >  > > > Hi St.Ack and all
> > > > >  > > >
> > > > >  > > >   The error always occurs when trying to see if there are more
> > > > >  > > > rows to process.
> > > > >  > > >   Yes, I'm using a filter (RegExpRowFilter) to select only the
> > > > >  > > > rows (any row key) that match a specific value in one of the
> > > > >  > > > columns.
> > > > >  > > >   Then I obtain the scanner, just test the hasNext method,
> > > > >  > > > close the scanner and return.
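> > > > >  > > > Roughly (a sketch; the column name and value are made up, the
> > > > >  > > > RegExpRowFilter constructor is from memory, and my "hasNext" is
> > > > >  > > > just a single next() call on the scanner):
> > > > >  > > >
> > > > >  > > >   Map<Text, byte[]> criteria = new HashMap<Text, byte[]>();
> > > > >  > > >   criteria.put(new Text("meta:state"), "pending".getBytes());
> > > > >  > > >   HScannerInterface scanner = table.obtainScanner(columns,
> > > > >  > > >       new Text(""), new RegExpRowFilter(".*", criteria));
> > > > >  > > >   try {
> > > > >  > > >     // is there at least one matching row? the error is thrown here
> > > > >  > > >     return scanner.next(new HStoreKey(),
> > > > >  > > >         new TreeMap<Text, byte[]>());
> > > > >  > > >   } finally {
> > > > >  > > >     scanner.close();
> > > > >  > > >   }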
> > > > >  > > >   Am I doing something wrong?
> > > > >  > > >   Still, a StackOverflowError is not supposed to happen, right?
> > > > >  > > >
> > > > >  > > > Regards
> > > > >  > > > David Alves
> > > > >  > > > On Thu, 2008-03-27 at 12:36 -0700, stack wrote:
> > > > >  > > >
> > > > >  > > >> You are using a filter? If so, tell us more about it.
> > > > >  > > >> St.Ack
> > > > >  > > >>
> > > > >  > > >> David Alves wrote:
> > > > >  > > >>
> > > > >  > > >>> Hi guys
> > > > >  > > >>>
> > > > >  > > >>>         I'm using HBase to keep data that is later indexed.
> > > > >  > > >>>         The data is indexed in chunks, so the cycle is: get
> > > > >  > > >>> XXXX records, index them, check for more records, etc...
> > > > >  > > >>>         When I tried the candidate-2 instead of the old
> > > > >  > > >>> 0.16.0 (which I switched from due to the regionservers
> > > > >  > > >>> becoming unresponsive), I got the error at the end of this
> > > > >  > > >>> email well into an indexing job.
> > > > >  > > >>>         Do you have any idea why? Am I doing something wrong?
> > > > >  > > >>>
> > > > >  > > >>> David Alves
> > > > >  > > >>>
> > > > >  > > >>> java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.StackOverflowError
> > > > >  > > >>>         at java.io.DataInputStream.readFully(DataInputStream.java:178)
> > > > >  > > >>>         at java.io.DataInputStream.readLong(DataInputStream.java:399)
> > > > >  > > >>>         at org.apache.hadoop.dfs.DFSClient$BlockReader.readChunk(DFSClient.java:735)
> > > > >  > > >>>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:234)
> > > > >  > > >>>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
> > > > >  > > >>>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
> > > > >  > > >>>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:157)
> > > > >  > > >>>         at org.apache.hadoop.dfs.DFSClient$BlockReader.read(DFSClient.java:658)
> > > > >  > > >>>         at org.apache.hadoop.dfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:1130)
> > > > >  > > >>>         at org.apache.hadoop.dfs.DFSClient$DFSInputStream.read(DFSClient.java:1166)
> > > > >  > > >>>         at java.io.DataInputStream.readFully(DataInputStream.java:178)
> > > > >  > > >>>         at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:56)
> > > > >  > > >>>         at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:90)
> > > > >  > > >>>         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1829)
> > > > >  > > >>>         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1729)
> > > > >  > > >>>         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1775)
> > > > >  > > >>>         at org.apache.hadoop.io.MapFile$Reader.next(MapFile.java:461)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HStore$StoreFileScanner.getNext(HStore.java:2350)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HAbstractScanner.next(HAbstractScanner.java:256)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HStore$HStoreScanner.next(HStore.java:2561)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1807)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> > > > >  > > >>>         ...

