From: darren@ontrenet.com
To: pylucene-dev@lucene.apache.org
Date: Fri, 7 Jan 2011 19:03:56 -0500
Subject: Re: JVM errors

Not really, because each thread is a worker that stays alive
and iterates over work items. The fatality seems to happen when they are
all using the index (Lucene is supposed to be thread-safe for multiple
readers). It crashes without the detach, but when the detach is in there
it crashes _on_ the detach. Weird.

> On Jan 7, 2011, at 15:36, Darren Govoni wrote:
>
>> That's a good thought.
>>
>> I just wish I knew how to interpret the JVM fatal error. I rarely ever
>> see them working with straight Java, so I'm also wondering if something
>> else is at play. It's pretty predictable when it happens.
>> Like at either a vm.detachCurrentThread
>
> Oh. I have no confidence in detachCurrentThread(). Can you pool and
> reuse threads instead?
>
> Andi..
>
>> or perhaps when an object attached to the current thread goes out of
>> scope, but in a stressed environment.
>>
>> On 01/07/2011 06:09 PM, Andi Vajda wrote:
>>>
>>> On Fri, 7 Jan 2011, Darren Govoni wrote:
>>>
>>>> I'll try that and report.
>>>>
>>>> It seems to happen when many threads are attached at once and the
>>>> CPU throttles.
>>>
>>> Maybe some GC settings could also help work around this?
>>>
>>> Andi..
>>>
>>>> On 01/06/2011 09:42 PM, Andi Vajda wrote:
>>>>>
>>>>> On Thu, 6 Jan 2011, Darren Govoni wrote:
>>>>>
>>>>>> Hi,
>>>>>> I am getting these JVM fatal errors:
>>>>>>
>>>>>> #
>>>>>> # A fatal error has been detected by the Java Runtime Environment:
>>>>>> #
>>>>>> # SIGSEGV (0xb) at pc=0x00007f11dc3c093a, pid=6268, tid=139711024641792
>>>>>> #
>>>>>> # JRE version: 6.0_21-b06
>>>>>> # Java VM: Java HotSpot(TM) 64-Bit Server VM (17.0-b16 mixed mode linux-amd64)
>>>>>> # Problematic frame:
>>>>>> # C [libjcc.so+0xa93a] _ZNK6JCCEnv21getObjectArrayElementEP13_jobjectArrayi+0x2a
>>>>>> #
>>>>>> # An error report file with more information is saved as:
>>>>>> # /home/darren/Gridwave/product/trunk/symphony/server/software/hs_err_pid6268.log
>>>>>> #
>>>>>> # If you would like to submit a bug report, please visit:
>>>>>> # http://java.sun.com/webapps/bugreport/crash.jsp
>>>>>> #
>>>>>>
>>>>>> They seem to happen sporadically in a multithreaded situation where
>>>>>> many threads (attached, etc.) are accessing the same index.
>>>>>>
>>>>>> Is there anything I can do on my end to correct this?
>>>>>
>>>>> I've never seen this. Maybe getting a newer JVM?
>>>>>
>>>>> Andi..
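[Editor's note] Andi's suggestion upthread — keep a fixed pool of long-lived
worker threads, attach each one to the JVM once, and feed them work items so
that detachCurrentThread() is never called — could be sketched roughly as
below. The attachCurrentThread() call on lucene.getVMEnv() is real PyLucene
API; everything else (the queue plumbing, the squaring stand-in for actual
index work) is hypothetical scaffolding, and the attach is guarded so the
sketch also runs without a JVM present:

```python
import queue
import threading

def worker(work_queue, results, lock):
    # Attach this thread to the JVM once, at startup; the thread is then
    # reused for many work items and never detached.  Guarded so the
    # sketch still runs where PyLucene is not installed.
    try:
        import lucene
        lucene.getVMEnv().attachCurrentThread()
    except ImportError:
        pass  # no JVM in this standalone sketch

    while True:
        item = work_queue.get()
        if item is None:            # sentinel: shut this worker down
            work_queue.task_done()
            break
        result = item * item        # stand-in for real index work
        with lock:
            results.append(result)
        work_queue.task_done()

def run_pool(items, num_workers=4):
    """Process all items on a fixed pool of long-lived worker threads."""
    work_queue = queue.Queue()
    results, lock = [], threading.Lock()
    threads = [threading.Thread(target=worker, args=(work_queue, results, lock))
               for _ in range(num_workers)]
    for t in threads:
        t.start()
    for item in items:
        work_queue.put(item)
    for _ in threads:
        work_queue.put(None)        # one shutdown sentinel per worker
    for t in threads:
        t.join()
    return sorted(results)

print(run_pool([1, 2, 3, 4, 5]))    # -> [1, 4, 9, 16, 25]
```

For the GC-settings idea, PyLucene's lucene.initVM() accepts initialheap,
maxheap, and vmargs arguments, so collector flags can be passed at VM
startup (the specific flag choice would be an experiment, not a tested
recommendation).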