From: "Trygve Hardersen"
To: user@geronimo.apache.org
Subject: Re: wadi clustering - session invalidation
Date: Thu, 15 Jan 2009 18:31:41 +0100

I can confirm that this solves the shutdown problem.
After some initial testing I'm not seeing other issues either, though I've
not stressed the servers yet.

I'm building a slightly customized Geronimo 2.2-SNAPSHOT, so I just updated
wadiVersion to 2.2-SNAPSHOT in the main POM. I had to check out and build
WADI trunk because some of the required dependencies could not be found in
the Codehaus snapshots repository.

Many thanks for all your help, I appreciate it!

Trygve

On Thu, Jan 15, 2009 at 3:20 PM, Trygve Hardersen wrote:
> Cool, thanks for the quick fix.
>
> I'm seeing other issues that look like race conditions to me, but
> I've yet to find a consistent pattern.
>
> I'll test this and let you know how it goes.
>
> Trygve
>
> On Thu, Jan 15, 2009 at 2:31 PM, Gianny Damour wrote:
>> Hi Trygve,
>>
>> This was a bug and it is now fixed; I also changed the log level, as
>> ERROR was indeed inappropriate.
>>
>> You will need to get a snapshot version of wadi-core, as this was a
>> problem with WADI, which was not properly re-initiating replicas when
>> sessions were evacuated from a node shutting down to the remaining
>> nodes. I will review the re-initialisation of replicas on normal
>> shutdown more closely over the weekend, as there is still a problem in
>> a very specific race condition (I do not think you will be able to
>> observe it).
>>
>> You can get the snapshot here:
>>
>> http://snapshots.repository.codehaus.org/org/codehaus/wadi/wadi-core/2.2-SNAPSHOT/wadi-core-2.2-20090115.131018-1.jar
>>
>> The simplest thing is to replace
>>
>> repository/org/codehaus/wadi/wadi-core/2.1/wadi-core-2.1.jar
>>
>> with this snapshot version. You can also install this artefact in your
>> repo with a version number higher than 2.1, and it will be transparently
>> picked up instead of 2.1.
>>
>> I will need to cut a release of WADI very soon, as these fixes need to
>> be included in Geronimo 2.2, so you should not have to use this snapshot
>> for more than 3-4 days.
>>
>> Thanks,
>> Gianny
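[For archive readers: the wadiVersion bump Trygve describes would look roughly like the fragment below. This is a sketch, not the actual Geronimo POM -- the property name comes from the mail, but the surrounding structure is assumed.]

```xml
<!-- Root pom.xml sketch: a wadiVersion property referenced by the
     WADI dependency entries. Changing it from 2.1 to 2.2-SNAPSHOT
     is the one-line edit described above. -->
<properties>
  <wadiVersion>2.2-SNAPSHOT</wadiVersion>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.codehaus.wadi</groupId>
      <artifactId>wadi-core</artifactId>
      <version>${wadiVersion}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```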
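[For archive readers: Gianny's jar swap can be sketched as the shell sequence below. It runs against a scratch directory with stand-in files so it is safe to try as-is; to do it for real, point WADI_DIR at the repository/org/codehaus/wadi/wadi-core/2.1 directory inside your Geronimo install and use the snapshot jar downloaded from the URL in the mail.]

```shell
# Scratch layout mirroring a Geronimo server repository (stand-in files only).
WADI_DIR=$(mktemp -d)/repository/org/codehaus/wadi/wadi-core/2.1
mkdir -p "$WADI_DIR"
touch "$WADI_DIR/wadi-core-2.1.jar"          # stand-in for the shipped 2.1 jar
touch wadi-core-2.2-20090115.131018-1.jar    # stand-in for the downloaded snapshot

# Back up the shipped jar, then install the snapshot under the 2.1 name
# so the server picks it up without any configuration change.
cp "$WADI_DIR/wadi-core-2.1.jar" "$WADI_DIR/wadi-core-2.1.jar.orig"
cp wadi-core-2.2-20090115.131018-1.jar "$WADI_DIR/wadi-core-2.1.jar"
ls "$WADI_DIR"
```

The alternative Gianny mentions -- installing the artefact under a version higher than 2.1 -- keeps the original jar untouched, at the cost of adding an extra entry to the server repository.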