From: André Warnier
Date: Fri, 20 May 2011 09:01:12 +0200
To: Tomcat Users List
Subject: Re: Application crash after Migrate to different ESX

הילה wrote:
...
>
>> So, if you are using that pool, I basically do not understand why you
>> would need any additional mechanism to overcome the loss of a db
>> connection when your VM is migrated.
>> Particularly that "smart keep alive" you keep talking about, but never
>> telling us where it comes from and where it is inserted in that
>> architecture.
>>
> I don't understand what mechanism you are talking about. I don't want to
> add anything (unless it can fix the problem).
> I just stated the problem and the environment we have (Windows 2008,
> Tomcat 6.0.29, jdbc pool... these are the dry facts).
>
> The keep alive is an xslt file which contains actions to perform on the
> DB to check that everything is alive.
> If so, it returns an OK response, which can be viewed as an HTML page in
> an IE browser.
> The load balancer samples the keep alive every 10 seconds and checks for
> the OK state. If it is not OK three times in a row, the LB takes the
> server out of the servers' pool and no one else can connect to it.

So now there is also a load balancer?
I have just gone through all your previous posts, and this is the first
time it has been mentioned.
And it appears that it is the load balancer which tests the DB server
directly (?)
Huh? I am like Chris now, just a bit confused again about your setup.

> So yes, we need this keep alive and can't dismiss it, because it's the
> indication that the server is functioning properly.

Which server?

> And if it's not functioning, the LB can identify it and remove the
> server from the servers' pool so no one will try to approach it.

Again, which server is taken out of the server pool? The DB server, or the
one running Tomcat?
Do you actually mean that the load balancer on one side, and Tomcat on the
other side, are each accessing the DB server in parallel and by different
channels?
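For the record, if that keep alive is served by Tomcat itself, I would
expect it to boil down to something like the servlet sketch below. This is
an illustration only: the class name, the "SELECT 1" probe and the JNDI
name "jdbc/appDB" are hypothetical, not taken from your setup, and your
xslt-based check presumably does more.

import java.io.IOException;
import java.sql.Connection;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class KeepAliveServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try {
            // hypothetical JNDI name for a pool defined in context.xml
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/appDB");
            try (Connection c = ds.getConnection();
                 Statement s = c.createStatement()) {
                s.execute("SELECT 1"); // cheap round trip to the DB
            }
            resp.setContentType("text/plain");
            resp.getWriter().print("OK"); // the marker the LB looks for
        } catch (Exception e) {
            // any failure -> 503; after 3 misses the LB pulls this node
            resp.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
        }
    }
}

The point is that a check of this kind exercises the very pool and
connections that the application itself uses, whereas a load balancer
probing the DB server directly tells you nothing about the state of the
pooled connections inside Tomcat. That is the distinction I am trying to
pin down.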
>>> We use the JTDS driver (I tried the Microsoft JDBC driver, but its
>>> performance is poor compared to the JTDS driver from SourceForge).
>>> Someone in my company suggested that the problem may lie in either of
>>> these JARs.
>>> So I will check with tomcat-dbcp.jar as scenario 1, sqljdbc.jar as
>>> scenario 2, and maybe both combined as scenario 3.
>>> Scenario 4 will be testing the behavior with validation configured.
>>>
>> Sure, add some extra variables to the problem. That will make it a lot
>> simpler to find out what happens.
>>
> No need for sarcasm here. These aren't additional variables. We spoke
> about the connection pool, so this is one of the things I can focus on
> to try and fix the problem.

Yes, that was sarcasm. I was just getting a bit frustrated, because I am
trying to help, but it seems impossible to get logical explanations here,
even about your exact configuration.

So let me try again, graphically. As far as I can tell from your posts,
your configuration is:

hardware:
- a VMware VM with your application, Tomcat, the jdbc pool and the jtds
  driver
- the network
- another machine with the DB
and, somewhere, there is a load balancer with a "smart keep alive" feature
built in.

logical:

Application <--> Tomcat <--> jdbc pool <--> jtds driver <--> network <--> database

At the start, the jdbc pool contains, for example, 10 connections to the
database. At some point there is a network problem, and as a consequence 5
of these connections are broken. But the jdbc pool is not configured to
detect this in advance, so when the application asks for a DB connection,
it may get one of the 5 pooled connections which are broken, in which case
it gets an exception and breaks down. Or it may get a pooled connection
that is not broken, and then everything appears to work fine. (Configuring
the pool to validate connections would avoid exactly this; see the sketch
at the end of this message.)

Now can you tell us where in the above schema the "smart keep alive" fits
in? Or else, correct the above schema to tell us how things really work?
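P.S. If the scenario above is indeed what happens, the usual remedy is to
configure the pool to validate a connection before handing it to the
application, so that broken connections are discarded and replaced instead
of surfacing as exceptions. A minimal sketch with the tomcat-jdbc pool and
the jtds driver follows; the URL, credentials and pool sizes are
placeholders, not your actual settings.

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolSetup {
    public static DataSource createPool() {
        PoolProperties p = new PoolProperties();
        // placeholders: host, database, credentials
        p.setUrl("jdbc:jtds:sqlserver://dbhost:1433/appdb");
        p.setDriverClassName("net.sourceforge.jtds.jdbc.Driver");
        p.setUsername("app");
        p.setPassword("secret");
        p.setInitialSize(10);
        p.setMaxActive(10);
        // validate a connection before it is handed out, but run the
        // validation query at most once every 10 seconds per connection
        p.setTestOnBorrow(true);
        p.setValidationQuery("SELECT 1");
        p.setValidationInterval(10000);
        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}

The same attributes (testOnBorrow, validationQuery, validationInterval)
can also be set on the JNDI Resource element in context.xml when the pool
is defined there.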