From: Richard Hirsch
To: esme-dev@incubator.apache.org
Date: Wed, 18 Nov 2009 04:47:29 +0100
Subject: Re: First mass user tests results

OK. Tell me when the air is clear.

D.

On Wed, Nov 18, 2009 at 4:40 AM, David Pollak wrote:
> On Tue, Nov 17, 2009 at 7:36 PM, Richard Hirsch wrote:
>
>> @Markus:
>>
>> Thanks a lot for performing these tests. They are really important for
>> ESME.
>>
>> I have created a wiki page for performance tests:
>> http://cwiki.apache.org/confluence/display/ESME/Performance+tests
>> Take a look at it and tell me what I'm missing.
>>
>> Send the attachments to my gmail account and I will post them to the
>> ESME wiki. For some reason attachments are never included when
>> submitting to the esme-dev list.
>>
>> For your new tests on stax, you could always use the old snapshot of
>> the DB on stax. This would allow you to recreate the db from these
>> initial tests.
>>
>> Once we have decent performance with the existing one-machine
>> environment on stax (which we currently share with other applications
>> in the stax environment), we could move to a dedicated server in the
>> Amazon cloud or even try more than one server.
>>
>> @David: I'll do a new deployment on stax this morning.
>>
>
> You might want to hold out for a little while... there's something nasty
> just floating around in Lift.
>
>>
>> D.
>>
>> On Tue, Nov 17, 2009 at 10:44 PM, Markus Kohler wrote:
>> > Hi,
>> > If Richard deploys the latest version, it's no problem to repeat the
>> > test.
>> > Can anyone see the PNGs? They show up normally in my gmail.
>> > Regards,
>> > Markus
>> >
>> > On Tue, Nov 17, 2009 at 10:36 PM, David Pollak <
>> > feeder.of.the.bears@gmail.com> wrote:
>> >
>> >> Markus,
>> >>
>> >> Interesting information.
>> >>
>> >> I found a bug in the Lift Actors (fix checked into master this
>> >> morning) where the Actor thread pool would grow unbounded. Given the
>> >> amount of message passing in ESME, I think threads were being created
>> >> rather than queuing messages. I'd be interested to see if a build of
>> >> ESME made with the current SNAPSHOT of Lift exhibits the same thread
>> >> explosion issue.
>> >>
>> >> Also, I don't think the attachments made it through.
>> >>
>> >> Thanks,
>> >>
>> >> David
>> >>
>> >> On Tue, Nov 17, 2009 at 1:25 PM, Markus Kohler wrote:
>> >>
>> >> > Hi all,
>> >> > Yesterday night I finally got some tests running.
>> >> > I still focused on single-threaded (serial) tests using Selenium RC
>> >> > Java to control one Firefox browser.
>> >> >
>> >> > *Test 1, Creating users*
>> >> > The first test creates the 300+x users, using a CSV file I generated
>> >> > from my Twitter followers. The test script enters the user data,
>> >> > including the URL for the avatar, and then logs out. Basically that
>> >> > means that during the test only one user is logged on at any given
>> >> > point in time. Sorry, I didn't make any screenshots of the Stax
>> >> > monitor; I learned in the meantime that this would have been a good
>> >> > idea.
>> >> > The number of threads went up to 130, which I find surprising, given
>> >> > that there were no users on the system in parallel.
>> >> >
>> >> > *Test 2, Logging on each user*
>> >> > In the second test I log on each user and do not log out afterwards.
>> >> > The idea was to see what the memory overhead of one user is. I
>> >> > achieved this with one browser by clearing the cookies after each
>> >> > user has logged on.
>> >> > The memory_allUsers attachment shows that the number of threads
>> >> > increased dramatically, beyond 1000.
>> >> >
>> >> > The memory also went up, but this is not at all an issue atm.
>> >> > Compared to what I've seen so far in enterprise apps it's very low!
>> >> >
>> >> > After the test was run, I tried with one browser whether everything
>> >> > would still work fine. This caused unexpected behavior on the
>> >> > server. See the cpu_allUsers and memory_allUsers2 attachments.
>> >> > The system load went up dramatically and stayed there for a while.
>> >> > When entering a message, the message would appear only very slowly,
>> >> > or not at all, in the user's timeline. The number of threads would
>> >> > go down after a while, but there was a second peak. Not sure where
>> >> > it came from.
>> >> >
>> >> > What's also interesting is that the number of classes grew over
>> >> > time.
>> >> > I would assume that full GCs were running, so the classes should
>> >> > have been reclaimed if they were only of a temporary nature.
>> >> >
>> >> > Note that Stax seems to run on Tomcat 6, without the async/Comet
>> >> > support of the Servlet 3.0 API. They will wait for Tomcat 7.0 to
>> >> > support that.
>> >> >
>> >> > As soon as I have some time, I will rerun the test on my local
>> >> > machine, where I have more tools to check what is going on.
>> >> > I will also first run it on Jetty to see whether it performs better.
>> >> >
>> >> > Still, I would assume that NW CE will show the same issues, and
>> >> > sooner or later we will have to figure out the root cause.
>> >> >
>> >> > Greetings,
>> >> > Markus
>> >>
>> >> --
>> >> Lift, the simply functional web framework http://liftweb.net
>> >> Beginning Scala http://www.apress.com/book/view/1430219890
>> >> Follow me: http://twitter.com/dpp
>> >> Surf the harmonics
>> >
>>
>
> --
> Lift, the simply functional web framework http://liftweb.net
> Beginning Scala http://www.apress.com/book/view/1430219890
> Follow me: http://twitter.com/dpp
> Surf the harmonics
>
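
For reference, here is a minimal sketch (in Scala, against the Selenium RC
client Markus mentions) of the kind of serial user-creation script used in
Test 1. The server address, CSV layout, and form-field locators are
assumptions for illustration only, not ESME's actual pages or ids.

import com.thoughtworks.selenium.DefaultSelenium
import scala.io.Source

object CreateUsersTest {
  def main(args: Array[String]): Unit = {
    // Selenium RC server on localhost:4444 driving a single Firefox instance.
    val selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                                       "http://localhost:8080/")
    selenium.start()

    // CSV exported from Twitter followers: nickname,password,avatarUrl
    for (line <- Source.fromFile("users.csv").getLines()) {
      val Array(nick, password, avatarUrl) = line.split(",").map(_.trim)

      // The paths and locators below are placeholders, not ESME's real form.
      selenium.open("/signup")
      selenium.`type`("id=nickname", nick)
      selenium.`type`("id=password", password)
      selenium.`type`("id=imageUrl", avatarUrl)
      selenium.click("id=save")
      selenium.waitForPageToLoad("30000")

      // Log out again so that only one user is signed in at any point in
      // time, matching the serial setup described in Test 1.
      selenium.open("/logout")
    }

    selenium.stop()
  }
}

For Test 2 the loop would instead keep each session alive on the server:
skip the logout and call selenium.deleteAllVisibleCookies() after each login,
so the next user starts with a fresh browser session while the previous one
stays logged on.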
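
And a tiny, self-contained illustration (plain java.util.concurrent in Scala,
not Lift's actual actor internals) of the thread-explosion pattern David
describes: a cached pool turns a burst of messages into new threads, while a
bounded pool queues the excess.

import java.util.concurrent.{Executors, TimeUnit}

object ThreadGrowthSketch {
  def main(args: Array[String]): Unit = {
    // Unbounded: a cached pool adds a thread whenever all current ones are
    // busy, so a burst of work can translate directly into new threads.
    val unbounded = Executors.newCachedThreadPool()

    // Bounded: a fixed pool of 16 workers; excess tasks wait in the pool's
    // internal queue instead of spawning more threads.
    val bounded = Executors.newFixedThreadPool(16)

    val burst = (1 to 500).map(_ => new Runnable {
      def run(): Unit = Thread.sleep(50)
    })

    burst.foreach(r => unbounded.execute(r)) // thread count can approach 500
    burst.foreach(r => bounded.execute(r))   // thread count stays near 16

    println(s"live JVM threads: ${Thread.activeCount()}")

    unbounded.shutdown(); bounded.shutdown()
    unbounded.awaitTermination(1, TimeUnit.MINUTES)
    bounded.awaitTermination(1, TimeUnit.MINUTES)
  }
}

The unbounded case is roughly the shape of the growth past 1000 threads in
the memory_allUsers chart; the fix checked into Lift master presumably bounds
or queues work in the same spirit.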