Subject: Re: First mass user tests results
From: David Pollak <feeder.of.the.bears@gmail.com>
To: esme-dev@incubator.apache.org
Date: Tue, 17 Nov 2009 13:36:03 -0800

Markus,

Interesting information.

I found a bug in the Lift Actors (fix checked into master this morning) where the Actor thread pool would grow unbounded. Given the amount of message passing in ESME, I think threads were being created rather than messages being queued. I'd be interested to see if a build of ESME made with the current SNAPSHOT of Lift exhibits the same thread explosion issue.

Also, I don't think the attachments made it through.

Thanks,

David

On Tue, Nov 17, 2009 at 1:25 PM, Markus Kohler wrote:

> Hi all,
> Yesterday night I finally got some tests running.
> I am still focused on single-threaded (serial) tests, using Selenium RC
> Java to control one Firefox browser.
>
> *Test 1, Creating Users*
> The first test creates the 300+x users, using a CSV file I generated from
> my Twitter followers. The test script enters the user data, including the
> URL for the avatar, and then logs out. That means that during the test
> only one user is logged on at any given point in time. Sorry, I didn't
> make any screenshots of the Stax monitor; I learned in the meantime that
> this would have been a good idea.
> The number of threads went up to 130, which I find surprising, given that
> there were no users on the system in parallel.
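The thread growth described above is exactly the failure mode David's fix targets: work should queue rather than spawn threads. As a hedged illustration only (plain JDK code, not the actual Lift Actor implementation), a fixed-size `ThreadPoolExecutor` shows the queue-instead-of-spawn behavior:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPool {
    public static void main(String[] args) throws InterruptedException {
        // A pool with core == max size: excess tasks wait in the queue
        // instead of forcing new thread creation, so the thread count
        // stays flat no matter how many tasks are submitted.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4,                       // core and max pool size (fixed)
                60L, TimeUnit.SECONDS,      // keep-alive for idle threads
                new LinkedBlockingQueue<>());

        for (int i = 0; i < 100; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(10);       // simulated message handling
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // Even with 100 submissions, the pool holds at most 4 threads.
        System.out.println("pool threads: " + pool.getPoolSize());
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

A pool that instead grows on every submission produces exactly the 130-and-climbing thread counts seen in the test, which matches David's description of the unbounded-growth bug.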
>
> *Test 2, Logon each user*
> In the second test I log on each user and do not log out afterwards. The
> idea was to see what the memory overhead of one user is. I achieved this
> with one browser by clearing the cookies after each user has logged on.
> The memory_allUsers attachment shows that the number of threads increased
> dramatically, beyond 1000.
>
> The memory also went up, but this is not at all an issue at the moment.
> Compared to what I've seen so far in enterprise apps it's very low!
>
> After the test was run, I tried with one browser whether everything would
> still work fine. This caused unexpected behavior of the server; see the
> cpu_allUsers and memory_allUsers2 attachments.
> The system load went up dramatically and stayed there for a while. When
> entering a message, the message would appear only very slowly, or not at
> all, in the user's timeline. The number of threads would go down after a
> while, but there was a second peak. Not sure where it came from.
>
> What's also interesting is that the number of classes grew over time.
> I would assume that full GCs were running, so the classes should have
> been reclaimed if they were only temporary.
>
> Note that Stax seems to run on Tomcat 6, without the async/Comet support
> of the Servlet 3.0 API. They will wait for 7.0 to support that.
>
> As soon as I have some time, I will rerun the test on my local machine,
> where I have more tools to check what is going on.
> I will also first run it on Jetty to see whether it performs better.
>
> Still, I would assume that NW CE will show the same issues, and sooner or
> later we will have to figure out the root cause.
>
> Greetings,
> Markus

-- 
Lift, the simply functional web framework http://liftweb.net
Beginning Scala http://www.apress.com/book/view/1430219890
Follow me: http://twitter.com/dpp
Surf the harmonics
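For anyone rerunning these tests locally, the live and peak thread counts read off the Stax monitor can also be sampled in-process. A minimal sketch using the JDK's ThreadMXBean (illustrative tooling only, not part of Markus's Selenium test harness):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadWatch {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // Live count is the current number of threads (daemon + non-daemon);
        // peak count is the high-water mark since JVM start (or last reset),
        // which is the number that blew past 1000 in the logon test.
        System.out.println("live threads: " + mx.getThreadCount());
        System.out.println("peak threads: " + mx.getPeakThreadCount());
    }
}
```

Polling these two numbers once a second during a test run gives the same thread-growth curve as the external monitor, without needing screenshots.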