incubator-esme-dev mailing list archives

From Richard Hirsch <hirsch.d...@gmail.com>
Subject Re: First mass user tests results
Date Wed, 18 Nov 2009 03:47:29 GMT
OK. Tell me when the air is clear.

D.

On Wed, Nov 18, 2009 at 4:40 AM, David Pollak
<feeder.of.the.bears@gmail.com> wrote:
> On Tue, Nov 17, 2009 at 7:36 PM, Richard Hirsch <hirsch.dick@gmail.com>wrote:
>
>> @Markus:
>>
>> Thanks a lot for performing these tests. They are really important for
>> ESME.
>>
>> I have created a wiki page for performance tests: (
>> http://cwiki.apache.org/confluence/display/ESME/Performance+tests
>> ). Take a look at it and tell me what I'm missing.
>>
>> Send the attachments to my gmail account and I will post them to
>> the ESME wiki. For some reason attachments are never included when
>> submitting to the esme-dev list.
>>
>> For your new tests on stax, you could always use the old snapshot of
>> the DB on stax. This would allow you to recreate the DB from these
>> initial tests.
>>
>> Once we have decent performance with the existing one-machine
>> environment on stax (which we currently share with other applications
>> in the stax environment), we could move to a dedicated server in the
>> amazon cloud or even try more than one server.
>>
>> @David: I'll do a new deployment on stax this morning.
>>
>
> You might want to hold out for a little while... there's a nasty bug
> floating around in Lift.
>
>
>>
>> D.
>>
>>
>> On Tue, Nov 17, 2009 at 10:44 PM, Markus Kohler <markus.kohler@gmail.com>
>> wrote:
>> > Hi,
>> > If Richard deploys the latest version, it's no problem to repeat the
>> test.
>> > Can anyone see the PNGs? They show up normally in my gmail.
>> > Regards,
>> > Markus
>> >
>> > On Tue, Nov 17, 2009 at 10:36 PM, David Pollak <
>> > feeder.of.the.bears@gmail.com> wrote:
>> >
>> >> Markus,
>> >>
>> >> Interesting information.
>> >>
>> >> I found a bug in the Lift Actors (fix checked into master this morning)
>> >> where the Actor thread pool would grow unbounded.  Given the amount of
>> >> message passing in ESME, I think threads were being created rather than
>> >> queuing messages.  I'd be interested to see if a build of ESME made with
>> >> the current SNAPSHOT of Lift exhibits the same thread explosion issue.
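[For readers following the thread: the "threads created instead of queuing" failure mode David describes can be contrasted with a bounded pool. This is a generic java.util.concurrent sketch, not the actual Lift actor internals.]

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPool {
    public static void main(String[] args) throws InterruptedException {
        // Bounded pool: at most 4 threads; once the core threads are busy,
        // extra tasks wait in the queue instead of forcing thread creation.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,                 // core and maximum pool size
                60, TimeUnit.SECONDS, // idle non-core threads are reclaimed
                new LinkedBlockingQueue<Runnable>(1000)); // bounded backlog

        for (int i = 0; i < 100; i++) {
            pool.execute(() -> { /* simulate handling one message */ });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        // The high-water mark never exceeds the configured maximum.
        System.out.println("bounded: " + (pool.getLargestPoolSize() <= 4));
    }
}
```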
>> >>
>> >> Also, I don't think the attachments made it through.
>> >>
>> >> Thanks,
>> >>
>> >> David
>> >>
>> >> On Tue, Nov 17, 2009 at 1:25 PM, Markus Kohler <markus.kohler@gmail.com
>> >> >wrote:
>> >>
>> >> > Hi all,
>> >> > Last night I finally got some tests running.
>> >> > I am still focusing on single-threaded (serial) tests, using Selenium
>> >> > RC Java to control one firefox browser.
>> >> > *Test 1, Creating Users*
>> >> > The first test creates the 300+x users, using a CSV file I generated
>> >> > from my twitter followers. The test script enters user data, including
>> >> > the URL for the avatar, and then logs out. This means that during the
>> >> > test only one user is logged on at any given point in time. Sorry, I
>> >> > didn't take any screenshots of the Stax monitor; I have since learned
>> >> > that would have been a good idea.
>> >> > The number of threads went up to 130, which I find surprising, given
>> >> > that there were no users on the system in parallel.
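[The serial structure of Test 1 — one session at a time — can be sketched without Selenium; the usernames and avatar URLs below are made-up placeholders, not Markus's real follower CSV.]

```java
import java.util.Arrays;
import java.util.List;

public class SerialUserCreation {
    public static void main(String[] args) {
        // Placeholder rows standing in for the twitter-follower CSV;
        // the real column layout is an assumption.
        List<String> csvRows = Arrays.asList(
                "alice,http://example.org/alice.png",
                "bob,http://example.org/bob.png");

        for (String row : csvRows) {
            String[] cols = row.split(",", 2);
            String user = cols[0], avatarUrl = cols[1];
            // In the real test, Selenium RC fills the signup form here,
            // sets the avatar URL, and then logs out -- so at most one
            // user is ever logged on at a time.
            System.out.println("created " + user + " (" + avatarUrl + ")");
        }
    }
}
```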
>> >> >
>> >> > *Test 2, Logon each user*
>> >> > In the second test I log on each user and do not log out afterwards.
>> >> > The idea was to see what the memory overhead of one user is. I
>> >> > achieved this with one browser by clearing the cookies after each
>> >> > user had logged on.
>> >> > The memory_allUsers attachment shows that the number of threads
>> >> > increased dramatically, beyond 1000.
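[For reference, the thread and heap numbers reported here can also be sampled in-process with the standard JMX management beans; this is a generic monitoring sketch, unrelated to the Stax console itself.]

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class JvmStats {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        int live = threads.getThreadCount();     // currently live threads
        int peak = threads.getPeakThreadCount(); // high-water mark so far
        long heapUsed = memory.getHeapMemoryUsage().getUsed();

        System.out.println("threadsSeen: " + (live >= 1 && peak >= live));
        System.out.println("heapUsed>0: " + (heapUsed > 0));
    }
}
```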
>> >> >
>> >> > The memory also went up, but this is not at all an issue atm. Compared
>> >> > to what I've seen so far in Enterprise apps it's very low!
>> >> >
>> >> > After the test run, I checked with one browser whether everything
>> >> > would still work fine. This caused unexpected behavior on the server.
>> >> > See the cpu_allUsers and memory_allUsers2 attachments.
>> >> > The system load went up dramatically and stayed there for a while.
>> >> > When entering a message, the message would appear only very slowly,
>> >> > or not at all, in the user's timeline. The number of threads would go
>> >> > down after a while, but there was a second peak. Not sure where it
>> >> > came from.
>> >> >
>> >> > What's also interesting is that the number of classes grew over time.
>> >> > I would assume that full GCs were running, so the classes should have
>> >> > been reclaimed if they had been only temporary.
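[Whether those classes are reclaimable can be checked with the class-loading bean: if full GCs unload temporary classes, the unloaded counter climbs, while a loaded count that only grows points at a classloader leak. A minimal sketch:]

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassStats {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        long loaded = cl.getLoadedClassCount();           // currently loaded
        long totalLoaded = cl.getTotalLoadedClassCount(); // ever loaded
        long unloaded = cl.getUnloadedClassCount();       // unloaded so far

        // On a healthy app, (totalLoaded - unloaded) stabilizes over time;
        // here we just confirm the counters are sane.
        System.out.println("loaded>0: " + (loaded > 0));
        System.out.println("unloaded>=0: " + (unloaded >= 0));
    }
}
```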
>> >> >
>> >> > Note that Stax seems to run on Tomcat 6, without the async/Comet
>> >> > support of the Servlet 3.0 API.
>> >> > They will wait for Tomcat 7.0 to support that.
>> >> >
>> >> > As soon as I have some time, I will rerun the test on my local
>> >> > machine, where I have more tools to check what is going on.
>> >> > I will also first run it on Jetty to see whether it performs better.
>> >> >
>> >> > Still, I would assume that NW CE will show the same issues, and
>> >> > sooner or later we will have to figure out the root cause.
>> >> >
>> >> >
>> >> > Greetings,
>> >> > Markus
>> >> >
>> >>
>> >>
>> >> --
>> >> Lift, the simply functional web framework http://liftweb.net
>> >> Beginning Scala http://www.apress.com/book/view/1430219890
>> >> Follow me: http://twitter.com/dpp
>> >> Surf the harmonics
>> >>
>> >
>>
>
>
>
> --
> Lift, the simply functional web framework http://liftweb.net
> Beginning Scala http://www.apress.com/book/view/1430219890
> Follow me: http://twitter.com/dpp
> Surf the harmonics
>
