tomcat-users mailing list archives

From David Kerber <dcker...@verizon.net>
Subject Re: Code performance question #2
Date Tue, 08 Aug 2006 12:43:40 GMT
Peter Crowther wrote:

>>From: David Kerber [mailto:dckerber@verizon.net]
>>Do you think it would be more efficient to scan the string once and
>>grab the field values as I get to each field marker?
>
>Yes.
>
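That's what I figured.  For the record, here's a minimal sketch of the 
single-pass scan I have in mind (the '|' delimiter and the field layout are 
placeholders, not the real format of my data lines):

    import java.util.ArrayList;
    import java.util.List;

    public class SinglePassSplit {

        // Walk the line exactly once, cutting out each field as its
        // delimiter is reached, instead of re-scanning from the start
        // of the string for every field.
        public static List<String> split(String line, char delim) {
            List<String> fields = new ArrayList<String>();
            int start = 0;
            for (int i = 0; i < line.length(); i++) {
                if (line.charAt(i) == delim) {
                    fields.add(line.substring(start, i));
                    start = i + 1;
                }
            }
            fields.add(line.substring(start));  // last field, after the final delimiter
            return fields;
        }

        public static void main(String[] args) {
            // prints [A=1, B=2, C=3]
            System.out.println(split("A=1|B=2|C=3", '|'));
        }
    }

The point is that each character gets examined exactly once per line.
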
>>Yes, the machine is cpu-bound.  My 768k data line will spike the cpu to
>>100% and hold it above 95% until the transmission queues on the other
>>end of the WAN are caught up.
>
>(Wince).  Ouch.
>
Yeah.  That surprised me when I first noticed it, too.  I never expected 
a 768k pipe to saturate this cpu.

>>Watching task manager tells me that the disk subsystem seems to be able
>>to keep up.
>
>If you're on Windows, start up Performance Monitor and add the following
>counters:
>
>Processor: % CPU time
>Memory: Pages/sec
>Physical disk: Avg disk queue length (by array or disk if you have more
>than one)
>
>(I would add network: Bytes/sec, but it seems that's not the bottleneck)
>
>The key disk counter is the queue length.  MS documentation suggests
>that when the average queue length climbs above 2 per spindle in the
>array, your disk subsystem is likely to be the bottleneck.  So if you
>have a 5-disk array, queues that climb over 10 show a disk issue.
>
I'll check on that; I never knew what values to look for to spot 
disk-bound problems.

>Memory: Pages/sec is a general indicator of paging traffic.
>Consistently high values tend to show a paging problem that could
>possibly be solved by adding RAM.
>
I already added some RAM when I noted that the allocated memory was 
larger than the physical RAM.  Now it only has about 700MB allocated, 
with 1.5GB of physical RAM.  Wouldn't hurt to do some more checking, though.
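
One other quick check I can add is logging the JVM's own view of its heap; 
this only covers the Java heap, not the whole process, so it complements 
the Pages/sec counter rather than replacing it:

    public class HeapCheck {
        public static void main(String[] args) {
            // The same three calls could be logged periodically from the
            // servlet; they report the Java heap only, not native memory.
            Runtime rt = Runtime.getRuntime();
            long usedMb  = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            long totalMb = rt.totalMemory() / (1024 * 1024);
            long maxMb   = rt.maxMemory()   / (1024 * 1024);
            System.out.println("Heap: " + usedMb + " MB used, " + totalMb
                    + " MB allocated, " + maxMb + " MB max");
        }
    }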

>Processor: % CPU time is a general processor counter.  Sustained values
>above 80% tend to indicate a bottleneck according to MS.  It's sometimes
>worth adding the % user time counter as well to see whether the issue is
>your code or OS code.
>
Good point.

>>I haven't run a profiler on this code; I've tried, but getting the
>>configuration figured out has stumped me every time.  I picked out these
>>particular routines (and one other I haven't posted) because of the
>>principle that 90% of the cpu time is taken by 10% of the code, and
>>these routines are the only loops in the entire servlet (i.e. the only
>>lines of code which are executed more than once per incoming data line).
>
>Seems like a reasonable heuristic, I agree.  You may find that Tomcat
>itself is the bottleneck - this is an area where profiling is of great
>help.
>
Yes, that's something I've considered.  I'm trying to pick the 
low-hanging fruit first and make sure my code is reasonably efficient 
before I go pointing fingers at Tomcat.  It may turn out that I just 
need to throw more hardware at the problem.

>However, I'd beware hidden complexity: the complexity behind
>function calls into third-party libraries.  For example, you say you're
>decrypting the arguments.  Depending on the exact crypto algorithm used,
>this ranges from moderately expensive to horribly expensive; once again,
>profiling would reveal this, and might indicate where a change to the
>crypto could be of benefit.
>
It's a home-grown light-encryption algorithm, but based on responses you 
guys have posted to my two questions, I have some ideas on things to 
check there as well.
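
The first thing I'll try there is timing the decrypt step by itself over a 
batch of lines, outside Tomcat, to see whether it's even worth touching.  
Roughly like this (decryptLine() is a stand-in, not the routine's real name):

    public class DecryptTiming {
        public static void main(String[] args) {
            String sample = "A=1|B=2|C=3";   // made-up sample line
            int iterations = 100000;
            int sink = 0;                    // keep the result live so the
                                             // loop can't be optimized away
            long start = System.currentTimeMillis();
            for (int i = 0; i < iterations; i++) {
                sink += decryptLine(sample).length();
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(iterations + " decrypts in " + elapsed
                    + " ms (sink=" + sink + ")");
        }

        // Stand-in for the home-grown decryption routine.
        private static String decryptLine(String line) {
            return line;
        }
    }

If that comes out as a tiny fraction of the per-line budget, the crypto 
isn't the problem and I can stop looking at it.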

>Can you set up a simple Java test harness outside your servlet that
>simply calls the servlet's service routine repeatedly with a few sample
>lines?  If you can construct something that will run outside Tomcat,
>it'll be easier to instrument and you'll be able to analyse the impact
>of your tuning changes more easily.  I also see Mark's willing to help
>getting a profiler set up... :-).
>
>Sorry to point you off in a different direction from your likely
>preferred route, but I've seen a lot of people spend a lot of time
>optimising the wrong area of their code.  In a past life, I wrote
>highly-optimised classifier code for an inference engine (admittedly in
>C++); I found a profiler was the only way to work out what was
>*actually* happening.  I ended up getting a factor of 20 out of my code
>by combining optimisations in the most unlikely places, giving the
>company the fastest engine in the world at that time.  I simply couldn't
>have done that with static analysis - I kept guessing wrong!
>
>		- Peter
>
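For the test harness idea, here's roughly what I'll start with: it feeds 
captured sample lines through the same per-line processing the servlet does, 
just outside Tomcat, so it's easy to run under a profiler or with simple 
timing.  The class and method names are placeholders, not my real code:

    public class ProcessorHarness {

        public static void main(String[] args) {
            // Captured sample lines; contents are made up for illustration.
            String[] samples = {
                "A=1|B=2|C=3",
                "A=4|B=5|C=6"
            };
            int iterations = 200000;
            long start = System.currentTimeMillis();
            for (int i = 0; i < iterations; i++) {
                process(samples[i % samples.length]);
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(iterations + " lines in " + elapsed + " ms");
        }

        // Stand-in for the servlet's real per-line work
        // (decrypt + field scan + whatever follows).
        private static void process(String line) {
        }
    }
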
Thanks, Peter!  I'll post back when I get more useful information, 
including how much the various suggestions helped.

Dave


