tomcat-dev mailing list archives

From Jean-frederic Clere <jfcl...@gmail.com>
Subject Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java
Date Thu, 26 Oct 2006 19:58:41 GMT
Peter Rossbach wrote:

> Hi,
>
> for other server os's I found:
>
> =============
> For AIX: To see the current TCP_TIMEWAIT value, run the following
> command:
> /usr/sbin/no -a | grep tcp_timewait
>
> To set the TCP_TIMEWAIT value to 15 seconds, run the following command:
> /usr/sbin/no -o tcp_timewait=1
>
> The tcp_timewait option is used to configure how long connections are  
> kept in the timewait state. It is given in 15-second intervals, and  
> the default is 1.
> ============
> For Linux: Set the timeout_timewait parameter using the following
> command:
> /sbin/sysctl -w net.ipv4.vs.timeout_timewait=30
> This will set TIME_WAIT to 30 seconds.


No... My machine (Debian, kernel 2.6.13) says:
+++
jfclere@jfcexpert:~$ sudo /sbin/sysctl -w net.ipv4.vs.timeout_timewait=30
error: "net.ipv4.vs.timeout_timewait" is an unknown key
+++
net.ipv4.tcp_fin_timeout is probably the thing to use:
+++
jfclere@jfcexpert:~$ more  /proc/sys/net/ipv4/tcp_fin_timeout
60
+++
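To make a change to that parameter survive a reboot, the usual place is /etc/sysctl.conf. A minimal sketch, assuming a Linux 2.6 kernel; the 30-second value is only an example (the kernel default is 60), and note that tcp_fin_timeout governs the FIN_WAIT_2 state rather than TIME_WAIT itself, so it is only an approximation of the AIX/Solaris knobs quoted above:

```
# /etc/sysctl.conf fragment (sketch, assumed Linux 2.6; apply with `sysctl -p`)
# Shortens how long orphaned sockets stay in FIN_WAIT_2; kernel default is 60 s.
net.ipv4.tcp_fin_timeout = 30
```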

Cheers

Jean-Frederic

>
> ============
> For Solaris: Set the tcp_time_wait_interval to 30000 milliseconds as  
> follows:
> /usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 30000
>
> ==
>
> Tips for tuning Mac OS X 10.4 are very welcome :-(
>
> Regards
> Peter Roßbach
> pr@objektpark.de
>
>
>
> On 26.10.2006 at 20:58, Filip Hanik - Dev Lists wrote:
>
>> That's some very good info, it looks like my system never does go  
>> over 30k and cleaning it up seems to be working really well.
>> btw. do you know where I change the cleanup intervals for linux 2.6  
>> kernel?
>>
>> I figured out what the problem was:
>> Somewhere I have a lock/wait problem
>>
>> for example, this runs perfectly:
>> ./ab -n 1 -c 100 http://localhost:$PORT/run.jsp?run=TEST$i
>>
>> If I change -c 100 (100 sockets) to -c 1, each JSP request takes 1  
>> second.
>>
>> so what was happening in my test was running 1000 requests over 400  
>> connections, then invoking 1 request over 1 connection, and repeat.
>> Every time I did the single connection request, it does a 1-second
>> delay, which causes the CPU to drop.
>>
>> So basically, the NIO connector sucks majorly if you are a single  
>> user :), I'll trace this one down.
>> Filip
>>
>>
>> Rainer Jung wrote:
>>
>>> Hi Filip,
>>>
>>> the fluctuation reminds me of something: depending on the client
>>> behaviour connections will end up in TIME_WAIT state. Usually you run
>>> into trouble (throughput stalls) once you have around 30K of them. They
>>> will be cleaned up every now and then by the kernel (talking about the
>>> unix/Linux style mechanisms) and then throughput (and CPU usage) start
>>> again.
>>>
>>> With modern systems handling 10-20k requests per second one can run
>>> into trouble much faster than the usual cleanup intervals.
>>>
>>> Check with "netstat -an" whether you see a lot of TIME_WAIT connections
>>> (thousands). If not, it's something different :(
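Rainer's netstat check can be turned into a quick count. A sketch that uses canned sample lines so it runs anywhere; against a live box you would pipe real `netstat -an` output instead (the addresses below are made up for illustration):

```shell
# Count sockets stuck in TIME_WAIT. The sample stands in for real
# `netstat -an` output; on a real system: netstat -an | grep -c TIME_WAIT
netstat_sample='tcp 0 0 127.0.0.1:8080 127.0.0.1:45102 TIME_WAIT
tcp 0 0 127.0.0.1:8080 127.0.0.1:45103 TIME_WAIT
tcp 0 0 127.0.0.1:8080 127.0.0.1:45104 ESTABLISHED'

printf '%s\n' "$netstat_sample" | grep -c TIME_WAIT   # prints 2
```

If the count is in the tens of thousands, the TIME_WAIT accumulation Rainer describes is the likely cause of the throughput stalls.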
>>>
>>> Regards,
>>>
>>> Rainer
>>>
>>> Filip Hanik - Dev Lists wrote:
>>>
>>>> Remy Maucherat wrote:
>>>>
>>>>> fhanik@apache.org wrote:
>>>>>
>>>>>> Author: fhanik
>>>>>> Date: Wed Oct 25 15:11:10 2006
>>>>>> New Revision: 467787
>>>>>>
>>>>>> URL: http://svn.apache.org/viewvc?view=rev&rev=467787
>>>>>> Log:
>>>>>> Documented socket properties
>>>>>> Added in the ability to cache bytebuffers based on number of  
>>>>>> channels
>>>>>> or number of bytes
>>>>>> Added in nonGC poller events to lower CPU usage during high  traffic
>>>>>
>>>>> I'm starting to get emails again, so sorry for not replying.
>>>>>
>>>>> I am testing with the default VM settings, which basically means  
>>>>> that
>>>>> excessive GC will have a very visible impact. I am testing to
>>>>> optimize, not to see which connector would be faster in the real  
>>>>> world
>>>>> (probably neither unless testing scalability), so I think it's
>>>>> reasonable.
>>>>>
>>>>> This fixes the paranormal behavior I was seeing on Windows, so  
>>>>> the NIO
>>>>> connector works properly now. Great ! However, I still have NIO  
>>>>> which
>>>>> is slower than java.io which is slower than APR. It's ok if some
>>>>> solutions are better than others on certain platforms of course.
>>>>>
>>>>>
>>>> thanks for the feedback, I'm testing with larger files now, 100k+  and
>>>> also see APR->JIO->NIO
>>>> NIO has a very funny CPU telemetry graph, it fluctuates way too
>>>> much, so
>>>> I have to find where in the code it would do this, so there is still
>>>> some work to do.
>>>> I'd like to see a nearly flat CPU usage when running my test, but
>>>> instead the CPU goes from 20-80% up and down, up and down.
>>>>
>>>> during my test
>>>> (for i in $(seq 1 100); do echo -n "$i."; ./ab -n 1000 -c 400
>>>> http://localhost:$PORT/104k.jpg 2>&1 |grep "Requests per"; done)
>>>>
>>>> my memory usage goes up to 40MB, then after a FullGC it goes down to
>>>> 10MB again, so I wanna figure out where that comes from as well. My
>>>> guess is that all that data is actually in the java.net.Socket  
>>>> classes,
>>>> as I am seeing the same results with the JIO connector, but not with
>>>> APR(cause APR allocates mem using pools)
>>>> Btw, had to put the byte[] buffer back into the
>>>> InternalNioOutputBuffer.java, ByteBuffers are way too slow.
>>>>
>>>> With APR, I think the connections might be lingering too long as
>>>> eventually, during my test, it stops accepting connections. Usually
>>>> around the 89th iteration of the test.
>>>> I'm gonna keep working on this for a bit, as I think I am getting  
>>>> to a
>>>> point with the NIO connector where it is a viable alternative.
>>>>
>>>> Filip
>>>>
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: dev-unsubscribe@tomcat.apache.org
>>>> For additional commands, e-mail: dev-help@tomcat.apache.org
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>



