tomcat-users mailing list archives

From Arijit Ganguly <>
Subject file descriptor (unix socket) leak with NIO handler
Date Sat, 07 Jan 2012 07:23:16 GMT

I have an application running under Tomcat configured with
Http11NioProtocol. I am noticing that under a high connection rate (not
request rate) of about 100 connections/second, Tomcat leaks file
handles corresponding to Unix sockets. I am using the lsof command to list
the file handles open by the Java process, and I observe a growing number
of file handles that all point to the same Unix socket (same inode number)
over time.
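For what it's worth, a quick way to confirm that many descriptors share one target is to group the /proc fd symlinks by what they point to. A sketch (PID is a placeholder; substitute the Tomcat process id, e.g. the 3907 in the listings below):

```shell
# Group a process's fd symlinks by the target they point to; a unix
# socket inode appearing many times is the leak pattern described here.
# PID is a placeholder -- substitute the Tomcat process id.
PID=$$
ls -l /proc/"$PID"/fd \
  | awk -F' -> ' 'NF > 1 { print $2 }' \
  | sort | uniq -c | sort -rn | head
```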

# ls -l /proc/3907/fd/8368
lrwx------ 1 root root 64 Jan  7 00:15 /proc/3907/fd/8368 ->

# ls -l /proc/3907/fd/8366
lrwx------ 1 root root 64 Jan  7 00:15 /proc/3907/fd/8366 ->

# ls -l /proc/3907/fd/8367
lrwx------ 1 root root 64 Jan  7 00:15 /proc/3907/fd/8367 ->

netstat -p does show that the state of these sockets is CONNECTED:

# netstat -p | grep unix | grep java

unix  2      [ ]         STREAM     CONNECTED     605702067 3907/java
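The inode netstat prints (605702067 above) can also be traced back to the specific descriptors holding it. A sketch, with placeholder pid and inode values taken from the output above:

```shell
# List the fds of a process that resolve to a given socket inode.
# PID and INODE are placeholders taken from the lsof/netstat output.
PID=$$
INODE=605702067
ls -l /proc/"$PID"/fd 2>/dev/null \
  | grep "socket:\[$INODE\]" \
  || echo "no fd of $PID holds inode $INODE"
```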

At my anticipated connection rate (100/second), the application leaks
about 150 file handles (corresponding to Unix sockets) each day. I did
some investigation on the issue by connecting to the application via JMX.
The increase in Unix-socket file handles appears to come from improperly
closed connections, since I am also able to reproduce the same leak by
invoking stop() on the HTTP Connector via JMX. Just before stopping the
connector, my application had 4000 open connections (lsof showed these as
4000 IPv6 file handles). On invoking stop() via JMX, the number of IPv6
file handles went down to about 70, while the number of Unix-socket file
handles went up by 4000. It seems that these incorrectly closed
connections reappear as Unix socket handles.

Here's the distribution of file handles before and after the stop():

Command used:
  pid=`ps -ef | grep tomcat | grep -v grep | awk '{print $2}'`
  lsof -p $pid | awk '{print $5}' | sort -n | uniq -c

Before stop():
     10 0000
      4 CHR
      2 DIR
     20 FIFO
   4105 IPv6 -- see the drop
     93 REG
     37 sock
      1 TYPE
     12 unix
      1 unknown

After stop() (Sat Jan  7 00:07:50 SAST 2012):
     10 0000
      4 CHR
      2 DIR
     20 FIFO
     13 IPv6
     93 REG
      1 sock
      1 TYPE
   4138 unix -- see the increase
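Two histograms like these can also be captured and diffed mechanically around the stop() call. A sketch that builds the histogram from /proc directly (so it works even without lsof); the pid and the JMX step are placeholders:

```shell
# Snapshot the fd-target histogram twice and diff, to see which kinds
# of descriptors change across an event such as stopping the connector.
# PID is a placeholder -- substitute the Tomcat process id.
PID=$$
snap() {
    ls -l /proc/"$1"/fd 2>/dev/null \
      | awk -F' -> ' 'NF > 1 { sub(/:.*/, "", $2); print $2 }' \
      | sort | uniq -c
}
snap "$PID" > /tmp/fds.before
# ... trigger the event here (e.g. stop the connector via JMX) ...
snap "$PID" > /tmp/fds.after
# diff exits non-zero when the counts changed; that is the signal here
diff /tmp/fds.before /tmp/fds.after || true
```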

A similar issue about leaking Unix sockets has also been reported
elsewhere. That post also mentions that NIO internally uses Unix sockets,
and that in that case the problem was with the way the application was
using NIO. I suspect there is a race condition in the NIO handler that
causes some resources not to be cleaned up properly under a high
connection rate.

I have tried different Tomcat versions (6.0.32, 6.0.35, 7.0.21, and
7.0.23); they all have this issue. My application is using the default
configuration of the NIO Connector, and I don't think Comet or sendfile
is being used. Furthermore, no Unix sockets show up when I use blocking
IO (the default HTTP handler).

I would appreciate the help of the community in addressing this issue.

