httpd-dev mailing list archives

From Chuck Murcko <>
Subject Re: WWW Form Bug Report: "sockets left in FIN_WAIT_2 lingering state" on SunOS 4.x (fwd)
Date Thu, 19 Dec 1996 19:37:05 GMT
Rob Hartill liltingly intones:
> Chuck Murcko wrote:
> >
> >Rob Hartill liltingly intones:
> >> 
> >> 
> >> Is this documented ?
> >> 
> >Not yet, AFAIK. I will do it later today.
> Can this bring down a server? I can see lots of these via netstat -a
> and one of my machines just died.
> Is netstat supposed to die with "netstat: kvm_read: Bad address" on
> occasion? If not, what's the most likely cause?
> On a related note, what kernel config settings do I need to adjust
> to keep a FreeBSD box happy as a webserver ?
I think I will be writing the docs below. 8^)

The answer to the first question is yes, eventually. A connection stuck in
FIN_WAIT_2 means its mbufs/mbuf clusters (if the system has the latter) are
not freed yet. Eventually the machine either gets kvm-starved (as SunOS will)
or runs out of allocated non-kernel memory or list entries for mbuf clusters.
Without being able to time out FIN_WAIT_2, as Solaris 2.5+ seems to, the
only 'stable' recourse is to allocate a ton of mbufs/mbuf clusters and
restart the machine or, if you can, the TCP stack (IRIX allows this)
periodically. Otherwise you get the odd machine panic.
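A quick way to watch those stuck connections pile up is to tally netstat
output by TCP state. This one-liner is my sketch, not from the original mail;
the sample lines are inlined so it runs standalone, and in practice you'd
feed it real `netstat -an` output instead.

```shell
# Tally TCP connections by state. Sample netstat-style lines are inlined
# here so the pipeline is self-contained; against a live box, use:
#   netstat -an | awk '/^tcp/ { print $NF }' | sort | uniq -c | sort -rn
netstat_sample='tcp  0  0  10.0.0.1.80  10.0.0.2.1025  FIN_WAIT_2
tcp  0  0  10.0.0.1.80  10.0.0.3.1026  FIN_WAIT_2
tcp  0  0  10.0.0.1.80  10.0.0.4.1027  ESTABLISHED'
# Prints the FIN_WAIT_2 count (2) on the first line, ESTABLISHED (1) next.
echo "$netstat_sample" | awk '/^tcp/ { print $NF }' | sort | uniq -c | sort -rn
```

Run it from cron and you can graph how fast FIN_WAIT_2 entries accumulate
between restarts.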

We have a couple of 256 MB and rather more 64 MB BSDI boxen here as
httpd machines, and we use kernel configs similar to the following
(only the relevant lines are included):

maxusers        256	# drives a lot of formulae in /usr/src/sys/conf/param.c

# Network options.  NMBCLUSTERS defines the number of mbuf clusters and
# defaults to 256.  This machine is a gateway that handles lots of traffic,
# so we boost that value.
options         SOMAXCONN=256           # max pending connections
options         NMBCLUSTERS=4096        # clusters to spare (maybe)!

# Misc. options

options         CHILD_MAX=1536          # maximum number of child processes
options         OPEN_MAX=1536           # maximum open fds (breaks RPC svcs)
options         "KMAPENTRIES=1000"      # need more vm_map_entries


I actually set CHILD_MAX and OPEN_MAX down from the values the maxusers
setting would otherwise derive for them.
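To see why setting them down makes sense, here is a back-of-the-envelope
check using the classic 4.4BSD-style param.c derivation, NPROC = 20 + 16 *
maxusers. The exact formula is system-dependent, so treat this as a sketch
and verify against your own /usr/src/sys/conf/param.c.

```shell
# Classic 4.4BSD-style derivation (not quoted from the original mail;
# check your own /usr/src/sys/conf/param.c for the real formula):
#   NPROC = 20 + 16 * maxusers
maxusers=256
nproc=$((20 + 16 * maxusers))
echo "maxusers=$maxusers -> default process limit $nproc (set down to 1536 here)"
```

With maxusers at 256 the derived default lands well above the 1536 chosen
in the config above, which is plenty for a dedicated httpd box.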

This is from a 256 MB Pentium box intended to eventually run with a
HARD_SERVER_LIMIT of 1024. Currently it is running at 512. So far, the most
I've seen running at once is around 300.

The size of an mbuf cluster (MCLBYTES) in 4.4BSD-Lite is 2 kB. So it's a
compromise as to where you want to set that value. Remember, you're
sacrificing RAM that could be running user procs by bumping it up.
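To put a number on that compromise, here is a quick calculation of the RAM
the cluster pool claims. The 2048-byte cluster size is an assumption on my
part (a common 4.4BSD-derived value, and cluster sizes vary across systems),
so check MCLBYTES in your own sys/param.h before trusting the result.

```shell
# RAM reserved for the mbuf cluster pool, assuming MCLBYTES=2048.
# Both numbers are illustrative -- take nmbclusters from your kernel
# config and mclbytes from your sys/param.h.
nmbclusters=4096
mclbytes=2048
echo "$((nmbclusters * mclbytes / 1024 / 1024)) MB of cluster space"
```

On a 64 MB box that's a noticeable slice of memory pinned in the kernel; on
a 256 MB box it's cheap insurance against running the pool dry.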

Chuck Murcko	N2K Inc.	Wayne PA
And now, on a lighter note:
... and furthermore ... I don't like your trousers.
