httpd-dev mailing list archives

From Marc Slemko <ma...@znep.com>
Subject Re: WWW Form Bug Report: "sockets left in FIN_WAIT_2 lingering state" on SunOS 4.x (fwd)
Date Sun, 22 Dec 1996 02:23:49 GMT
On Sat, 21 Dec 1996, Chuck Murcko wrote:

> Marc Slemko liltingly intones:
> > 
> > On Thu, 19 Dec 1996, Chuck Murcko wrote:
> > 
> > > options         NMBCLUSTERS=4096        # clusters to spare(maybe)!
> > 
> > When I last checked BSD/OS, it hardcoded NMBCLUSTERS.  FreeBSD does it
> > based on maxusers using:
> > 	int     nmbclusters = 512 + MAXUSERS * 16;
> 
> Hardcoded meaning you can change it from the config, but not on a running
> kernel, right?

Hardcoded meaning that it doesn't change based on maxusers: with
BSD/OS 2.0, NMBCLUSTERS is fixed regardless of maxusers.  With
FreeBSD, if you set maxusers to something like 128 then NMBCLUSTERS
will be 512 + 128*16 = 2560.  Apparently (see below) this no longer
applies to BSD/OS 2.1.
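
For reference, the two knobs look roughly like this in a FreeBSD kernel
config file (values purely illustrative, not recommendations):

    maxusers        128                     # formula then gives 512 + 128*16 = 2560
    options         NMBCLUSTERS=4096        # or override the formula outright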

> With either BSD/OS or FreeBSD you can force the NMBCLUSTERS value.
> But you left out the lines that show this. You can do it either way:
> 
> From FreeBSD param.c:
> 
> #ifndef NMBCLUSTERS
> int     nmbclusters = 512 + MAXUSERS * 16;
> #else
> int     nmbclusters = NMBCLUSTERS;
> #endif
> 
> BSD/OS does its soft limit differently:
> 
> int     nmbclusters = NMBCLUSTERS;      /* current/"soft" limit */
> int     maxmbclusters = MAXMBCLUSTERS;  /* hard limit */
> 
> from param.c. BSD/OS dynamically allocates MBCLUSTERS if you don't define
> MAXMBCLUSTERS or NMBCLUSTERS. That is, it picks a value based on RAM in
> the machine. It doesn't grow or shrink the allocated space dynamically
> while running. (max 4096 unless overridden).

Ok, that has been changed since 2.0.  2.0 just set it to a particular
value (256 or 512, depending on whether GATEWAY is defined) regardless
of RAM, maxusers, etc.
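
If I remember the old sources right, the 2.0 definition was just
something along these lines (exact spelling in the BSD/OS param.c may
differ):

    #ifdef GATEWAY
    #define NMBCLUSTERS     512     /* router/gateway build gets more clusters */
    #else
    #define NMBCLUSTERS     256     /* plain host build */
    #endif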


[...]
> It also strengthens my suspicion that the limiting factor on how many
> FIN_WAIT_2 connects a machine can tolerate is driven more by hitting the
> limit of the list used to maintain them than by lossage of RAM. We're
> definitely not out of RAM when our BSDI boxes panic.

Does BSD/OS implement TCP PCB hashing?  If not, you could be running
into trouble trying to search a huge linked list, although that
should just slow things to a crawl, not crash them.  If it does
implement tcpcb hashing, you could be running low on space in the
hash table, causing undesirable results.  It may simply be running
out of mbuf clusters, but it could also be something in the tcpcb
code.
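
To make the linked-list point concrete, an unhashed PCB lookup is
roughly the following (structure and names simplified for illustration,
not the actual 4.4BSD in_pcb code); every incoming segment has to walk
the whole list, including every socket parked in FIN_WAIT_2:

    struct pcb {
            struct pcb      *next;          /* singly linked list of all PCBs */
            unsigned long   laddr, faddr;   /* local/foreign addresses */
            unsigned short  lport, fport;   /* local/foreign ports */
    };

    struct pcb *
    pcb_lookup(struct pcb *head, unsigned long faddr, unsigned short fport,
        unsigned long laddr, unsigned short lport)
    {
            struct pcb *p;

            /* O(n) in the number of PCBs, lingering FIN_WAIT_2 ones included */
            for (p = head; p != NULL; p = p->next)
                    if (p->faddr == faddr && p->fport == fport &&
                        p->laddr == laddr && p->lport == lport)
                            return (p);
            return (NULL);
    }

With hashing, the lookup indexes a bucket from the address/port 4-tuple
first, so the walk covers a handful of entries instead of thousands.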

If you can talk to the right person at BSDI, they may give you some
help with figuring out what is going on and, more importantly,
getting it fixed.  ...although finding the right person may be none
too easy.


