Received: by taz.hyperreal.com (8.7.6/V2.0)
    id MAA27765; Sat, 23 Nov 1996 12:45:46 -0800 (PST)
Received: from umr.edu by taz.hyperreal.com (8.7.6/V2.0) with ESMTP
    id MAA27757; Sat, 23 Nov 1996 12:45:43 -0800 (PST)
Received: from [131.151.253.147] (dialup-pkr-9-10.network.umr.edu [131.151.253.147])
    via ESMTP by hermes.cc.umr.edu (8.7.5/R.4.20) id OAA24834;
    Sat, 23 Nov 1996 14:33:34 -0600 (CST)
X-Sender: nneul@pop3.umr.edu
Message-Id:
In-Reply-To: <199611231936.TAA03430>
References: <199611231837.TAA10510@icarus.demon.co.uk> from "Andrew Ford"
    at Nov 23, 96 07:37:01 pm
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Sat, 23 Nov 1996 14:32:33 -0600
To: new-httpd@hyperreal.com
From: Nathan Neulinger
Subject: Re: suggestion for automatic log file rotation
Sender: new-httpd-owner@apache.org
Precedence: bulk
Reply-To: new-httpd@hyperreal.com

>My question is what do people think of the suggestion?
>
>You need to take care of cases where people would want to use
>date-stamped filenames but they don't rotate their logs each day.
>
>e.g. there's a problem if someone uses this feature to name their
>logfiles but only rotates them once a week/month/etc, or even at an hour
>other than midnight. If the server is restarted on a different day, it'll
>insist on opening new logs when the intention was that it continue with
>the old ones.
>
>Renaming the logfiles when you're done with them is simple to do.

I personally don't see the big deal about processing the logs with another
utility. I have something set up for all of our servers at our site that
periodically (hourly in some cases) shuffles the logs.

Our setup processes the logs into:

    /some/dir/www.server.name:port/access-YYYY-MMM
    /some/dir/www.server.name:port/error-YYYY-MMM

Those files are automatically gzipped; the log processor handles either
case. (A rough sketch of this layout is appended after the sig.) That
central dir is then automatically processed into HTML files for the stats.
The data files can then be removed, leaving the stats HTML in place.

There is no need to kill -HUP or do anything to the server; all you need
to do is null out the file. The next write by the server will pick up
where it left off. Basically, it just creates a file with a hole in it.
This is not really a problem, as the log processor will just skip over
the nulls. (And with some minor effort it can be made to ignore the holes
altogether. The second sketch below shows both steps.)

It works well for us, anyway.

-- Nathan

------------------------------------------------------------
Nathan Neulinger                     Univ. of Missouri - Rolla
EMail: nneul@umr.edu                 Computing Services
WWW: http://www.umr.edu/~nneul       SysAdmin: rollanet.org
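
A minimal sketch of the kind of hourly shuffle described above, written in
Python for illustration rather than whatever shell/Perl glue the original
setup actually used. The ARCHIVE_ROOT and SERVER values, the log paths, and
the function names are made up; only the access-YYYY-MMM / error-YYYY-MMM
layout and the gzipped-or-not handling come from the message.

    import gzip
    import os
    import shutil
    import time

    ARCHIVE_ROOT = "/some/dir"            # same placeholder as in the message
    SERVER       = "www.server.name:80"   # hypothetical server name:port

    def archive_path(kind):
        # e.g. /some/dir/www.server.name:80/access-1996-Nov
        return os.path.join(ARCHIVE_ROOT, SERVER,
                            "%s-%s" % (kind, time.strftime("%Y-%b")))

    def open_archive(path):
        # The monthly file may or may not have been gzipped already; append
        # to whichever form exists. Appending to a .gz adds a new gzip
        # member, which zcat and friends read transparently.
        if os.path.exists(path + ".gz"):
            return gzip.open(path + ".gz", "ab")
        return open(path, "ab")

    def shuffle(live_log, kind):
        # Copy the live server log into this month's archive, then null the
        # live file out (see the next sketch) so nothing gets copied twice.
        dest = archive_path(kind)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(live_log, "rb") as src, open_archive(dest) as out:
            shutil.copyfileobj(src, out)
        open(live_log, "w").close()

    # Run from cron, e.g. hourly:
    #   shuffle("/usr/local/apache/logs/access_log", "access")
    #   shuffle("/usr/local/apache/logs/error_log",  "error")

This naive version loses anything the server writes between the copy and the
truncate; that window is outside what the message describes.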
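
And a sketch of the null-out trick plus a hole-skipping reader, under the
behaviour the message describes: the server's next write lands at its old
file offset, so the truncated file comes back as a sparse file whose leading
bytes are NULs. The function names and the decode choice are assumptions,
not part of the original log processor.

    def null_out(path):
        # Equivalent to ": > access_log" -- truncate the file in place.
        # The server's open descriptor (and its write offset) is untouched,
        # so its next write recreates the file as a sparse one: a run of
        # NUL bytes (the hole) followed by the new log lines.
        open(path, "w").close()

    def log_lines(path):
        # Reader side: strip the NUL padding off the front of each line and
        # drop anything that is empty once the padding is gone, which is all
        # it takes to "skip over the nulls".
        with open(path, "rb") as f:
            for raw in f:
                line = raw.lstrip(b"\x00").rstrip(b"\r\n")
                if line:
                    yield line.decode("ascii", "replace")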