httpd-users mailing list archives

From "Koen Vingerhoets" <koen.vingerho...@ubench.com>
Subject RE: Log rotation/Logging to a database?
Date Wed, 21 Aug 2002 09:11:31 GMT
Hi,

Why don't you use a CustomLog to limit the size?

	# Note: \.*$ matches every URI, so this suppresses all logging;
	# narrow the pattern (e.g. \.(gif|jpg|png)$) to skip only some requests.
	SetEnvIf Request_URI \.*$ dontlog
	CustomLog /home/logs/access_log common env=!dontlog

Koen

Kind regards,

Koen Vingerhoets

***** UBench nv *****
http://www.ubench.com
____________________________________________
The information contained in this electronic mail message is privileged and
confidential, and is intended only for use of the addressee. If you are not
the intended recipient, you are hereby notified that any disclosure,
reproduction, distribution or other use of this communication is strictly
prohibited.

If you have received this communication in error, please notify the sender
by reply transmission and delete the message without copying or disclosing
it.


-----Original Message-----
From: Jon Benson [mailto:Jon@destra.com]
Sent: 21 August 2002 10:26
To: 'users@httpd.apache.org'
Subject: Log rotation/Logging to a database?


Hi folks,

Here is my dilemma:

I've recently started in this role, only to find I'm responsible for 2 Apache
servers, each with hundreds of virtual sites (e.g. 1200 on one) and NO log
rotation occurring!  :(

Hence log files are growing out of control and consuming valuable resources
(disc space).

Due to the time it would take to stop Apache, rotate all the logs, and start
Apache again, that's not an option.
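The rotation step itself would only be a rename per log, something like this
sketch (the paths are invented):

	apachectl stop
	for f in /home/logs/*/access_log; do
		mv "$f" "$f.$(date +%Y%m%d)"	# move each log out of the way
	done
	apachectl start

but with hundreds of vhosts per box, the downtime from the stop/start alone
rules it out.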

Due to the number of simultaneous requests the server(s) may endure (I've
seen 150+ simultaneous instances of httpd), using rotatelogs or cronolog does
not seem to be an option, given the extra two processes created and destroyed
per httpd instance.  Though perhaps I'm overestimating the overhead this
would create and need to do some load testing?
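For reference, the piped setup I'd be load testing is roughly this, per vhost
(the rotatelogs/cronolog paths are guesses for my build):

	# one rotatelogs process per piped log, kept alive by the parent httpd;
	# 86400 seconds = rotate daily
	CustomLog "|/usr/local/apache/bin/rotatelogs /home/logs/site1/access_log 86400" common

	# or cronolog, which builds the filename from a date template
	CustomLog "|/usr/sbin/cronolog /home/logs/site1/%Y%m%d.access_log" common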

I attempted to create a little script to use cronosplit and essentially
rotate the logs in place, but stress testing showed, as expected, that there
would be corrupt lines and lines out of order if I ran it while an httpd
process was attempting to log to the same file.


What I'm currently considering:
pgLOGd - http://www.digitalstratum.com/pglogd/index.php

I'm just beginning to wonder if the overhead of a Postgres database is going
to be greater than the overhead of an extra 300 processes and the associated
open file handles?

I.e. pgLOGd would be used, and log files would be generated from the
database at regular intervals (e.g. daily).
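Regenerating the files would presumably be something along these lines
(entirely hypothetical; I don't know pgLOGd's actual schema, so the table and
column names here are made up):

	# dump yesterday's hits to a flat file; adjust for pgLOGd's real schema
	psql -d pglogd -A -t \
		-c "SELECT host, time_stamp, request, status, bytes_sent
		    FROM access_log WHERE time_stamp::date = 'yesterday'" \
		> access_log.$(date +%Y%m%d)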

Any/all comments, particularly from those who may have encountered a similar
dilemma or had experience with pgLOGd, would be most welcome.


Thanks,

Jon Benson
Mail/DNS/Linux Administrator
OzHosting.com

---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org





---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Mime
View raw message