httpd-users mailing list archives

From "Jacob Coby" <jc...@listingbook.com>
Subject Re: Log rotation/Logging to a database?
Date Wed, 21 Aug 2002 13:53:59 GMT
> Hence log files are growing out of control and consuming valuable
> resources (disc space).

Yikes.  Last I checked, Apache has a 25MB limit on log files, though maybe
that was an artificial limit in my config?  All I know is that Apache puked
once the logs got that big :)  That was my kick in the ass to get log
rotation going.

> Due to the number of simultaneous requests the server/s may endure (I've
> seen 150+ simultaneous instances of httpd), using rotatelogs or cronolog
> does not seem to be an option, i.e. due to the extra two processes created
> and destroyed per httpd instance.  Though perhaps I'm overestimating the
> overhead this would create and need to do some load testing?

I think you are.  Where do you get the 'extra two processes created and
destroyed per httpd instance' bit?  I use logrotate here and haven't
noticed any performance issues with it.  You only need to be sure to
`killall -HUP httpd` after you've rotated the logs out, or else httpd
won't know about the new file and you'll lose everything.  I can send
you my httpd rotate config.
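
In the meantime, mine boils down to roughly this (a sketch; the log path
and pid file location are guesses, adjust for your layout):

    /var/log/httpd/*_log {
        weekly
        rotate 8
        compress
        missingok
        notifempty
        sharedscripts
        postrotate
            # tell httpd to reopen its logs; same effect as
            # `killall -HUP httpd`, but only signals the parent
            /bin/kill -HUP `cat /var/run/httpd.pid`
        endscript
    }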

Granted, we don't have nearly as many instances of httpd running; the
current count is 60, and it will go up to ~100 later this afternoon.

I haven't noticed any corruption of the logfile.  Perhaps a lost entry or
two, but does it really matter that someone accessed a 1k image at 2am?

> I attempted to create a little script to use cronosplit and essentially
> rotate the logs in place, but stress testing showed, as expected, there
> would be corrupt lines and lines out of order if I ran it while an httpd
> process was attempting to log to the same file.

This doesn't happen with logrotate.  You'll get complete entries, but you
may lose one or two while the logs are being rotated. :)

Do you have 150 instances of httpd running all the time, or is there a
time of day when usage is lowest?  Swap out the logs then.
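
For what it's worth, I think you're overestimating the rotatelogs overhead
anyway: Apache spawns a piped logger once per CustomLog/ErrorLog directive
when the parent starts, not once per child, so even with 150 children
you'd have one rotatelogs process per log file, not two per instance.  If
you wanted to try it, it'd look something like this (path is whatever your
install uses; 86400 rotates daily):

    CustomLog "|/usr/local/apache/bin/rotatelogs /var/log/httpd/access_log 86400" common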

> What I'm currently considering:
> pgLOGd - http://www.digitalstratum.com/pglogd/index.php
>
> I'm just beginning to wonder if the overhead of a Postgres database is
> going to be greater than the overhead of an extra 300 processes and the
> associated open file handles?

I doubt it.  Databases usually aren't CPU bound; they're bound by I/O and
physical disk limits, at least until you get into huge databases with
complex queries.  Besides, it looks like pgLOGd is fairly intelligent,
logging to a file until the db catches up.  DBs are very memory hungry,
though, and could starve your webserver if you put both on the same
physical machine.

> I.e. pgLOGd would be used and log files generated from the database at
> regular intervals (e.g. daily)

Doing this is likely to use more resources than just rotating the logs.
Querying, deleting, and writing a couple hundred thousand rows to disk
takes forever.  I recently culled 200,000 rows from a table with 5 million
rows, and the delete alone took about 30 minutes.

Granted, that was deleting based on date; if you just do a straight
truncate, it's very quick.
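Something like (table name hypothetical):

    -- slow: checks the date on, and deletes, each row individually
    DELETE FROM access_log WHERE request_time < '2002-07-01';

    -- fast: throws away every row in the table at once
    TRUNCATE TABLE access_log;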

But then you're back to losing log entries, unless you suspend the pg db or
make pgLOGd write to disk until the log gathering is done.
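
If you did go that route and wanted flat files back out of the db, a
nightly dump from cron would be simple enough.  A rough sketch, with the
table and column names made up:

    # table/columns are hypothetical; -t -A -F give bare,
    # space-separated rows with no headers
    psql -d logs -t -A -F' ' \
      -c "SELECT remote_host, request_time, request, status, bytes_sent \
          FROM access_log WHERE request_time::date = 'yesterday'" \
      > access_log.`date +%Y%m%d`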

-Jacob
http://www.listingbook.com



