httpd-dev mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From r..@ai.mit.edu (Robert S. Thau)
Subject Re: Log file and databases
Date Mon, 08 May 1995 15:25:34 GMT
   Date: Mon, 08 May 1995 10:08:02 -0500
   From: Randy Terbush <randy@dsndata.com>
   Precedence: bulk
   Reply-To: new-httpd@hyperreal.com

   CLF is too restrictive (we have said this before)

   It would be nice to be able to log *all* of the server
   request info into a database format.

   I have implemented a Perl-CGI approach and can log the CGI
   environment to dbm logfiles.  However this requires the
   index.cgi approach in every directory.

Hmmm... if external Perl code is involved, I'm not sure how requests
which don't go through some external CGI script get logged.  

   With some recent discussion regarding logging and security issues,
   would it not make sense to let the parent process running as root
   handle all of the logging? (non-forker)

This would be a potential bottleneck.  (Besides which, one nice
feature of the current non-forking code is that the parent process,
which runs as root, is not involved in handling transactions at all;
this makes it simply impossible for skillfully constructed requests to
abuse its privileges).

   Any comments on the system load that could be generated by the
   above mentioned index.cgi approach?

If every transaction involves a CGI hit, it's *quite* severe,
involving a fork *and exec* on each transaction.

   Related to the Perl-CGI approach, there is an interesting API
   developing that can spawn "Minisvr" processes to provide some
   statefulness to the session. Comments?

Don't know about this.  References?

   Should the logger be a separate program?

I would prefer for it not to be --- it's more efficient to have the
processes serving requests write whatever they would have written to
the "separate logger" to a flat file instead, and to process the
contents of that file off-line.

rst




Mime
View raw message