httpd-dev mailing list archives

From "Brian Havard" <>
Subject Re: cvs commit: apache-2.0/src/lib/apr/lib apr_execve.c apr_pools.c apr_tables.c
Date Wed, 24 Nov 1999 14:00:38 GMT
On Wed, 24 Nov 1999 04:45:44 -0800 (PST), Greg Stein wrote:

>By "talking Apache here", I meant "Apache-using-APR". In that condition,
>everything ever written for Apache assumes that a memory allocation will
>SUCCEED. So, in light of Apache... yes, we should be killing the process
>if we can't get the memory.
>I also maintain that we must at least attempt to log the error somewhere,
>rather than silently die.


>And I say no. The current model of "alloc and you'll get it" means that
>Apache can be very fast. It doesn't have to worry about not getting the
>memory.
>If you *do* have to worry about it, then you start putting checks on every
>darn function call. You ever see what that looks like? Go look at some COM
>code in Windows. You have one line of work, three lines of error handling.
>It is absolutely horrible. Further, the time to check a result, when it is
>typically successful is just wasted time. Lastly, people will just start
>getting lazy and not putting in checks. Then you end up with a case where
>a NULL pointer gets hit at some arbitrary point in the code, a long ways
>away from the (failed) allocation. Tracking that back is a bitch.
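To make Greg's "one line of work, three lines of error handling" point concrete, here is a small sketch of that check-every-call style. The HRESULT-like codes and functions are illustrative only, not real COM:

```c
#include <stdio.h>

/* Illustrative error-code style: every call must be checked.
 * hresult_t, S_OK_, get_widget() etc. are made-up names. */
typedef long hresult_t;
#define S_OK_   0L
#define E_FAIL_ (-1L)

static hresult_t get_widget(int id, int *out) {
    if (id < 0) return E_FAIL_;
    *out = id * 2;
    return S_OK_;
}

int use_widgets(void) {
    int a, b;
    hresult_t hr;

    hr = get_widget(1, &a);        /* one line of work...            */
    if (hr != S_OK_) {             /* ...three lines of handling     */
        fprintf(stderr, "get_widget(1) failed\n");
        return -1;
    }
    hr = get_widget(2, &b);        /* and again, for the next call   */
    if (hr != S_OK_) {
        fprintf(stderr, "get_widget(2) failed\n");
        return -1;
    }
    return a + b;
}
```

Under the "alloc and you'll get it" model, the two checks disappear and the function shrinks to three lines.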

How about using something like the C++ method where you can register an
out-of-memory handler function (set_new_handler())? That way Apache can
register a handler that does the log & abort step without forcing all
APR-using apps to have the same behaviour.
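A minimal sketch of what that could look like in APR, assuming a single registerable handler; set_oom_handler(), pool_alloc() and the handler type are hypothetical names, not real APR API:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical set_new_handler()-style hook for an APR allocator. */
typedef void (*oom_handler_fn)(size_t size);

static oom_handler_fn oom_handler = NULL;

/* Install a new handler; return the previous one (NULL = none). */
oom_handler_fn set_oom_handler(oom_handler_fn fn) {
    oom_handler_fn old = oom_handler;
    oom_handler = fn;
    return old;
}

/* Allocate; on failure, call the registered handler if any. */
void *pool_alloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL && oom_handler != NULL)
        oom_handler(size);
    return p;
}

/* The handler Apache would register: log the failure, then die,
 * so the rest of the code can keep assuming allocation succeeds. */
static void apache_oom(size_t size) {
    fprintf(stderr, "fatal: could not allocate %lu bytes\n",
            (unsigned long)size);
    abort();
}
```

Apache would call set_oom_handler(apache_oom) at startup; other APR-using apps would install nothing (or their own handler) and check pool_alloc() returns for NULL as usual.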

 |  Brian Havard                 |  "He is not the messiah!                   |
 |  |  He's a very naughty boy!" - Life of Brian |
