httpd-dev mailing list archives

From Ivan Zahariev <>
Subject mod_fcgid: Excessive memory usage when large files are uploaded
Date Tue, 17 Jan 2017 08:06:15 GMT

If a large amount of data is POST'ed to a process running mod_fcgid, the 
Apache child uses an excessive amount of memory when processing it.

The client request is properly received and the following statement from 
the documentation is true: "Once the amount of request body read from 
the client exceeds FcgidMaxRequestInMem bytes, the remainder of the 
request body will be stored in a temporary file."

The problem occurs when the temporary file is being sent to the FastCGI 
handler process via its IPC socket, in proc_write_ipc(), the function 
which writes the prepared "output_brigade" to the socket.
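A send loop of that shape presumably looks roughly like the following 
sketch (illustrative only, not the actual mod_fcgid source; 
apr_socket_send() stands in for the module's real write call):

```c
/* Illustrative sketch of a send loop over output_brigade. */
apr_bucket *b;
const char *data;
apr_size_t len;
apr_status_t rv;

for (b = APR_BRIGADE_FIRST(output_brigade);
     b != APR_BRIGADE_SENTINEL(output_brigade);
     b = APR_BUCKET_NEXT(b)) {
    if (APR_BUCKET_IS_EOS(b))
        break;

    /* apr_bucket_read() morphs a FILE bucket in place: the block just
     * read becomes a heap bucket, and the unread remainder becomes a
     * new FILE bucket.  Nothing is deleted here, so a large temporary
     * file is gradually turned into RAM-resident heap buckets. */
    rv = apr_bucket_read(b, &data, &len, APR_BLOCK_READ);
    if (rv != APR_SUCCESS)
        return rv;

    rv = apr_socket_send(ipc_handle, data, &len);  /* stand-in write */
    if (rv != APR_SUCCESS)
        return rv;
}
```

Because every heap bucket produced by the reads stays in the brigade 
until the loop finishes, peak memory grows to roughly the size of the 
uploaded file.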

The documentation of apr_bucket_read() clearly states that "if buckets 
are read in a loop, and aren't deleted after being processed, the 
potentially large bucket will slowly be converted into RAM resident heap 
buckets. If the file is larger than available RAM, an out of memory 
condition could be caused."

I need your guidance in order to fix this properly. I've researched a 
bit and see the following possible options:

 1. Delete each bucket after sending it to the "ipc_handle". I've looked
    through the call tree, and *output_brigade is last used by
    proc_write_ipc(), so it should be safe to empty it while it is
    being processed there.
 2. Take the same approach as mod_http2, which handles FILE buckets
    differently: instead of using apr_bucket_read(), it processes FILE
    buckets with apr_file_read() and manages the data buffer manually.
    This way the original *output_brigade is neither modified nor
    automatically split by apr_bucket_read(), but it requires more
    coding work.
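For option 1, a minimal sketch of the delete-as-you-go variant (again 
illustrative, assuming APR; apr_socket_send() is a stand-in for the 
module's real write path):

```c
/* Illustrative sketch: consume and delete each bucket as it is sent. */
const char *data;
apr_size_t len;
apr_status_t rv;

while (!APR_BRIGADE_EMPTY(output_brigade)) {
    apr_bucket *b = APR_BRIGADE_FIRST(output_brigade);

    if (APR_BUCKET_IS_EOS(b) || APR_BUCKET_IS_FLUSH(b)) {
        apr_bucket_delete(b);
        continue;
    }

    /* Reading still morphs a FILE bucket into a heap bucket holding
     * one block plus a new FILE bucket for the remainder. */
    rv = apr_bucket_read(b, &data, &len, APR_BLOCK_READ);
    if (rv != APR_SUCCESS)
        return rv;

    rv = apr_socket_send(ipc_handle, data, &len);  /* stand-in write */
    if (rv != APR_SUCCESS)
        return rv;

    /* Deleting the consumed bucket frees its heap buffer right away,
     * so at most one block of the file is resident at a time. */
    apr_bucket_delete(b);
}
```

This only changes memory behavior, not the bytes written, which is why 
it hinges on proc_write_ipc() being the last consumer of the brigade.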

Best regards.
