httpd-dev mailing list archives

From Greg Stein <>
Subject my ideal resolution to small writes
Date Tue, 23 Jan 2001 04:04:41 GMT
There are two types of small writes:

1) small writes to ap_r* by legacy modules or those who don't want to learn
   the bucket API.

2) users of the bucket API that simply generate lots of little bits of data

My ideal resolution to the above two problems is to apply two separate fixes:

Problem (1):
  Use my patch to glom up the ap_r* bits into larger chunks.

Problem (2):
  Use a "tail-bucket" (need a better name :-) system that drops the little
  bits into a bucket at the end of the brigade. The little bits are always
  properly ordered (and already inserted!) into the brigade, without
  reliance on the user calling a [flush] function. Appending new buckets to
  the brigade follows standard operation, and the new bucket is properly
  sequenced after the glomming-bucket.

I would be happy to (partially!) code (2) if my textual descriptions have
not been adequate. For example, I could implement an apr_brigade_write()
that shows the mechanism.
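As a rough illustration of the tail-bucket mechanism (this is not real APR code; the types, the helper names, and the 8K starting size are all invented for the sketch), small writes land in a growable heap bucket kept at the brigade's tail. A normal bucket appended afterwards ends the coalescing run, so ordering falls out of normal brigade operation with no flush call:

```c
#include <stdlib.h>
#include <string.h>

#define COALESCE_ALLOC 8192   /* arbitrary starting size for the glom buffer */

typedef struct bucket {
    char *data;
    size_t len;
    size_t alloc;
    int coalesce;             /* is this a growable tail bucket? */
    struct bucket *next;
} bucket;

typedef struct {
    bucket *head, *tail;
} brigade;

static bucket *bucket_make(const char *data, size_t len, int coalesce)
{
    bucket *b = malloc(sizeof(*b));
    b->alloc = (coalesce && len < COALESCE_ALLOC) ? COALESCE_ALLOC : len;
    b->data = malloc(b->alloc ? b->alloc : 1);
    memcpy(b->data, data, len);
    b->len = len;
    b->coalesce = coalesce;
    b->next = NULL;
    return b;
}

static void brigade_append(brigade *bb, bucket *b)
{
    /* A normal (non-coalescing) bucket appended here ends the glomming
       run: it becomes the tail, so the next small write starts a fresh
       coalescing bucket properly sequenced after it. */
    if (bb->tail) bb->tail->next = b;
    else bb->head = b;
    bb->tail = b;
}

/* Small writes land in (or start) a coalescing bucket at the brigade's
   tail.  The bytes are already inserted and ordered in the brigade, so
   no flush function is ever required. */
static void brigade_write(brigade *bb, const char *data, size_t len)
{
    bucket *t = bb->tail;
    if (t && t->coalesce) {
        if (t->len + len > t->alloc) {
            while (t->len + len > t->alloc) t->alloc *= 2;
            t->data = realloc(t->data, t->alloc);
        }
        memcpy(t->data + t->len, data, len);
        t->len += len;
    } else {
        brigade_append(bb, bucket_make(data, len, 1));
    }
}
```

Two consecutive small writes end up in one bucket; appending a regular bucket in between yields three correctly ordered buckets.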

  By their very definition, apr_brigade_write, _printf, _putstrs, etc have
  an inherent problem dealing with large amounts of data. Consider: upon
  return from these functions, the data will "disappear." That implies the
  brigade_write function must copy the data to a heap/malloc buffer. (Note
  that a transient bucket cannot be used: it merely points at the caller's
  buffer, which is gone by the time the brigade is consumed.)

  As long as the caller can *ensure* they aren't calling with large values,
  then the apr_brigade_write functions are useful. If they make a mistake
  and pass in 100k, then the app is in trouble.

  Ideally, an app that generates a 100k block will use other parts of the
  brigade/bucket API to manage data of that size.
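To make the copy-vs-no-copy distinction concrete (again a hypothetical sketch with invented stand-in types, not real APR calls): a brigade_write-style helper must memcpy because the caller's buffer may vanish on return, while a large block is better handed off to a bucket that simply takes ownership of the buffer:

```c
#include <stdlib.h>
#include <string.h>

/* Minimal stand-in bucket type for the illustration. */
typedef struct {
    char *data;
    size_t len;
    int owns;   /* free data when the bucket is destroyed? */
} bucket;

/* brigade_write-style: the caller's buffer may disappear after the
   call returns, so the data must be copied into heap storage.  Fine
   for small writes; a disaster for a 100k block. */
static bucket write_copy(const char *data, size_t len)
{
    bucket b = { malloc(len), len, 1 };
    memcpy(b.data, data, len);
    return b;
}

/* heap-bucket-style: take ownership of an already-malloc'd buffer.
   Zero copies, regardless of size -- the right shape for large data. */
static bucket take_ownership(char *data, size_t len)
{
    bucket b = { data, len, 1 };
    return b;
}
```

The ownership transfer keeps the original pointer, so a 100k block never gets duplicated; the copy path is reserved for data small enough that the memcpy is cheap.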


Greg Stein,
