From: "William A. Rowe, Jr."
To: new-httpd@apache.org
Subject: RE: Implementing split() on pipe buckets?
Date: Sat, 11 Nov 2000 22:53:02 -0600

> From: rbb@covalent.net [mailto:rbb@covalent.net]
> Sent: Saturday, November 11, 2000 10:26 PM
>
> > I agree with you that duplicate can't work on a pipe or socket, but I
> > really disagree here.  We discussed at the filtering meeting that there
> > are several key operations that must -always- be implemented, and
> > -always- work.  These would be create, destroy, read and split.
> > Anything else is negotiable.
>
> No, at the filter meeting, the only functions that we said would always
> work are create and read.  Everything is negotiable.  In this case, a
> split on a pipe or socket just can't be done cleanly.  I explained why in
> another message just now.

Destroy always works (it simply may be a no-op if the bucket is immortal,
or may be deferred if the refcount isn't 0.)  I'm not going to argue split
either way, since the pipe/socket cases weren't fully considered at that
meeting.

The cases we are discussing are all FIFO problems.  Can't we have a common
wrapper, outside of the explicit and atomic calls (split, duplicate), that
handles any FIFO bucket (with the extra error conditions) and offers
predictable behavior for filters that don't care to work around these
issues themselves?

e.g. ap_bucket_split_any, ap_bucket_duplicate_any, etc.

They can carry their documented shortcomings, and the author who uses them
does so at a known cost.  These aren't implemented bucket-by-bucket, so
they won't impose any cost on the bucket author.  They are predictable, so
the filter author must simply be prepared to handle additional error
results (potential read+split or read+duplicate errors.)

It's granular, it saves code, and it doesn't pollute the buckets.  Is that
a good compromise?
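
For concreteness, here is a minimal sketch of what such an
ap_bucket_split_any() wrapper could look like.  The name and the
read-then-split fallback are only what is proposed above, nothing that
exists in the tree; the sketch is written against the apr_bucket_*
naming (apr_bucket_read / apr_bucket_split / APR_ENOTIMPL), so treat
the exact signatures as assumptions.

/*
 * Sketch only: the wrapper name and read-then-split behavior are just
 * the proposal in this mail, not existing code.
 */
#include "apr_buckets.h"

apr_status_t ap_bucket_split_any(apr_bucket *b, apr_size_t point)
{
    const char *data;
    apr_size_t len;
    apr_status_t rv;

    /* Try the bucket's own atomic split first; ordinary in-memory
     * and file buckets handle this themselves. */
    rv = apr_bucket_split(b, point);
    if (rv != APR_ENOTIMPL) {
        return rv;
    }

    /* FIFO buckets (pipe, socket) can't split in place.  Reading one
     * morphs it into an in-memory bucket holding whatever data was
     * available, with the rest of the stream left in a new bucket
     * inserted after it (so b is assumed to already sit in a brigade).
     * This is where the extra error conditions come from: the caller
     * must now be prepared for read failures as well. */
    rv = apr_bucket_read(b, &data, &len, APR_BLOCK_READ);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    if (point > len) {
        /* Less data was available than the requested split point;
         * the caller has to read further buckets and retry. */
        return APR_EINVAL;
    }

    /* b is now an ordinary in-memory bucket, so its split works. */
    return apr_bucket_split(b, point);
}

A filter that doesn't care about the FIFO quirks would call
ap_bucket_split_any(b, n) wherever it would otherwise split directly,
paying only the cost of handling the additional read errors.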