Return-Path:
Delivered-To: apache-cvs-archive@hyperreal.org
Received: (qmail 3825 invoked by uid 6000); 8 Mar 1998 06:07:53 -0000
Received: (qmail 3819 invoked by alias); 8 Mar 1998 06:07:52 -0000
Delivered-To: apache-2.0-cvs@hyperreal.org
Received: (qmail 3817 invoked by uid 143); 8 Mar 1998 06:07:51 -0000
Date: 8 Mar 1998 06:07:51 -0000
Message-ID: <19980308060751.3816.qmail@hyperreal.org>
From: dgaudet@hyperreal.org
To: apache-2.0-cvs@hyperreal.org
Subject: cvs commit: apache-2.0/docs stacked_io
Sender: apache-cvs-owner@apache.org
Precedence: bulk
Reply-To: new-httpd@apache.org

dgaudet     98/03/07 22:07:51

  Modified:    docs stacked_io
  Log:
  some comments

  Revision  Changes    Path
  1.2       +26 -1     apache-2.0/docs/stacked_io

  Index: stacked_io
  ===================================================================
  RCS file: /export/home/cvs/apache-2.0/docs/stacked_io,v
  retrieving revision 1.1
  retrieving revision 1.2
  diff -u -r1.1 -r1.2
  --- stacked_io	1998/02/09 09:30:01	1.1
  +++ stacked_io	1998/03/08 06:07:50	1.2
  @@ -1,3 +1,5 @@
  +[djg: comments like this are from dean]
  +
   This past summer, Alexei and I wrote a spec for an I/O Filters API...
   this proposal addresses one part of that -- 'stacked' I/O with buff.c.
   
  @@ -262,6 +264,14 @@
   will be needed.  This continues till B's buffer fills up, then B will
   write to C's buffer -- with the same effect.
   
  +[djg: I don't think this is the issue I was really worried about --
  +in the case of shrinking transformations you are already doing
  +non-trivial amounts of CPU activity with the data, and there's
  +no copying of data that you can eliminate anyway.  I do recognize
  +that there are non-CPU intensive filters -- such as DMA-capable
  +hardware crypto cards.  I don't think they're hard to support in
  +a zero-copy manner though.]
  +
   The maximum additional number of bytes which will be copied in this
   scenario is on the order of nk, where n is the total number of bytes,
   and k is the number of filters doing shrinking transformations.
  @@ -291,6 +301,10 @@
   sent to the next filter without any additional copying.  This
   should provide the minimal necessary memory copies.
   
  +[djg: Unfortunately this makes it hard to support page-flipping and
  +async i/o because you don't have any reference counts on the data.
  +But I go into a little detail that already in docs/page_io.]
  +
   Function chaining
   
   In order to avoid unnecessary function chaining for reads and writes,
  @@ -323,6 +337,9 @@
   NO_WRITEV is set; hence, it should deal with that case in a
   reasonable manner.
   
  +[djg: We can't guarantee atomicity of writev() when we emulate it.
  +Probably not a problem, just an observation.]
  +
   *************************************************************************
   Code in buff.c
   
  @@ -457,7 +474,7 @@
   }
   -----
   
  -If the btransmitfile function is called on a buffer which doesn't 
  +If the btransmitfile function is called on a buffer which doesn't
   implement it, the system will attempt to read data from the file
   identified by the file_info_ptr structure and use other methods to
   write to it.
  @@ -491,6 +508,14 @@
   calls to bflush. The user-supplied flush function will be called then,
   and also before close is called. The user-supplied flush should not
   call flush on the next buffer.
  +
  +[djg: Poorly written "expanding" filters can cause some nastiness
  +here.  In order to flush a layer you have to write out your current
  +buffer, and that may cause the layer below to overflow a buffer and
  +flush it.  If the filter is expanding then it may have to add more to
  +the buffer before flushing it to the layer below.  It's possible that
  +the layer below will end up having to flush twice.  It's a case where
  +writev-like capabilities are useful.]
   
   Closing Stacks and Filters
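
To make the atomicity observation in the diff concrete: when NO_WRITEV is
set, a writev()-style call has to be emulated with a loop of plain write()
calls, so the vector goes out in several separate system calls rather than
one, and another writer on the same descriptor can interleave its output
between them.  A minimal sketch of such an emulation follows; the helper
name emulate_writev is invented for this illustration and is not code from
buff.c.

-----
#include <sys/uio.h>
#include <unistd.h>
#include <errno.h>

/* Hypothetical fallback used when the platform lacks writev().  Each
 * iovec element is pushed out with its own write() call, so the kernel
 * never sees the vector as a single atomic operation. */
static ssize_t emulate_writev(int fd, const struct iovec *vec, int nvec)
{
    ssize_t total = 0;
    int i;

    for (i = 0; i < nvec; ++i) {
        const char *base = vec[i].iov_base;
        size_t left = vec[i].iov_len;

        while (left > 0) {
            ssize_t n = write(fd, base, left);  /* one of several calls */
            if (n < 0) {
                if (errno == EINTR)
                    continue;
                return total > 0 ? total : -1;
            }
            base += n;
            left -= (size_t)n;
            total += n;
        }
    }
    return total;
}
-----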
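
In the same spirit, the btransmitfile fallback described in the diff amounts
to reading the file in chunks and pushing each chunk through the layer's
ordinary write path.  The sketch below only illustrates that idea; the names
fallback_transmitfile and write_fn are invented for the example and do not
appear in buff.c.

-----
#include <unistd.h>
#include <stddef.h>

/* Hypothetical write hook standing in for a layer's normal write path. */
typedef ssize_t (*write_fn)(void *layer, const char *buf, size_t len);

/* Emulate a transmit-file operation for a layer that does not implement
 * it: read the file descriptor in chunks and hand every chunk to the
 * layer's ordinary writer. */
static int fallback_transmitfile(void *layer, write_fn put, int file_fd)
{
    char chunk[8192];
    ssize_t n;

    while ((n = read(file_fd, chunk, sizeof(chunk))) > 0) {
        ssize_t off = 0;
        while (off < n) {
            ssize_t w = put(layer, chunk + off, (size_t)(n - off));
            if (w <= 0)
                return -1;      /* write error: give up */
            off += w;
        }
    }
    return n < 0 ? -1 : 0;      /* -1 on read error, 0 at end of file */
}
-----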