From: Juan Rivera
To: dev@httpd.apache.org
Subject: Reusing buffers when reading from socket
Date: Thu, 20 Mar 2003 10:08:15 -0500

In the socket_bucket_read function (apr_bucket_socket.c), data is read from the socket into an 8K buffer.

Now, if you only get 100 bytes, the rest of the buffer is wasted. Right?

I guess HTTP typically gets large chunks of data at a time, but when implementing other protocols this 8K buffer might be inefficient.
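To make this concrete, here is roughly the pattern I am talking about. It is a paraphrase of what socket_bucket_read does, written from memory with current APR names, not the actual apr_bucket_socket.c code:

/* Paraphrase of the current per-read allocation in socket_bucket_read();
 * not the exact upstream code. */
#include "apr_buckets.h"
#include "apr_network_io.h"

static apr_status_t read_once(apr_bucket *a, apr_socket_t *sock,
                              const char **str, apr_size_t *len)
{
    char *buf;
    apr_status_t rv;

    *len = APR_BUCKET_BUFF_SIZE;            /* 8K, on every single read  */
    buf = apr_bucket_alloc(*len, a->list);  /* fresh 8K heap block       */

    rv = apr_socket_recv(sock, buf, len);   /* may return only 100 bytes */
    if (rv != APR_SUCCESS) {
        apr_bucket_free(buf);
        return rv;
    }

    /* The socket bucket morphs into a heap bucket that owns the whole
     * 8K block even when *len is tiny; the unused tail just sits there
     * until the bucket is destroyed. */
    a = apr_bucket_heap_make(a, buf, *len, apr_bucket_free);
    *str = buf;
    return APR_SUCCESS;
}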

I wonder if there could be a generic way to solve this problem. I was thinking that when the socket bucket gets a small chunk of data (<100 bytes), it would hold on to that buffer and reuse it next time, creating a refcounted bucket pointing to the same buffer but with an offset start.

This way, memory will be used more efficiently when you are getting small chunks of data at a time. When you get large chunks of data, it will behave just like it does today.
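In code, roughly what I am imagining. This is only a sketch: reuse_state, SMALL_READ_THRESHOLD and recv_reused_bucket are names I made up, and I am poking at b->start directly purely to show the "same buffer, offset start" part. The point is that heap buckets already refcount their block, so every bucket handed out keeps the shared 8K buffer alive until the last one is destroyed:

#include "apr_buckets.h"
#include "apr_network_io.h"

#define SMALL_READ_THRESHOLD 100        /* the "<100 bytes" case above */

typedef struct {
    apr_bucket *anchor;     /* private bucket holding one ref on the block */
    char       *base;       /* the shared 8K block                         */
    apr_size_t  used;       /* bytes already handed out as buckets         */
    apr_size_t  alloc_len;  /* total size of base                          */
} reuse_state;

static apr_status_t recv_reused_bucket(apr_socket_t *sock, reuse_state *st,
                                       apr_bucket_alloc_t *list,
                                       apr_bucket **out)
{
    apr_size_t space;
    apr_status_t rv;
    apr_bucket *b;

    if (st->anchor == NULL) {
        /* No block to reuse: start a fresh 8K one, as today. */
        st->alloc_len = APR_BUCKET_BUFF_SIZE;
        st->used      = 0;
        st->base      = apr_bucket_alloc(st->alloc_len, list);
        st->anchor    = apr_bucket_heap_create(st->base, st->alloc_len,
                                               apr_bucket_free, list);
    }

    /* Read into the unused tail of the existing block. */
    space = st->alloc_len - st->used;
    rv = apr_socket_recv(sock, st->base + st->used, &space);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    /* New bucket sharing the same block, but starting at an offset. */
    rv = apr_bucket_copy(st->anchor, &b);
    if (rv != APR_SUCCESS) {
        return rv;
    }
    b->start  = st->used;
    b->length = space;
    st->used += space;
    *out = b;

    /* If the leftover space is too small to bother with, retire the block;
     * data already handed out stays alive through the copies' refcounts. */
    if (st->alloc_len - st->used < SMALL_READ_THRESHOLD) {
        apr_bucket_destroy(st->anchor);
        st->anchor = NULL;
    }
    return APR_SUCCESS;
}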

Has anybody looked into this before?

Juan C. Rivera
Citrix Systems, Inc
Tel: (954)229-6391