From: Marko Asplund
To: users@httpd.apache.org
Date: Thu, 7 Mar 2002 23:14:03 +0200 (EET)
Subject: shielding web applications from slow clients

We're developing an Apache FastCGI based web application whose performance we'd like to optimize. A major overhaul of the application's internal architecture is not possible in the short term, so we'd like to concentrate on the external architecture at this point.

The application itself is very memory-intensive, so only a small number (~10) of application processes can run on the server concurrently. The application serves static content (files, icons, HTML) and dynamic content (application user interface pages), and it also receives file uploads.

One potential optimization is to minimize the amount of time each client reserves a server process per request. As an extreme example, a user might be downloading or uploading a 10 MB file over a 9600 bps modem link. If there were one such transfer per application process, the application would be unavailable to other users for hours.
Similarly, serving smaller dynamic pages to slow clients can reserve a server process for longer than necessary. This is probably a very common problem for web application developers. What kinds of solutions are other people using for it?

We've been thinking about different buffering schemes. All of the scenarios we've discussed so far involve adding some kind of front-end proxy server. The proxy would buffer request data until the entire request has been received before forwarding it to the application server; similarly, it would read the entire response from the application server into a buffer as quickly as possible. What's the best way to implement such buffering? Note that a caching proxy is not appropriate here (except for a very limited amount of data), because the application performs authentication and authorization on every request.

Would increasing the TCP/IP buffer size work for smaller pages? For file transfers, some kind of filesystem buffering is probably the only viable solution. Is there any software available for this? Is it possible to increase the size of a proxy server's application-level buffers?

Any pointers and suggestions would be greatly appreciated.

-- 
aspa

---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org