Message-Id: <6.2.1.2.2.20050418115916.065e88e0@pop3.rowe-clan.net>
Date: Mon, 18 Apr 2005 12:08:12 -0500
From: "William A. Rowe, Jr."
To: Mladen Turk
Cc: Paul Querna, APR Developer List <dev@apr.apache.org>
Subject: Re: [WIN32] alternative apr_pollset implementation proposal
In-Reply-To: <4263D8A2.5010204@apache.org>
References: <4263A807.8070808@apache.org>
 <4263D53F.9070800@force-elite.com> <4263D8A2.5010204@apache.org>
Content-Type: text/plain; charset="us-ascii"

At 10:56 AM 4/18/2005, Mladen Turk wrote:
>Paul Querna wrote:
>
>>Hmm. I am not really happy with this loop. I don't think this will be
>>very fast with thousands of sockets. I am far from an expert on Win32,
>>but why can't 'WSAWaitForMultipleEvents' be used instead of iterating
>>over every socket?
>
>It has a 64-handle limit, so you would need multiple threads, which
>could lead to hundreds of them; or, like mpm_winnt's
>wait_for_many_objects call, make multiple WSAWaitForMultipleEvents
>calls and check after the timeout.

Exactly the reason we have the existing limit.

>Unlike WaitForMultipleObjects, for sockets we have
>WSAEnumNetworkEvents, which will check the state of the socket, so
>there is no need to call something that will time out in a loop with
>a Sleep already being there.
>
>Compared with a classical select call on 1K sockets, I found that the
>implementation is faster, probably because I used a smaller Sleep :).

Sleep() is not the answer.

I do have a productive suggestion, though, one that we kicked around
once or twice: spin up helper threads (we can even keep a cache of
them) to handle each group of 64 events, and have them post an event
to the parent thread once finished. At 64x63 events, this could be
quite respectable.

Keep in mind, as you consider new designs for apr_poll, that the
longest-standing issue is polling on files. If you can solve both
issues in one whack, we would trip over each other getting your patch
committed :)
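
For concreteness, a rough and untested sketch of the fan-out I have in
mind (not APR code; helper_group, group_thread, and start_group are
names invented for the example, error handling and the ready-list
plumbing are omitted, and WSAStartup is assumed already done):

#include <winsock2.h>
#include <windows.h>
#include <process.h>

#define GROUP_SOCKETS 63  /* 63 sockets + 1 control event = 64 handles */

typedef struct helper_group {
    WSAEVENT events[GROUP_SOCKETS + 1]; /* [0] = parent's control event */
    SOCKET   sockets[GROUP_SOCKETS];
    DWORD    nsockets;
    HANDLE   signalled;   /* auto-reset; "pops" readiness to the parent */
} helper_group;

static unsigned __stdcall group_thread(void *arg)
{
    helper_group *g = arg;

    for (;;) {
        DWORD rv = WSAWaitForMultipleEvents(g->nsockets + 1, g->events,
                                            FALSE, WSA_INFINITE, FALSE);
        if (rv == WSA_WAIT_EVENT_0)
            break;               /* control event: parent wants us gone */

        if (rv > WSA_WAIT_EVENT_0 && rv <= WSA_WAIT_EVENT_0 + g->nsockets) {
            DWORD i = rv - WSA_WAIT_EVENT_0 - 1;
            WSANETWORKEVENTS ne;

            /* Resets events[i + 1] and reports which FD_* bits fired;
             * no Sleep()/re-poll loop is ever needed. */
            WSAEnumNetworkEvents(g->sockets[i], g->events[i + 1], &ne);

            /* ... queue (g->sockets[i], ne.lNetworkEvents) somewhere
             * the parent can collect it ... */

            SetEvent(g->signalled);
        }
    }
    return 0;
}

static void start_group(helper_group *g, SOCKET *socks, DWORD n)
{
    DWORD i;

    g->nsockets  = n;                /* caller keeps n <= GROUP_SOCKETS */
    g->events[0] = WSACreateEvent(); /* control event                   */
    g->signalled = CreateEvent(NULL, FALSE, FALSE, NULL);

    for (i = 0; i < n; i++) {
        g->sockets[i]    = socks[i];
        g->events[i + 1] = WSACreateEvent();
        WSAEventSelect(socks[i], g->events[i + 1],
                       FD_READ | FD_WRITE | FD_CLOSE);
    }
    _beginthreadex(NULL, 0, group_thread, g, 0, NULL);
}

The control event in slot [0] gives the parent a way to wake a helper
for shutdown or rebalancing, and because WSAEnumNetworkEvents resets
the socket's event while reporting which FD_* bits fired, no thread
ever has to Sleep() and re-poll. The parent then multiplexes the
per-group "signalled" handles with a single

    WaitForMultipleObjects(ngroups, done_events, FALSE, timeout_ms);

which at 64 groups of 63 sockets covers 4032 descriptors in one wait.
(A real version would also sweep every event in the group, since
WSAWaitForMultipleEvents only reports the lowest signalled index, and
would remember that WSAEventSelect leaves the socket non-blocking.)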