From: Philip Martin
To: Johan Corveleyn
Cc: Greg Stein, Paul Burba, dev@subversion.apache.org
Subject: Re: Failing svnrdump_tests.py#43 with 1.7.x on Windows
Date: Fri, 10 Feb 2012 09:05:06 +0000
Message-ID: <87ipjfhxdp.fsf@stat.home.lan>
In-Reply-To: (Johan Corveleyn's message of "Fri, 10 Feb 2012 01:02:25 +0100")
References: <20120209164915.GB4576@daniel3.local>

Johan Corveleyn writes:

> I'm finally starting to get somewhere:
>
> (all the below is with a trunk client with MAX_NR_OF_CONNS=1, vs. the
> affected mod_dav_svn)
>
> When processing 'psi', update_editor.c#handle_fetch exits early.  It
> skips the last block of 'if (APR_STATUS_IS_EOF(status))' (containing
> the 'close_file' call), because status == 730035.  That seems to be
> WSAEWOULDBLOCK (WinError.h): "A non-blocking socket operation could
> not be completed immediately."
>
> After that, for some reason, close_directory is driven first, and
> only later is handle_fetch run again to finish off psi.

Ah!  So it's using fetch_ctx->read_headers to allow multiple calls to
handle_fetch.  I understand that now.
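If I've followed that correctly, the shape of handle_fetch is roughly
the toy model below.  To be clear, this is just my understanding
written out, not the real ra_serf code: the types and the helper names
(read_some_response_data, drive_close_file) are made up; only
read_headers corresponds to the real fetch_ctx field mentioned above.

/* Toy model of a response handler that may be called more than once
 * for the same GET response.  Not the real ra_serf code. */

#include <stdbool.h>

enum read_status { READ_MORE, READ_BLOCKED, READ_EOF };

struct fetch_ctx {
  bool read_headers;   /* headers already consumed on an earlier call */
};

/* Stand-ins for the real socket read and the editor's close_file. */
enum read_status read_some_response_data(void);
void drive_close_file(void);

/* Called every time the event loop sees activity on this GET's
 * connection; it may take several calls to consume one response. */
void handle_fetch(struct fetch_ctx *ctx)
{
  if (!ctx->read_headers)
    {
      /* Consume the status line and headers exactly once. */
      ctx->read_headers = true;
    }

  for (;;)
    {
      switch (read_some_response_data())
        {
        case READ_MORE:
          /* Hand the chunk to the editor and keep reading. */
          continue;

        case READ_BLOCKED:
          /* EAGAIN / WSAEWOULDBLOCK (the 730035 above): nothing more
             on the socket right now.  Return *without* closing the
             file; the event loop will call handle_fetch again later,
             and may drive other editor calls (e.g. close_directory)
             in the meantime. */
          return;

        case READ_EOF:
          /* Whole response consumed: only now is close_file driven. */
          drive_close_file();
          return;
        }
    }
}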
> That might explain the Windows-ness a bit (specific socket behavior),
> combined with some randomness (sometimes it does "complete
> immediately", sometimes it doesn't).  It still doesn't explain the
> relatedness to 1.7.x.
>
> Open questions:
>
> - What's the link with Philip's change in liveprops.c ('if (kind !=
>   svn_node_file) return DAV_PROP_INSERT_NOTSUPP;')?

None, really.  That change causes the server to avoid sending a
checksum="###error###" line for directories, but all that matters in
this context is that changing a server response affects the timing of
all the server responses.  svnrdump/serf falls over at random depending
on the order in which it receives responses from the
(multi-threaded/multi-process) server.  It can fall over using a 1.7.2
client against a 1.6 server.  By changing the timing we cause
svnrdump/serf to fall over at random in a different place.

> - Why the WSAEWOULDBLOCK error?  Maybe that's expected and normal, and
>   shouldn't cause a problem in and of itself?

Yes, it's expected.  The response to a GET request could be large, so
the client may have to loop to read it all.  That loop happens outside
handle_fetch.

> - Why, after exiting handle_fetch early, does ra_serf do the
>   close_directory first?  I don't know much about how ra_serf
>   organizes these things.

That's where the server timing comes in.  After the first call to
handle_fetch for psi, if the next event is the rest of the GET, then we
go back into handle_fetch and things are OK.  But sometimes some other
event happens first and we do close_directory.

> (Actually, the WSAEWOULDBLOCK error also happens earlier during the
> update drive, when adding trunk/D/G/rho, but it doesn't cause a
> problem -- no close_directory interference, just a clean "rerun" of
> handle_fetch.)
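Right, and that matches the picture above: when the rest of the GET is
the next thing to arrive, handle_fetch simply resumes and finishes the
file.  The dispatch step I have in mind looks roughly like the toy code
below -- again just my mental model with made-up names, not the real
serf code:

/* Toy model of the event loop's dispatch step: whichever response has
 * data ready is handled first, so editor calls driven from different
 * responses can interleave. */

typedef void (*response_handler)(void *baton);

struct watched_response {
  int data_ready;             /* set by the poll()/select() step */
  response_handler handler;   /* e.g. handle_fetch for psi's GET, or the
                                 REPORT handler that drives close_directory */
  void *baton;
};

void dispatch_ready_responses(struct watched_response *w, int n)
{
  /* The real loop blocks in poll()/select() first; only the dispatch
     is shown here.  If psi's GET hit WSAEWOULDBLOCK and has no data
     ready yet, but the REPORT response does, the REPORT handler runs
     first -- and that is when close_directory gets driven before
     handle_fetch has finished psi. */
  int i;
  for (i = 0; i < n; i++)
    if (w[i].data_ready)
      w[i].handler(w[i].baton);
}

-- 
uberSVN: Apache Subversion Made Easy
http://www.uberSVN.com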