apr-dev mailing list archives

From Ben Collins-Sussman <suss...@collab.net>
Subject confusion about largefile support
Date Tue, 31 May 2005 16:18:52 GMT
[Posting to both apr and subversion dev lists.]

I know that largefile support is kludgey in APR 0.9x, but  
unfortunately, thousands of Subversion users are still using that  
branch (and httpd 2.0.x) because of the binary compatibility issue.

Someone privately pointed out to me today that Subversion isn't  
passing the APR_LARGEFILE flag to *any* apr_file_io calls anywhere.   
Are we sitting on a time bomb?

(Subversion issue 1819 (http://subversion.tigris.org/issues/show_bug.cgi?id=1819)
discusses some problems we had with apr_file_copy(), but we worked
around it by writing our own copy implementation that doesn't use
offsets.)

We've not seen any issues reported, but something came into our  
users@ list over the weekend.  A woman was doing an 'svnadmin load'  
of a large dumpfile into an FSFS repository, and got this error:

<<< Started new transaction, based on original revision 1046
      * adding path : trunk/some folder ... done.
      * adding path : trunk/some folder/some_file.zip ...File size
limit exceeded.

The phrase "File size limit exceeded" comes from libc, so now I'm  
wondering if the largefile flag is the problem here.  Perhaps the  
fsfs revision file is > 2GB, and apr_file_open() is tripping over  
it?  (This woman is using the Fedora 3 apr-0.9.4-24.2 rpm, by the way.)

In any case:  I'm wondering if we should be passing APR_LARGEFILE to  
all apr_file_io calls.  Is it necessary?  Should we expect problems  
if we don't?

Here's a link to the original users@subversion.tigris.org thread:

