apr-dev mailing list archives

From "William A. Rowe, Jr." <wr...@rowe-clan.net>
Subject Re: [PATCH] Fix for Subversion issue #1789
Date Mon, 19 Feb 2007 09:16:40 GMT
Jonathan Gilbert wrote:
> 
> This is a repost of a message sent to the Subversion development mailing
> list containing a patch for APR.

Thanks for forwarding.  In the future, you might also want to log this sort
of patch at issues.apache.org/bugzilla/ - to catch the maintainers' attention,
the PatchAvailable bug keyword marks it as very low-hanging fruit.

Question: are you the author?  If not, could that author please submit the
patch to us directly, so that we satisfy our IP constraints?  Otherwise this
discussion is off the table until I have time to have someone who hasn't
followed these emails look at the patch.

> Windows has a bug in its WriteFile API. When the file handle is a console
> output handle and the data chunk is greater than some value around 60 KB,
> instead of doing either a complete write (which would be ideal) or a
> partial write (which is permitted by the API contract), it returns an
> incorrect error causing Subversion to display an error message such as:

I've heard 32000 at one time, /shrug.  It might vary by OS release.

> svn: Can't write to stream: Not enough storage is available to process this
> command.

Hmmm - so we have a reproducible error result.  Worth noting...

> Clearly, as the actual bug is in Windows, we can't fix the bug itself, so
> we need a work-around, and common sense dictates that that work-around be
> as close as possible to the API. So, the patch is for APR.

Agreed.

> The obvious work-around is to take large writes and divide them up into
> smaller writes. APR already uses separate files for file I/O on different
> platforms, which means the work-around will be automatically segregated to
> Win32 only. Even with the work-around in place, writing a large amount of
> data in 48 KB blocks is not going to be significantly slower compared to
> the ideal single API call. So, rather than add complexity and try to figure
> out whether the file handle is a console file handle, I simply apply the
> work-around to *all* file writes done through apr_file_write on Win32.

Forget it :)  I believe you underestimate things on a major level; consider
someone copying an ISO image.  The overhead is VERY measurable in terms of
kernel state transitions, depending on what the destination driver/device is.

> I've attached a patch which achieves this, however after an hour and a half
> of trying to work with the Subversion build system for Windows (which I am
> not familiar with), I was not able to get a working build. So, I present
> this patch as "carefully reviewed & probably good but untested". I was
> careful to preserve the semantics of the border case where *nbytes is
> initially 0; as before, the function still calls WriteFile() with a count
> of 0 once.

Thanks for the heads up :)

> If anyone would like to give me some hints on building under Windows, that
> would also be great. I used "gen-make.py -t vcproj" as per the
> instructions, but ended up with projects that do not reference their
> dependencies and DLLs that do not export any functions at all.

No clue at all.  FWIW, if all you are building is APR, there is a Visual
Studio .dsw (workspace) that modern Studios will convert to a .sln
solution file on open.  The -win32-src.zip sources include a makefile
generated from the .dsw/.dsp files.  (.sln/.vcproj files won't export
to .mak files, so we've refused to migrate so far.)

As far as gen-make.py - I don't personally use it so can't vouch for it.

> + * "Not enough storage is available to process this command."
> + */
> +#define MAXIMUM_SYNCHRONOUS_WRITE_SIZE (48 * 1024)

Safer to set this to 32000, per the earlier console pipe documentation.

I'd veto the solution below; as I noted above, it makes a really horrid
assumption that writing in small chunks would be efficient when writing
out tens or hundreds of megabytes to disk.  But we have a small
saving grace...

"Can't write to stream: Not enough storage is available to process this
command."

What if we caught that error after the existing WriteFile call, and in the
specific case of that error, used your patch below to shuttle in one
32000-byte segment at a time?  It would significantly slow those writes
for drivers or pipes subject to this limit, BUT I believe that's the rare
exception (most writes being within the size), and we benefit from not
slowing down every write 'just for consoles'.

What are your thoughts on that approach?

> @@ -309,9 +319,48 @@
>              if (thefile->pOverlapped) {
>                  thefile->pOverlapped->Offset     = (DWORD)thefile->filePtr;
>                  thefile->pOverlapped->OffsetHigh = (DWORD)(thefile->filePtr >> 32);
> +                rv = WriteFile(thefile->filehand, buf, (DWORD)*nbytes,
> +                               &bwrote, thefile->pOverlapped);
>              }
> -            rv = WriteFile(thefile->filehand, buf, (DWORD)*nbytes, &bwrote,
> -                           thefile->pOverlapped);
> +            else {
> +                /* The write may need to be divided up into chunks due to a
> +                 * bug in WriteFile when it is given a console file handle.
> +                 */
> +                bwrote = 0;
> +                do {
> +                    /* Determine the next chunk's size and write it. */
> +                    DWORD chunk_size = MAXIMUM_SYNCHRONOUS_WRITE_SIZE;
> +
> +                    if (chunk_size > *nbytes) {
> +                        chunk_size = *nbytes;
> +                    }
> +
> +                    rv = WriteFile(thefile->filehand, buf,
> +                                   chunk_size, &chunk_size, NULL);
> +
> +                    /* If an error occurs but some bytes were written, report
> +                     * success for those bytes only. A subsequent call will
> +                     * return the same error without any bytes having been
> +                     * written.
> +                     */
> +                    if (!rv) {
> +                        if (bwrote > 0) {
> +                            /* By making rv non-zero, the return-value test
> +                             * below will return APR_SUCCESS.
> +                             */
> +                            rv = TRUE;
> +                        }
> +                        break;
> +                    }
> +
> +                    /* Tally the bytes that were written and advance the
> +                     * buffer pointer. */
> +                    bwrote += chunk_size;
> +
> +                    buf = chunk_size + (char *)buf;
> +                    *nbytes -= chunk_size;
> +                } while (*nbytes > 0);
> +            }
>              if (thefile->append) {
>                  apr_file_unlock(thefile);
>                  apr_thread_mutex_unlock(thefile->mutex);

