httpd-apreq-dev mailing list archives

From Joe Schaefer <joe+gm...@sunstarsys.com>
Subject Re: Problem with posting lots of data
Date Tue, 24 Feb 2004 23:29:08 GMT
"Paul G. Weiss" <paul@weiss.name> writes:

> But for n>442 the numbers don't match.

That is somewhat expected, since the parsers will currently
give up when the number of fields exceeds 200 (the actual
cutoff depends on how much data is still inside the input
brigade at that moment).
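
Something along these lines (a rough, untested sketch; the handler
name is made up, and I'm assuming the mp2-era Apache:: namespace)
should report how many fields the parser actually kept, so you can
compare it against the n you posted:

    package My::FieldCount;            # hypothetical handler name
    use strict;
    use Apache::RequestRec ();
    use Apache::RequestIO ();
    use Apache::Request ();
    use Apache::Const -compile => 'OK';

    sub handler {
        my $r   = shift;
        my $req = Apache::Request->new($r);
        my @names = $req->param;       # names of the fields apreq parsed
        $r->content_type('text/plain');
        $r->print(scalar @names, " fields parsed\n");
        return Apache::OK;
    }
    1;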

> That isn't all.  It works fine through http 

By "it works fine", you mean the script below works for n < 200?

> (as long as n < FIELD_LIMIT),

FIELD_LIMIT is going away, along with the max_fields and read_ahead
attributes of apreq_config_t.

> but fails through https.  So you would think that the problem is with
> mod_ssl/openssl.  However, if you use CGI instead of Apache::Request
> (see below - comment out the Apache::Request line and uncomment the
> CGI line), it works fine, even through https!  

I'm curious: what happens if both of them are uncommented?
(In mp2, Apache::Request and CGI.pm can both be used simultaneously
within the same request.)
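
For instance (again just an untested sketch, with an invented handler
name; I'm assuming a CGI.pm recent enough to work under mp2), something
like this would let you compare the two field counts in a single request:

    package My::Compare;               # hypothetical handler name
    use strict;
    use Apache::RequestRec ();
    use Apache::RequestIO ();
    use Apache::Request ();
    use CGI ();
    use Apache::Const -compile => 'OK';

    sub handler {
        my $r   = shift;
        my $req = Apache::Request->new($r);
        my $cgi = CGI->new;
        my @apreq_names = $req->param;   # fields seen by apreq
        my @cgi_names   = $cgi->param;   # fields seen by CGI.pm
        $r->content_type('text/plain');
        $r->print("apreq: ", scalar @apreq_names,
                  "  CGI: ", scalar @cgi_names, "\n");
        return Apache::OK;
    }
    1;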

What about using "multipart/form-data" for the encoding, instead
of the default ("application/x-www-form-urlencoded")?  Does that
change anything?
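
From the client side, a quick way to exercise both encodings against
the same handler is something like this (rough sketch; the URL is made
up, and for https you'd need Crypt::SSLeay installed):

    use strict;
    use LWP::UserAgent;

    my $ua   = LWP::UserAgent->new;
    my $n    = 500;
    my %form = map { ("field$_" => "x") } 1 .. $n;
    my $url  = 'https://localhost/test';   # made-up URL

    # default application/x-www-form-urlencoded
    my $urlenc = $ua->post($url, \%form);

    # same fields, but sent as multipart/form-data
    my $multi  = $ua->post($url, \%form, Content_Type => 'form-data');

    print "urlencoded: ", $urlenc->content;
    print "multipart:  ", $multi->content;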

> This tells me that it isn't mod_ssl/openssl alone, but in the
> interaction between it and libapreq2, perhaps some difference in how
> the data is buffered?
> 
> Can anyone confirm this?
>
> I'm using
> 	Apache 2.0.48
> 	mod_perl/1.99_12
> 	Perl/v5.8.1
> 	mod_ssl/2.0.48
> 	OpenSSL/0.9.7a

Not easily: at the moment my build box looks nothing like
yours (no SSL, in particular).  Is anyone else able to take a crack at it?

-- 
Joe Schaefer

